# 6G for Connected Sky: A Vision for Integrating Terrestrial and Non-Terrestrial Networks

Mustafa Ozger, Istvan Godor, Anders Nordlow, Thomas Heyn, Sreekrishna Pandi, Ian Peterson, Alberto Viseras, Jaroslav Holis, Christian Raffelsberger, Andreas Kercek, Bengt Mölleryd, Laszlo Toka, Gergely Biczok, Robby de Candido, Felix Laimer, Udo Tarmann, Dominic Schupke, Cicek Cavdar

arXiv:2305.04271v1, published 2023-05-07: http://arxiv.org/abs/2305.04271v1
###### Abstract
In this paper, we present the vision of our project 6G for Connected Sky (6G-SKY) to integrate terrestrial networks (TNs) and non-terrestrial networks (NTNs) and outline the current research activities in 6G research projects in comparison with our project. From the perspectives of industry and academia, we identify key use case segments connecting both aerial and ground users with our 6G-SKY multi-layer network architecture. We explain functional views of our holistic 6G-SKY architecture addressing the heterogeneity of aerial and space platforms. Architecture elements and communication links are identified. We discuss 6G-SKY network design and management functionalities by considering a set of inherent challenges posed by the multi-layer 3-dimensional networks, which we term combined airspace and NTN (combined ASN). Finally, we investigate additional research challenges related to the 6G-SKY project targets.
terrestrial networks, non-terrestrial networks, 3D network architecture, use case segments, 3D network design.
## I Introduction
Digital airspace is transforming rapidly through major societal change programs such as the EU SES and SESAR, with which Europe aims to create a European Digital Sky by 2040 [1]. These programs have increased the need for connectivity, sustainability, a higher degree of autonomous operations, automation of Air Traffic Management (ATM)/Universal Traffic Management (UTM), and new business models. 6G will play a crucial role as a key technology enabler to realize digital airspace.
Advancements in the aviation and space industries have increased the number of flying vehicles (FVs) at different altitudes in the sky. For instance, unmanned aerial vehicles (UAVs) and flying taxis are foreseen as new vehicles in the sky. Furthermore, demand for air travel and cargo is increasing. There are also intensive efforts to deploy new satellite constellations. Another segment of FVs comprises high altitude platform stations (HAPS) and HAPS acting as International Mobile Telecommunications (IMT) base stations (HIBS), which have received significant interest from telecom operators to complement their terrestrial networks (TNs).
3D network architectures combining terrestrial, HAPS/HIBS and satellite networks have the full potential to provide ubiquitous broadband connectivity with a managed latency to both terrestrial and aerial users. Novel methods must be studied for 6G to secure higher spectrum utilization efficiency and network operation with lower power consumption by exploiting solar power and green hydrogen. A connected sky complementing the 6G terrestrial network will bring the required data services to everyone, everywhere.
There are increasing efforts from industry and academia via research projects to design next-generation networks. The 5D-AeroSafe project [2] develops UAV-based services to ensure the safety and security of airports and waterways. ETHER [3] focuses on a 3D multi-layered architecture with a design focus on the antenna, waveform, handover and network management. 6G-NTN [4] integrates NTN components in 6G to deliver ultra-reliable low latency communication (URLLC) and enhanced mobile broadband (eMBB), emergency and high-accuracy location services. Hexa-X [5] proposes new radio access technologies at high frequencies, high-resolution localization, and intelligent network design. DEDICAT6G [6] develops 6G for human-centric applications. 6G BRAINS [7] develops AI-driven multi-agent deep reinforcement learning solutions for resource allocation in machine-type communication networks for future industrial networks. AI@EDGE [8] aims at developing secure and trustworthy AI solutions to automatically manage heterogeneous mobile edge computing resources. MARSAL [9] aims at developing a complete framework to manage and orchestrate network resources in beyond-5G networks with optical and wireless infrastructure. DAEMON [10] develops novel approaches to network intelligence design to enable high-performance, sustainable and extremely reliable zero-touch network systems. REINDEER [11] aims at achieving perceived zero latency and uninterrupted availability in time and location by developing smart communication technologies.
Despite notable efforts in the mentioned projects, holistic designs of 6G wireless networks integrating TNs and NTNs to provide connectivity to both ground and aerial users are absent. Most of these projects focus on designing machine learning (ML)-driven solutions to provide energy efficiency, zero perceived delay, and network management and orchestration for beyond-5G and 6G networks. However, the primary focus of most of them is TNs, disregarding FVs and satellites. On the other hand, the ETHER and 6G-NTN projects focus on integrating NTN elements for certain use cases such as connecting underserved areas. Furthermore, Hexa-X develops
6G radio technologies for ground users' communication. In our project 6G for Connected Sky (6G-SKY) [12], we design a holistic, adaptive, AI- and cloud-native network architecture to unlock the potential of 3D communications, with a focus on diverse use cases such as urban air mobility (UAM) and on spectrum regulations. The main difference between the mentioned projects and the 6G-SKY project is that 6G-SKY enables reliable and robust connectivity for aerial and ground users via flexible and adaptive network architectures adopting multiple technologies such as satellite and direct air-to-ground communication (DA2GC). Furthermore, novel radio technologies will be proposed to support high-capacity, reliable and secure DA2GC and air-to-air communication (A2AC) links as well as low-delay and reliable command and control links. The devised novel radio technologies and resource allocation schemes will address challenges posed by wireless communication and networking, and regulations with respect to safety, airspace management, and frequency usage. Compared to the other mentioned projects, the 6G-SKY project extends the architecture scope into the third dimension with a holistic and integrated network architecture to support a diverse set of quality of service (QoS) requirements for both aerial and ground users. The 6G-SKY project will take into account the different characteristics of FVs and exploit UAVs, HAPS/HIBS, and satellites for connectivity. Fig. 1 illustrates our holistic architecture. We also propose a new term for our holistic network architecture: combined airspace and NTN, abbreviated as combined ASN. This architecture extends the terrestrial cellular architecture through communication services that can be provided to and/or by flying platforms in airspace such as airplanes, HAPS, and UAVs - including DA2GC - and by satellites.
The multi-layered architecture in 6G-SKY necessitates the joint interplay between the telecommunication, aviation, and space industries. The project innovations address key elements to demonstrate 5G Advanced and 6G features that underpin the architecture, to de-risk technical challenges at an early stage, and to ensure corresponding interoperability standards, commencing in 3GPP in \(\sim\)5 years for 6G. De-risking addresses principal showstoppers such as the use of spectrum between ground/air/space entities and technical issues such as the communication performance of mobile ground/air/space entities to eventually provide 6G services.
## II Use Cases and Scenarios
6G-SKY operates in a highly dynamic environment with many use cases and services to be provided. 6G-SKY connectivity services are divided into two categories. The first one is _airspace communication_, referring to connecting FVs via DA2GC, A2AC, etc. This service category covers data link and command and control (C2) link services for FVs. The other service category is _NTN connectivity service_ to connect terrestrial users. This connectivity service is for either _capacity extension_ or _coverage extension_. To cater to all these use cases, use case segments have been defined to cover the attached services. The use case segments are defined from a commercial perspective, as seen in Table I together with their connectivity services.
A number of use case segments are identified under airspace communications connectivity for FVs. _Segment 1_ captures the vision of communication towards having the same user experience on an airplane as on the ground level, which includes different types of operations in airplanes such as passenger communication. _Segment 2_ aims to enable UAM that will impact 6G innovation through smart transportation, cities and logistics via TN and NTN integration forming a digital society. The connectivity services we target in _Segment 2_ are data and C2 link services for FVs. _Segment 3_ captures the use cases beyond UAM such as public safety applications with beyond visual line of sight (BVLOS) operation. As this segment aims to connect FVs, the main connectivity services are data and C2 communication services. _Segment 4_ interacts with all aerial platforms and different management systems such as UTM, in which the data and C2 services are key to digitalizing airspace. On the other hand, certain use case segments capture NTN connectivity for terrestrial users. _Segment 5_ provides the same level of network performance in rural/remote areas as in densely populated areas with 3GPP services to extend ground communication capacity. _Segment 6_ is for use cases employing wireless networks as a connection point and transport of satellite/HAPS/HIBS-related data traffic for non-3GPP services. This segment is for the capacity extension for ground users. _Segment 7_ is mainly for IoT connectivity to support power-efficient communication, where the attached services are coverage extensions.
| Connectivity Cases | # | Use Case Segments | Scenarios |
|---|---|---|---|
| Airspace communication | 1 | Commercial Airplane Traffic | Passenger operation (inflight entertainment connectivity); aircraft operation (communication between ATC and pilots); crew operation and ground operation |
| Airspace communication | 2 | Urban Air Mobility (UAM) | - |
| Airspace communication | 3 | Verticals beyond UAM | Public safety, peacekeeping, and defense applications |
| Airspace communication | 4 | Digital Airspace and TN integration towards management systems | Air traffic management (ATM), UTM, national security systems |
| NTN (capacity extension) | 5 | Connectivity for rural and remote areas | - |
| NTN (capacity extension) | 6 | Satellite/HAPS/HIBS for backhauling (non-3GPP services) | Satellite and HAPS platforms for earth observation, disaster management and critical infrastructure monitoring |
| NTN (coverage extension) | 7 | Internet of Things | Power-efficient communication ranging from logistical tracking, telemetry, remote monitoring, geofencing, security, etc. |

TABLE I: Use case segments for all aerial platforms on all altitude levels, ground and space.
## III 6G-SKY Architecture
To provide limitless connectivity to all aerial platforms at all altitude levels and users on the ground, the 6G-SKY network architecture will impact all functional levels of 3D wireless systems. Fig. 2 presents the different functionalities to stress the need for this integration and shows the architecture functions from a 3D perspective. The main considerations of this architecture are that all wireless concepts must be reviewed from a 3D perspective, that all functional levels such as service management and network orchestration must be considered, that communication options increase both for digital airspace and TN for the same use cases, and that the architecture needs to support both legacy standards (4G, 5G, 5G Advanced) and new standards (6G). Based on the different levels in the functional view of our architecture, the service management function activates, changes and closes a service from a service life cycle perspective. It also forms the borderline for external service activation across wireless systems. Service management interfaces customer intent, service level specifications and key performance indicators (KPIs) with the use case segments. To this end, application interaction and support functions handle wireless system activities related to service management and support requests from the application layer. Data exposure makes the network data available to extend services. These services drive 3D requirements from the network. The function interacting with the service management is 3D end-to-end (E2E) and domain-oriented orchestration to provision resources according to the requirements. This function includes slicing to guarantee required services for provisioning. Other functions in the 6G-SKY architecture are joint sensing and communication, edge computing, trustworthy AI, safety and security.
Each of the NTN components, such as satellites and HAPS/HIBS, is operated and designed independently, mostly in competition with one another, to provide connectivity to both terrestrial and aerial users. TNs up to 3GPP Release 16 are also designed independently to provide connectivity and coverage for different connectivity services, resulting in the over-provisioning of resources with very high deployment costs. They all have different pros and cons, and no single technology yet fits to support the large diversity of requirements raised by aerial and terrestrial users. NTNs are supported in 3GPP for the first time in the current Release 17. The 6G-SKY project aims to continue and extend this approach and design the integration of different NTNs with TNs, combining the strengths of each technology in a joint solution that minimizes cost and satisfies a diverse set of QoS requirements from terrestrial and aerial users. The 6G-SKY project focuses on developing an autonomously adaptive "5G and beyond" network in 3D to shape itself considering the heterogeneity of the following three important aspects:
**Communication links:** Satellite communication links (different delay and throughput options depending on the satellite technology); DA2GC links between the ground BSs and aerial users; A2AC links among same and different types of aerial vehicles in the 3D space; HAPS communication links.

Fig. 1: Combined airspace and NTN (combined ASN) for 6G limitless connectivity to all aerial platforms and terrestrial users.

Fig. 2: 6G-SKY architecture functional view.
**Aerial users with different QoS requirements:** UAVs have different throughput, delay and reliability requirements depending on the use case. BVLOS operations, swarm coordination, etc., require ultra-reliable and robust connectivity, while passenger communications in commercial airplanes require high-capacity links. UAVs acting as flying sensors (for inspection, traffic control, etc.) generate massive payload data and often require high-capacity links besides low-latency QoS for UAV coordination. UAVs (in single, teaming, or swarming mode) may themselves be part of the communication infrastructure among aerial and terrestrial users. In this case, the UAVs act as relays, where swarm coordination has to optimize, e.g., certain communication coverage and QoS requirements of the target application. In this way, a highly dynamic and adaptive ad-hoc network can be realized for particular applications like a temporary enhanced communication need in critical traffic situations, autonomous truck platooning, and emergency situations. Again, low latency as well as high-capacity links are required at the same time. In addition, flying taxis as elements of UAM impose a new set of requirements to satisfy their safe and secure operations, similar to Air Traffic Management (ATM). URLLC and enhanced mobile broadband (eMBB) communications are enablers of these applications.
**Terrestrial users located in regions hard to access:** Sensors located in such regions may require massive machine-type communication (mMTC) coverage. On the other hand, ground users require high-throughput connectivity such as eMBB. Industrial applications, or future mobility of persons and goods based on automated vehicles and their tight coupling to an intelligent infrastructure with related services (emergency, traffic information, route planning, etc.), in remote areas with critical machine-type communications (cMTC) may require URLLC. This will particularly be the case when the level of automation in future mobility (goods and personal) rises and requires machine-type communication (V2V, Vehicle-to-Vehicle, and V2I, Vehicle-to-Infrastructure). This will be supported by both ground-based and aerial communication infrastructure, as proposed in this project.
To address the heterogeneity in the different aspects of the 6G-SKY project, the first objective is to build an integrated 3D network, i.e., the combined ASN, as seen in Fig. 1. It combines distinct communication technologies such as DA2GC, satellite communication, and A2AC to satisfy the connectivity requirements of both aerial and ground users. It should also be robust to changes in the networks, such as variations in link capacities, by utilizing multiple communication technologies. It supports constructing flexible network topologies to adapt to network densities and traffic variations for numerous use cases and scenarios while complying with spectrum and airspace regulations. It aims at providing the safety and security of FVs via newly designed solutions such as sense-and-avoid mechanisms. To the best of our knowledge, there is no other effort to provide a holistic network architecture design as in Fig. 1, which addresses the heterogeneity of the FVs, communication technologies, and communication requirements of different users with machine learning tools for adaptive and robust communications. Additionally, a special consideration addresses the exploitation of the new integrated sensing capabilities of 6G within the architecture for network operations and sensing services.
Due to the heterogeneous requirements of both ground and aerial users, the second objective is to design connectivity links. Commercial airplanes at high altitudes require high-capacity links. UAVs require URLLC for BVLOS and swarming applications. UAVs also require broadband connectivity for their reconnaissance or inspection applications, or when acting as communication relays within the 6G-SKY architecture. Flying taxis require highly reliable connectivity to avoid any casualties. To satisfy the diverse requirements of aerial users, 6G communication technologies that are essentially unexplored in the NTN context will be utilized, such as Terahertz communication, millimeter-wave communication and reflective intelligent surfaces. Satellite, DA2GC and HAPS links are the main connectivity options, as seen in Fig. 1, to address the connectivity demands. Our objective is not limited to the design of connectivity links for aerial users. We also aim to provide connectivity to ground users in rural areas via satellite, HAPS, and even UAV technologies. The latter can provide (temporarily) enhanced connectivity for (personal and goods) transport in rural areas. This will especially be the case for the improvement of V2V and V2I connectivity when automated vehicles and intelligent infrastructure emerge. This also requires a particular swarm coordination of UAVs, which we will develop in this project. Moreover, UAVs can serve as flying eyes to provide infrastructure and vehicles with up-to-date traffic and emergency information. The use cases for the ground users are mMTC, cMTC and eMBB.
The third objective is the integration of multiple links in 3D from multiple technologies to design 3D networks for aerial and ground users. To this end, we will propose novel solutions with adaptive network topology shaping to respond to changes in the 3D environment. Cell-free network design, providing adaptive user-centric network formation as an enabling technology of 6G networks, will be crucial to integrate multiple frequencies and different communication technologies. This aims at autonomous control of handovers among different network technologies as flying UEs move in 3D space, as well as at coordinating them accordingly (e.g., coordination of UAVs based on self-organization and swarm intelligence). Due to the use of high frequencies, beam tracking and beamforming algorithms will be proposed, which require precise localization of FVs in 3D space. For ground users in hard-to-access areas such as rural areas, NTNs can provide connectivity and coverage when terrestrial networks are not capable of meeting the different connectivity requirements. Covering these ground users with certain types of communications, such as eMBB for video transmissions and sensor data, URLLC for critical IoT applications and extended coverage for massive MTC applications, is possible via networking solutions through satellites, UAVs, and HAPS.
## IV 6G-SKY Network Design
### _Network Design and Management Functionality_
Despite the wide heterogeneity of current communication network services and the communication needs of various types of users, be they humans, machines, or even larger infrastructures, current networks mainly consider a flat, 2D network architecture. These flat networks focus on BSs serving terrestrial terminals in mobile networks or on satellite-based communication for some selected groups, including aerial users like airplanes as well as terrestrial users on the ground like smartphones and stationary terminals.
An emerging challenge is to complement existing TNs forming a flexible 6G hybrid architecture integrating terrestrial, HAPS and satellite layers with an additional aerial layer with users such as UAVs. As in terrestrial communications, the service requirements are diverse in aerial applications and dynamically changing such that a combined ASN design should be able to adapt its topology and resources according to the needs of both terrestrial and aerial users.
The main focus of radio resource management in the combined ASN is to control and harmonize the network side domain towards optimal resource allocation and load balancing between the connection domains, including the cost, QoS demands and energy efficiency aspects of connections. Moreover, such multi-domain networks might be operated by different operators, so multiple operator policies have to be mapped to the service control logic. As the network complexity increases when terrestrial and aerial users are served by a multi-layer network architecture, the optimization of radio resource management by AI is advantageous.
In the multi-layer 3D network architecture, the users might be connected to multiple serving layers (e.g., dual or multi-connectivity to traditional terrestrial networks and satellites). Hence, mobility handling (handover) depends not only on their altitude; special attention must also be paid to the typical LOS conditions of aerial users and the resulting potential interference between aerial and terrestrial communication links, as well as largely overlapping service areas (aka cells in traditional terrestrial communications). Thus, mobility management greatly relies on accurate reported or predicted localization information of the UEs and knowledge of the individual link qualities.
Localization should be a 6G service for users, independently of the instantaneous availability of connection domains. Not only the networks but also each UE device can be considered a sensor. Joint communication and sensing, beamforming, and ultra-wideband scanning technologies inherently offer the possibility to precisely locate a given UE, especially in the LOS environments envisaged to be typical for aerial users in the future.
### _Aerial Mesh Networks_
One of the novel elements in the combined ASN is the incorporation of direct device-to-device (D2D) communication, specifically A2AC, into the 6G architecture. The motivation is to extend the network coverage even further in places where no other means of communication are available for the users. However, relying solely on A2AC links for network coverage and use cases such as drone swarms would be inadequate, as it is limited to a single hop of communication. To address this issue, the 6G-SKY project aims to incorporate a managed multi-hop network, commonly referred to as a mesh network, to combine multiple A2AC links into a single network. The mesh network should also seamlessly interface with other parts of the heterogeneous network architecture, such as terrestrial, HAPS/HIBS, and satellite links, in a way that is transparent to the user equipment, allowing for seamless connectivity and efficient use of resources. The transparency of the overall system considers that only a limited number of communication agents may possess interfaces to other systems and serve as seamless gateways between the networks.
One of the key use cases that an aerial mesh network will enable is the operation of UAV swarms. A heterogeneous network must be designed that incorporates mesh networks to support this scenario. Traditional mesh networks are optimized for maximizing throughput, but for UAV swarms, a different approach is needed to prioritize resilience and low latency for command and control data. This is critical for modern precise distributed swarm control algorithms, which rely on real-time communication between the UAVs. Furthermore, the development of more sophisticated swarming algorithms is needed to adaptively react to the changing radio environment.
The mesh network developed should not only utilize the multi-hop aspect but also leverage multi-path to transport high-priority data for improved latency and reliability performance. Another approach is the utilization of network slicing within the mesh network, allowing for different traffic streams to have distinct QoS specifications that can be centrally managed in the control plane. This will enable the network to adapt and optimize the resources for different use cases and traffic types, thus providing a more efficient and reliable network.
## V Additional Challenges Addressed in 6G-Sky
**Spectrum:** ITU regulates the general use of frequencies in the world through its Radio Regulations and has overall international responsibility for satellite network coordination, notification as well as bringing into use aspects. Hence, ITU determines the allocation of spectrum available for satellites. Satellite components are in space and are therefore outside the jurisdiction of national regulatory authorities (NRA), like PTS in Sweden. However, assignments and licensing of spectrum for communication to satellites from Earth stations are decided by NRAs.
At World Radio Conferences (WRC), regulatory advice regarding spectrum sharing has been agreed upon, in which recommendations are to maintain a minimum separation angle, as well as guidelines covering maximum transmitted powers for different bands and limits of power-flux densities (PFD) from space stations. More research is required to verify the coordination, sharing and compatibility between new and old technologies supported at different altitudes and orbits and used in the same or adjacent frequency bands.

According to ITU regulations, the only spectrum band where HAPS can currently act as a cellular base station is \(2.1\) GHz. However, WRC-23 agenda item 1.4 is looking to consider HAPS mobile services in certain frequency bands already identified for IMT: \(694-960\) MHz, \(1710-1885\) MHz and \(2500-2690\) MHz. The 6G-SKY project aims to establish a path toward sustainable 6G that results in higher energy and spectral efficiency compared to previous mobile network generations.
**Security and Safety:** From the safety aspect, the greatest challenge regarding safe urban airspace operations is the coordination of flight missions comprising both manned and unmanned aircraft. To this end, a complete single-source picture of the sky is indispensable. Coming U-space [13] and FAA [14] regulations enforcing geofencing will prevent UAVs from flying unintentionally at restricted locations, but those regulations will not stop non-compliant UAVs and pilots with malicious intent from entering restricted or controlled airspace. Detecting and locating UAVs not broadcasting or providing their location (non-cooperative) is required to ensure a complete, single-source picture of the sky in urban areas. Another challenge in this context is then how to convey, in real-time, to manned aircraft that UAVs are present in the airspace. To address the information deficit among airspace users, we foresee the emergence of drone detection and positioning systems and ground-based low-power Automatic Dependent Surveillance-Broadcast (ADS-B) transmitters and ADS-B receivers.
From the security aspect, the proposed novel holistic architecture, making use of heterogeneous links, comes with its own security requirements. On top of the inherent challenges in wireless security, we foresee that thwarting spoofing attacks will become a focus point of related security efforts. Such attacks could potentially cripple two major functionalities of the novel architecture utilizing unauthenticated communication protocols: i) advanced localization (via global navigation satellite system (GNSS) spoofing) and ii) the above-mentioned drone detection (via ADS-B spoofing). In addition, UAVs introduce an entirely new set of security challenges: they can be operated either by remote control or autonomously using onboard computers; accordingly, the UAV system is vulnerable to attacks that target either the cyber and/or physical elements, the interface between them, the wireless link, or even a combination of multiple components.
**Trustworthy AI:** Many industries, including aviation, consider AI in critical operational contexts [15]. AI can act on different layers, such as the physical and network layers, in 6G. On one hand, aviation can potentially profit from AI-optimized 6G networks to achieve performance parameters such as transmission reliability and data rate. On the other hand, the implications of AI-based networks for 6G services that are used for critical aviation tasks need to be risk-assessed. If possible, the network itself ensures the fulfillment of service parameters (e.g., those defined in a service level agreement (SLA)). If a network fails to fulfill service parameters (e.g., a longer disconnect during a flight), the consequences can be severe; hence, trustworthy AI becomes critical in the 6G-SKY domain. For example, if our cases are based on time-series data, we can re-use some of the methods used in another use case that deals with this kind of data as well. This justifies the need for developing a Trustworthy AI platform that allows us to evaluate and develop AI models that can be trusted to be taken into the real world.
## VI Conclusion
In this paper, we present an overview of the 6G-SKY vision to integrate terrestrial and non-terrestrial networks (NTNs). We propose the new term combined airspace and NTN (combined ASN) for the 6G-SKY multi-layer networks. Our project 6G-SKY will lay the foundations of a holistic architecture adopting multiple technologies, such as satellite and direct air-to-ground communication, to realize different use case segments. We outline network design and management challenges in the combined ASN, such as diverse service requirements, dynamically changing network topology, and heterogeneous network elements in the sky and on the ground, from a combination of industrial and academic perspectives. Additional challenges, such as spectrum regulations in 3D environments, trustworthy AI algorithms in aviation applications and the safety and security of flying vehicles, are the distinctive aspects of the 6G-SKY project.
## Acknowledgment
This work was supported in part by the CELTIC-NEXT Project, 6G for Connected Sky (6G-SKY), with funding received from the Federal Ministry for Economic Affairs and Climate Action under the contract number 01MJ22010B, Vinnova, Swedish Innovation Agency, the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility Innovation and Technology via the Austrian Research Promotion Agency (FFG) and Hungarian National Research, Development and Innovation Office, under the agreement no. 2020-1.2.3-EUREKA-2021-000006. The views expressed herein can in no way be taken to reflect the official opinion of the German ministry.
# Influence of Specific Energy Inhomogeneity on the CO\({}_{2}\) Splitting Performance in a High-Power Plasma Jet

Hendrik Burghaus, Clemens F. Kaiser, Stefanos Fasoulas, Georg Herdrich

arXiv:2306.12143v2, published 2023-06-21: http://arxiv.org/abs/2306.12143v2
###### Abstract
Plasma-based CO\({}_{2}\) conversion is a promising pathway towards greenhouse gas recycling. In the corresponding research field, various types of plasma reactors are applied for carbon dioxide dissociation. So far, spatial inhomogeneities of the specific energy (SEI) distribution in plasma generators, e.g., induced by non-uniform heating or an inhomogeneous mass distribution, are not the focus of the investigations. In this work, the spatial inhomogeneity of mass-specific enthalpy in the plasma jet of the inductive plasma generator IPG4 at the Institute of Space Systems (IRS) is examined. For this purpose, the mean mass-specific enthalpy as well as the radial distribution of the local enthalpy are measured using plasma probes. Moreover, the influence of the determined specific enthalpy inhomogeneity on the CO\({}_{2}\) splitting performance is quantified. It is shown that an inhomogeneous radial distribution of the specific energy can significantly lower the carbon dioxide conversion, compared to a homogeneous case. With regards to IPG4, the performance reduction is 16 %.
keywords: inductively coupled plasma, plasma diagnostics, CO\({}_{2}\) splitting, plasma technology, plasma wind tunnel
## 1 Introduction
The fundamental role of anthropogenic greenhouse gas (GHG) emissions in the relentlessly progressing climate change on Earth is undeniable. Therefore, the Intergovernmental Panel on Climate Change (IPCC) sees a massive reduction in GHG emissions as the only way to limit global warming to 1.5\({}^{\circ}\)C or even 2.0\({}^{\circ}\)C. In the IPCC's sixth assessment report, Carbon Capture and Utilization (CCU) and Carbon Capture and Storage (CCS) are defined as necessary tools for the mitigation of carbon dioxide emissions [1]. To this end, plasma technology might be a promising way of converting CO\({}_{2}\) into value-added compounds [2]. One fundamental process under investigation is the plasma-based conversion of pure carbon dioxide into the syngas carbon monoxide and oxygen, called CO\({}_{2}\) splitting [3]:
\[CO_{2}\to CO+\frac{1}{2}O_{2}\quad\Delta H_{298}^{0}=2.93\,\mathrm{eV/molecule} \tag{1}\]
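The quoted reaction enthalpy can be cross-checked from standard-state species enthalpies with the Cantera toolkit used later in this work. A minimal sketch follows; the GRI-Mech 3.0 mechanism is an assumption made purely for illustration, and any mechanism containing CO\({}_{2}\), CO and O\({}_{2}\) yields the same standard-state values:

```python
import cantera as ct

EV_PER_KMOL = 9.6485e7  # e * N_A on a molar basis: J/kmol per eV/molecule

# Assumption: GRI-Mech 3.0, which ships with Cantera.
gas = ct.Solution("gri30.yaml")

def h_molar(name, T=298.15, p=ct.one_atm):
    """Standard-state molar enthalpy of a pure ideal-gas species [J/kmol]."""
    gas.TPX = T, p, f"{name}:1"
    return gas.enthalpy_mole

# CO2 -> CO + 1/2 O2
dH = h_molar("CO") + 0.5 * h_molar("O2") - h_molar("CO2")
print(f"Delta H_298^0 = {dH / EV_PER_KMOL:.2f} eV/molecule")  # ~2.93
```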
Major advancements have been made in the application of non-thermal plasmas (NTP) for CO\({}_{2}\) splitting, with and without catalyst [4]. Besides NTP, the dissociation of CO\({}_{2}\) in thermodynamic equilibrium is also increasingly the subject of research, as it has been demonstrated that thermal reactions dominate the CO\({}_{2}\) conversion in microwave (MW) and gliding arc (GA) plasmas [2]. Here, several ways to overcome the energy efficiency limit of around 50 % [3] have been identified, such as super-ideal chemical quenching by O-CO\({}_{2}\) association [5] or thermal quenching by fast expansion through a de Laval nozzle [6; 7].
A great part of the research is focused on the reaction kinetics, especially the excitation of vibrational modes of carbon dioxide. In this context of plasma-based CO\({}_{2}\) splitting, the specific energy input (SEI) has been identified as a key parameter, which is commonly expressed in units of eV/molecule [3]:
\[SEI[\mathrm{eV/molecule}]=\frac{P_{\mathrm{cal}}}{\dot{m}_{\mathrm{CO2}}}\cdot\frac{M_{\mathrm{CO2}}}{e\cdot N_{\mathrm{A}}} \tag{2}\]
Here, \(P_{\mathrm{cal}}\) is the plasma power, \(\dot{m}_{\mathrm{CO2}}\) the mass flow rate of carbon dioxide, \(M_{\mathrm{CO2}}\) is the molar mass of CO\({}_{2}\), \(e\) is the elementary charge and \(N_{\mathrm{A}}\) is Avogadro's constant. In the shown formulation, Eq. 2 is applicable for a pure carbon dioxide gas. While current research concentrates on the optimization of the specific energy input, not much can be found on spatial inhomogeneities of the SEI in a plasma jet in the literature. Wolf et al. numerically simulate non-uniform heating in a MW plasma generator and stress the importance of local SEI measurements in contrast to the commonly used global values [8]. To date, local effects are barely understood, especially from the experimental side. Nevertheless, the occurrences of SEI inhomogeneities are described in a variety of studies. Possible origins of the inhomogeneities are a spatially non-uniform power distribution in the discharge region, heterogeneous distribution of the mass flux, expansion of the plume, wall cooling effects, magnetohydrodynamic (MHD) effects, or, as is usually the case, the combination of several of these. In general, most types of plasma generators are expected to show spatial SEI inhomogeneities due to a complex superposition of mass flux and power distribution. In this regard, Bogaerts and Centi state that in GA plasmas only a fraction of the gas passes the discharge, which limits the conversion [2]. Moreover, contraction phenomena are known to be a limiting factor in MW plasmas [2; 9] with respect to carbon dioxide conversion.
At the Institute of Space Systems (IRS), the plasma wind tunnel PWK3, powered by the inductive plasma generator IPG4, is
used for experimental investigations on thermal CO\({}_{2}\) splitting at high powers. Specific energy inhomogeneities in the plasma jet of PWK3 have been known and measured for two decades (e.g., [10; 11; 12; 13; 14]). Moreover, mean specific enthalpies are reported for pure oxygen flows in PWK3 by Herdrich [15]. Only recently, with the investigations on CO\({}_{2}\) splitting at IRS ([16; 14]), did the correlation between radially distributed local specific energy and integral values in PWK3 move into focus.
It has to be noted that in the field of plasma wind tunnels, the energy per particle in the plasma is labeled as total mass-specific enthalpy \(h_{\mathrm{tot}}\). This measure is the same as the specific energy input, but usually specified in different units:
\[h_{\mathrm{tot}}[\mathrm{J/kg}]=SEI[\mathrm{eV/molecule}]\frac{e\cdot N_{\mathrm{A}}}{M_{\mathrm{CO2}}}=\frac{P_{\mathrm{cal}}}{\dot{m}_{\mathrm{CO2}}} \tag{3}\]
In accordance with Eq. 2, \(P_{\mathrm{cal}}\) is the plasma power and \(\dot{m}_{\mathrm{CO2}}\) the mass flow rate of carbon dioxide. In the course of this work, the terms specific energy input, specific energy, mass-specific enthalpy and specific enthalpy are used interchangeably. The parameters _SEI_ and \(h_{\mathrm{tot}}\) are applied for the expression of the same quantity in units of [\(\mathrm{eV/molecule}\)] and [\(\mathrm{J/kg}\)], respectively.
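To make the interchangeable use of these two quantities concrete, the following minimal sketch implements Eqs. 2 and 3; the numerical example values are illustrative only, not measured data:

```python
E_NA = 96485.0        # e * N_A [J/(eV mol)] (Faraday constant)
M_CO2 = 44.0095e-3    # molar mass of CO2 [kg/mol]

def sei_from_power(P_cal, mdot_co2):
    """Eq. 2: SEI [eV/molecule] from plasma power [W] and mass flow [kg/s]."""
    return (P_cal / mdot_co2) * M_CO2 / E_NA

def h_tot_from_sei(sei):
    """Eq. 3: total mass-specific enthalpy [J/kg] from SEI [eV/molecule]."""
    return sei * E_NA / M_CO2

# Illustrative values: 100 kW of plasma power at 2.2 g/s of CO2
sei = sei_from_power(100e3, 2.2e-3)
print(f"SEI   = {sei:.1f} eV/molecule")                 # ~20.7
print(f"h_tot = {h_tot_from_sei(sei)/1e6:.1f} MJ/kg")   # ~45.5, recovers P/mdot
```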
In this paper, the influence of spatial (radial) specific energy inhomogeneities in the plasma jet of the inductive plasma generator IPG4 on the carbon dioxide splitting performance is examined by comparing integral measurements of the mean (bulk) enthalpy and locally measured mass-specific enthalpy values.
In Section 2, the experimental facility as well as the plasma probes used in the course of this work are introduced. Furthermore, a software tool for thermal carbon dioxide splitting is described. Section 3 contains the measurement results of mean and local enthalpies. Moreover, the influence of specific energy inhomogeneities on the CO\({}_{2}\) splitting performance is investigated by a parameter study.
## 2 Experimental setup and tools
At the Institute of Space Systems (IRS) three plasma wind tunnels are operated to experimentally simulate the entry of objects into planetary atmospheres [17]. All experiments in the course of this work are conducted in the plasma wind tunnel PWK3, powered by the inductively coupled plasma generator IPG4. In the following, the experimental setup of PWK3 and the applied intrusive plasma diagnostics are described. Moreover, the concept of carbon dioxide splitting in thermodynamic equilibrium is introduced.
### Experimental facility
The plasma wind tunnel PWK3 consists of a stainless-steel vacuum chamber with a diameter of 1.6 m and a length of 2 m, as well as an inductive plasma generator (IPG3). In the case of CO\({}_{2}\) operation a convergent nozzle is attached to the plasma source, then called IPG4. A schematic of the facility is shown in Fig. 1.
The vacuum chamber is connected to a centralized vacuum system [17]. Several optical access points allow for non-intrusive diagnostics and visual monitoring. Moreover, a water-cooled probe and material sample-holder can be mounted on a numerically controlled two-axis table in the tank center, e.g., for intrusive measurements and material testing. The reference coordinate system of PWK3 is marked in Fig. 1. Its origin (\(x=0\) mm) lies in the generator exit plane, but for reasons of readability it is shifted to the right in the figure.
The plasma generator IPG4, connected to the tank lid, is powered by an external power supply. Together with a resonant circuit consisting of up to seven 6 nF capacitors and the IPG inductance coil the power supply delivers an anode power of 180 kW maximum. A 2.0 mm thick quartz tube, surrounded by a copper coil of 5.5 turns, forms the discharge channel of IPG4. The convergent nozzle attached to IPG4 has a length of 35 mm and a throat diameter of 50 mm. Thus, the nozzle exit plane lies at \(x=35\) mm. The coil, quartz tube and nozzle are cooled by high-pressure water.
Figure 2 shows a schematic of the inductive plasma generator, as well as a photograph (Sony A6400) of a heat flux measurement in a pure CO\({}_{2}\) jet in PWK3.
The electrodeless plasma generation in IPG4 allows for the operation of PWK3 with a large variety of gases, including pure oxygen and carbon dioxide. The high plasma purity enables investigations on gas-surface interactions, e.g., for thermal protection system (TPS) materials ([18; 19; 13]). In this work, a range of operating conditions of PWK3 with carbon dioxide is studied. A summary can be found in Tab. 1.
Figure 1: Schematic of plasma wind tunnel PWK3 with probe installed.

In the context of this paper, the maximum operational range of IPG4 with regard to the mean mass-specific enthalpy is characterized. This is achieved by varying the anode power between 135 kW and 160 kW and the mass flow rate between 2.2 g/s and 4.0 g/s. The limitations on that range stem from laboratory safety regulations, as well as the generator discharge stability. Except for the mass flow rate and anode power, the facility parameters are kept constant. The changes in anode voltage and injector pressures are directly connected to the change in specific energy. The tank pressure is adjusted actively by the injection of molecular nitrogen at the back of the vacuum tank. Detailed information on the investigated combinations of mass flow rate and anode power will be given in Tab. 2 in Section 3.1.
### Plasma diagnostics
As mentioned in Section 1, this paper deals with spatial inhomogeneities in the mass-specific enthalpy of a carbon dioxide plasma jet, meaning the distribution of energy per particle over the radius. Consequently, the analysis is based on the comparison of the mean (bulk) enthalpy of the whole plume and locally measured values. For this purpose, two types of intrusive plasma probes, the cavity calorimeter for integral measurements and the heat flux-Pitot double probe for radially resolved measurements, are applied. Both setups and the underlying working principles of the probes are explained in the following.
#### 2.2.1 Heat flux-Pitot double probe
To determine a radial profile of local mass-specific enthalpies in the CO\({}_{2}\) plasma jet of IPG4 the so-called heat flux-Pitot double probe is used. The measurements are performed at an axial distance of \(x=156\,\mathrm{mm}\) from the generator exit, which equals a distance of \(121\,\mathrm{mm}\) to the nozzle exit. This test position is chosen for reasons of comparison to measurements performed in the past [14]. A schematic of the double probe is shown in Fig. 3.
The two-sided probe is capable of measuring the heat flux and the Pitot pressure at the test position, depending on which side of the probe is facing the plasma. Both probe heads have an outer diameter of \(50\,\mathrm{mm}\). On the left side (Fig. 3) the heat flux on a thermally insulated copper oxide surface with a diameter of \(14\,\mathrm{mm}\) is measured calorimetrically. The heat flux insert is water-cooled and the water volume flow is monitored by a Siemens Sitrans FM MAG 1100/5000 sensor system. Moreover, the temperature difference of the in- and out-flowing water \(\Delta T_{\mathrm{w}}\) is measured by two Pt100 sensors inside the probe. Together with the known heat capacity \(c_{\mathrm{p,w}}\), the water mass density \(\rho_{\mathrm{w}}\) and the calorimeter surface area \(A_{\mathrm{cal}}\), the calorimetric heat flux on the copper surface \(\dot{q}_{\mathrm{cal}}\) can be determined as:
\[\dot{q}_{\mathrm{cal}}=\frac{\rho_{\mathrm{w}}\dot{V}_{\mathrm{w}}}{A_{\mathrm{ cal}}}c_{\mathrm{p,w}}\Delta T_{\mathrm{w}} \tag{4}\]
| Parameter | Symbol | Value |
|---|---|---|
| Anode power | \(P_{\mathrm{A}}\) | \(135-160\,\mathrm{kW}\) |
| Anode voltage | \(U_{\mathrm{A}}\) | \(6600-6950\,\mathrm{V}\) |
| Number of capacitors | \(n_{\mathrm{k}}\) | up to 7 |
| Coil turns | \(n_{\mathrm{coil}}\) | 5.5 |
| Operational frequency | \(f\) | \(520\,\mathrm{kHz}\) |
| Quartz tube thickness | \(th_{\mathrm{tube}}\) | \(2.0\,\mathrm{mm}\) |
| Attached nozzle | - | convergent |
| Ambient pressure | \(p_{\mathrm{amb}}\) | \(30-100\,\mathrm{Pa}\) |
| Injector pressure | \(p_{\mathrm{inj}}\) | \(2855-3720\,\mathrm{Pa}\) |
| Mass flow rate [CO\({}_{2}\)] | \(\dot{m}_{\mathrm{CO2}}\) | \(2.2-4.0\,\mathrm{g/s}\) |

Table 1: Range of operating conditions of PWK3/IPG4 used in this work.

Figure 2: Schematic of the inductive plasma generator IPG4 (left) and photograph of its CO\({}_{2}\) plasma plume during heat flux measurements at \(\dot{m}_{\mathrm{CO2}}=2.2\,\mathrm{g/s}\), \(P_{\mathrm{A}}=160\,\mathrm{kW}\) and \(p_{\mathrm{amb}}=100\,\mathrm{Pa}\) in PWK3 (right).

Figure 3: Sectional view of the heat flux-Pitot double probe with the \(14\,\mathrm{mm}\) copper calorimeter insert on the left and the \(26.5\,\mathrm{mm}\) Pitot tube on the right.

Opposite of the calorimeter probe head is a Pitot tube with a diameter of \(26.5\,\mathrm{mm}\). The Pitot pressure is measured with a pressure gauge (MKS 122AAX-00100DBS) connected to the Pitot side of the double probe. The same gauge is used for the determination of the tank pressure at all test conditions in this work. Together, the local stagnation pressure and the calorimetrically determined heat flux allow for an approximation of the local mass-specific enthalpy \(h_{\rm tot}\), as formulated by Marvin and Pope [20]:
\[h_{\rm tot}-h_{\rm w}=\frac{1}{K_{\rm i}}\frac{\dot{q}_{\rm fc}}{\sqrt{p_{\rm Pitot}/R_{\rm eff}}} \tag{5}\]

Here, \(K_{\rm i}\) denotes a gas-specific constant, which is adopted from Zoby as \(0.4337\,\rm kWkg/(m^{3/2}Pa^{1/2}MJ)\) for CO\({}_{2}\) [21; 22]. The parameter \(\dot{q}_{\rm fc}\) represents the fully catalytic heat flux, \(p_{\rm Pitot}\) the measured stagnation (Pitot) pressure and \(R_{\rm eff}\) the effective nose radius, which is 2.3 times the body radius \(R_{\rm b}=25\,\rm mm\) for the double probe used in the course of this work [20]. In this analysis, the wall enthalpy \(h_{\rm w}\) is neglected and the measured heat flux is assumed to be fully catalytic, based on past investigations at IRS by Marynowski et al. [12].
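A corresponding sketch of Eq. 5, with \(h_{\rm w}\) neglected as in the analysis above; the probe readings are placeholders:

```python
import math

K_I = 0.4337         # Zoby constant for CO2 [kW kg / (m^1.5 Pa^0.5 MJ)] [21; 22]
R_EFF = 2.3 * 25e-3  # effective nose radius [m]: 2.3 x body radius R_b = 25 mm

def h_tot_local(q_fc, p_pitot):
    """Eq. 5 with h_w = 0: local total enthalpy [MJ/kg] from the fully
    catalytic heat flux [kW/m^2] and the Pitot pressure [Pa]."""
    return q_fc / (K_I * math.sqrt(p_pitot / R_EFF))

# Placeholder probe readings: 1360 kW/m^2 at a Pitot pressure of 500 Pa
print(f"h_tot = {h_tot_local(1360.0, 500.0):.1f} MJ/kg")  # ~33.6
```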
#### 2.2.2 Cavity calorimeter
In order to measure the bulk enthalpy of the plasma jet in PWK3, a so-called cavity calorimeter was developed at the Institute of Space Systems [15]. The basic idea behind this probe is to trap the whole plasma jet inside the calorimeter. Constant water cooling of the probe induces full relaxation of the gas temperature, chemical potential and flow velocity, allowing for the calorimetric determination of the plasma power. The cavity calorimeter is mounted to the probe holder in PWK3 and placed at a distance of \(x=100\,\rm mm\) from the generator exit. This value was determined to be optimal for capturing the entire plasma jet, with no significant residual plasma flow around the probe, which would falsify the measurements, and without disturbing the discharge in the generator [15]. Once positioned in front of the generator exit, the calorimeter was not moved again, and ignition was performed with the probe already in place. In Fig. 4 a schematic of the cavity calorimeter is shown.
The cone-shaped calorimeter with a length of \(900\,\rm mm\) is made out of copper. Its inlet, a circular orifice, has a diameter of \(120\,\rm mm\), which is more than double the size of the IPG4 nozzle exit. The calorimeter cone is followed by a cylindrical tube of \(40\,\rm mm\) diameter at the end of which the cooled gas exits the probe. All calorimeter surfaces are cooled by water in spiral copper tubes on the inside and outside. The water flow rate is monitored by a Siemens Sitrans FM MAG 1100/5000 sensor system. Moreover, the inflow and outflow temperatures of the cooling water are measured with two Pt100 sensors (not shown in Figure 4). In addition to the original test setup (see [15]) the cavity calorimeter is complemented by a Type K thermocouple (TC) and a Pitot tube, which are placed at the calorimeter end, inside the exiting gas stream. Later it will be shown that these additional diagnostics allow for the determination of the residual enthalpy content of the exiting gas and, thus, for a more accurate determination of the plasma jet power.
The methodology behind the data analysis is based on Herdrich [23; 15] and has been advanced in the course of this work. Generally, the cavity calorimeter is able to measure the total mean enthalpy of the plasma, introduced by the generator:
\[\tilde{h}_{\rm tot}=\int_{300\,\rm K}^{T}c_{\rm p}\rm d\!T+\left(h_{\rm chem} -\Delta H_{\rm f}^{0}\right)+\frac{1}{2}u_{\infty}^{2} \tag{6}\]
Here, \(c_{\rm p}\) and \(T\) are the specific heat capacity and the gas temperature of the plasma, respectively. The chemical potential is denoted by \(h_{\rm chem}\). This is corrected by the standard enthalpy of formation of the gas \(\Delta H_{\rm f}^{0}\) in order to get the enthalpy that is coupled by the plasma generator, without considering the production of CO\({}_{2}\) itself. This way, the mass-specific enthalpy is zero for a carbon dioxide gas at \(300\,\rm K\), i.e., the feed gas of the generator. As the last contribution to the total enthalpy, the mass-specific kinetic energy is represented by the flow velocity \(u_{\infty}\).
From the principle of operation, the cavity calorimeter measures power, not enthalpy. In this work, the total calorimetric plasma power \(P_{\rm cal}\) is determined by combining the calorimeter, TC and Pitot tube measurements. Radiation losses of the calorimeter are assumed to be one percent, in accordance with [15], to stay consistent with research conducted in the past.
\[P_{\rm cal}=1.01\cdot P_{\rm cavity}+P_{\rm heat,exit}+P_{\rm kin,exit} \tag{7}\]
In addition to the plasma power measured by the cavity calorimeter \(P_{\rm cavity}\), the thermal power \(P_{\rm heat,exit}\) and the kinetic power \(P_{\rm kin,exit}\) left in the gas at the calorimeter exit are considered. The calorimeter power is calculated similarly to the heat flux of the double probe (Eq. 4), but for the entire plasma jet at once rather than locally:
\[P_{\rm cavity}=\rho_{\rm w}c_{\rm p,w}\dot{V}_{\rm w}(T_{\rm out}-T_{\rm in}) \tag{8}\]
To estimate the thermal power at the exit, the mass flow rate \(\dot{m}_{\rm exit}\) and the mass-specific enthalpy of the cooled gas \(h_{\rm exit}\) must be known:
\[P_{\rm heat,exit}=\dot{m}_{\rm exit}h_{\rm exit}(p_{\rm amb},T_{\rm exit}) \tag{9}\]
For this, thermodynamic equilibrium is assumed at the calorimeter exit. The software toolkit Cantera [24] is used to calculate the specific enthalpy based on the measured gas temperature at the exit. More information on this type of simulation will be given in Section 2.3. Furthermore, the static pressure at the exit is set equal to the ambient pressure in the tank. Since the exit mass flow rate is not measured, it is assumed to be equal to the initial mass flow rate in IPG4. The underlying assumption here is that the plasma jet is captured completely inside the calorimeter.
The kinetic power of the exiting gas completes the analysis and is determined as follows:
\[P_{\rm kin,exit}=\frac{1}{2}\dot{m}_{\rm exit}u_{\rm exit}^{2} \tag{10}\]
Figure 4: Schematic of the cavity calorimeter, including the placement of the thermocouple (TC) and the Pitot tube at the exit.
Here, the flow velocity of the exiting gas \(u_{\rm exit}\) is calculated by combining the measured gas temperature and the Mach number at the exit. The Mach number itself is calculated via the Rayleigh Pitot formula [25], using the stagnation pressure measured with the Pitot tube. For subsonic conditions the simpler isentropic flow equation must be applied.
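Combining Eqs. 7-10, the exit-power correction can be sketched as follows. This is a simplified illustration, not the authors' analysis code: only the subsonic isentropic branch is shown (the implicit Rayleigh Pitot relation for the supersonic case is omitted for brevity), and the cold-CO\({}_{2}\) gas properties are rough assumptions:

```python
import math

def mach_subsonic(p0, p, gamma=1.29):
    """Isentropic relation: Mach number from the stagnation-to-static
    pressure ratio (valid for M < 1 only)."""
    return math.sqrt(2.0 / (gamma - 1.0)
                     * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))

def plasma_power(P_cavity, mdot, h_exit, T_exit, p_amb, p_pitot,
                 gamma=1.29, R=188.9):
    """Eq. 7: total calorimetric plasma power [W]. gamma and R are rough
    values for cooled CO2 near the calorimeter exit (assumption); h_exit
    [J/kg] would come from the Cantera equilibrium evaluation."""
    a_exit = math.sqrt(gamma * R * T_exit)             # speed of sound
    u_exit = mach_subsonic(p_pitot, p_amb, gamma) * a_exit
    P_heat_exit = mdot * h_exit                        # Eq. 9
    P_kin_exit = 0.5 * mdot * u_exit ** 2              # Eq. 10
    return 1.01 * P_cavity + P_heat_exit + P_kin_exit  # 1 % radiation loss
```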
During the operation of plasma wind tunnel PWK3 significant power losses occur due to cooling of the quartz tube of the generator and the convergent nozzle attached to it. The IPG4 quartz tube cooling power \(Q_{\rm tube}\) is monitored constantly for each test performed. Moreover, measurements of the nozzle cooling power \(Q_{\rm nozzle}\) for all conditions investigated in the course of this work were performed. Both cooling losses are measured calorimetrically. Together with the measured calorimetric plasma power, important generator/facility efficiencies can be derived. The definitions of the efficiencies are based on Dropmann and Herdrich [26; 23] and summarized in the following.
The coupling efficiency describes how much power is coupled into the working gas, compared to the power applied to the anode. This includes the calorimetric plasma power, the tube cooling and the nozzle cooling:
\[\eta_{\rm couple}=\frac{P_{\rm cal}+Q_{\rm tube}+Q_{\rm nozzle}}{P_{\rm A}} \tag{11}\]
The tube cooling contains a small amount of heat generated in the IPG4 copper coil that is not coupled to the plasma and should be excluded in Eq. 11. However, due to the negligible size of that coil heating (order of Watts), no correction is applied. The thermal efficiency states how much of the coupled power is lost due to cooling of the generator discharge channel and the convergent nozzle:
\[\eta_{\rm th}=\frac{P_{\rm cal}}{P_{\rm cal}+Q_{\rm tube}+Q_{\rm nozzle}} \tag{12}\]
Most importantly, the total efficiency is the ratio of the calorimetric plasma power, i.e., the power actually present in the gas in the vacuum chamber, to the anode power:
\[\eta_{\rm tot}=\eta_{\rm couple}\cdot\eta_{\rm th}=\frac{P_{\rm cal}}{P_{\rm A}} \tag{13}\]
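As a consistency check, evaluating Eqs. 11-13 with the values later reported for condition CO2#01b (Tab. 3) should reproduce the efficiencies discussed in Section 3.1:

```python
# Efficiencies of Eqs. 11-13 for condition CO2#01b (values from Tab. 3).
P_A = 160.0                                    # kW, anode power
P_cal, Q_tube, Q_nozzle = 32.48, 11.43, 4.17   # kW

eta_couple = (P_cal + Q_tube + Q_nozzle) / P_A       # Eq. 11 -> ~0.30
eta_th = P_cal / (P_cal + Q_tube + Q_nozzle)         # Eq. 12 -> ~0.68
eta_tot = eta_couple * eta_th                        # Eq. 13 -> ~0.20
```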
Extensive studies on the PWK3 efficiencies for pure oxygen plasmas by Herdrich can be found in [10]. The current paper represents the first measurement of the plasma power and the facility efficiencies for a carbon dioxide plasma in PWK3 (see Section 3.1).
### CO\({}_{2}\) splitting in thermodynamic equilibrium
Heating carbon dioxide up to temperatures of several thousand degrees leads to decomposition of the otherwise stable molecule. This splitting process in thermochemical equilibrium is the simplest route of CO\({}_{2}\) dissociation, but is generally limited with regard to energy efficiency, as mentioned in Section 1. In this work, a code has been developed, based on the software toolkit Cantera [24], to simulate the process of thermal carbon dioxide splitting at a given pressure. This includes the splitting performance parameters, namely the specific energy input (SEI), the CO\({}_{2}\) conversion \(\chi\) and the energy efficiency \(\eta\). The definitions of the CO\({}_{2}\) splitting performance parameters follow Snoeckx et al. [3], but with adaptation to the applications in this paper. In the context of this work, ideal quenching without heat recovery is assumed. The conversion of carbon dioxide is defined as the ratio of converted to the initial CO\({}_{2}\) mass:
\[\chi=\frac{m_{\rm CO2,0}-m_{\rm CO2}}{m_{\rm CO2,0}}=\frac{m_{\rm CO2,converted}}{m_{\rm CO2,0}} \tag{14}\]
with the initial and current carbon dioxide masses being \(m_{\rm CO2,0}\) and \(m_{\rm CO2}\), respectively. In the case of thermodynamic equilibrium CO\({}_{2}\) splitting, "current" refers to the state after heating. Consequently, the energy efficiency is defined as:
\[\eta=\chi\cdot\frac{\Delta H_{298}^{0}}{SEI}=\chi\cdot\frac{\Delta H_{298}^{0} }{h_{\rm tot}}\frac{e\cdot N_{\rm A}}{M_{\rm CO2}} \tag{15}\]
with the CO\({}_{2}\) conversion \(\chi\) and the standard reaction enthalpy for carbon dioxide splitting \(\Delta H_{298}^{0}=2.93\,\rm eV/molecule\)[3]. In Fig. 5 the composition and splitting performance of a CO\({}_{2}\) gas in thermodynamic equilibrium are plotted. The pressure in the simulation is 2900 Pa, representative of the injector pressure of IPG4 at the lowest mass flow rate of 2.2 g/s and an anode power of 160 kW.
In the figure, the specific energy input in eV/molecule is plotted as a secondary x-axis for convenience. The gas composition is depicted as black lines. At approx. 16 MJ/kg the dissociation of carbon dioxide is completed, which is reflected in the conversion (blue) approaching 100 %. Subsequently, at higher enthalpies carbon monoxide decomposition and ionization, mostly of atomic carbon, take place. The gas temperature, which is very sensitive to the static pressure, reaches approx. 3500 K at full CO\({}_{2}\) dissociation and rises to 9000 K at an enthalpy of 60 MJ/kg. The energy efficiency reaches its maximum of approx. 51 % at an enthalpy of 8.4 MJ/kg. Again, it has to be emphasized that the CO\({}_{2}\) splitting performance is calculated under the assumption of ideal quenching without heat recovery.
Figure 5: Composition and splitting performance of a CO\({}_{2}\) gas in thermodynamic equilibrium as a function of specific enthalpy at 2900 Pa. The gas temperature is shown in magenta, the conversion in blue and the energy efficiency in green.
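A minimal sketch of such an equilibrium calculation is given below. It assumes Cantera's bundled GRI-3.0 data set, which contains no ionized species, so only the range up to full dissociation at approx. 3500 K can be reproduced; the actual code behind Fig. 5 may use a different species set:

```python
import numpy as np
import cantera as ct

# Equilibrium CO2 splitting at fixed pressure (cp. Fig. 5); assumed GRI-3.0 data.
E_DISS = 2.93 * 1.602e-19 * 6.022e23 / 0.04401  # dH0_298 per kg CO2, ~6.4 MJ/kg
p = 2900.0                                      # Pa, injector pressure

gas = ct.Solution('gri30.yaml')
gas.TPX = 300.0, p, 'CO2:1'
h_ref = gas.enthalpy_mass                       # zero point: CO2 at 300 K

for T in np.linspace(1500.0, 3500.0, 9):
    gas.TPX = T, p, 'CO2:1'
    gas.equilibrate('TP')                       # equilibrium composition at T, p
    h_tot = gas.enthalpy_mass - h_ref           # specific enthalpy (SEI), J/kg
    chi = 1.0 - gas.Y[gas.species_index('CO2')] # Eq. 14, since Y_CO2,0 = 1
    eta = chi * E_DISS / h_tot                  # Eq. 15, ideal quenching
    print(f"T = {T:6.0f} K  h = {h_tot/1e6:5.2f} MJ/kg  chi = {chi:.2f}  eta = {eta:.2f}")
```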
## 3 Results and discussion
As this paper strives to investigate the influence of a spatially inhomogeneous distribution of the specific energy (mass-specific enthalpy) on the CO\({}_{2}\) splitting performance, the mandatory first step is to show and quantify the spatial inhomogeneity in PWK3. In the following, this is done by using the plasma diagnostic probes introduced in Section 2. Subsequently, the influence of the measured inhomogeneity on the carbon dioxide splitting performance in IPG4 is analyzed in Section 3.2. The majority of the data analysis is performed in Python using the NumPy [27], Matplotlib [28], SciPy [29] and Pandas [30] packages.
### Plasma jet characterization
The characterization of the IPG4 plasma jet is split into two parts. First, the mean enthalpy of the entire jet, as well as the generator efficiencies, are determined with the cavity calorimeter. Five different test conditions in a wide operational range of IPG4 are analyzed. Second, for one of these conditions (CO2#01b) the radial distribution of local mass-specific enthalpy values is measured with the heat flux-Pitot double probe.
#### 3.1.1 Bulk enthalpy & generator efficiencies
In the past, extensive studies on the bulk enthalpy of oxygen plasmas in PWK3 were performed by Herdrich using the cavity calorimeter [15; 10]. The experiments conducted within the scope of this paper represent the first attempt at measuring the mean enthalpy for carbon dioxide. Moreover, the determination of the nozzle cooling power for IPG4 is a novelty. The operating parameters of the corresponding test conditions are summarized in Tab. 2. Parameters that are not listed in the table are the same as in Tab. 1 for all experiments. As stated before, the anode voltage and the injector pressure are not target parameters; their changes result from altering the mass flow rate and the anode power.
The five test conditions are labeled as CO2#01a/b-CO2#04, differing in anode power, mass flow rate and/or tank pressure. The higher the number, the lower the ratio of applied anode power \(P_{\mathrm{A}}\) and mass flow rate \(\dot{m}_{\mathrm{CO2}}\), defined as \(\bar{h}_{\mathrm{A}}\):
\[\bar{h}_{\mathrm{A}}=\frac{P_{\mathrm{A}}}{\dot{m}_{\mathrm{CO2}}} \tag{16}\]
The specific anode power \(\bar{h}_{\mathrm{A}}\) is by definition a representation of the facility operating conditions. Consequently, it does not account for losses and is to be distinguished from the mean specific energy coupled into the plasma:
\[\bar{h}_{\mathrm{tot}}=\eta_{\mathrm{tot}}\cdot\bar{h}_{\mathrm{A}} \tag{17}\]
The first two conditions are the same with regards to the generator operating parameters, but differ in the tank pressure, and are distinguished by an additional lowercase letter. The tank pressure at CO2#01a is at minimum (30 Pa), while for the other conditions molecular nitrogen is injected at the back of the vacuum tank (cp. Section 2.1). The target pressure for CO2#01b - CO2#04 was 100 Pa, but due to the limited test time an adjustment of the nitrogen mass flow for each condition was not possible. Moreover, capturing the plasma jet inside the cavity calorimeter influences the ambient pressure measured at the tank wall, so it is not fully representative of the free-stream conditions. Nevertheless, since the ambient pressures were high enough to restrict the plasma jet to diameters smaller than the calorimeter opening, the slightly lower tank pressure is not believed to have significant influence on the cavity calorimeter measurements. In Fig. 6, the measured calorimetric plasma powers as well as the tube and nozzle cooling losses for all five conditions are illustrated.
The power directly measured with the cavity calorimeter (Eq. 8) is shown in blue, while the thermal (Eq. 9) and the kinetic (Eq. 10) powers of the gas exiting the probe are shown in green and magenta, respectively. The tube and nozzle cooling powers are plotted in black. With regards to the nozzle cooling, the first two test conditions are treated as one, since no difference could be observed. The measured calorimetric plasma powers are in the range of approx. \(30-35\,\mathrm{kW}\) for all conditions. It is evident that the cavity calorimeter is able to capture most of the inherent plasma power, and only little power is left in the gas exiting the device.
| Condition | \(P_{\mathrm{A}}\) [kW] | \(\dot{m}_{\mathrm{CO2}}\) [g/s] | \(\bar{h}_{\mathrm{A}}\) [MJ/kg] | \(p_{\mathrm{amb}}\) [Pa] |
| --- | --- | --- | --- | --- |
| CO2#01a | 160 | 2.2 | 72.73 | 27 |
| CO2#01b | 160 | 2.2 | 72.73 | 83 |
| CO2#02 | 160 | 3.0 | 53.33 | 92 |
| CO2#03 | 150 | 3.5 | 42.86 | 97 |
| CO2#04 | 135 | 4.0 | 33.75 | 102 |

Table 2: Summary of test conditions for the CO\({}_{2}\) cavity calorimeter experiments in PWK3 at varying specific anode powers \(\bar{h}_{\mathrm{A}}\).
Figure 6: Measured calorimetric plasma powers for the investigated conditions. Partitions of the thermal and kinetic exit powers are shown in green and magenta, respectively. The corresponding tube and nozzle cooling powers are indicated with black stars and diamonds, respectively.
Nevertheless, neglecting the remaining enthalpy in the outflowing gas causes an error of up to 7 % in the case of high mass flow rates, which justifies the improvements made to the test setup. Comparing CO2#01a and CO2#01b shows the influence of raising the ambient pressure. Not only is the measured power higher at elevated pressure, proving that capturing the whole plasma jet is not possible at minimum pressure, but the partitions also change. While the kinetic exit power decreases with increasing ambient pressure, the temperature at the exit increases. This is most likely due to the lower expansion at elevated pressure and, thus, less conversion of heat into kinetic energy. The condition CO2#02 shows the highest calorimetric plasma power. With further decreasing anode power, the calorimetric plasma power decreases, as expected. The cooling losses are significantly lower for higher mass flow rates. The reason for this is the lower energy per particle, leading to lower temperatures in the discharge. Moreover, during the experiments it could be observed that the plasma jet decouples from the nozzle at higher mass flow rates. The exact correlation between injector pressure and jet expansion is not investigated in this paper, but can be seen as a factor in the nozzle cooling reduction.
Following the methodology in Section 2.2.2, Eqs. 11-13 are applied to determine relevant efficiencies of the plasma generator IPG4 and the PWK3 facility as a whole. The coupling, thermal and total efficiencies of all five test conditions are plotted over the specific anode power in Fig. 7.
The thermal efficiency lies between 60 and 80 percent for all conditions. The coupling efficiency is rather low (\(<32\,\%\)) and decreases with higher specific energies. Consequently, the total efficiency of IPG4 for pure CO\({}_{2}\) plasma flows ranges from approx. 19 to 23 percent, also decreasing with specific energy. The two adjacent points at the highest \(\bar{h}_{\mathrm{A}}\), i.e., CO2#01a and CO2#01b, show a jump in efficiency when increasing the ambient pressure from 30 Pa to 83 Pa. This justifies the use of elevated pressure in all experiments in order to capture the whole plasma jet inside the cavity calorimeter and avoid residual flow around the device. In contrast, the thermal efficiency is not affected by the tank pressure, indicating that the discharge in the generator itself is unaltered. The measurement results of the cavity calorimeter experiments are summarized in Tab. 3. Here, the bulk enthalpy \(\bar{h}_{\rm tot}\) is introduced as the ratio of the calorimetrically measured plasma power and the total mass flow rate, assuming that the entire mass flux passes through the cavity calorimeter.
The measured bulk enthalpies cover an interesting range regarding thermal CO\({}_{2}\) splitting (Fig. 5). While the high power condition CO2#01b is expected to just about reach full conversion (97 %) at a medium-high energy efficiency of 42 %, condition CO2#04 with a bulk enthalpy of 7.89 MJ\(/\)kg is near the optimum efficiency of 51 % at a lower conversion of 63 %.
For oxygen plasma in PWK3, Herdrich determined a total efficiency of 22 % at \(\bar{h}_{\mathrm{A}}=47.7\) MJ\(/\)kg using a 5.5 turn IPG coil and four capacitors in the resonant circuit [15]. This corresponds well to the measured efficiencies for carbon dioxide in the course of this work. Moreover, Herdrich observed the same trend of higher total efficiencies with lower specific anode powers, reaching up to 35 % at \(\bar{h}_{\mathrm{A}}=19.3\) MJ\(/\)kg [15].
#### 3.1.2 Radial specific enthalpy distribution
In order to quantify the spatial specific energy inhomogeneity in the IPG4 CO\({}_{2}\) plasma jet, local measurements of the mass-specific enthalpy were conducted at an axial position of \(x=156\) mm for one condition, i.e., CO2#01b, with the heat flux-Pitot double probe. The tank pressure was set to 100 Pa for all tests. In Fig. 8 the measured heat flux, referred to the 50 mm probe geometry, as well as the Pitot pressure over the radius are shown. Due to their small size, the error bars are not drawn for reasons of readability. On the centerline, the relative errors are 4 % for the heat flux and 2 % for the Pitot pressure.
Slightly left of the plasma jet centerline, a maximum heat flux of \(1117\,\mathrm{kW}/\mathrm{m}^{2}\) is measured by the calorimetric insert. The small offset of the peak position from \(r=0\,\)mm is due to unavoidable inaccuracies in manual probe placement. Towards the plasma edge, the heat flux decreases steadily, with a notable plateau between \(r=40\,\)mm and \(r=60\,\)mm.
Figure 8: Radial profiles of local heat flux and Pitot pressure for the condition CO2#01b at \(x=156\) mm. A double Gaussian distribution (green line) is fitted to the Pitot values. The tank pressure of 100 Pa is indicated by a horizontal line.
Figure 7: Measured coupling, thermal and total efficiencies plotted over the specific anode power \(\bar{h}_{\mathrm{A}}\), i.e., the ratio of anode power and mass flow rate, for test conditions CO2#01 (right) to CO2#04 (left).
The Pitot measurement shows a prominent off-centered global peak of \(466\,\)Pa at \(60\,\)mm and a smaller local peak on the plasma jet centerline. It is believed that the off-centered global peak originates from the tangential gas injection in IPG4, which stabilizes the plasma, leading to an increased static pressure at the discharge chamber wall [15]. The radial position of the peak is strongly influenced by the jet expansion and depends on the axial measurement position and the ambient pressure in the tank. The Pitot profile can be described by a double Gaussian distribution, which is drawn as a green line and follows the equation:
\[f(r)=a_{1}\exp\left(-\frac{(r-b_{1})^{2}}{2c_{1}^{2}}\right)+a_{2}\exp\left(- \frac{(r-b_{2})^{2}}{2c_{2}^{2}}\right)+d \tag{18}\]
with \(a_{1}=2.72\cdot 10^{2}\,\)Pa, \(b_{1}=7.9\cdot 10^{-4}\,\)m, \(c_{1}=4.06\cdot 10^{-2}\,\)m, \(a_{2}=2.79\cdot 10^{2}\,\)Pa, \(b_{2}=6.17\cdot 10^{-2}\,\)m, \(c_{2}=1.2\cdot 10^{-2}\,\)m and \(d=100\,\)Pa. This characteristic shape will be used in Section 3.2 for the determination of the local mass flux profile.
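A fit of Eq. 18 can be carried out with SciPy's curve_fit. Since the raw Pitot data are not tabulated here, the sketch below fits synthetic data generated from the reported coefficients:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(r, a1, b1, c1, a2, b2, c2, d):
    """Double Gaussian profile of Eq. 18."""
    return (a1 * np.exp(-(r - b1)**2 / (2 * c1**2))
            + a2 * np.exp(-(r - b2)**2 / (2 * c2**2)) + d)

# Synthetic stand-in for the measured Pitot profile (reported coefficients
# plus noise), since the raw data are not tabulated in the text.
coeffs = (2.72e2, 7.9e-4, 4.06e-2, 2.79e2, 6.17e-2, 1.2e-2, 100.0)
r = np.linspace(-0.12, 0.12, 41)                      # m
p_pitot = double_gauss(r, *coeffs) + np.random.default_rng(0).normal(0, 3, r.size)

p0 = (250.0, 0.0, 0.04, 250.0, 0.06, 0.015, 100.0)    # rough initial guess
popt, pcov = curve_fit(double_gauss, r, p_pitot, p0=p0)
```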
Based on the measurements of heat flux and Pitot pressure at condition CO2#01b, the radial distribution of the mass-specific enthalpy can be calculated, using Eq. 5. Figure 9 presents the locally measured values. Here, the uncertainty region is indicated by a gray shadow.
The mass-specific enthalpy shows a bell-shaped distribution with a maximum value of \(32.2\,\mathrm{MJ}/\mathrm{kg}\) near the centerline and negligible enthalpies at the plasma edge. Based on the simulation of thermal splitting (Fig. 5), full dissociation of CO\({}_{2}\) is expected for the inner plasma jet region, i.e., radial positions of \(r<60\,\)mm. The highest energy efficiencies of approx. \(50\,\%\) are estimated to be around \(r=70\,\)mm. The mean mass-specific enthalpy of \(\bar{h}_{\rm tot}=14.76\,\mathrm{MJ}/\mathrm{kg}\), measured with the cavity calorimeter for CO2#01b, is depicted as a horizontal blue line. Moreover, a Gaussian function, drawn in magenta, is fitted to the enthalpy data. The fit function is in good agreement with the measurements and lies within the uncertainty region for nearly all points. The fit function follows the equation:
\[f(r)=a\exp\left(-\frac{(r-b)^{2}}{2c^{2}}\right)+d \tag{19}\]
with \(a=3.42\cdot 10^{7}\,\)J/kg, \(b=-6.53\cdot 10^{-3}\,\)m, \(c=4.93\cdot 10^{-2}\,\)m and \(d=-2.01\cdot 10^{6}\,\)J/kg.
The probe measurements show a strong spatial inhomogeneity in specific enthalpy at the test position. On the plasma jet centerline, as well as in the outer parts of the plume, the enthalpy deviates significantly from the mean value. In the following, the influence of this spatial inhomogeneity in specific energy input on the total CO\({}_{2}\) splitting performance will be investigated.
### The influence of inhomogeneity on CO\({}_{2}\) splitting performance
In this section, the influence of spatial enthalpy inhomogeneity on the carbon dioxide conversion and, thus, the energy efficiency of CO\({}_{2}\) splitting is examined by the example of the test condition CO2#01b. To do so, two cases are constructed: First, a theoretical scenario where the mass-specific enthalpy is constant over radius, i.e., at all positions in the plasma jet the enthalpy equals the mean value of \(\bar{h}_{\rm tot}=14.76\,\mathrm{MJ}/\mathrm{kg}\). Second, the real scenario, where mass flux and enthalpy are inhomogeneously distributed over the plasma jet radius, leading to the mass-specific enthalpy profile in Fig. 9. Although the probe measurements are conducted in the vacuum tank after plasma expansion through the convergent nozzle, it is assumed that the measured specific energy distribution is representative of the inhomogeneities during plasma generation, and thus CO\({}_{2}\) conversion, in the discharge region of the plasma source. This is supported by numerical simulations by Vasil'evskii et al. of an air flow in an inductively coupled plasma (ICP) source similar to the one used in the course of this work [31]. The simulations show that the inhomogeneities in mass-specific enthalpy are introduced already during plasma generation in the discharge region, before the flow is expanded into the vacuum tank.
| Condition | \(Q_{\mathrm{tube}}\) [kW] | \(Q_{\mathrm{nozzle}}\) [kW] | \(P_{\mathrm{cal}}\) [kW] | \(\bar{h}_{\mathrm{tot}}\) [MJ/kg] | \(\eta_{\mathrm{tot}}\) [-] |
| --- | --- | --- | --- | --- | --- |
| CO2#01a | 11.31 | 4.17 | \(30.98\pm 1.52\) | \(14.08\pm 0.70\) | 0.193 |
| CO2#01b | 11.43 | 4.17 | \(32.48\pm 1.59\) | \(14.76\pm 0.74\) | 0.202 |
| CO2#02 | 10.14 | 2.18 | \(35.57\pm 1.72\) | \(11.86\pm 0.58\) | 0.222 |
| CO2#03 | 9.46 | 1.86 | \(34.49\pm 1.66\) | \(9.85\pm 0.48\) | 0.231 |
| CO2#04 | 8.88 | 1.38 | \(31.56\pm 1.51\) | \(7.89\pm 0.38\) | 0.234 |

Table 3: Summary of cavity calorimeter results for CO\({}_{2}\) at varying specific energy input.
Figure 9: Radial distribution of the mass-specific enthalpy (black triangles) for the condition CO2#01b at \(x=156\,\)mm, with uncertainty region in gray. The mean enthalpy is indicated as a horizontal blue line. A Gaussian function (magenta) is fitted to the enthalpy data.
Thus, the measurements with the double probe in PWK3 are a measure of the inhomogeneous plasma generation process in the generator itself.
In both cases, homogeneous and inhomogeneous, the CO\({}_{2}\) conversion is calculated using Cantera under the assumption of thermodynamic equilibrium (cp. Section 2.3). For IPG4, this is a good approximation, at least for the plasma generation phase inside the generator, as shown in recent work [14]. In the homogeneous case, the carbon dioxide splitting performance can be directly read from Fig. 5. In the case of an inhomogeneous enthalpy distribution, the mass distribution in the plasma jet must be known in order to calculate the integral splitting performance. Hence, the mass flux distribution at the test position in the CO\({}_{2}\) plasma jet is determined in the following.
#### 3.2.1 Mass flux determination
To calculate the integral splitting performance in the inhomogeneous case, the mass flux (mass flow rate density) is required, since the measured enthalpy is mass-specific and does not by itself reveal how mass, and thus enthalpy, is distributed over the jet cross section. The mass flux, in cylindrical coordinates \((r,\varphi)\), is defined as
\[j_{\rm m}(r,\varphi)=\frac{\mathrm{d}\dot{m}}{\mathrm{d}A} \tag{20}\]
with the mass flow rate \(\dot{m}\) and the plasma jet cross-sectional area \(A\). Unfortunately, there is no way to directly measure the local mass flux, but the distribution can be reconstructed using the plasma probe measurements presented earlier. In particular, three primary criteria must be met by a suitable mass flux profile:
1. At the plasma edge (\(R_{\rm max}\)) the mass flux is zero: \[j_{\rm m}(R_{\rm max},\varphi)=0\] (21)
2. The mass flux integrated over the plasma jet cross section equals the total mass flow rate: \[\dot{m}=\int_{0}^{2\pi}\int_{0}^{R_{\rm max}}j_{\rm m}(r,\varphi)r\,\mathrm{d }r\,\mathrm{d}\varphi\] (22)
3. The product of mass flux and specific enthalpy integrated over the plasma jet cross-sectional area is equal to the total calorimetric plasma power: \[P_{\rm cal}=\int_{0}^{2\pi}\int_{0}^{R_{\rm max}}h_{\rm tot}(r,\varphi)j_{\rm m }(r,\varphi)r\,\mathrm{d}r\,\mathrm{d}\varphi\] (23)
For CO2#01b, the total mass flow rate is known to be \(\dot{m}=2.2\,\mathrm{g/s}\). Moreover, the calorimetric plasma power \(P_{\rm cal}=32.48\,\mathrm{kW}\) was measured with the cavity calorimeter (Tab. 3). In addition, the radial distribution of the mass-specific enthalpy \(h_{\rm tot}(r,\varphi)\) was determined with the double probe (Fig. 9). In the course of this analysis, the Gaussian fit function of the measured enthalpies \(h_{\rm tot,fit}\) is used instead of the single data points, serving as a continuous function in the calculations. Moreover, rotational symmetry is assumed inside the IPG4 plasma jet:
\[h_{\rm tot}\neq f(\varphi),\;j_{\rm m}\neq f(\varphi) \tag{24}\]
With the assumption of rotational symmetry, the Eqs. 21-23 and the probe measurement data, a mass flux profile with three degrees of freedom (DOF), such as a simplified Gaussian profile, can be uniquely identified. However, for centered and off-centered Gauss profiles no solution of the mass flux can be found. Thus, it is further assumed that the shape of the radial mass flux distribution is similar to the Pitot pressure profile (Fig. 8), since stagnation pressure and mass flux are closely related to one another. Consequently, the mass flux profile is believed to follow a double Gaussian distribution (Eq. 18). The ratio of the profile widths \(c_{2}/c_{1}\), as well as the height ratio of the two peaks (value of central peak divided by value of off-centered peak), are adopted from the Pitot profile to be 0.3 and 0.735, respectively. Under these conditions, a mass flux distribution can be uniquely identified. For CO2#01b the resulting profile is plotted in Fig. 10.
At \(x=156\,\mathrm{mm}\) a plasma edge position of \(R_{\rm max}=117\,\mathrm{mm}\) was determined by examining the double probe results. This value is indicated in the figure. The calculated mass flux profile is plotted as a blue line. The profile follows a double Gaussian distribution with \(a_{1}=8.92\cdot 10^{-2}\,\mathrm{kg/m^{2}/s}\), \(b_{1}=0\,\mathrm{m}\), \(c_{1}=4.94\cdot 10^{-2}\,\mathrm{m}\), \(a_{2}=7.82\cdot 10^{-2}\,\mathrm{kg/m^{2}/s}\), \(b_{2}=6.30\cdot 10^{-2}\,\mathrm{m}\), \(c_{2}=1.48\cdot 10^{-2}\,\mathrm{m}\) and \(d=-5.50\cdot 10^{-3}\,\mathrm{kg/m^{2}/s}\) (cp. Eq. 18). The Gaussian fit of the enthalpy measurement is added in magenta, shifted horizontally to be axially symmetric. The light blue shadow serves as a sensitivity analysis towards the assumed values of \(c_{2}/c_{1}\), the peak ratio and the plasma radius \(R_{\rm max}\). The colored region includes all solutions of the mass flux profile for \(0.15\leq c_{2}/c_{1}\leq 0.45\), \(0.32\leq\mathrm{peak~{}ratio}\leq 1.0\) and \(107\,\mathrm{mm}\leq R_{\rm max}\leq 127\,\mathrm{mm}\). While in theory an infinite number of double Gaussian mass flux profiles can be found, most of them represent non-physical or unrealistic solutions, like a singularity at the position of mean enthalpy or a significantly higher mass flux on the centerline than at the off-centered peak position. Thus, it is probable that the sensitivity analysis covers a wide range of possible mass flux profiles.
Figure 10: Reconstructed mass flux profile (blue line) for the condition CO2#01b at \(x=156\,\mathrm{mm}\) under the assumption of similarity to the Pitot profile. The Gaussian fit to the enthalpy data is plotted in magenta and the identified plasma edge is printed as a gray vertical line.
Nevertheless, other shapes than double Gaussian functions, e.g., with more degrees of freedom, could lead to valid solutions, but are not included in this analysis. It has to be noted that the global mass flux peak is close to the position of the mean value of the mass-specific enthalpy for CO2#01b, which supports the choice of the profile shape. This might be due to a superposition of the tangential gas injection in IPG4 and the power coupling close to the quartz tube wall due to the skin effect [10].
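The three criteria of Eqs. 21-23 can be checked numerically for the quoted coefficients. The sketch below integrates the reconstructed profile together with the centered enthalpy fit of Eq. 19 and should recover \(\dot{m}\approx 2.2\,\mathrm{g/s}\) and \(P_{\rm cal}\approx 32.5\,\mathrm{kW}\) to within a few percent (the sign and magnitude of the offset \(d\) are read so that Eq. 22 is satisfied):

```python
import numpy as np
from scipy.integrate import quad

R_max = 0.117  # m, plasma edge for CO2#01b

def j_m(r):    # double Gaussian mass flux profile (Eq. 18), kg/m^2/s
    return (8.92e-2 * np.exp(-r**2 / (2 * 4.94e-2**2))
            + 7.82e-2 * np.exp(-(r - 6.30e-2)**2 / (2 * 1.48e-2**2)) - 5.50e-3)

def h_tot(r):  # Gaussian enthalpy fit (Eq. 19), shifted to b = 0, J/kg
    return 3.42e7 * np.exp(-r**2 / (2 * 4.93e-2**2)) - 2.01e6

mdot, _ = quad(lambda r: 2*np.pi * r * j_m(r), 0.0, R_max)              # Eq. 22
P_cal, _ = quad(lambda r: 2*np.pi * r * h_tot(r) * j_m(r), 0.0, R_max)  # Eq. 23
print(f"mdot ~ {mdot*1e3:.2f} g/s, P_cal ~ {P_cal/1e3:.1f} kW, j_m(R_max) ~ {j_m(R_max):.3f}")
```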
#### 3.2.2 CO\({}_{2}\) splitting under inhomogeneity
In a final step, the mass flux distribution profile and the measured local mass-specific enthalpy values can be combined to calculate the integral CO\({}_{2}\) conversion and energy efficiency. A comparison to the splitting performance in the case of a homogeneous distribution of enthalpy over the plasma radius quantifies the influence of the inhomogeneity. To determine the inhomogeneous splitting performance, the converted mass in the inhomogeneous case, assuming rotational symmetry, is calculated:
\[\chi_{\rm inhom}=\frac{2\pi}{\dot{m}}\int_{0}^{R_{\rm max}}\chi(r)j_{\rm m}(r)r\,{\rm d}r \tag{25}\]
which leads to the energy efficiency in the inhomogeneous case:
\[\eta_{\rm inhom}=\chi_{\rm inhom}\cdot\frac{\Delta H^{0}_{298}}{\bar{h}_{\rm tot }}\frac{e\cdot N_{\rm A}}{M_{\rm CO2}} \tag{26}\]
The local CO\({}_{2}\) conversion values \(\chi(r)\) are determined by feeding the local mass-specific enthalpy \(h_{\rm tot,fit}(r)\) into the thermodynamic equilibrium simulation tool based on Cantera (cp. Section 2.3). This methodology is used in a parameter study, where not only the measured enthalpy profile of CO2#01b, \(h_{\rm tot,fit}\), but also additional artificial Gaussian enthalpy distributions, varying in their Full Width at Half Maximum (_FWHM_), i.e., their spatial inhomogeneity, are investigated. The smaller the _FWHM_ of the enthalpy distribution, the higher the inhomogeneity. The normalized enthalpy profiles used in the parameter study are plotted in Fig. 11.
For each profile, the peak height and the _FWHM_ are chosen so that the bulk enthalpy equals \(\bar{h}_{\rm tot}=14.76\,{\rm MJ/kg}\), which is the calorimetrically measured value for CO2#01b. Moreover, the mass flux distribution is kept constant and equals the earlier determined profile (Fig. 10) for the study. This theoretical parameter study would in reality resemble an experiment, where the mass injection into the generator remains unchanged, while the radial distribution (not the total value) of the applied power is altered. Consequently, the homogeneous comparison case for CO\({}_{2}\) splitting is the same for each distribution. The conversion and energy efficiency in the homogeneous case can be directly extracted from Fig. 5 to be \(\chi_{\rm hom}=97.1\,\%\) and \(\eta_{\rm hom}=42.2\,\%\), respectively. For the inhomogeneous enthalpy distributions, Eqs. 25 and 26 are applied to calculate the CO\({}_{2}\) splitting performance in the case of spatial inhomogeneity. The results of the parameter study are plotted in Fig. 12.
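A sketch of the evaluation of Eqs. 25 and 26 for CO2#01b is given below. The local equilibrium conversion \(\chi(r)\) is represented here by a hypothetical placeholder interpolation of Fig. 5; in the actual analysis it comes from the Cantera tool of Section 2.3:

```python
import numpy as np
from scipy.integrate import quad

# Profile coefficients for CO2#01b, as reported above.
h_fit = lambda r: 3.42e7 * np.exp(-r**2 / (2 * 4.93e-2**2)) - 2.01e6
j_m = lambda r: (8.92e-2 * np.exp(-r**2 / (2 * 4.94e-2**2))
                 + 7.82e-2 * np.exp(-(r - 6.30e-2)**2 / (2 * 1.48e-2**2)) - 5.50e-3)

def chi_eq(h):
    """Placeholder equilibrium conversion; support points eyeballed from Fig. 5."""
    h_tab = [0.0, 4e6, 8.4e6, 12e6, 16e6, 60e6]   # J/kg
    chi_tab = [0.0, 0.25, 0.67, 0.88, 1.0, 1.0]   # approximate
    return np.interp(h, h_tab, chi_tab)

R_max, mdot, h_mean = 0.117, 2.2e-3, 14.76e6
num, _ = quad(lambda r: chi_eq(h_fit(r)) * j_m(r) * r, 0.0, R_max)
chi_inhom = 2 * np.pi / mdot * num                # Eq. 25
eta_inhom = chi_inhom * 6.43e6 / h_mean           # Eq. 26 (dH0_298 ~ 6.43 MJ/kg CO2)
```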
In the figure, the ratio of the energy efficiency in the case of an inhomogeneous specific energy distribution versus the homogeneous case is plotted as a function of the _FWHM_ of the enthalpy distribution, normalized with \(R_{\rm max}\), which is a direct measure of inhomogeneity. It has to be emphasized again that all investigated profiles, in combination with the mass flux distribution, have the same mean enthalpy. Above all, the analysis shows that spatial inhomogeneity in specific enthalpy significantly lowers the CO\({}_{2}\) splitting performance in a plasma jet. This is due to the nonlinear correlation of SEI and the carbon dioxide conversion (cp. Fig. 5). Moreover, it can be seen that the higher the inhomogeneity, the lower the splitting performance for the same mean mass-specific enthalpy, i.e., the same SEI. At large _FWHM_, the Gaussian specific enthalpy profile approaches a constant radial distribution, which results in an asymptotic behavior towards the homogeneous case. For CO2#01b, the _FWHM_ is on the order of the plasma jet radius.
Figure 11: Normalized Gaussian radial enthalpy profiles with varying _FWHM_ used in the parameter study. The measured specific enthalpy distribution for CO2#01b is colored in magenta.
Figure 12: Ratio of inhomogeneous energy efficiency to the homogeneous comparison case as a function of the _FWHM_ of the mass-specific enthalpy profile, normalized with \(R_{\rm max}\). Towards the left, the inhomogeneity increases. The ratio of the measured enthalpy distribution (CO2#01b) is colored in magenta. Two horizontal blue lines represent the parameter range of the sensitivity analysis.
The corresponding conversion and energy efficiency are \(\chi_{\rm 01b}=81.5\,\%\) and \(\eta_{\rm 01b}=35.4\,\%\), respectively. As a result, the inhomogeneous distribution of mass-specific enthalpy in IPG4 lowers the carbon dioxide splitting performance by \(16\,\%\).
The blue horizontal lines in Fig. 12 indicate the sensitivity of the CO2#01b splitting performance to changes in the mass flux profile. The plotted result space corresponds to the blue-shaded region in Fig. 10. Although the mass flux profile is altered strongly in the sensitivity analysis, the influence on the splitting performance is quite limited (approx. \(10\,\%\)). The reason for this is that a large fraction of the mass flux is situated in outer regions of the plasma jet, where the mass flux variation is comparably small (cp. Fig. 10). Moreover, the mean enthalpy in the study of \(\bar{h}_{\rm tot}=14.76\,\mathrm{MJ/kg}\) is in a near-linear region regarding the energy efficiency (cp. Fig. 5). This region is characterized by robustness to inhomogeneities in the mass-specific enthalpy.
The inhomogeneities encountered in IPG4 are caused by the superposition of gas injection near the quartz tube edge and the radially inhomogeneous power distribution due to the skin effect [15]. Therefore, the decrease in splitting performance is mostly defined by geometric parameters, like the injector placement, and the operational frequency. Since all conditions introduced in this paper (CO2#01-CO2#04) use the same plasma generator, operated at the same frequency, it is expected that the inhomogeneities are similar. Consequently, by changing the operating parameters the overall CO\({}_{2}\) splitting performance can be optimized, i.e., by adjusting the mean mass-specific enthalpy, but the inhomogeneity, leading to lower efficiencies, is not believed to be affected strongly. It has to be stated that this is only true as long as the discharge mode (inductive 'mode 3', cp. [32]) remains unchanged.
## 4 Conclusion and outlook
In this work, the specific energy inhomogeneity in the high-power carbon dioxide plasma jet of the inductive plasma generator IPG4 was determined experimentally. Its influence on the CO\({}_{2}\) splitting performance was quantified, showing a negative correlation between specific enthalpy inhomogeneity and CO\({}_{2}\) conversion, and thus energy efficiency.
Using a cavity calorimeter, calorimetric CO\({}_{2}\) plasma powers were measured for five different operating conditions in plasma wind tunnel PWK3. Moreover, total generator efficiencies of IPG4 in the range of \(19.3-23.4\,\%\) were determined. A heat flux-Pitot double probe was applied to resolve the radial distribution of the mass-specific enthalpy for the condition CO2#01b (\(P_{\rm A}=160\,\rm kW\), \(\dot{m}_{\rm CO2}=2.2\,\rm g/s\)). The measurements showed a strong inhomogeneity in specific energy at the test position of \(x=156\,\rm mm\).
Based on the probe measurements, a radial mass flux profile was calculated. Here, a double Gaussian distribution, similar to the Pitot pressure, was assumed. In a parameter study, the influence of inhomogeneity in the mass-specific enthalpy distribution on the carbon dioxide splitting performance in thermodynamic equilibrium was quantified. The analysis revealed that specific energy inhomogeneities lower the CO\({}_{2}\) conversion, and consequently the energy efficiency, significantly, compared to a fully homogeneous distribution over the plasma radius. Although this work only considers thermal CO\({}_{2}\) splitting, it is expected that the presented results qualitatively apply to non-thermal carbon dioxide dissociation as well, as long as some form of nonlinearity between SEI and the CO\({}_{2}\) conversion occurs.
The shown influence of inhomogeneity on plasma-based CO\({}_{2}\) splitting should be considered in future designs of plasma sources, but also for the application of diagnostics and in plasma modeling, because splitting performance can be, and is, lost due to inhomogeneities. As a consequence, only local measurements are able to reveal the true potential of a plasma source with regards to plasma-based CO\({}_{2}\) conversion.
In future work, the determination of the local species composition in the plasma jet in PWK3 by optical emission spectroscopy (OES) is planned. This is to support or disprove the assumption of thermal CO\({}_{2}\) conversion in the plasma generator IPG4.
## Funding
This work was supported by the German Academic Scholarship Foundation (Studienstiftung des Deutschen Volkes); and the Friedrich und Elisabeth Boysen-Stiftung.
|
2310.06808 | Odds are the sign is right | This article introduces a new condition based on odds ratios for sensitivity
analysis. The analysis involves the average effect of a treatment or exposure
on a response or outcome with estimates adjusted for and conditional on a
single, unmeasured, dichotomous covariate. Results of statistical simulations
are displayed to show that the odds ratio condition is as reliable as other
commonly used conditions for sensitivity analysis. Other conditions utilize
quantities reflective of a mediating covariate. The odds ratio condition can be
applied when the covariate is a confounding variable. As an example application
we use the odds ratio condition to analyze and interpret a positive association
observed between Zika virus infection and birth defects. | Brian Knaeble, Julian Chan | 2023-10-10T17:30:48Z | http://arxiv.org/abs/2310.06808v1 | # Odds are the sign is right
###### Abstract
This article introduces a new condition based on odds ratios for sensitivity analysis. The analysis involves the average effect of a treatment or exposure on a response or outcome with estimates adjusted for and conditional on a single, unmeasured, dichotomous covariate. Results of statistical simulations are displayed to show that the odds ratio condition is as reliable as other commonly used conditions for sensitivity analysis. Other conditions utilize quantities reflective of a mediating covariate. The odds ratio condition can be applied when the covariate is a confounding variable. As an example application we use the odds ratio condition to analyze and interpret a positive association observed between Zika virus infection and birth defects.
## 1 Introduction
Simpson's paradox has been defined as a surprising situation that may occur when two populations are compared with respect to the incidence of some attribute: if the populations are separated in parallel into a set of descriptive categories, the population with higher overall incidence may yet exhibit a lower incidence within each such category (Wagner, 1982). With two categories Simpson's paradox occurs when \(P(\,Y\!=1|X\!=1)>P(\,Y\!=1|X\!=0)\) while both \(P(\,Y\!=1|X\!=1,\,W\!=1)<P(\,Y\!=1|X\!=0,\,W\!=1)\) and \(P(\,Y\!=1|X\!=1,\,W\!=0)<P(\,Y\!=1|X\!=0,\,W\!=0)\). We assume dichotomous variables and \(P(\,Y\!=1|X\!=1)>P(\,Y\!=1|X\!=0)\) throughout this article. For this paper \(\,Y\!\) is the response or outcome, \(X\!\) records the treatment or exposure, and \(\,W\!\) is the covariate thought to be a confounding variable.
Given a \(2\times 2\times 2\) contingency table and a uniform distribution on a simplex of cell probabilities that sum to unity, Pavlides and Perlman (2009) show that approximately one in sixty contingency tables exhibit Simpson's paradox. When Simpson's
paradox. When Simpson's paradox occurs, different conclusions are drawn depending on whether the aggregate or the disaggregate data is interpreted. Pearl (2014) argues that the paradox has been resolved through the use of causal diagrams. When \(W\) is a confounding variable, with a causal impact on both \(X\) and \(Y\), then the disaggregate data should be used.
It is common for researchers to have data on \(X\) and \(Y\) while the confounding \(W\) remains unobserved. Properties of \(W\) capturing how it relates to \(X\) and \(Y\) can be useful during sensitivity analysis even when these properties are not directly measured. For example, Cornfield et al. (1959) write "The magnitude of the excess lung cancer risk (_Y_) amongst cigarette smokers (_X_) is so great that the results can not be interpreted as arising from an indirect association of cigarette smoking with some other agent or characteristic (_W_), since this hypothetical agent would have to be at least as strongly associated with lung cancer as cigarette use; no such agent has been found or suggested." The argument relies in part on the ability of experts to assess possible magnitudes of association in the absence of data.
The logical condition, "this hypothetical agent would have to be at least as strongly associated with lung cancer as cigarette use," is part of what is now referred to as Cornfield's condition. Ding and Vanderweele (2014, 2016) have generalized Cornfield's condition, leading to the E-value for categorical sensitivity analysis. The E-value is a quantity supplemental to the p-value and related to evidence for causality in observational studies (Ding and VanderWeele, 2017). Alternatively, there is an odds ratio condition, derived here in this article from an optimal condition for continuous sensitivity analysis (Knaeble and Dutter, 2017, Figure 1, Remark A.1).
The purpose of this article is to introduce this odds ratio condition. We conduct simulations to show that this odds ratio condition is as reliable as other commonly used conditions for basic categorical sensitivity analysis. Relevant definitions are provided in Section 2, where our methodology is described. The results of simulations are displayed in Section 3. The utility of the odds ratio condition is demonstrated in Section 4, with an example case study highlighting how the odds ratio condition reflects the causal structure of confounding, not mediation. A detailed discussion occurs in Section 5. Supporting proofs are found in the appendix, where we derive the odds ratio condition and show it to be necessary for Simpson's paradox.
## 2 Methods
Henceforth we refer to Simpson's paradox as defined in the introduction as Strong Simpson's Paradox. Weak Simpson's Paradox occurs when either \(P(\,Y=1|X=1,\,W=1)<P(\,Y=1|X=0,\,W=1)\) or \(P(\,Y=1|X=1,\,W=0)<P(\,Y=1|X=0,\,W=0)\) but not both occur. When the adjusted risk difference,
\[P(\,W=1)\left(P(\,Y=1|X=1,\,W=1)-P(\,Y=1|X=0,\,W=1)\right)+\] \[P(\,W=0)\left(P(\,Y=1|X=1,\,W=0)-P(\,Y=1|X=0,\,W=0)\right),\]
is less than zero we say that an Adjusted Risk Difference Reversal has occurred. An Adjusted Risk Difference Reversal occurs if and only if the adjusted relative risk (see \(RR^{\text{true}}\) from Ding and Vanderweele (2016)) is less than one. Logically, Strong Simpson's Paradox implies Adjusted Risk Difference Reversal, and Adjusted Risk Difference Reversal implies Weak Simpson's Paradox.
To detect reversal phenomena we have various conditions, specified as follows, with \(RR\) denoting the relative risk, \(RD\) denoting the risk difference, and \(OR\) denoting the odds ratio; for each we use subscripts to specify the relevant variables, with the first subscript explanatory. The Cornfield Condition is \((RR_{XW}\wedge RR_{WY})>RR_{XY}\). The Risk Ratio Condition is \(RR_{XW}RR_{WY}/(RR_{XW}+RR_{WY}-1)>RR_{XY}\) (Ding and Vanderweele, 2016, Result 1), and the Risk Difference Condition is \((\min\{RD_{XW},RD_{WY}\}>RD_{XY})\wedge(\max\{RD_{XW},RD_{WY}\}>\sqrt{RD_{XY}})\) (Ding and Vanderweele, 2014, Theorem 1). The Pearson Correlation Condition is \(r(X,\,W)r(\,W,\,Y)>r(X,\,Y)\) (Cohen et al., 2003, Equation 3.24). We refer to
\[\left\{(\sqrt{OR_{XW}RR_{WY}}+1)/(\sqrt{OR_{XW}}+\sqrt{RR_{WY}})\right\}^{2}> RR_{XY}\]
(Bross, 1966, 1967; Lee, 2011) as the Mixed Condition because it utilizes an odds ratio and a relative risk factor. The Odds Ratio Condition is given by
\[\left\{(\sqrt{OR_{WX}}-1)/(\sqrt{OR_{WX}}+1)\right\}\left\{(\sqrt {OR_{WY}}-1)/(\sqrt{OR_{WY}}+1)\right\}\] \[>r(X,\,Y).\]
The performance of a condition is measured by estimating the probability of a reversal or a non-reversal given that the condition has occurred or not occurred. These estimates are proportions computed from simulated \(2\times 2\times 2\) contingency tables. Following Pavlides and Perlman (2009) each contingency table is randomly selected using a uniform distribution on the 7-simplex \(S=\{(p_{1},...,p_{8})\in\mathbb{R}^{8}:p_{1}+...+p_{8}=1\}\), where \(p_{1},...,p_{8}\) are cell probabilities. We used the R command runif.rcomp from the R package compositions (van den Boogaart, 2013) to uniformly and randomly select points in \(S\). Given a point \((p_{1},...,p_{8})\in S\) we calculate \(m=\min\{p_{1},...,p_{8}\}\) and for \(i=1,...,8\) the \(i\)th cell count of a random contingency table is determined as the smallest integer greater than \(p_{i}/m\). Each estimate \(\hat{p}\) is obtained from a sample of \(n>30,000\) contingency tables, and the formula \(\sqrt{\hat{p}(1-\hat{p})/n}\) is used to compute standard errors.
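For reference, an equivalent simulation can be written compactly in Python (the analysis in the paper uses R; a flat Dirichlet distribution is the same uniform distribution on the 7-simplex, and all names and the index ordering below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_table():
    p = rng.dirichlet(np.ones(8))                 # uniform on the 7-simplex
    # Cell counts via the ceiling rule described above; index order [x, w, y].
    return np.ceil(p / p.min()).astype(int).reshape(2, 2, 2)

def marginal_rd(t):                               # P(Y=1|X=1) - P(Y=1|X=0)
    ty = t.sum(axis=1)                            # collapse over W
    return ty[1, 1] / ty[1].sum() - ty[0, 1] / ty[0].sum()

def adjusted_rd(t):                               # the adjusted risk difference
    n, pw = t.sum(), t.sum(axis=(0, 2))
    py = t[..., 1] / t.sum(axis=2)                # P(Y=1|X=x, W=w)
    return (pw / n * (py[1] - py[0])).sum()

n_pos = n_rev = 0
for _ in range(30000):
    t = random_table()
    if marginal_rd(t) > 0:                        # positive association, as assumed
        n_pos += 1
        n_rev += adjusted_rd(t) < 0               # Adjusted Risk Difference Reversal
p_hat = n_rev / n_pos
se = np.sqrt(p_hat * (1 - p_hat) / n_pos)         # standard error as in the text
```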
## 3 Results
Tables 1 and 2 show that the absence of stronger reversals can be reliably inferred from the absence of any condition, and the presence of weaker reversals can be reliably inferred from the presence of some of the conditions. None of the conditions is sufficient for predicting stronger reversals, and the absence of a single condition is insufficient to rule out the weakest form of reversal.
The Pearson Correlation Condition is the best indicator of an Adjusted Risk Difference Reversal. However, with W unmeasured and categorical it may be difficult to estimate \(r(\mathit{W},\mathit{X})\) and \(r(\mathit{W},\mathit{Y})\). The absence of the Odds Ratio Condition is the best indicator of the absence of an Adjusted Risk Difference Reversal. Note that during application of the Odds Ratio Condition that \(r(\mathit{X},\mathit{Y})\) can be computed unambiguously from the data.
When the Odds Ratio Condition fails to hold there is only a 2% chance of an Adjusted Risk Difference Reversal. The adjusted risk difference is an unbiased estimate for the average causal effect, if we assume that \(\mathit{W}\) suffices to completely control for confounding (Ding and Vanderweele, 2016). Residual confounding in practice may be more concerning than uncertainty about a reversal. The tables below may be used to select a condition with an associated probability sufficiently high so reversal uncertainty is tolerable in relation to the level of uncertainty inherent from residual confounding.
## 4 Application
The United States Centers for Disease Control and Prevention (CDC) has published "Outcomes of Pregnancy with Laboratory Evidence of Possible Zika Virus Infection in the United States and the US Territories" (CDCa, 2017). They report that 91 out of \(1,784\) pregnancies with evidence of Zika virus infection have resulted in a live born infant with a birth defect. This observed risk of \(91/1784\approx.051\) is significantly higher than \(1/33\approx.030\), which is the background risk for birth defects in the United States (CDCb, 2017). With \(X\) indicating possible Zika virus infection on a population of pregnant mothers and \(Y\) indicating a liveborn infant with a birth defect, the relative risk can be expressed as \(RR_{XY}=0.051/0.030=1.7\). However, in CDCa (2017) the CDC writes "we cannot determine whether individual defects were caused by Zika virus infection or other factors".
With a large representative sample of controls (pregnant mothers without evidence of Zika virus infection, perhaps matched with cases on measurable covariate characteristics) we could have data in the form of a contingency table as shown in Table 3, where the observed birth defect risk for the controls is \(1/33\) as expected. The numbers 501 and 16533 were chosen for demonstrational purposes. The positive association observed in Table 3 may be spurious, due to a confounding factor \(W\) thought to cause both \(X\) and \(Y\). For instance, \(W\) could indicate insufficient dietary intake of folic acid (Weinhold, 2009). According to Malone et al. (2016) individuals infected with Zika virus may be asymptomatic, making it difficult to estimate \(RR_{\textit{XW}}\) or \(RD_{\textit{XW}}\), and in this categorical context correlation is difficult to estimate. With our case-control setup \(OR_{\textit{WY}}\) is easier to estimate than \(RR_{\textit{WY}}\), because \(OR_{\textit{WY}}\!=OR_{\textit{YW}}\) and \(OR_{\textit{YW}}\) can be estimated retrospectively. \(OR_{\textit{WX}}\) and \(OR_{\textit{WY}}\) reflect the causal structure of confounding. We therefore use the Odds Ratio Condition in the following sensitivity analysis.
Recall the probabilities from Table 1, obtained from a random uniform distribution
on the 7-simplex. Not all such selected tables are relevant here. We modify the simulations and now select a \(2\times 2\times 2\) table using a discrete uniform distribution on the space of all tables that collapse to Table 3. The resulting probabilities are shown in Tables 4 and 5. We may reasonably bound \(OR_{\mathit{WY}}\) below 1.4 (Ionescu-Ittu et al., 2009). We compute \(r(\mathit{X},\mathit{Y})=0.0328\) from Table 3. If we substitute \(OR_{\mathit{WY}}=1.4\) and \(r(\mathit{X},\mathit{Y})=0.0328\) into the Odds Ratio Condition we then see that \(OR_{\mathit{WX}}>5\) is required for a reversal. Such a large odds ratio may be unreasonable (Katona, 2008). Both Table 2 and Table 5 then suggest that a reversal due to adjustment for confounding by folic acid deficiency is unlikely.
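The required strength of the \(W\)-\(X\) association follows from inverting the Odds Ratio Condition; a short numerical check (illustrative Python, with the quantities taken from the text) confirms the threshold:

```python
import numpy as np

# Inverting the Odds Ratio Condition: with OR_WY bounded by 1.4 and the
# observed r(X, Y) = 0.0328, how strong must the W-X association be?
f = lambda OR: (np.sqrt(OR) - 1) / (np.sqrt(OR) + 1)

r_xy, or_wy = 0.0328, 1.4
t = r_xy / f(or_wy)                   # required value of f(OR_WX)
or_wx = ((1 + t) / (1 - t))**2        # inverse of f
print(or_wx)                          # ~5.2, i.e. OR_WX > 5 is required
```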
## 5 Discussion
The Odds Ratio Condition is derived in the appendix from the principle of least-squares. Multiple regression minimizes mean square error within the class of linear models, and regression with categorical data can be reasonable in some situations (Angrist and Pischke, 2009, Chapter 3). Using indicator variables we say that a Least-Squares Reversal occurs when \(\operatorname{sign}(\beta_{X|W})\neq\operatorname{sign}(\beta_{X})\). See Knaeble and Dutter (2017) or the appendix for further details. Probabilities for Least-Squares Reversals are not shown in the tables. Strong Simpson's Paradox is sufficient for a Least-Squares Reversal. The Pearson Correlation Condition and the Odds Ratio Condition are both necessary for a Least-Squares Reversal. A Least-Squares Reversal is neither necessary nor sufficient for an Adjusted Risk Difference Reversal.
The Cornfield Condition is necessary for an Adjusted Risk Difference Reversal if we assume both \(RR_{XY}>1\) and \(RR_{XW}>1\). More complicated Risk Ratio Conditions (Ding and Vanderweele, 2016, Result 1; with their preceding definitions) and the Risk Difference Condition (Ding and Vanderweele, 2014, Theorem 1; lines 5 and 6) are necessary for an Adjusted Risk Difference Reversal, but the improvement requires specification of trivariate quantities: \(RR_{WY}\) conditional on both levels of \(X\). This is close to outright specification of \(P(\,Y=1)\) conditional on \(X\) and \(W\), from which, with adequate specification of frequencies, the adjusted risk difference can be computed directly. Here our simulations compared only those conditions built from simpler bivariate quantities. Our analysis has been limited to the simplest case of categorical confounding by a single dichotomous covariate. Result 1 of Ding and Vanderweele (2016) is more general, and foundational for the newly introduced \(E\)-value (Ding and VanderWeele, 2017). Our simulations suggest the possibility of an improved foundation.
Our analysis has taken a possible interaction effect into account. In the absence of an interaction effect Weak Simpson's Paradox can not occur, Adjusted Risk Difference Reversal and Strong Simpson's Paradox coincide, and since Strong Simpson's Paradox implies a Least-Squares Reversal, we have that the Pearson Correlation Condition and the Odds Ratio Condition are both necessary for an Adjusted Risk Difference Reversal. With an interaction effect we can have an Adjusted Risk Difference Reversal without a Least-Squares Reversal, but according to Tables 1 and 3 this is rare, and possibly less concerning than uncertainty about residual confounding. When an interaction effect is suspected we recommend simulations conditional on an observed data set as described in Section 4.
In conclusion, we have shown how odds ratios can be used during sensitivity analysis on \(2\times 2\) contingency tables. The Odds Ratio Condition has been shown
to be as reliable as other conditions that are based on risk ratios, risk differences, an odds ratio and a risk ratio, and correlation coefficients. When the unmeasured binary third variable W is a confounder affecting both \(X\) and \(Y\), as opposed to a mediating variable affecting \(Y\) but affected by \(X\), then the odds ratio condition should be considered: \(r(\textit{X},\textit{Y})\) can be computed from the data, while \(OR_{\textit{WX}}\) and \(OR_{\textit{WY}}\) reflect the causal structure of confounding.
## Appendix: Proofs
When \(Y\) is regressed onto \(X\) we have \(\beta_{\textit{X}}\) as the least-squares slope coefficient for \(X\). We have assumed \(\beta_{\textit{X}}>0\). When \(Y\) is regressed onto \(X\) and \(W\) we have \(\beta_{\textit{X}|\textit{W}}\) as the least-squares slope coefficient for \(X\), \(\beta_{\textit{W}|\textit{X}}\) as the least-squares slope coefficient for \(W\), and \(\beta_{0}\) as the intercept. We say that a Least-Squares Reversal has occurred when \(\beta_{\textit{X}|\textit{W}}<0\).
**Proposition 5.1**.: Strong Simpson's Paradox \(\implies\) Least-Squares Reversal.
Proof.: Define \(P_{00}=P(\textit{Y}=1|\textit{X}=0,\textit{W}=0)\), \(P_{10}=P(\textit{Y}=1|\textit{X}=1,\textit{W}=0)\), \(P_{01}=P(\textit{Y}=1|\textit{X}=0,\textit{W}=1)\), and \(P_{11}=P(\textit{Y}=1|\textit{X}=1,\textit{W}=1)\). Strong Simpson's Paradox implies \(P_{00}>P_{10}\) and \(P_{01}>P_{11}\). Consider \(\beta_{0}\), \(\beta_{\textit{X}|\textit{W}}\), and \(\beta_{\textit{W}|\textit{X}}\) as variables. Define \(\hat{P}_{00}=\beta_{0}\), \(\hat{P}_{10}=\beta_{0}+\beta_{\textit{X}|\textit{W}}\), \(\hat{P}_{01}=\beta_{0}+\beta_{\textit{W}|\textit{X}}\), and \(\hat{P}_{11}=\beta_{0}+\beta_{\textit{X}|\textit{W}}+\beta_{\textit{W}| \textit{X}}\). Let \(n\) stand for the total number of observations. The sum of the squares is a function of \((\beta_{0},\beta_{\textit{X}|\textit{W}},\beta_{\textit{W}|\textit{X}})\), and it can be expressed as
\[\begin{split} SS&=nP(\textit{X}=0,\textit{W}=0)(P_{00 }(1-\beta_{0})^{2}+(1-P_{00})\beta_{0}^{2})\\ &+nP(\textit{X}=1,\textit{W}=0)(P_{10}(1-\beta_{0}-\beta_{ \textit{X}|\textit{W}})^{2}+(1-P_{10})(\beta_{0}+\beta_{\textit{X}|\textit{W }})^{2})\\ &+nP(\textit{X}=0,\textit{W}=1)(P_{01}(1-\beta_{0}-\beta_{ \textit{W}|\textit{X}})^{2}+(1-P_{01})(\beta_{0}+\beta_{\textit{W}|\textit{X }})^{2})\\ &+nP(\textit{X}=1,\textit{W}=1)(P_{11}(1-\beta_{0}-\beta_{ \textit{W}|\textit{X}}-\beta_{\textit{X}|\textit{W}})^{2}+(1-P_{11})(\beta_{0 }+\beta_{\textit{W}|\textit{X}}+\beta_{\textit{X}|\textit{W}})^{2}).\end{split}\]
Within \(SS\) each term is of the form \(k(P(1-\hat{P})^{2}+(1-P)\hat{P}^{2})=k(\hat{P}-P)^{2}+k(P-P^{2})\), where \(k>0\). (As described in Section 2 our cell counts are nonzero with probability one.) Therefore the fit improves whenever some of the distances \(\{|\hat{P}_{00}-P_{00}|,|\hat{P}_{10}-P_{10}|,|\hat{P}_{01}-P_{01}|,|\hat{P}_{11}-P _{11}|\}\) decrease, as long as the others remain unchanged.
Suppose \(\beta_{\textit{X}|\textit{W}}\geq 0\). \(\beta_{0}\) and \(\beta_{\textit{W}|\textit{X}}\) can be independently raised or lowered so as to improve the fit until \(\beta_{0}\in[P_{10},P_{00}]\) or \(\beta_{0}+\beta_{\textit{X}|\textit{W}}\in[P_{10},P_{00}]\), and \(\beta_{0}+\beta_{\textit{W}|\textit{X}}\in[P_{11},P_{01}]\) or \(\beta_{0}+\beta_{\textit{X}|\textit{W}}+\beta_{\textit{W}|\textit{X}}\in[P_{11}, P_{01}]\). Case 1: if \(\beta_{0}\in[P_{10},P_{00}]\) and \(\beta_{0}+\beta_{\textit{W}|\textit{X}}\in[P_{11},P_{01}]\) then \(\exists\epsilon>0\) such that \((\beta_{0},\beta_{\textit{X}|\textit{W}},\beta_{\textit{W}|\textit{X}})\mapsto( \beta_{0},\beta_{\textit{X}|\textit{W}}-\epsilon,\beta_{\textit{W}|\textit{X}})\) improves the fit. Case 2: if \(\beta_{0}\not\in[P_{10},P_{00}]\) (which implies \(\beta_{0}+\beta_{\textit{W}|\textit{X}}\in[P_{10},P_{00}]\)) and \(\beta_{0}+\beta_{\textit{W}|\textit{X}}\in[P_{11},P_{01}]\) then
\(\exists\epsilon>0\) such that \((\beta_{0},\beta_{X|W},\beta_{W|X})\mapsto(\beta_{0}+\epsilon,\beta_{X|W}- \epsilon,\beta_{W|X}-\epsilon)\) improves the fit. Case 3: if \(\beta_{0}\not\in[P_{10},P_{00}]\) (which implies \(\beta_{0}+\beta_{W|X}\in[P_{10},P_{00}]\)) and \(\beta_{0}+\beta_{W|X}\not\in[P_{11},P_{01}]\) (which implies \(\beta_{0}+\beta_{X|W}+\beta_{W|X}\in[P_{11},P_{01}]\)) then \(\exists\epsilon>0\) such that \((\beta_{0},\beta_{X|W},\beta_{W|X})\mapsto(\beta_{0}+\epsilon,\beta_{X|W}- \epsilon,\beta_{W|X})\) improves the fit. In all three cases an improved fit is possible. Our assumption \(\beta_{X|W}\geq 0\) must therefore be faulty.
**Proposition 5.2**.: Least-Squares Reversal \(\iff\) Pearson Correlation Condition.
Proof.: From Cohen et al. (2003, Equation 3.24) we have
\[\frac{s_{X}}{s_{Y}}\beta_{X|W}=\frac{r(X,\,Y)-r(\,W,\,X)r(\,W,\,Y)}{1-r(\,W,\, X)^{2}}, \tag{1}\]
where \(s_{X}\) and \(s_{Y}\) are standard deviations of \(X\) and \(Y\) respectively. Since \(\frac{s_{X}}{s_{Y}}>0\) and \((1-r(\,W,X)^{2})>0\) we see from (1) that
\[\beta_{X|W}<0\iff r(X,\,W)r(\,W,\,Y)>r(X,\,Y). \tag{2}\]
The right hand side of (2) is the Pearson Correlation Condition.
**Lemma 5.1**.: _Given a correlation \(r>0\) and an associated odds ratio \(OR>1\) we have_
\[r\leq(\sqrt{OR}-1)/(\sqrt{OR}+1).\]
Proof.: Define \(\mathbf{x}=[0^{a},1^{b},0^{c},1^{d}]\) and \(\mathbf{y}=[1^{a},1^{b},0^{c},0^{d}]\) from a \(2\times 2\) contingency table with nonzero cell frequencies \(a\), \(b\), \(c\), and \(d\). The exponential notation indicates repeated entries. The odds ratio is \(OR=(bc)/(ad)\). We have \(bc>ad\). For any permutation \(\sigma\) of vector entries \(r(\mathbf{x},\mathbf{y})=r(\sigma\mathbf{x},\sigma\mathbf{y})\). The correlation coefficient can be expressed as
\[r(\mathbf{x},\mathbf{y})=\frac{n\sum_{i=1}^{n}x_{i}y_{i}-\sum_{i=1}^{n}x_{i} \sum_{i=1}^{n}y_{i}}{\sqrt{n\sum_{i=1}^{n}x_{i}^{2}-(\sum_{i=1}^{n}x_{i})^{2}} \sqrt{n\sum_{i=1}^{n}y_{i}^{2}-(\sum_{i=1}^{n}y_{i})^{2}}}.\]
In terms of \(a\), \(b\), \(c\), and \(d\), we have \(r(\mathbf{x},\mathbf{y})=\)
\[\frac{(a+b+c+d)b-(b+d)(a+b)}{\sqrt{(a+b+c+d)(b+d)-(b+d)^{2}}\sqrt{( a+b+c+d)(a+b)-(a+b)^{2}}}\] \[=\frac{bc-ad}{\sqrt{(a+c)(b+d)(a+b)(c+d)}}\] \[=\frac{(bc)/(ad)-1}{\sqrt{(1+c/a)(1+b/d)(1+b/a)(1+c/d)}}\] \[=\frac{OR-1}{\sqrt{(1+OR+c/a+b/d)(1+OR+b/a+c/d)}}\] \[=\frac{OR-1}{\sqrt{(1+OR)^{2}+(c/a+b/d+b/a+c/d)(1+OR)+(c/a+b/d)(b/ a+c/d)}}\] \[\leq\frac{OR-1}{\sqrt{(1+OR)^{2}+4\sqrt{(bc)/(ad)}(1+OR)+4(bc)/( ad)}}\] \[=\frac{OR-1}{\sqrt{(1+OR)^{2}+4\sqrt{OR}(1+OR)+4OR}}\] \[=\frac{OR-1}{\sqrt{(OR+2\sqrt{OR}+1)^{2}}}=\frac{OR-1}{OR+2\sqrt{ OR}+1}=\frac{(\sqrt{OR}+1)}{(\sqrt{OR}+1)}\frac{(\sqrt{OR}-1)}{(\sqrt{OR}+1)}= \frac{\sqrt{OR}-1}{\sqrt{OR}+1}.\]
The inequality follows from repeated application of \(u^{2}+v^{2}\geq 2uv\).
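The bound is easy to probe numerically; the following sketch (an illustration, not part of the proof) samples random tables with nonzero cell counts and checks \(r\leq(\sqrt{OR}-1)/(\sqrt{OR}+1)\):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10_000):
    a, b, c, d = rng.integers(1, 50, 4)
    if b * c <= a * d:          # the lemma assumes OR = (bc)/(ad) > 1
        continue
    OR = (b * c) / (a * d)
    # correlation of x = [0^a, 1^b, 0^c, 1^d] and y = [1^a, 1^b, 0^c, 0^d]
    r = (b * c - a * d) / np.sqrt((a + c) * (b + d) * (a + b) * (c + d))
    assert r <= (np.sqrt(OR) - 1) / (np.sqrt(OR) + 1) + 1e-12
print("bound holds on all sampled tables")
```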
**Proposition 5.3**.: Pearson Correlation Condition \(\implies\) Odds Ratio Condition.
Proof.: We may without loss of generality swap the labels on the categories for \(W\) so as to ensure a positive association between \(W\) and \(Y\). If the Pearson Correlation Condition holds then \(r(\textit{X},\textit{W})\) must be positive as well. Observe then that the odds ratios \(OR_{\textit{XW}}\) and \(OR_{\textit{WY}}\) are both greater than one. Lemma 5.1 thus applies and we have
\[r(\textit{X},\textit{W})r(\textit{W},\textit{Y})>r(\textit{X}, \textit{Y})\implies\] \[\left\{(\sqrt{OR_{\textit{XW}}}-1)/(\sqrt{OR_{\textit{XW}}}+1) \right\}\left\{(\sqrt{OR_{\textit{WY}}}-1)/(\sqrt{OR_{\textit{WY}}}+1) \right\}>r(\textit{X},\textit{Y})\]
The preceding propositions together prove the following theorem.
**Theorem 5.1** (a necessary condition for Simpson's paradox)**.**
Strong Simpson's Paradox \(\implies\) Odds Ratio Condition. |
2301.06191 | Elizabethan vortices | Radial solutions to the elliptic sinh-Gordon and Tzitzeica equations can be
interpreted as Abelian vortices on certain surfaces of revolution. These
surfaces have a conical excess angle at infinity (in a way which makes them
similar to Elizabethan ruff collars). While they can not be embedded in the
Euclidean 3-space, we will show that they can be globally embedded in the
hyperbolic space. The existence of these hyperbolic embeddings follows from the
asymptotic analysis of a Painleve III ODE. | Maciej Dunajski, Nora Gavrea | 2023-01-15T21:11:59Z | http://arxiv.org/abs/2301.06191v3 | # Elizabethan Vortices
###### Abstract.
Radial solutions to the elliptic Sinh-Gordon and Tzitzeica equations can be interpreted as Abelian vortices on certain surfaces of revolution. These surfaces have a conical excess angle at infinity (in a way which makes them similar to Elizabethan ruff collars). While they can not be embedded in the Euclidean 3-space, we will show that they can be globally embedded in the hyperbolic space. The existence of these hyperbolic embeddings follows from the asymptotic analysis of a Painleve III ODE.
_Dedicated to Nick Manton on the occasion of his 70th birthday._
## 1. Introduction
The Abelian Higgs model at critical coupling admits static soliton solutions called vortices [10]. These vortices can be constructed on any surface \(\Sigma\) with a Riemannian metric \(g\), and arise from solutions of the Taubes equation [15]
\[\Delta u=e^{u}-1. \tag{1.1}\]
Here \(u\) is a function on \(\Sigma\), and \(\Delta\) is the Laplacian of \(g\). This equation is valid outside the small discs enclosing the vortex positions, where \(u\) has logarithmic singularities. If \(\Sigma\) is non-compact, then \(u\) is required to tend to zero asymptotically, away from its singularities.
The Taubes equation is non-integrable on the plane \(\Sigma=\mathbb{R}^{2}\) equipped with the flat metric. It was pointed out by Witten [17] that (1.1) is integrable on the hyperbolic space with constant Gaussian curvature equal to \(-1/2\), as then it can be transformed into the Liouville equation. There are other vortex-type equations [9] (see also [4, 12, 7]), where the RHS of (1.1) is replaced by \(C_{0}-C_{1}e^{u}\) for some constants \((C_{0},C_{1})\). These equations also reduce to the Liouville equation on spaces of constant curvature, which now depends on the combination of \((C_{0},C_{1})\). Like Witten's hyperbolic ansatz, the constant curvature integrable cases all originate from symmetry reductions of self-dual Yang-Mills equations on a four manifold \(\Sigma\times S\), where \(S\) is a surface of constant Gaussian curvature equal to minus the Gaussian curvature of \(\Sigma\)[4].
There are two more occurrences of integrability in the Taubes equation, which go beyond the Liouville equation, but instead transform (1.1) into the elliptic Sinh-Gordon equation, or the elliptic Tzitzeica equation [5]. To see how these equations arise, consider the metric \(g\) in the complex isothermal coordinates, where \(g=\Omega g_{0}\) with \(g_{0}=dzd\overline{z}\), and
allow the conformal factor \(\Omega:\Sigma\to\mathbb{R}^{+}\) to depend on \(u\). Choosing \(\Omega=e^{-u/2}\) yields the Sinh-Gordon equation and \(\Omega=e^{-2u/3}\) gives the Tzitzeica equation
\[\Delta_{0}\Big{(}\frac{u}{2}\Big{)}=\begin{cases}\sinh\Big{(}\frac{u}{2}\Big{)} \\ e^{u/3}-e^{-2u/3}\end{cases}\quad\text{ where }\quad\Delta_{0}=4\frac{\partial^{2}}{ \partial z\partial\overline{z}}. \tag{1.2}\]
If \(u\) satisfies appropriate boundary conditions, then the corresponding vortex can be interpreted as a surface, with the Higgs field and the Abelian magnetic field encoded in the intrinsic Riemannian data of \(g\).
The purpose of this paper is to visualise these vortices by constructing isometric embeddings of \((\Sigma,g)\). We shall show (see SS3) that for any non-negative integer \(N\), there exist solutions of the SG and Tzitzeica equation corresponding to circularly symmetric \(N\)-vortex solutions. The corresponding surfaces can be embedded in \(\mathbb{R}^{3}\) as surfaces of revolution with conical singularities at the position of the vortex. To resolve these conical singularities we shall construct ramified covers of \(\Sigma\), where the vortex metric is regular everywhere with the asymptotic behaviour
\[g\sim\begin{cases}dR^{2}+R^{2}d\psi^{2}\quad\text{as}\quad R\to 0\\ dr^{2}+c_{N}^{2}r^{2}d\psi^{2}\quad\text{as}\quad r\to\infty\end{cases}\]
where \(r\) and \(R\) are radial coordinates related by \(r=R^{1+N/2}\) in the SG case, and \(r=R^{1+2N/3}\) in the Tzitzeica case, and \(c_{N}\) is a constant equal to \(1+N/2\) in the SG case, and \(1+2N/3\) in the Tzitzeica case. The metric is regular at the position \(R=0\) of the vortex, and is locally asymptotically flat with the excess angle depending on \(N\). Surfaces with such asymptotic behaviour do not admit isometric embeddings in \(\mathbb{R}^{3}\), at least as surfaces of revolution, but it may be possible to construct global embeddings in \(\mathbb{R}^{3}\) such that the extrinsic curvature only admits a discrete isometry group. The resulting surfaces asymptotically look like the Elizabethan ruffs (Figure 1a), which can be locally unfolded into a flat region of a plane, but contain too much material to do it globally: the circumference of circles exceeds \(2\pi\) times the radius.
**Figure 1.**_Asymptotic excess angle in the Elizabethan ruffs (1a), and green algae (1b)._
Motivated by this analogy we shall refer to the surfaces resulting from (1.2) as Elizabethan vortices. Other examples where the asymptotic circumference growth prevents the existence of circularly symmetric embeddings are certain types of algae (Figure 1b). The algae are in fact a better analogy to what happens with Elizabethan vortices, as (unlike the hollow Elizabethan ruffs) their excess angle changes continuously with the radius. In §4.2 we shall show that the Elizabethan vortices embed as regular surfaces of revolution around a hyperbolic geodesic in the hyperbolic three-space. The existence and regularity of these embeddings are established in Theorems 4.3 and 6.2 which are our main results.
The mathematical theory of Abelian vortices and their moduli spaces has, over the last three decades, been advanced and developed by Nick Manton and his students and collaborators. It gives us great pleasure to dedicate this paper to Nick on the occasion of his seventieth birthday.
### Acknowledgements
We are grateful to Rob Kusner, Nick Manton, and Marc Troyanov for useful discussions and correspondence. Nora Gavrea is grateful to have been supported by a CMS Bursary grant from the Faculty of Mathematics, University of Cambridge.
## 2. The Taubes equation
Let \((\Sigma,g)\) be a two-dimensional Riemannian manifold with the orientation given by the Kahler two-form \(\omega_{\Sigma}\), and let \(\mathcal{L}\to\Sigma\) be a Hermitian line bundle equipped with a \(U(1)\) connection \(A\), and a global \(C^{\infty}\) section \(\phi\) called the Higgs field. We shall be interested in pairs \((A,\phi)\) which are global minimizers of the Ginzburg-Landau energy functional at the critical coupling
\[E[A,\phi]=\frac{1}{2}\int_{\Sigma}\Big{(}|D\phi|^{2}+|F|^{2}+\frac{1}{4}(1-| \phi|^{2})^{2}\Big{)}\omega_{\Sigma} \tag{2.1}\]
where \(F=dA\), and \(D\phi=d\phi-iA\phi\). Completing the square in (2.1) shows that \(E\geq\pi N\) where the integer
\[N=\frac{1}{2}\int_{\Sigma}F\]
is the vortex number equal to the first Chern number of \(\mathcal{L}\), and the inequality is saturated iff \((A,\phi)\) satisfy the 1st order Bogomolny equations
\[\overline{\mathcal{D}}\phi=0,\quad F=\frac{1}{2}(1-|\phi|^{2})\omega_{\Sigma}\]
where \(\overline{\mathcal{D}}\) is the anti-holomorphic part of the covariant derivative \(D\). The Bogomolny equations, and the energy functional (2.1) are invariant under gauge transformations
\[\phi\to e^{i\alpha}\phi,\quad A\to A+d\alpha,\]
where \(\alpha:\Sigma\to\mathbb{R}\).
If \(z=x+iy\) is a holomorphic isothermal coordinate, such that the metric on \(\Sigma\) is
\[g=\Omega(z,\overline{z})dzd\overline{z}\]
and, in a trivialisation of \(\mathcal{L}\) the connection is given by \(A=A_{z}dz+\overline{A_{z}}d\overline{z}\), then the first Bogomolny equation can be solved to give \(\overline{A_{z}}=-i\phi^{-1}\partial_{\overline{z}}\phi\). Setting \(\phi=e^{u/2+i\chi}\), where \(u\) and \(\chi\) are real functions on \(\Sigma\), with \(u\) being gauge-invariant, reduces the second Bogomolny equation to a single 2nd order PDE called the Taubes equation [15]
\[\Delta_{0}u-\Omega(e^{u}-1)=0,\quad\text{where}\quad\Delta_{0}=4\partial_{z} \partial_{\overline{z}}. \tag{2.2}\]
This equation is valid outside small discs enclosing the logarithmic singularities of \(u\) corresponding to the positions of vortices where the Higgs field vanishes. If \(|\phi|\) vanishes at \(z_{0}\) with multiplicity \(N_{0}\) then near \(z_{0}\) the function \(u\) has an expansion of the form
\[u=2N_{0}\ln|z-z_{0}|+\text{const}+\frac{1}{2}\overline{b}\cdot(z-z_{0})+\frac {1}{2}b\cdot(\overline{z}-\overline{z_{0}})+\ldots\]
where the coefficients \(b,\overline{b}\) depend on \(z_{0}\) and \(\overline{z_{0}}\). The seminal result of Taubes is that the moduli space of solutions to (2.2) with vortex number \(N\) is a manifold of real dimension \(2N\). In other words, specifying the positions of vortices, and their multiplicities determines the solution uniquely.
Assume that the underlying surface admits a \(U(1)\) isometry, so that \(\Omega=\Omega(R)\) where \(R^{2}\equiv|z|^{2}\). If the solution of the Taubes equation only depends on \(R\) in a way that \(\phi\) vanishes at \(R=0\) with multiplicity \(N\), then (2.2) reduces to an ODE
\[\frac{d^{2}u}{dR^{2}}+\frac{1}{R}\frac{du}{dR}+\Omega(1-e^{u})=0. \tag{2.3}\]
The recursion relations then give
\[u=2N\ln R+b_{0}+b_{1}R+b_{2}R^{2}+\ldots\,\quad\text{where}\quad b_{1}=0,\quad b _{2}=-\frac{\Omega(0)}{4}. \tag{2.4}\]
## 3. The Sinh-Gordon vortex
Taking \(\Omega=\exp{(-u/2)}\) in the Taubes equation (2.2) yields the elliptic Sinh-Gordon equation
\[\Delta_{0}\Big{(}\frac{u}{2}\Big{)}=\sinh{\Big{(}\frac{u}{2}\Big{)}}. \tag{3.1}\]
This gives an interpretation of the metric \(g\) as an isolated vortex. The magnetic field and the Higgs field have an intrinsic geometric interpretation as the Riemann curvature two-form and the (inverse of) conformal factor with a complex phase \(\chi\), i.e.
\[g=e^{-u/2}dzd\overline{z},\quad|\phi|^{2}=e^{u},\quad A=\frac{i}{2}(\partial- \overline{\partial})u+d\chi.\]
Choosing a spin-frame \(\mathbf{e}^{1}=e^{-u/4}dx,\mathbf{e}^{2}=e^{-u/4}dy\) such that \(g=\delta_{ij}\mathbf{e}^{i}\otimes\mathbf{e}^{j}\), the connection and curvature forms of \(g\) can be read off from the Cartan structure equations
\[d\mathbf{e}^{i}+\Gamma^{i}_{\ j}\wedge\mathbf{e}^{j}=0,\quad\text{and}\quad R ^{i}_{\ j}=d\Gamma^{i}_{\ j},\]
and are given by
\[\Gamma^{1}{}_{2} = \frac{1}{4}(u_{x}dy-u_{y}dx)=-A+d\chi,\] \[R^{1}{}_{2} = \frac{1}{2}(u_{xx}+u_{yy})dx\wedge dy=-F.\]
To reinterpret the surface \((\Sigma,g)\) as a vortex we need to construct a solution to (3.1) which satisfies the boundary conditions (2.4). To do that we shall look for radial solutions of the form \(u=u(r)\). In this case the ODE
\[u^{\prime\prime}+\frac{1}{r}u^{\prime}=e^{u/2}-e^{-u/2} \tag{3.2}\]
resulting from (3.1) is equivalent to the Painleve III equation with parameters \((0,0,1,-1)\) (see Appendix). In [5] the Painleve connection formulae of [11] relating the asymptotic solution to Painleve III at \(r=0\) and \(r=\infty\) have been used to show that
\[\begin{split}u(r)&\sim 4\sigma\ln r+4\ln\beta-\frac{1}{4(1-\sigma)^{2}\beta^{2}}r^{2-2\sigma}\quad\text{for}\quad r\to 0\\ &\sim-8\kappa K_{0}(r)\quad\text{for}\quad r\to\infty\end{split} \tag{3.3}\]
with the connection formulae, valid for \(0\leq\kappa\leq\pi^{-1}\)
\[\sigma(\kappa)=\frac{2}{\pi}\arcsin\left(\pi\kappa\right),\quad\beta(\kappa) =2^{-3\sigma}\frac{\Gamma((1-\sigma)/2)}{\Gamma((1+\sigma)/2)} \tag{3.4}\]
and so \(0\leq\sigma\leq 1\). To construct an \(N\)-vortex solution on a regular surface for an arbitrary \(N\) take1
Footnote 1: See [3] for another idea based on the Baptista superposition rule [1].
\[\sigma=\frac{N}{N+2}. \tag{3.5}\]
Note that near \(r=0\) the power series solution (3.3) does not take the form (2.4). This is because the conformal factor \(\Omega=e^{-u/2}\sim r^{-2\sigma}\) is not regular at \(r=0\). To resolve this singularity define the new coordinate \(R\) by
\[r=R^{1+\frac{N}{2}}. \tag{3.6}\]
Now the asymptotic behaviour of \(u\) and the metric near \(R=0\) are
\[u=2N\ln R+4\ln\beta-\frac{(N+2)^{2}}{16\beta^{2}}R^{2}+\ldots \tag{3.7}\]
and
\[g\sim B_{N}\Big{(}dR^{2}+\Big{(}\frac{2}{N+2}\Big{)}^{2}R^{2}d\theta^{2}\Big{)},\]
where \(B_{N}\) is a constant dependent on \(N\). This is in agreement with (2.4). This metric has a conical singularity with the deficit angle \(2\pi N/(N+2)\). To obtain a regular surface
we pass to a ramified covering surface \(\Sigma_{N}\) of \(\Sigma\) taking the period of \(\theta\) to be \((2+N)\pi\), so that
\[\psi=\frac{2\theta}{2+N}\]
is periodic with period \(2\pi\). Now, near \(R=0\) the metric on \(\Sigma_{N}\) is regular but near \(R=\infty\) it has an excess angle as then \(u\to 0\) and up to exponentially small corrections
\[g_{N} \sim B_{N}(dR^{2}+R^{2}d\psi^{2})\quad\text{as}\quad R\to 0\] \[g_{N} \sim dr^{2}+\Big{(}\frac{N+2}{2}\Big{)}^{2}r^{2}d\psi^{2}\quad\text{ as}\quad r\to\infty. \tag{3.8}\]
The metric \(g_{N}\) is curved between \(R=0\) and \(R=\infty\) with the Gaussian curvature \(K=(e^{u}-1)/4\) interpolating between \(-1/4\) and \(0\).
The following Lemma uses the maximum principle to establish the monotonicity of \(u\).
**Lemma 3.1**.: _Let \(u=u(r)\) satisfy the radial Sinh-Gordon equation (3.2). Then \(u^{\prime}(r)>0\) for \(r\in(0,\infty)\)._
**Proof.** Near \(r=0\) we have \(u^{\prime}\sim 4N/(r(N+2))>0\), so for the statement of the Lemma not to be true there must exist \(r_{0}\) s.t. \(u^{\prime}(r_{0})=0\). Let us assume that \(r_{0}\) is the smallest value of \(r\) such that \(u^{\prime}(r)=0\). Let us also assume that \(u^{\prime\prime}(r_{0})\neq 0\). If \(u(r_{0})>0\) then \(u\) must admit a local maximum with \(u^{\prime\prime}(r_{0})<0\) and we get a contradiction as the RHS of (3.2) is positive, but the LHS is negative. If \(u(r_{0})<0\) then there must also exist an \(r_{1}\) such that \(u(r_{1})\) is a local minimum (as otherwise the function could not tend to \(0\)). Therefore \(u^{\prime\prime}(r_{1})>0\), and now the RHS (3.2) is negative but the LHS is positive. If \(u^{\prime\prime}(r_{0})=0\) then we also reach a contradiction regardless of the sign of \(u(r_{0})\) as long as \(u(r_{0})\neq 0\). Finally if \(u(r_{0})=0\) and \(u^{\prime}(r_{0})=0\) then the initial value problem for (3.2) at \(r_{0}\) has a solution \(u\equiv 0\) but we know that our solution is non-constant.
\(\square\)
**Figure 2.**_Profile of \(|\phi|^{2}=e^{u}\) for vortex numbers \(N=1,2,3\)._
The profiles of \(u\) in Figure 2, and the profiles of the embeddings in Figures 3, 5, 6 were obtained numerically. When solving (2.3), subtract the logarithmic singularity by setting \(u(R)=h(R)+2N\ln{(R)}\), and shoot on \(b_{0}\) with the initial conditions \(h(0)=b_{0},h^{\prime}(0)=0\). In the case of the Sinh-Gordon and Tzitzeica vortices, the resulting ODE has a removable singularity at \(r=0\). To get around this, either change the coordinates to \(R\) as in (3.6) and (6.2), or specify the initial conditions at \(r=\epsilon\) (we took \(\epsilon=10^{-30}\)). We used both approaches, and applied the MATLAB ODE solver ode45, which is based on a Runge-Kutta method with a variable time-step.
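For readers who wish to reproduce these profiles, the following Python sketch is our translation of the procedure above to scipy's solve_ivp; the tolerances, the value \(\epsilon=10^{-6}\) and the bisection bracket are assumptions, not values from the original computation. It shoots on \(b_{0}=4\ln\beta\) for the radial Sinh-Gordon equation (3.2) and compares the result with the connection formula (3.4):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma

N = 1
sigma = N / (N + 2)

def rhs(r, y):
    # y = (u, u'); radial Sinh-Gordon equation (3.2)
    u, v = y
    return [v, np.exp(u / 2) - np.exp(-u / 2) - v / r]

def shoot(lnbeta, eps=1e-6, rmax=25.0):
    beta = np.exp(lnbeta)
    c2 = 1.0 / (4 * (1 - sigma) ** 2 * beta ** 2)   # coefficient in (3.3)
    u0 = 4 * sigma * np.log(eps) + 4 * np.log(beta) - c2 * eps ** (2 - 2 * sigma)
    v0 = 4 * sigma / eps - c2 * (2 - 2 * sigma) * eps ** (1 - 2 * sigma)
    stop = lambda r, y: abs(y[0]) - 30.0            # halt once u clearly diverges
    stop.terminal = True
    return solve_ivp(rhs, (eps, rmax), [u0, v0], events=stop,
                     rtol=1e-10, atol=1e-12)

lo, hi = -2.0, 2.0      # bisect on ln(beta); u blows up to +infinity if beta too big
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if shoot(mid).y[0, -1] > 0 else (mid, hi)

sol = shoot(lo)         # (approximate) separatrix: u -> 0 as r -> infinity
r, u = sol.t, sol.y[0]
beta_exact = 2 ** (-3 * sigma) * gamma((1 - sigma) / 2) / gamma((1 + sigma) / 2)
print(np.exp(lo), beta_exact)   # shot beta vs. connection formula (3.4)
```

The separatrix arrays r, u produced here are reused in the embedding sketches below.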
### Large \(r\) asymptotics
For large \(r\) we approximate \(\sinh(u/2)\sim u/2\), and deduce that \(u\) is proportional to a Bessel function of order \(0\).
\[u^{\prime\prime}+\frac{1}{r}u^{\prime}=u,\quad\text{so that}\quad u=cK_{0}(r) \sim c\sqrt{\frac{\pi}{2r}}e^{-r}\quad\text{as}\quad r\to\infty, \tag{3.9}\]
where \(c\) is a constant. In the particle interpretation of Speight [13] a vortex asymptotically behaves as a point-like object carrying both a scalar charge, and a magnetic dipole moment. The strength of this point-like charge is given by the absolute value of the constant \(c\). In the Sinh-Gordon case we can read-off the exact value of this constant from the connection formulae (3.4) with \(\sigma=N/(N+2)\) which yields
\[|c|=\frac{8}{\pi}\sin{\Big{(}\frac{\pi N}{2(N+2)}\Big{)}}.\]
For comparison, the strength of a \(1\)-vortex on the plane can only be computed numerically, and we found that
\[|c|=\lim_{r\to\infty}\frac{u(r)}{K_{0}(r)}\sim 3.41.\]
In the original paper of Speight [13] this numerical value was computed to be 3.36.
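Under the same assumptions, the point-charge strength can be read off from the numerical separatrix computed in the previous sketch and compared with the exact value; the large-\(r\) cutoff below is an arbitrary choice:

```python
from scipy.special import k0

mask = r > 10                            # large-r window where u ~ c K_0(r)
c_num = np.mean(u[mask] / k0(r[mask]))   # c < 0 here, as in (3.3)
c_exact = 8 / np.pi * np.sin(np.pi * N / (2 * (N + 2)))
print(abs(c_num), c_exact)
```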
## 4. Sinh-Gordon isometric embeddings
We turn to visualizing \(\Sigma\) and \(\Sigma_{N}\) as surfaces. The original surface \(\Sigma\) can be embedded isometrically in \(\mathbb{R}^{3}\) as a surface of revolution which is regular apart from the conical singularity at \(R=0\). The ramified covering surfaces \(\Sigma_{N}\) can also be embedded as regular surfaces of revolution in the hyperbolic space. We shall discuss these embeddings in turn.
### Euclidean embedding
**Proposition 4.1**.: _There exists an isometric embedding \(\iota:\Sigma\to\mathbb{R}^{3}\) of the Sinh-Gordon \(N\)-vortex as a surface of revolution which is asymptotically flat, and regular everywhere away from the conical singularity._
**Proof.** We shall prove this by constructing this embedding explicitly. Consider the flat metric on \(\mathbb{R}^{3}\) in the cylindrical coordinates \((\rho,\theta,z)\) and pull it back to \(\Sigma\). Assuming that
the embedding preserves the \(U(1)\) symmetry, we must have \(z=z(r)\) and \(\rho=\rho(r)\) so that
\[d\rho^{2}+\rho^{2}d\theta^{2}+dz^{2}=[(\rho^{\prime})^{2}+(z^{\prime})^{2}]dr^{2} +\rho^{2}d\theta^{2}=e^{-u/2}(dr^{2}+r^{2}d\theta^{2})\]
so that

\[\rho=e^{-u/4}r,\quad z^{\prime}=e^{-u/4}\sqrt{\frac{1}{2}ru^{\prime}\Big{(}1-\frac{1}{8}ru^{\prime}\Big{)}}. \tag{4.1}\]
For this to work we need the argument of the square root to be non-negative. As \(u^{\prime}>0\) (see Lemma 3.1), a problem can only arise if \(u^{\prime}>8/r\). To rule this out note that (3.3) and (3.5) imply
\[\lim_{r\to 0}ru^{\prime}=\frac{4N}{N+2},\quad\text{and}\quad\lim_{r\to \infty}ru^{\prime}=0.\]
To compute the maximum of \(ru^{\prime}\) we look at its critical points
\[0=(ru^{\prime})^{\prime}=r(u^{\prime\prime}+u^{\prime}/r)=r(e^{u/2}-e^{-u/2})\]
so \(r=0\) or \(u=0\) (so \(r=\infty\)). We conclude that \(ru^{\prime}\leq 4N/(N+2)<8\). Therefore a global embedding in \(\mathbb{R}^{3}\) exists away from the conical singularity at \(r=0\).
Expressing the conformal factor near \(r=0\) in terms of \(R\) yields
\[g\sim\Big{(}\frac{(N+2)^{2}}{4\beta^{2}}+\frac{(N+2)^{4}R^{2}}{128\beta^{4}} \Big{)}\Big{(}dR^{2}+\Big{(}\frac{2}{N+2}\Big{)}^{2}R^{2}d\theta^{2}\Big{)}, \quad\text{where}\quad r=R^{1+\frac{N}{2}}. \tag{4.2}\]
The embedding equations (4.1) can be solved for small \(R\), which yields
\[\rho = \frac{R}{\beta}+\frac{(N+2)^{2}}{64\beta^{3}}R^{3}+\ldots,\] \[z = \frac{\sqrt{N(N+4)}}{2\beta}R+\frac{\sqrt{N(N+4)}(N+2)^{2}(N^{2} +4N-8)}{384\beta^{3}(N+4)N}R^{3}+\ldots\]
in agreement with (4.2). Figure 3 contains plots of the embedded surfaces for \(N=1,2,3\).
**Figure 3.**_Isometric embeddings of \(\Sigma\) in \(\mathbb{R}^{3}\) for \(N=1,2,3\) coloured by the Gaussian curvature._
The conical singularity of the embedding is reflected in the blow-up of the mean curvature \(H\) at \(r=0\). The Gaussian curvature \(K\) is everywhere regular (Figure 4). The two curvatures are given by
\[H=\frac{e^{3u/2}}{2r^{2}}(\rho z^{\prime}+\rho^{\prime}z^{\prime\prime}-\rho^ {\prime\prime}z^{\prime}),\quad K=\frac{1}{4}(e^{u}-1).\]
**Figure 4.**_Mean and Gaussian curvatures of the isometric embeddings in \(\mathbb{R}^{3}\) for \(N=2\)._
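A minimal sketch of the quadrature behind Proposition 4.1, reusing the arrays r, u from the shooting computation above (the gradient/trapezoid discretization is our choice, not the paper's):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def euclidean_profile(r, u):
    """Surface-of-revolution profile (rho, z) in R^3 from (4.1)."""
    up = np.gradient(u, r)
    rho = np.exp(-u / 4) * r
    # clip guards the square root against roundoff; ru' <= 4N/(N+2) < 8 by the proof
    integrand = np.exp(-u / 4) * np.sqrt(
        np.clip(0.5 * r * up * (1 - r * up / 8), 0.0, None))
    z = cumulative_trapezoid(integrand, r, initial=0.0)
    return rho, z
```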
### Hyperbolic embeddings
The regular ramified surface \(\Sigma_{N}\) can not be embedded in \(\mathbb{R}^{3}\) as a surface of revolution. The circumference of circles centered at \(R=0\) in \(\Sigma_{N}\) grows faster than their radii, which makes such an embedding impossible. To see it explicitly, consider the asymptotic form (3.8) of the metric on \(\Sigma_{N}\). Its pull-back to \(\mathbb{R}^{3}\) with cylindrical polars yields \(\rho=r(N+2)/2\), so that
\[(z^{\prime})^{2}=1-\Big{(}\frac{N+2}{2}\Big{)}^{2}\]
which does not have solutions if \(z\) is real, and \(N>0\).
These surfaces can however, for any \(N\), be properly embedded in the hyperbolic space \(\mathbb{H}^{3}\) as surfaces of revolution around a hyperbolic geodesic. Before establishing this fact (Theorem 4.3) we shall demonstrate that a local embedding of a surface of revolution in the hyperbolic space is unique, up to a hyperbolic isometry. In what follows, we shall use two models of the hyperbolic 3-space with scalar curvature \(-6/L^{2}\): the upper half-space model with the metric
\[G_{\mathbb{H}^{3}}=L^{2}\frac{dz^{2}+d\rho^{2}+\rho^{2}d\psi^{2}}{z^{2}}, \quad z>0 \tag{4.3}\]
and the Poincare ball model with the metric
\[G_{B}=4L^{2}\frac{dw^{2}+w^{2}(d\chi^{2}+\sin^{2}\chi\,d\psi^{2})}{(1-w^{2})^{2}}. \tag{4.4}\]
These models are related by
\[z=\frac{1-w^{2}}{1+w^{2}-2w\cos\chi},\quad\rho=\frac{2w\sin\chi}{1+w^{2}-2w\cos \chi}. \tag{4.5}\]
**Proposition 4.2**.: _A Riemannian surface with \(U(1)\) isometry given in the arc-length parametrisation as_
\[g=dr^{2}+f(r)^{2}d\psi^{2}. \tag{4.6}\]
_admits a local embedding as a surface of revolution in the hyperbolic space \(\mathbb{H}^{3}\) around a hyperbolic geodesic \(\gamma\) in the range of \(r\) given by_
\[L^{2}(1-(f^{\prime})^{2})+f^{2}\geq 0. \tag{4.7}\]
_Given \(\gamma\), this embedding is unique up to a discrete hyperbolic isometry._
**Proof.** Considering the pull back of the hyperbolic metric (4.3) gives two hyperbolic embeddings in \(\mathbb{H}^{3}\)
\[z_{\pm}(r)=\exp\Big{(}\int_{0}^{r}\frac{-ff^{\prime}\pm\sqrt{L^{2}(1-(f^{ \prime})^{2})+f^{2}}}{L^{2}+f^{2}}du\Big{)},\quad\rho_{\pm}(r)=\frac{z_{\pm}( r)f(r)}{L}, \tag{4.8}\]
which are well defined as long as (4.7) holds.
To show that the two embeddings \((z_{\pm}(r),\rho_{\pm}(r))\) are related by a discrete hyperbolic isometry, consider instead the pull back of the Poincare ball metric (4.4) which gives
\[f=2L\frac{w\sin\chi}{1-w^{2}},\quad 1=4L^{2}\frac{(w^{\prime})^{2}+w^{2}{\chi^{ \prime}}^{2}}{(1-w^{2})^{2}}.\]
Set \(Z=we^{i\chi}\), then
\[f=L\frac{Z-\bar{Z}}{i(1-|Z|^{2})},\quad 1=\frac{4L^{2}|Z^{\prime}|^{2}}{(1-|Z|^{ 2})^{2}}\]
so that if \(Z\) is a solution, then so is \(-\bar{Z}\). This corresponds to \(\chi\) and \(\pi-\chi\).
As an example, consider \(f=kr\) which is the \(k\)-fold covering of the plane. The hyperbolic embeddings (4.8) exist as long as
\[r^{2}>\frac{L^{2}(k^{2}-1)}{k^{2}}\]
which can be made as small as we want by the choice of \(L^{2}\), but will not cover \(r=0\). Thus the \(k\)-fold cover of a plane can not be embedded in \(\mathbb{H}^{3}\) globally. On the other hand a surface interpolating between a plane, and its \(k\)th cover with a profile function
\[f=r\sqrt{\frac{1+k^{2}r^{2}}{1+r^{2}}}\]
admits a global embedding in \(\mathbb{H}^{3}\).
**Theorem 4.3**.: _Let \((\mathbb{H}^{3},G)\) be a hyperbolic space with the metric \(G\) with Ricci scalar \(-6/L^{2}\). For any \(L^{2}\in(0,4]\) there exists a regular isometric embedding \(\iota:\Sigma_{N}\to\mathbb{H}^{3}\) which preserves the \(U(1)\) symmetry of \(g_{N}\), and is unique up to a hyperbolic isometry._
**Proof.** We shall consider \(\Sigma_{N}\) as a surface of revolution around a geodesic \(\gamma\) in \(\mathbb{H}^{3}\). We shall work in the upper half-space model \(z>0\) with the metric \(G_{\mathbb{H}^{3}}\) given by (4.3) and choose \(\gamma\) to be the \(z\) semi-axis as in the proof of Proposition 4.2. The pull back of \(G\) to \(\Sigma_{N}\)
\[\iota^{*}(G)=e^{-u/2}\Big{(}dr^{2}+\Big{(}\frac{N+2}{2}\Big{)}^{2}r^{2}d\psi^{ 2}\Big{)}\]
yields the embedding formulae
\[z_{\pm} = z_{0}\exp\Big{(}\int_{0}^{r}\frac{(N+2)^{2}(t^{2}u^{\prime}-4t) e^{-u/2}\pm\sqrt{e^{-u/2}P}}{4(N+2)^{2}t^{2}e^{-u/2}+16L^{2}}dt\Big{)},\] \[\rho_{\pm} = \frac{N+2}{2L}rz_{\pm}e^{-u/4}, \tag{4.9}\]
where
\[P\equiv 16t^{2}e^{-u/2}(N+2)^{2}-L^{2}((N+2)tu^{\prime}-4N)((N+2)tu^{\prime}-4N- 16).\]
For this embedding to be regular we need \(P\geq 0\). First compute the asymptotic behaviour
\[P \sim 16((N+2)^{2}t^{2}-N(N+4)L^{2})\to\infty\quad\text{as}\quad t\to\infty\] \[P \to 0\quad\text{as}\quad r\to 0.\]
Next look for the critical points of \(P\) to show that it does not have any and therefore stays positive between \(0\) and \(\infty\). Using (3.2) we find
\[P^{\prime}=2r(N+2)^{2}(L^{2}e^{u/2}+(4-L^{2})e^{-u/2})(4-ru^{\prime})\]
which clearly vanishes at \(r=0\) and is non-negative if \(L^{2}\leq 4\). Moreover, if \(L^{2}=4\), then we avoid a blow up of \(P^{\prime}\) at \(r=0\) if \(N>2\). We shall later set \(L^{2}=4\) for all \(N\), as this value of \(L\) turns out to be optimal in the sense which we shall explain after the proof.
We claim that \(Q\equiv 4-ru^{\prime}>0\). Indeed, \(Q(0)=8/(N+2)\) and \(\lim_{r\to\infty}Q=4\) and
\[Q^{\prime}=-r(e^{u/2}-e^{-u/2})\]
which is zero only if \(r=0\) or \(u=0\). But in the proof of Lemma 3.1 we have shown that \(u\neq 0\) for finite \(r\). The two signs in (4.9) correspond to two embeddings related by a hyperbolic isometry as shown in Proposition 4.2.
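Continuing the previous sketches (same imports, same arrays r, u), the hyperbolic profiles can be generated by direct quadrature of (4.9); we set \(z_{0}=1\) using the scaling isometry:

```python
def hyperbolic_profile(r, u, N, L2=4.0, sign=+1):
    """Profile (rho, z) of the surface of revolution in H^3 from (4.9)."""
    L = np.sqrt(L2)
    up = np.gradient(u, r)
    e = np.exp(-u / 2)
    P = (16 * r**2 * e * (N + 2) ** 2
         - L2 * ((N + 2) * r * up - 4 * N) * ((N + 2) * r * up - 4 * N - 16))
    num = ((N + 2) ** 2 * (r**2 * up - 4 * r) * e
           + sign * np.sqrt(np.clip(e * P, 0.0, None)))
    den = 4 * (N + 2) ** 2 * r**2 * e + 16 * L2
    z = np.exp(cumulative_trapezoid(num / den, r, initial=0.0))
    rho = (N + 2) / (2 * L) * r * z * np.exp(-u / 4)
    return rho, z
```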
**Remarks**
* The map \(\iota:\Sigma_{N}\to\mathbb{H}^{3}\) defined in Theorem 4.3 is a proper embedding (rather than an immersion); there are no self-intersections in the hyperbolic space which can intuitively be explained as the circumference of the hyperbolic circle grows faster than \(2\pi\) times the hyperbolic radius in a way which is sufficient to accommodate the \(k\)-fold cover of \(\mathbb{R}^{2}\) asymptotically.
* Near the vortex position \(r=0\) the embedding (4.9) is \[z\sim 1-\frac{(N+2)^{2}}{32\beta^{2}}r^{4/(N+2)},\quad\rho\sim\frac{N+2}{4 \beta}r^{2/(N+2)},\quad\text{so that}\quad z\sim 1-\frac{\rho^{2}}{2},\] (4.10) where we used a hyperbolic isometry (scaling) to set \(z(0)=1\). With our choice \(L^{2}=4\) this embedding osculates \(\mathbb{H}^{2}\subset\mathbb{H}^{3}\) which embeds in the hyperbolic space as the hemisphere \(z^{2}+\rho^{2}=1\). Indeed, near \(z=1\) this hemisphere is \[z=\sqrt{1-\rho^{2}}\sim 1-\frac{\rho^{2}}{2}\] in agreement with (4.10). In particular this reaffirms the regularity of the embedding at \(r=0\) or \(\rho=0\), as at this point the revolving curve \(z=z(\rho)\) has vanishing gradient.
* Near \(r=\infty\) (choosing one of the two embeddings) both \(z\) and \(\rho\) tend to an annular limit point (\(z=0,\rho=0\)) where the surface meets the boundary of \(\mathbb{H}^{3}\) tangentially with a rate \[z\sim r^{-\frac{N+4}{N+2}},\quad\rho\sim\frac{N+2}{4}r^{-\frac{2}{N+2}},\quad \text{so that}\quad z\sim\frac{(N+2)^{\frac{N+4}{2}}}{2N+4}\rho^{\frac{N+4}{2}}.\]
This asymptotes a cover of a horosphere2 in \(\mathbb{H}^{3}\) as \(r\to\infty\), with the metric (3.8). On Figures 5 and 6 we present the surface corresponding to \(N=2\) in the upper half space as well as the Poincare ball model of the hyperbolic geometry, together with the plane sections. Footnote 2: Recall that in the upper half space model a horosphere is either a sphere tangent to the boundary, or a plane parallel to the boundary. The horospheres have constant, non-zero, mean curvature \(H\) (CMC) in \(\mathbb{H}^{3}\) but are intrinsically flat with zero Gaussian curvature. On the other hand the Euclidean planes and hemispheres in \(\mathbb{H}^{3}\) which intersect the boundary orthogonally are isometric to \(\mathbb{H}^{2}\), and have non-zero constant Gaussian curvature \(K\), but vanishing mean curvature \(H\). The hemispheres which intersect the boundary transversally have \(0<H<1\).

**Figure 5.**_Regular hyperbolic embeddings of \(\Sigma_{N}\) for \(N=2\) in the upper half space and the Poincare ball model._

**Figure 6.**_Plane sections of the \(N=1,2,3\) vortex surfaces in the Poincare ball model._
Figure 7 shows the plots of \(K\) and \(H\) as functions of \(r\). The mean curvature \(H\) interpolates between \(0\) at \(r=0\), which is the mean curvature of the osculating \(\mathbb{H}^{2}\), and \(-1/2\) which is the mean curvature of the horosphere at \(r=\infty\).
**Figure 7.**_Mean and Gaussian curvatures of the hyperbolic embeddings for \(N=2\)._
* Proposition 4.1 and Theorem 4.3 construct a conically singular embedding in \(\mathbb{R}^{3}\), or a regular embedding in \(\mathbb{H}^{3}\) where both embeddings preserve the intrinsic \(U(1)\) isometry. An alternative would be to seek embeddings/immersions of the \(U(1)\) invariant vortex surfaces, where the embedding does not admit \(U(1)\) as a symmetry, i.e. the second-fundamental form is not invariant under \(\psi\to\psi+c\). Examples of such embeddings are the Smyth surfaces [14] in \(\mathbb{R}^{3}\) (surfaces of revolution embedded as CMC but only with discrete rotational symmetry), or their counterparts in \(\mathbb{R}^{2,1}\) also described [2] by the elliptic Sinh-Gordon equation.
## 5. Gauss-Bonnet theorem with conical singularities
Finally we shall re-confirm the vortex number by computing the first Chern class of the bundle \(\mathcal{L}\), and relate it to the Gauss-Bonnet theorem on \(\Sigma_{N}\). Recall that the gauge field \(F\) of the Sinh-Gordon vortex, and the Gaussian curvature of the corresponding embedding are given by
\[F=-\frac{1}{2}\Delta_{0}u,\quad K=\frac{1}{4}(e^{u}-1).\]
Remembering that the range of \(\theta\) is \([0,(N+2)\pi]\) and integrating the radial Laplacian yields
\[\frac{1}{2\pi}\int_{\Sigma_{N}}F = -\frac{1}{4\pi}\int_{\Sigma_{N}}(\Delta_{0}u)\;rdrd\theta\] \[= -\frac{N+2}{4}\Big{(}\lim_{r\to\infty}(ru_{r})-\lim_{r\to 0}(ru_{r}) \Big{)}\] \[= N\]
where we used the asymptotic form (3.3), and (3.5) of \(u\) near \(r=0\) and the exponential decay of the modified Bessel function \(K_{0}(r)\) at \(\infty\). On the other hand the integral of the Gaussian curvature can be related to this Chern class
\[\frac{1}{2\pi}\int_{\Sigma_{N}}K\mathrm{vol}_{\Sigma_{N}}=-\frac{1}{4\pi}\int_ {\Sigma_{N}}F=-\frac{N}{2}. \tag{5.1}\]
This is in agreement with the Gauss-Bonnet theorem for surfaces with conical deficit/excess angles [16]. Indeed, if \(g\) is of the form \(|z|^{2\alpha_{0}}|dz|^{2}\) near \(z=0\) and \(|w|^{2\alpha_{\infty}}|dw|^{2}\) near \(w=1/z=0\) then
\[\frac{1}{2\pi}\int_{\Sigma_{N}}K\mathrm{vol}_{\Sigma_{N}}=2+\alpha_{0}+\alpha _{\infty}.\]
In our case \(\alpha_{0}=0\). Setting \(\hat{R}=R^{-1}\) in the asymptotic expression for \(g\) yields
\[g\sim\hat{R}^{-N-4}(d\hat{R}^{2}+\hat{R}^{2}d\psi^{2})\]
near \(\hat{R}=0\). Therefore \(\alpha_{\infty}=-2-N/2\) in agreement with (5.1).
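The boundary-term computation can also be checked against the numerical solution of Section 3 (a sketch; the finite grid only approximates the two limits):

```python
ru = r * np.gradient(u, r)              # r u'(r) on the numerical grid
print(-(N + 2) / 4 * (ru[-1] - ru[0]))  # should be close to the integer N
```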
## 6. Tzitzeica vortex
There is another possible choice of the conformal factor in the Taubes equation (2.2) which leads to an integrable PDE. Setting \(\Omega=\exp{(-2u/3)}\) yields the elliptic Tzitzeica equation
\[\Delta_{0}u+e^{-2u/3}-e^{u/3}=0.\]
The radial solutions \(u=u(r)\) of this equation are characterised by Painleve III, this time with parameters \((1,0,0,-1)\) (See Appendix). The asymptotic connection formulae for this equation have been obtained in [8]. There exists a one-parameter family of solutions singular only at the origin. Adapting the results of [8] to our case we find
\[\begin{split}u(r)&\sim\Big{(}\frac{9p}{\pi}-6\Big{)}\log{r}+\beta-\frac{e^{-2\beta/3}}{36}\left(1-\frac{p}{\pi}\right)^{-2}r^{6(1-p/\pi)}\quad\text{for}\quad r\to 0\\ &\sim\frac{6\sqrt{3}}{\pi}\Big{(}\cos{p}+\frac{1}{2}\Big{)}K_{0}(r)\quad\text{for}\quad r\to\infty,\end{split} \tag{6.1}\]
where \(0<p<\pi\) parametrises the solutions and
\[\beta=3\ln\Big{(}3^{-3p/\pi}\frac{9p^{2}}{2\pi^{2}}\frac{\Gamma(1-\frac{p}{2 \pi})\Gamma(1-\frac{p}{\pi})}{\Gamma(1+\frac{p}{2\pi})\Gamma(1+\frac{p}{\pi}) }\Big{)}-\Big{(}\frac{9p}{2\pi}-3\Big{)}\ln{12}.\]
Setting
\[r=R^{\frac{3+2N}{3}},\quad\text{and}\quad p=\pi\frac{2N+2}{2N+3} \tag{6.2}\]
yields
\[u=2N\ln R+\beta+\ldots\]
which is an \(N\)-vortex with the strength (compare (3.9)) given by
\[\frac{3\sqrt{3}}{\pi}\Big{|}1-2\cos\Big{(}\frac{\pi}{2N+3}\Big{)}\Big{|}.\]
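For comparison across the two models, a short sketch tabulating the Sinh-Gordon strength from (3.4) and (3.9) against the Tzitzeica strength above:

```python
import numpy as np

for N in range(1, 6):
    sg = 8 / np.pi * np.sin(np.pi * N / (2 * (N + 2)))
    tz = 3 * np.sqrt(3) / np.pi * abs(1 - 2 * np.cos(np.pi / (2 * N + 3)))
    print(f"N={N}:  |c|_SG = {sg:.4f}   |c|_Tzitzeica = {tz:.4f}")
```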
Near \(R=0\) the metric \(g=\Omega|dz|^{2}\) becomes
\[g\sim dR^{2}+\Big{(}\frac{3}{3+2N}\Big{)}^{2}R^{2}d\theta^{2}.\]
The constructions of the Euclidean and hyperbolic embeddings of \(g\) proceed along the lines we discussed in the Sinh-Gordon case, so here we just summarise the results.
### Euclidean embedding
The isometric embedding of the resulting surface in \(\mathbb{R}^{3}\) can be constructed following the steps in the Proof of Proposition 4.1.
**Proposition 6.1**.: _There exists an isometric embedding \(\iota:\Sigma\to\mathbb{R}^{3}\) of the Tzitzeica \(N\)-vortex as a surface of revolution which is asymptotically flat, and regular everywhere away from the conical singularity._
**Proof.** We shall construct this embedding explicitly, assuming that the \(U(1)\) symmetry is preserved. We therefore have
\[d\rho^{2}+\rho^{2}d\theta^{2}+dz^{2}=[(\rho^{\prime})^{2}+(z^{\prime})^{2}]dr^ {2}+\rho^{2}d\theta^{2}=e^{-2u/3}(dr^{2}+r^{2}d\theta^{2})\]
so that
\[\rho=re^{-u/3},\quad z^{\prime}=\sqrt{\frac{2ru^{\prime}}{3}e^{-2u/3}\Big{(}1- \frac{1}{6}ru^{\prime}\Big{)}}.\]
For this to exist we need \(u^{\prime}>0\) and \(ru^{\prime}<6\). The expansion (6.1) implies that \(ru^{\prime}=\frac{6N}{2N+3}\) at \(r=0\) and \(0\) as \(r\to\infty\). Looking for the critical points of \(ru^{\prime}\) we find
\[(ru^{\prime})^{\prime}=r(e^{u/3}-e^{-2u/3})\]
which vanishes at \(r=0\), or when \(u=0\). It is obvious from the boundary conditions at \(0\) that \(u\neq 0\) as long as \(u^{\prime}>0\) for all \(r\), which we need to prove anyway. Note that if \(u^{\prime}=0\) and \(u>0\) at some point \(r_{0}\) then \(u\) has a maximum, and therefore \(u^{\prime\prime}<0\) which contradicts the Tzitzeica equation. The same contradiction is reached if \(u^{\prime}=0\) and \(u^{\prime\prime}=0\). If \(u^{\prime}=0\) and \(u<0\) then there must also exist a minimum (at some other value of \(r\)) so that \(u\) can reach \(0\) as \(r\to\infty\). The LHS of the Tzitzeica equation is positive at this minimum, but the RHS is negative. In the proof of Lemma 3.1 (which carries through to the Tzitzeica case) we have also dealt with \(u^{\prime\prime}=0\) at the critical point. Hence \(u^{\prime}\) must be positive for all \(r\), which along with the boundary conditions implies \(u<0\)
for all \(r\). It follows that \(ru^{\prime}\) is strictly decreasing, and hence \(ru^{\prime}\leq\frac{6N}{2N+3}<6\) for all \(r\). Therefore a global embedding in \(\mathbb{R}^{3}\) exists away from the conical singularity.
### Hyperbolic embedding
The metric \(g=e^{-2u/3}|dz|^{2}\) is regular at \(0\) as long as \(\theta\) is periodic with the period \(2\pi(3+2N)/3\). Taking
\[\psi=\frac{3\theta}{3+2N},\quad r=R^{(2N+3)/3}\]
makes \(\psi\) periodic with period \(2\pi\) and the metric becomes flat near \(0\).
**Theorem 6.2**.: _Let \((\mathbb{H}^{3},G)\) be a hyperbolic space with the metric \(G\) with Ricci scalar \(-6/L^{2}\). For any \(L^{2}\in(0,3]\), there exists a regular isometric embedding \(\iota:\Sigma_{N}\to\mathbb{H}^{3}\) of the Tzitzeica N-vortex which preserves the \(U(1)\) symmetry of \(g_{N}\), and is unique up to a hyperbolic isometry._
**Proof.** We follow the proof of Theorem 4.3, and work in the half-space model where the surface of revolution will be constructed with respect to the hyperbolic geodesics chosen to be the \(z\)-semiaxis. Replacing \(N\) by \(4N/3\) in SG formulae we have
\[\iota^{*}(G)=e^{-2u/3}\Big{(}dr^{2}+\Big{(}\frac{2N+3}{3}\Big{)}^{2}r^{2}d \psi^{2}\Big{)}\]
which yields the embedding formulae
\[z_{\pm} = z_{0}\exp\Big{(}\int_{0}^{r}\frac{2M^{2}t(tu^{\prime}-3)/3\pm \sqrt{4M^{2}e^{2u/3}P}}{2M^{2}t^{2}+2L^{2}e^{2u/3}}dt\Big{)},\] \[\rho_{\pm} = \frac{M}{L}rz_{\pm}e^{-u/3}, \tag{6.3}\]
where
\[P(r)\equiv r^{2}e^{-2u/3}-L^{2}\left(1-\frac{1}{M}-\frac{u^{\prime}r}{3}\right) \left(1+\frac{1}{M}-\frac{u^{\prime}r}{3}\right)\!,\quad M=\frac{2N+3}{3}\]
with the asymptotic behaviour
\[\lim_{r\to 0}P=0,\quad\lim_{r\to\infty}P=\infty\]
and
\[P^{\prime}(r)=2re^{-2u/3}\left(1-\frac{u^{\prime}r}{3}\right)\left(1+\frac{L^{ 2}}{3}(e^{u}-1)\right).\]
For the embedding to exist, we need \(P\geq 0\). From the Proof of Proposition 6.1, we have \(ru^{\prime}<\frac{6N}{2N+3}<3\). Moreover, since \(e^{u}\in(0,1)\) for \(r>0\), we have \(1+\frac{L^{2}}{3}(e^{u}-1)>1-\frac{L^{2}}{3}\geq 0\) whenever \(L^{2}\leq 3\). Hence for this range of \(L\), \(P^{\prime}>0\) for all \(r>0\), which implies that \(P\geq 0\) for all \(r\). It follows that there exist two hyperbolic embeddings
of the Tzitzeica \(N\)-vortex, and they are related by a hyperbolic isometry as shown in Proposition 4.2.
### Gauss-Bonnet Theorem with conical singularities
The Gaussian curvature of the Tzitzeica \(N\)-vortex surface is given by
\[K=\frac{1}{3}(e^{u}-1).\]
Remembering that \(\theta\in[0,2\pi(2N+3)/3]\) and using the asymptotic behaviour of \(u\) given in equation (6.1) we obtain, after integrating the radial Laplacian
\[\frac{1}{2\pi}\int_{\Sigma_{N}}K\text{vol}_{\Sigma_{N}}=-\frac{2N}{3}.\]
In this case, the metric of the rectified surface \(\Sigma_{N}\) is \(dR^{2}+R^{2}d\psi^{2}\), after substituting \(r=R^{2N/3+1}\) and \(\psi=\frac{3\theta}{2N+3}\). Hence \(\alpha_{0}=0\) and setting \(\hat{R}=R^{-1}\) yields \(\alpha_{\infty}=-2-2N/3\). This is equivalent to the Sinh-Gordon case by interchanging \(N\leftrightarrow 4N/3\). Hence
\[\frac{1}{2\pi}\int_{\Sigma_{N}}K\text{vol}_{\Sigma_{N}}=2+\alpha_{0}+\alpha_{ \infty}=-\frac{2N}{3}\]
which again agrees with [16].
## Appendix
Painleve III is a family of second order ODEs for \(w=w(\zeta)\) parametrised by four constants \((\alpha,\beta,\gamma,\delta)\)
\[\frac{d^{2}w}{d\zeta^{2}}=\frac{1}{w}\Big{(}\frac{dw}{d\zeta}\Big{)}^{2}- \frac{1}{\zeta}\frac{dw}{d\zeta}+\frac{\alpha w^{2}+\beta}{\zeta}+\gamma w^{3 }+\frac{\delta}{w}.\] (A.1)
* Setting \(u=4\ln w(\zeta)\) and \(r=2\zeta\) in the radial Sinh-Gordon equation \[u^{\prime\prime}+\frac{1}{r}u^{\prime}=e^{u/2}-e^{-u/2}\] yields (A.1) with parameters \((0,0,1,-1)\)
* Setting \(u=3\ln w(\zeta),r=\frac{3\sqrt{3}}{2}\zeta^{2/3}\) in the radial Tzitzeica equation \[u^{\prime\prime}+\frac{1}{r}u^{\prime}=e^{u/3}-e^{-2u/3}\] yields (A.1), this time with parameters \((1,0,0,-1)\). |
2304.03637 | Temperature Detection from Images Using Smartphones | Since late 2019, the global spread of COVID-19 has affected people's daily
life. Temperature is an early and common symptom of Covid. Therefore, a
convenient and remote temperature detection method is needed. In this paper, a
non-contact method for detecting body temperature is proposed. Our developed
algorithm based on blackbody radiation calculates the body temperature of a
user-selected area from an obtained image. The findings were confirmed using a
FLIR Thermal Camera with an accuracy of 97%. | Kamrul H Foysal, Bipasha Kundu, Jo Woon Chong | 2023-04-07T13:38:53Z | http://arxiv.org/abs/2304.03637v1 | # Temperature Detection from Images Using Smartphones
###### Abstract
Since late 2019, the global spread of COVID-19 has affected people's daily life. Temperature is an early and common symptom of Covid. Therefore, a convenient and remote temperature detection method is needed. In this paper, a non-contact method for detecting body temperature is proposed. Our developed algorithm based on blackbody radiation calculates the body temperature of a user-selected area from an obtained image. The findings were confirmed using a FLIR Thermal Camera with an accuracy of 97%.
_Clinical Relevance--_Our proposed method provides a remote and convenient solution in detecting temperature of specific body parts using a smartphone.
## I Introduction
Approximately 464 million people have been infected worldwide by COVID-19 according to the World Health Organization. A common symptom of COVID-19 is fever with high body temperature, which may lead to death [1]. Contact-based testing for COVID-19 can be inconvenient, costly, and time consuming; moreover, contact-based measurement can itself spread COVID-19 further. Non-contact infrared thermometer sensors, devices and thermal imaging systems have been widely used to measure a person's temperature [2]. However, these cannot differentiate the temperature between specific body parts of humans [2], resulting in inaccurate temperature measurements. In this paper, we propose a novel smartphone camera-based temperature detection method which estimates the temperature of any specific body part of a human using only a smartphone camera.
## II Methods
Pseudocolor image generation such as the Jet color space has been used for thermal image color-mapping [3]. Since the temperature relationship is not linear, the estimation of temperature from thermal images is difficult. The temperature is calculated using the obtained RGB image by creating a pseudocolor space. Then, a linear relationship between temperature and color intensity is established. According to blackbody radiation theory, a low temperature increases the visibility of red light (700nm). In contrast, a high temperature increases the visibility of blue light (490nm), and the dominant color changes with the temperature. Eq. (1) shows the calculation of the spectral radiance density (\(B_{v}(T)\)) of the red light at the temperature \(T\) in absolute temperature units.
\[B_{\text{red},v}(T_{\text{abs}})=\frac{2v^{2}}{c^{2}}\frac{hv}{e^{hv/kT}-1} \tag{1}\]
where \(h\) is the Planck constant, \(k\) is the Boltzmann constant, and \(v\) is the frequency of the red light. In the pseudocolor space, each image pixel value represents a specific temperature data point (corresponding to \(B_{v}(T)\)). These data points are assigned a unique color or shade based on their value. As the temperature changes, the pixel value changes accordingly. For lower temperatures (\(<\)800K), the red color channel is known to be effective in estimating temperature, and pixel values of pseudocolor images are proportional to the red channel value of the original RGB image. A linear relationship (see Eq. (2)) is established between the temperature (\(T_{\text{loc}}\)) and the pixel intensities (\(I\)) by using the grayscale image obtained from the pixel values of the red channel image. The temperature detection algorithm described above can approximate the temperature of different points in degrees Celsius.
\[T_{\text{loc}}=T_{\text{low}}+(T_{\text{high}}-T_{\text{low}})\times\frac{(I-I_{\text{low}})}{(I_{\text{high}}-I_{\text{low}})} \tag{2}\]

where \(T_{\text{loc}}\) is the temperature of the location, \(T_{\text{low}}\) and \(T_{\text{high}}\) are the low and high reference temperatures, respectively, and \(I_{\text{low}}\) and \(I_{\text{high}}\) are the lowest and highest intensities of the image, respectively.
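The paper does not publish its implementation; the following Python sketch shows one way Eq. (2) could be applied to a red-channel image (the function name, the calibration points and the ROI handling are our assumptions):

```python
import numpy as np

def temperature_map(rgb, T_low, T_high):
    """Map red-channel intensity to temperature with the linear model (2).

    rgb: HxWx3 array; T_low, T_high: calibration temperatures (deg C),
    assumed to correspond to the darkest and brightest red pixels.
    """
    red = rgb[..., 0].astype(float)
    I_low, I_high = red.min(), red.max()
    return T_low + (T_high - T_low) * (red - I_low) / (I_high - I_low)

# Usage sketch: average the estimate over a user-selected forehead ROI.
# T = temperature_map(img, 25.0, 36.0); print(T[r0:r1, c0:c1].mean())
```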
## III Results
Following the IRB from Texas Tech University (IRB#: 2019_150), the smartphone is used to acquire an image as shown in Fig. 1(a), and our algorithm then estimates the temperature of the subject's body. Figs. 1(a) and 1(b) are the RGB and pseudocolor images acquired from a subject, respectively. Fig. 1(c) shows the zoomed-in version of Fig. 1(b), and our algorithm estimates the temperature of the forehead part from Fig. 1(c) using Eq. (2). Here, the temperature is estimated to be 33.8\({}^{\circ}\)C. A FLIR thermal camera (see Figs. 1(d) and 1(e)) is used to validate the data. The obtained accuracy is 97%.
## IV Conclusion
We have developed a smartphone-based temperature estimation method which can rapidly and conveniently estimate human temperature; this is expected to reduce human encounters and the risk of spreading the virus.
## V References
* [1] WHO.int. "WHO Coronavirus (COVID-19) Dashboard 2022. [Online]." [https://covid19.who.int/](https://covid19.who.int/) (accessed 2022).
* [2] M. F. A. Mushahar and N. Zaini, "Human Body Temperature Detection based on Thermal Imaging and Screening using YOLO Person Detection," in _2021 11th IEEE International Conference on Control System, Computing and Engineering (ICCSCE)_, 2021: IEEE, pp. 222-227.
* [3] "Blackbody Radiation," COSMOS - The SAO Encyclopedia of Astronomy. [Online]. [https://astronomy.swin.edu.au/cosmos/b/blackbody+radiation](https://astronomy.swin.edu.au/cosmos/b/blackbody+radiation) (accessed 2022).
Figure 1: Temperature Detection Algorithm Results (a) RGB Image, (b) pseudo color Image, (c) ROI, (d) Taking an image with a thermal camera (34.6\({}^{\circ}\)C), (e) Validation of temperature using a thermal camera (34.9\({}^{\circ}\)C) |
2307.15505 | Exact intermittent solutions in a turbulence multi branch shell model | Reproducing complex phenomena with simple models marks our understanding of
the phenomena themselves and this is what Jack Herring's work demonstrated
multiple times. In that spirit, this work studies a turbulence shell model
consisting of a hierarchy of structures of different scales $\ell_n$ such that
each structure transfers its energy to two substructures of scale $\ell_{n+1} =
\ell_n /\lambda$. For this model we construct exact inertial range solutions
that display intermittency ie absence of self-similarity. Using a large
ensemble of these solutions we investigate how the probability distributions of
the velocity modes change with scale. It is demonstrated that while velocity
amplitudes are not scale invariant their ratios are. Furthermore using large
deviation theory we show how the probability distributions of the velocity
modes can be re-scaled to collapse in a scale independent form. Finally, we
discuss the implications the present results have for real turbulent flows. | Ben Ajzner, Alexandros Alexakis | 2023-07-28T12:00:35Z | http://arxiv.org/abs/2307.15505v1 | # Exact intermittent solutions in a turbulence multi branch shell model
###### Abstract
**For the commemorative Issue Dedicated to the Memory of Jackson Rea Herring**
Reproducing complex phenomena with simple models marks our understanding of the phenomena themselves and this is what Jack Herring's work demonstrated multiple times. In that spirit, this work studies a turbulence shell model consisting of a hierarchy of structures of different scales \(\ell_{n}\) such that each structure transfers its energy to two substructures of scale \(\ell_{n+1}=\ell_{n}/\lambda\). For this model we construct exact inertial range solutions that display intermittency _ie_ absence of self-similarity. Using a large ensemble of these solutions we investigate how the probability distributions of the velocity modes change with scale. It is demonstrated that while velocity amplitudes are not scale invariant their ratios are. Furthermore using large deviation theory we show how the probability distributions of the velocity modes can be re-scaled to collapse in a scale independent form. Finally, we discuss the implications the present results have for real turbulent flows.
## I Introduction
Constructing simple models that reproduce the phenomenological complex behavior of fluid flows has always been a driving force in turbulence research, and is a direction in which Jack Herring's work excelled. There are numerous works in his career explaining complex phenomena in fluid dynamics with simplified models [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. In particular, the energy cascade in scale space is a phenomenon that has met various modeling approaches in the literature, such as the direct interaction approximation [1; 5; 14; 15; 16], eddy damping quasi-normal Markovian models [17; 18; 19; 20], energy diffusion models [21; 22] and shell models [23; 24; 25; 26]. Such models have led to predictions about the direction of cascade, the power-law exponents of the energy spectra and intermittency. Intermittency, which still escapes a firm quantitative understanding, manifests itself as a deviation from self-similarity and from the prediction obtained on purely dimensional grounds. In particular, shell models have been used to study intermittency for many years now. Recent reviews can be found in [27; 28]. Typically, shell models quantify all structures of a given scale \(\ell\) by a single real or complex amplitude \(U_{\ell}\). As such, spatial intermittency, which is linked to the appearance of rare but extremely intense structures, cannot be captured this way. Nonetheless, the temporal variation of the modes \(U_{\ell}\) does display intermittency, as has been demonstrated by many models [24; 25; 26; 29]. This type of intermittency has been recently linked to the fluctuation dissipation theorem [30]. Furthermore, a solvable (but not energy conserving) model was also derived and studied in [31].
In the spirit discussed in the first paragraph, we here construct and study a binary tree shell model for turbulence that displays intermittency. In this model the energy at each scale is split between multiple different structures. Each structure transfers its energy into two substructures of smaller scale, building a binary tree structure as shown in figure 1. In this way the number of structures increases exponentially as smaller scales are reached. Such models with binary structure were introduced in the 1990s but have not been investigated extensively [32; 33]. Here, we follow a similar analysis as in [34], where stationary solutions of non-binary shell models were investigated. We demonstrate that such analysis allows the construction of exact stationary solutions that display intermittency.
## II Multi-branch shell models
We consider the evolution of a turbulent flow modeled by the real amplitudes \(U_{n,m}\) of structures of scale \(\ell_{n}=1/k_{n}\) where
\[k_{n}=\lambda^{n}k_{0}\quad\text{or}\quad\ell_{n}=\ell_{0}/\lambda^{n} \tag{1}\]
and \(1<\lambda\). At scale \(\ell_{1}\) there is one structure whose amplitude is given by \(U_{1,1}\); this structure will transfer its energy to \(\mu\in\mathbb{N}\) structures of scale \(\ell_{2}\), each one of which will transfer its energy to \(\mu\) structures of scale \(\ell_{3}\), and so on, as shown in figure 1 for \(\mu=2\). The volume of each structure is given by \(V_{n}=\ell_{n}^{D}\) where \(D\) is the spatial dimension. If the cascade process is space filling, the number of substructures \(\mu\) is related to \(\lambda\) and \(D\) by
\[\lambda^{D}=\mu. \tag{2}\]
Figure 1: A sketch of the two branch (\(\mu=2\)) shell model. Each node marked by blue circles represents one dynamical mode of amplitude \(U_{n,m}\) marked by the two indexes \(n,m\), where \(n\) characterizes the scale \(\ell_{n}=\ell_{0}/\lambda^{n}\) and \(m\) characterizes the number of the mode in that scale. In each scale \(n\) there are \(M_{n}=\mu^{n-1}\) modes.
Accordingly, at scale \(\ell_{n}\) we have \(M_{n}=\mu^{n-1}\) (with \(M_{0}=1\)) structures, so that if we consider \(N\) such scales we have a total of
\[Z=1+\sum_{n=1}^{N}M_{n}=\frac{\mu^{N}-1}{\mu-1}+1 \tag{3}\]
structures. The energy of every structure is given by
\[E_{n,m}=\frac{1}{2}\rho U_{n,m}^{2}V_{n}=\frac{1}{2}\rho U_{n,m}^{2}\ell_{n}^{D} \tag{4}\]
so the total energy is given by
\[E=\frac{1}{2}\sum_{n=0}^{N}\frac{1}{M_{n}}\sum_{m=1}^{M_{n}}U_{n,m}^{2} \tag{5}\]
where \(\rho\) is from now on taken to be unity.
In the Desnianskii & Novikov model [23] structures of scale \(\ell_{n}\) interact only with structures of scale \(\ell_{n+1}\) and \(\ell_{n-1}\), and there is no branching (\(\mu=1\)). The amplitudes \(U_{n}\) then follow the dynamical equation:
\[\dot{U}_{n}=ak_{n}[U_{n-1}U_{n-1}-\lambda U_{n}U_{n+1}]+bk_{n}[U_{n}U_{n-1}-\lambda U_{n+1}U_{n+1}]-\nu k_{n}^{2}U_{n}+F_{n} \tag{6}\]
For \(\nu=0\) and \(F_{n}=0\) this system conserves the energy 5 (with \(M_{n}=1\)) for any value of \(a,b\). The flux of energy across a scale \(\ell_{n}\) is given by:
\[\Pi_{n}\:=\:ak_{n}U_{n}U_{n-1}U_{n-1}+bk_{n}U_{n}U_{n}U_{n-1}. \tag{7}\]
Expanding on the Desnianskii & Novikov model by allowing each structure \(U_{n,m}\) to branch out into two (\(\mu=2\)) smaller scale structures results in the following dynamical equation:
\[\dot{U}_{n,m}\:=\:ak_{n}\left[U_{n-1,m^{*}}U_{n-1,m^{*}}-\frac{ \lambda}{2}\left(U_{n,m}U_{n+1,m^{\prime}}+U_{n,m}U_{n+1,m^{\prime}+1}\right) \right]+ \tag{8}\] \[bk_{n}\left[U_{n,m}U_{n-1,m^{*}}-\frac{\lambda}{2}\left(U_{n+1,m ^{\prime}}U_{n+1,m^{\prime}}+U_{n+1,m^{\prime}+1}U_{n+1,m^{\prime}+1}\right)\right]\] (9) \[-\nu k_{n}^{2}U_{n,m}+F_{n,m} \tag{10}\]
where \(\nu\) is the viscosity, \(F_{n,m}\) is the forcing and \(a,b\) are again free parameters. The branching diagram for the model given in 10 is shown in figure 1. The integers \(m^{\prime}\) and \(m^{\prime}+1\) correspond to the indexes of scale \(\ell_{n+1}\) with which the mode \(U_{n,m}\) is linked, where \(m^{\prime}\) is explicitly given by \(m^{\prime}=2m-1\), and \(m^{*}\) corresponds to the index of scale \(\ell_{n-1}\) linked to \(U_{n,m}\), given by \(m^{*}={\rm Int}[(m+1)/2]\), as illustrated in the left panel of figure 3. For \(\nu=0\), \(F_{n,m}=0\) and for any value of \(a,b\) the system conserves the energy 5, where now \(M_{n}=2^{n-1}\). The energy flux \(\Pi_{n,m}\) through scale \(\ell_{n}\) and structure \((n,m)\) (expressing the rate at which energy from the large scales (\(n^{\prime}<n\)) is lost to the smaller scales (\(n^{\prime}\geq n\)) through the structure \(m\) due to the non-linearity) is given by
\[\Pi_{n,m}\:=\:ak_{n}U_{n,m}U_{n-1,m^{*}}U_{n-1,m^{*}}+bk_{n}U_{n,m}U_{n,m}U_{n- 1,m^{*}} \tag{11}\]
The total flux through scale \(\ell_{n}\) is then given by
\[\Pi_{n}=\frac{1}{M_{n}}\sum_{m=1}^{M_{n}}\Pi_{n,m} \tag{12}\]
Conservation of energy by the non-linear terms implies that at scales smaller than the forcing scale and larger than the dissipation scale (\(\ell_{\nu}\)) the flux \(\Pi_{n}\) is constant and equal to the energy injection/dissipation \(\epsilon\)
\[\Pi_{n}=\epsilon,\qquad 1<n\ll n_{\nu} \tag{13}\]
where \(\ell_{\nu}=(\nu^{3}/\epsilon)^{1/4}\) and \(n_{\nu}=\log_{\lambda}(\ell_{1}/\ell_{\nu}).\) The range \(1<n\ll n_{\nu}\) where forcing and viscous effects can be neglected is called the inertial range.
Figure 2: Energy spectrum from a numerical simulation of the model 10. In the left panel the red points correspond to \(U_{n,m}\) averaged over \(m\) for a given \(n\), while the blue points correspond to \(U_{n,m}\) for all \(n,m\). The right panel displays \(U_{n,m}\) as a function of \(m\) for \(n=9\).

In figure 2 we plot the energy spectra \(U_{n,m}^{2}\) as a function of \(n\) with blue dots, while the red dots indicate the averaged value \(\overline{U_{n}^{2}}=(\sum_{m}U_{n,m}^{2})/M_{n}\), from a realisation of eq. 10 performed with \(N=14\), \(\lambda=2^{1/3}\), forced at \(n=1\). The averaged value follows a power law close to the Kolmogorov scaling \(\overline{U_{n}^{2}}\propto k_{n}^{-2/3}\), although individual \(U_{n,m}^{2}\) can vary by orders of magnitude from this mean value. This indicates that higher order statistics can deviate from the dimensional analysis spectrum. The present model is computationally expensive as its complexity increases as \(2^{N}\). As a result it is not easy to obtain a long inertial range (large \(N\)) to investigate the resulting power-law behaviors numerically. On the other hand, its simplicity allows for analytical treatment, which is what we examine in the next section by constructing exact inertial range solutions of arbitrarily large \(n\).
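To make the dynamics concrete, the following is a minimal sketch of how the model 10 can be integrated in time; the forward-Euler step, the constant forcing at \(n=1\), the random initial condition and all parameter values are our illustrative assumptions, not the settings used to produce figure 2.

```python
# Minimal sketch of a forward-Euler integration of the two-branch model
# (eqs. 8-10). U[n] holds the 2**(n-1) amplitudes of level n (levels 1..N).
import numpy as np

lam, a, b, nu, N, dt = 2**(1/3), 1.0, 1.0, 1e-6, 14, 1e-4
k = [lam**n for n in range(N + 1)]
U = [None] + [1e-3 * np.random.rand(2**(n - 1)) for n in range(1, N + 1)]

def rhs(U):
    dU = [None]
    for n in range(1, N + 1):
        up = np.repeat(U[n - 1], 2) if n > 1 else np.zeros(1)  # U_{n-1,m*}
        d1 = U[n + 1][0::2] if n < N else np.zeros_like(U[n])  # U_{n+1,m'}
        d2 = U[n + 1][1::2] if n < N else np.zeros_like(U[n])  # U_{n+1,m'+1}
        dU.append(a * k[n] * (up**2 - 0.5 * lam * U[n] * (d1 + d2))
                  + b * k[n] * (U[n] * up - 0.5 * lam * (d1**2 + d2**2))
                  - nu * k[n]**2 * U[n])
    dU[1] = dU[1] + 1.0  # constant forcing at the largest scale, n = 1
    return dU

for _ in range(100000):  # crude explicit time stepping
    dU = rhs(U)
    U = [None] + [U[n] + dt * dU[n] for n in range(1, N + 1)]
```

The mapping between levels, \(m^{\prime}=2m-1\) and \(m^{*}={\rm Int}[(m+1)/2]\), is implemented here by the `repeat` and stride-two slicing operations.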
## III Inertial range intermittent solutions
We look for stationary solutions of eq. 10 in the inertial range, where forcing and dissipation can be ignored. Stationarity implies that for any \(n,m\):

\[0 = a\left[U_{n-1,m^{*}}U_{n-1,m^{*}}-\frac{\lambda}{2}\left(U_{n,m}U_{n+1,m^{\prime}}+U_{n,m}U_{n+1,m^{\prime}+1}\right)\right] \tag{14}\] \[+ b\left[U_{n,m}U_{n-1,m^{*}}-\frac{\lambda}{2}\left(U_{n+1,m^{\prime}}U_{n+1,m^{\prime}}+U_{n+1,m^{\prime}+1}U_{n+1,m^{\prime}+1}\right)\right]\]
Figure 3: Left panel: sketch of the modes \(U_{n-1,m^{*}}\), \(U_{n,m}\), \(U_{n+1,m^{\prime}}\) and \(U_{n+1,m^{\prime}+1}\) that enter the stationarity condition 14, with \(m^{\prime}=2m-1\) and \(m^{*}={\rm Int}[(m+1)/2]\). Right panel: the circle of stationary solutions \((x,y)\) of eq. 18, centered at \((-a/2c,-a/2c)\) with radius \(R\).

The way we proceed to find such a solution is the following: given \(U_{n-1,m^{*}}\) and \(U_{n,m}\) we look for \(U_{n+1,m^{\prime}}\) and \(U_{n+1,m^{\prime}+1}\) such that the equation above is satisfied; then we proceed to the next scale and search for \(U_{n+2,2m^{\prime}-1}\) and \(U_{n+2,2m^{\prime}}\), and so on, finding a recurrence relation that gives all \(U_{n,m}\). The solutions only depend on the relative amplitudes of the \(U_{n,m}\), so we define their normalised ratio as:
\[r_{n,m}=\frac{U_{n,m}\lambda^{1/3}}{U_{n-1,m^{*}}} \tag{15}\]
To simplify the notation we denote
\[r=r_{n,m},\quad x=r_{n+1,m^{\prime}}\quad y=r_{n+1,m^{\prime}+1}\quad\mbox{and} \quad b=c\lambda^{1/3} \tag{16}\]
then stationary solutions of 14 satisfy
\[0\;=\;U_{n,m}^{2}\lambda^{2/3}\left(a\left[\frac{1}{r^{2}}-\frac{1}{2}\left(x+ y\right)\right]+c\left[\frac{1}{r}-\frac{1}{2}\left(x^{2}+y^{2}\right) \right]\right) \tag{17}\]
that simplifies to
\[\left(x+\frac{a}{2c}\right)^{2}+\left(y+\frac{a}{2c}\right)^{2}=2\left(\frac{ a}{cr^{2}}+\frac{1}{r}+\frac{a^{2}}{4c^{2}}\right). \tag{18}\]
which has real solutions only if
\[0\leq\frac{a}{cr^{2}}+\frac{1}{r}+\frac{a^{2}}{4c^{2}}=\frac{R^{2}}{2}. \tag{19}\]
The solutions \(x,y\) form a circle in the \(x,y\) plane centered at \((-a/2c,-a/2c)\) with radius \(R\), depicted in the right panel of figure 3. It is important to note that any point \(x,y\) on this circle is a solution of 18, and thus we have multiple possible solutions. The condition 19 is satisfied for positive \(r,a,c\), which will be the focus of the present investigation. Returning to the \(r_{n,m}\) notation, the values of \(r_{n+1,m^{\prime}}\) and \(r_{n+1,m^{\prime}+1}\) that satisfy the stationarity condition can be written in full generality as:
\[r_{n+1,m^{\prime}} = -\frac{a}{2c}+\sqrt{2}\cos(\theta_{n,m})\sqrt{\frac{a}{cr_{n,m}^{ 2}}+\frac{1}{r_{n,m}}+\frac{a^{2}}{4c^{2}}} \tag{20}\] \[r_{n+1,m^{\prime}+1} = -\frac{a}{2c}+\sqrt{2}\sin(\theta_{n,m})\sqrt{\frac{a}{cr_{n,m}^{ 2}}+\frac{1}{r_{n,m}}+\frac{a^{2}}{4c^{2}}} \tag{21}\]
where \(\theta_{n,m}\) is arbitrary. Equations 20,21 form a recurrence relation out of which, given \(r_{1,1}\) and a choice of \(\theta_{n,m}\), one can construct all \(r_{n,m}\). Then, given \(r_{n,m}\), one can obtain \(U_{n,m}\) based on 16 as
\[U_{n,m}=U_{1,1}\,r_{1,1}\,r_{2,m_{1}}\,r_{3,m_{2}}\,\dots\,r_{n,m} \tag{22}\]
where \(m_{1},m_{2},\dots\) are the \(m\) one crosses along the path from \((1,1)\) to \((n,m)\), as shown by the red line in figure 1. This recurrence relation, however, does not always lead to bounded solutions of \(r_{n,m}\). For some values of \(\theta_{n,m}\) the resulting \(x,y\) can be negative or zero. Negative values can lead to unphysical solutions with negative flux from the small to the large scales, which are not possible (for stationary solutions) since no energy source is assumed at small scales. If \(x\) or \(y\) is zero it means that the particular branch is zero for all subsequent values. We thus need to limit the choice of \(\theta\) so that positive and finite \(r_{n,m}\) are obtained.
The simplest case is obtained by choosing \(\theta_{n,m}=\pi/4\). It corresponds to an equal part of the energy being distributed to the left and the right branch, and leads to the Kolmogorov solution \(r_{n,m}=1\), or in terms of the velocity \(U_{n,m}=\lambda^{-(n-1)/3}\) (where \(U_{1,1}=1\) is assumed). It corresponds to a finite flux non-intermittent (self-similar) solution.
Intermittency, however, can manifest itself if we choose \(\theta_{n,m}\neq\pi/4\), so that energy is not equally distributed in the left and right branch. Here we will choose \(\theta_{n,m}\) randomly with uniform distribution in the range \(\theta_{\min}=\pi/4-\Delta\theta<\theta_{n,m}<\pi/4+\Delta\theta=\theta_{\max}\) for a given \(\Delta\theta<\pi/4\). Then it can be shown that for \(c>a\) there exist \(r_{\max}>r_{\min}>0\) such that for all \(r\in(r_{\min},r_{\max})\) both \(x\in(r_{\min},r_{\max})\) and \(y\in(r_{\min},r_{\max})\). For \(c\leq a\) the recurrence relation converges either to \(r_{n,m}=0\) or \(r_{n,m}=\infty\), and we are going to limit ourselves only to the \(c>a\) case here. To obtain \(r_{\max},r_{\min}\) one needs to note that from the recurrence relation 20 the largest value \(r_{n+1,m^{\prime}}=r_{\max}\) is obtained when \(\theta=\theta_{\min}\) and \(r_{n,m}=r_{\min}\), while the smallest value \(r_{n+1,m^{\prime}}=r_{\min}\) is obtained when \(\theta=\theta_{\max}\) and \(r_{n,m}=r_{\max}\). This leads to the following relations
\[r_{\max} = -\frac{a}{2c}+\sqrt{2}\cos(\theta_{\min})\sqrt{\frac{a}{cr_{\min }^{2}}+\frac{1}{r_{\min}}+\frac{a^{2}}{4c^{2}}} \tag{23}\] \[r_{\min} = -\frac{a}{2c}+\sqrt{2}\cos(\theta_{\max})\sqrt{\frac{a}{cr_{\max }^{2}}+\frac{1}{r_{\max}}+\frac{a^{2}}{4c^{2}}}. \tag{24}\]
We arrive at exactly the same relations if we examine eq. 21.
We solved equations 23,24 numerically and the results are shown in the left panel of figure 4 for three different values of \(c/a\). For \(\Delta\theta=0\) only the Kolmogorov solution is allowed, with \(r_{\max}=r_{\min}=1\). As \(\Delta\theta\) increases, \(r_{n,m}\) cover a wider range of values, up until a critical value \(\Delta\theta=\Delta\theta_{c}\) for which \(r_{\min}\) becomes zero and \(r_{\max}\) diverges. The value of this critical angle as a function of \(c/a\) is shown in the right panel of the same figure. \(\Delta\theta_{c}\) is zero for \(c/a=1\) and grows for larger values, approaching \(\Delta\theta_{c}=\pi/4\) as \(c/a\rightarrow\infty\) (not demonstrated here).
For any given choice of \(\Delta\theta<\Delta\theta_{c}\) we can construct an ensemble of exact solutions of the present model by following the recurrence relations 20,21, picking each time randomly \(\theta_{n,m}\in(\pi/4-\Delta\theta,\pi/4+\Delta\theta)\) and reconstructing \(U_{n,m}\) by eq. 22. We note that, other than \(c/a\), the only other parameter that controls the ensemble of solutions considered is \(\Delta\theta/\Delta\theta_{c}\), which provides a measure of how much our ensemble deviates from the Kolmogorov solution \(\Delta\theta=0\). This process has direct links with the random cascade models studied in the past [35; 36; 37]; however, we need to note that unlike the random cascade models the solutions found here are energy conserving.
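To illustrate the construction, here is a minimal sketch of the single-path version of the recurrence 20,21 that is used in the next section; the random left/right branch selection, the starting value \(r=1\), and the default parameters are our assumptions for illustration.

```python
# Sketch of the single-path construction of exact solutions (eqs. 20-22):
# at each level theta is drawn uniformly in (pi/4 - dtheta, pi/4 + dtheta)
# and one of the two children (cos or sin branch) is followed at random.
import numpy as np

def single_path(n_max, dtheta, a=1.0, c=2.0, seed=None):
    rng = np.random.default_rng(seed)
    r, ratios = 1.0, []
    for _ in range(n_max):
        theta = np.pi / 4 + rng.uniform(-dtheta, dtheta)
        R = np.sqrt(2 * (a / (c * r**2) + 1 / r + a**2 / (4 * c**2)))
        trig = np.cos if rng.integers(2) == 0 else np.sin  # left or right child
        r = -a / (2 * c) + R * trig(theta)
        ratios.append(r)
    return np.array(ratios), np.cumprod(ratios)  # the r_{n,m} and eq. 22

ratios, U = single_path(200, dtheta=0.1)  # dtheta must stay below the critical value
```

An ensemble is then obtained by calling `single_path` many times with independent random seeds.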
## IV Statistical behavior and intermittency
In this section we examine a large ensemble of the exact solutions shown in the previous section and investigate their properties. For our investigation we have set \(c/a=2\) and we consider only a single path (as the one shown in red in figure 1) and not the full tree. The differences in the statistics between the two choices (single path and full tree) lie in the cross correlations between different modes that are not captured in the single path. As an
example we mention that the flux \(\Pi_{n}\) in eq. 12 is identically equal to \(\epsilon\) for every realization, while the flux \(\Pi_{n,m}\) given in eq. 11 fluctuates and only its mean value is equal to \(\epsilon\)
\[\langle\Pi_{n,m}\rangle=\Pi_{n}=\epsilon. \tag{25}\]
Along such a path we consider three different ensembles, for \(\Delta\theta/\Delta\theta_{c}=0.1,\,0.5,\,0.9\), each one composed of \(10^{7}\) different solutions. The solutions were constructed by picking randomly \(\theta_{n,m}\) for each node examined, from a uniform distribution between \(\pi/4-\Delta\theta\) and \(\pi/4+\Delta\theta\). The value of \(n\) varied from \(n=1\) to \(n=200\). We note that if the full tree were investigated instead of a single path, such a large value of \(n\) would require solving for \(2^{200}\) degrees of freedom, which is computationally unattainable.
In the top panels of figure 5 we plot the probability distribution function (PDF) \(P_{U}(U_{n,m})\) of the variable \(U_{n,m}\) for the three values of \(\Delta\theta/\Delta\theta_{c}=0.1,\,0.5,\,0.9\) (from left to right) and different values of \(n\). The PDFs for different values of \(n\) do not seem to overlap, although the \(x\)-axis has been normalised by the Kolmogorov prediction \(\lambda^{-n/3}\). Instead, as large values of \(n\) are reached, the PDFs display larger tails, reaching values of \(U_{n,m}\) much larger and much smaller than its mean value. The closer \(\Delta\theta\) is to the critical value \(\Delta\theta_{c}\), the larger this deviation is. On the other hand, the PDFs \(P_{r}(r_{n,m})\) of the ratios \(r_{n,m}\), displayed in the lower panels of figure 5, do not display such widening. For sufficiently large \(n\) all PDFs collapse to the same functional form that depends only on the choice of \(\Delta\theta\). This implies that while the \(U_{n,m}\) are not self-similar under scale transformations, their ratios \(r_{n,m}\) are!
The same behavior can be seen for the energy fluxes \(\Pi_{n,m}\). In the top panels of figure 6 we plot the PDFs \(P_{\Pi}\) of \(\Pi_{n,m}\) for the same values of \(\Delta\theta\) and \(n\) as in figure 5. As with the velocity amplitudes \(U_{n,m}\), as \(n\) is increased the PDFs of \(\Pi_{n,m}\) widen without collapsing to an \(n\)-independent form. In the lower panel of the same figure we plot the PDFs \(P_{\pi}(\pi_{n,m})\) of the flux ratio \(\pi_{n,m}\). It is defined as
\[\pi_{n,m}=\frac{\Pi_{n,m}}{\Pi_{n-1,m^{*}}} \tag{26}\]
which, after a little algebra, using 11 and 20, leads to
\[\pi_{n+1,m^{\prime}}=1+f(r_{n,m})\cos(2\theta_{n,m}) \tag{27}\]
where \(f(r)=1+(a/2c)^{2}r^{2}/(a/c+r)\). The flux ratio, much like the velocity ratio \(r_{n,m}\), does converge to an \(n\)-independent PDF as large values of \(n\) are reached. Furthermore, the functional form of this PDF appears to be flat, limited by a minimum and a maximum value of \(\pi_{n,m}\). This appears to be so because \(f(r)\) in 27 varies little with \(r\) for small variations of \(r\), and the variations of \(\pi_{n,m}\) are mostly controlled by the variations of \(\theta_{n,m}\).
The fact that the PDFs of \(r_{n,m}\) and \(\pi_{n,m}\) arrive at an \(n\)-independent form at large \(n\) has some implications for the evolution in \(n\) of the PDFs \(P_{U},P_{\Pi}\). Both \(U_{n,m}\) and \(\Pi_{n,m}\) can be written as a product of all \(r_{n^{\prime},m}\) and \(\pi_{n^{\prime},m}\) with \(n^{\prime}\leq n\). As a result the logarithms of \(U_{n,m}\) and \(\Pi_{n,m}\) can be written as
\[\ln\left(U_{n,m}\right)=\ln\left(U_{1,1}\right)+nL_{U},\quad\ln\left(\Pi_{n,m} \right)=\ln(\Pi_{1,1})+nL_{\Pi} \tag{28}\]
where \(L_{U}\) and \(L_{\Pi}\) stand for the mean value of the logarithms of \(r_{n,m}\) and \(\pi_{n,m}\) respectively:
\[L_{U}=\frac{1}{n}\sum_{n^{\prime}=1}^{n}\ln(r_{n^{\prime},m}),\quad\text{and} \quad L_{\Pi}=\frac{1}{n}\sum_{n^{\prime}=1}^{n}\ln(\pi_{n^{\prime},m}). \tag{29}\]
The properties of \(U_{n,m}\) and \(\Pi_{n,m}\) are reminiscent of the random cascades studied in the past [35; 36; 37]. However, while the random cascade models do not conserve energy, in the present model energy is conserved exactly. Another important difference here is that the \(r_{n^{\prime},m}\) and \(\pi_{n,m}\) are not independent, but each one depends on the value of the previous one. Nonetheless, we can proceed assuming such independence, although it is not entirely correct. In that case \(P_{U}\) and \(P_{\Pi}\) can be reconstructed using large deviation theory [38]. In this framework \(L_{U}\) and \(L_{\Pi}\) follow for large \(n\) a distribution of the form
\[P_{L_{U}}(L_{U})\propto\exp[-nI_{U}(L_{U})],\quad\text{and}\quad P_{L_{\Pi}}(L_{ \Pi})\propto\exp[-nI_{\Pi}(L_{\Pi})] \tag{30}\]
where \(I_{U}\) and \(I_{\Pi}\) are called the rate functions, which can in principle be obtained from \(P_{r}\) and \(P_{\pi}\) using the Legendre-Fenchel transform [38]. Here we limit ourselves to noting that if \(P_{L_{U}}\) and \(P_{L_{\Pi}}\) follow the form of eq. 30, then the distributions of \(U_{n,m}\) and \(\Pi_{n,m}\), which are linked to \(L_{U}\) and \(L_{\Pi}\) by 28, should take the form
\[P_{U}(U_{n,m})\propto\exp\left[-nI_{U}\left(\frac{1}{n}\ln\left(\frac{U_{n,m}} {U_{1,1}}\right)\right)\right],\quad P_{\Pi}(\Pi_{n,m})\propto\exp\left[-nI_{ \Pi}\left(\frac{1}{n}\ln\left(\frac{\Pi_{n,m}}{\Pi_{1,1}}\right)\right)\right] \tag{31}\]
where only the largest terms in \(n\) are kept. To test this prediction we plot in figure 7 \((P_{U}/P_{U}^{*})^{1/n}\) as a function of \((U_{n,m}/U^{*})^{1/n}\) (top panels) and \((P_{\Pi}/P_{\Pi}^{*})^{1/n}\) as a function of \((\Pi_{n,m}/\Pi^{*})^{1/n}\) (bottom panels), where \(U^{*}\) and \(\Pi^{*}\) correspond to the values at which the probabilities obtain their maxima \(P_{U}^{*},P_{\Pi}^{*}\). With this normalization the PDFs both for \(U_{n,m}\) and for \(\Pi_{n,m}\) collapse, indicating that the large deviation principle works well for this model.
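The rescaling used in figure 7 is simple to reproduce; below is a minimal sketch, assuming `samples` is an array holding many realizations of \(U_{n,m}\) (or \(\Pi_{n,m}\)) at a fixed level \(n\), with the binning choice being ours.

```python
# Sketch of the large-deviation rescaling of eq. 31: both the estimated PDF
# and its argument are normalized by the location (x*) and height (P*) of
# the PDF maximum and then raised to the power 1/n.
import numpy as np

def rescale_pdf(samples, n, bins=200):
    hist, edges = np.histogram(samples, bins=bins, density=True)
    x = 0.5 * (edges[1:] + edges[:-1])
    i = np.argmax(hist)
    x_star, p_star = x[i], hist[i]
    mask = hist > 0
    return (x[mask] / x_star) ** (1 / n), (hist[mask] / p_star) ** (1 / n)
```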
As a final look at the intermittency problem, we display in the top panels of figure 8 the first ten structure functions \(S_{p}(\ell_{n})\), defined as
\[S_{p}(\ell_{n})=\left\langle U_{n,m}^{p}\right\rangle \tag{32}\]
where the angular brackets stand for the ensemble average. The structure functions have been normalized by the Kolmogorov scaling to emphasise the differences, and are fitted to power laws
\[S_{p}(\ell_{n})\propto\ell_{n}^{\zeta_{p}} \tag{33}\]
and the measured exponents \(\zeta_{p}\) are plotted in the lower panels of figure 8. The exponents show similar behavior to real turbulence, displaying larger values for \(p<3\) and smaller values for \(p>3\), while the exact result \(\zeta_{3}=1\) is satisfied. It is worth noting that the deviations from the Kolmogorov scaling are not universal but depend on our choice of ensemble, which is controlled by \(\Delta\theta\).
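For completeness, here is a short sketch of how the exponents \(\zeta_{p}\) of eqs. 32,33 can be measured from an ensemble of single-path solutions; the array layout and the least-squares fit in log-log space are our assumptions.

```python
# Sketch of the structure-function measurement: S_p(ell_n) is the ensemble
# average of U_{n,m}^p (eq. 32) and zeta_p is the log-log slope in ell_n.
import numpy as np

def zeta_exponents(U, lam, p_max=10):
    # U: array of shape (realizations, N), one row per solution path
    N = U.shape[1]
    ell = lam ** (-np.arange(1, N + 1))  # ell_n, with ell_0 = 1
    zetas = []
    for p in range(1, p_max + 1):
        Sp = np.mean(U ** p, axis=0)
        zetas.append(np.polyfit(np.log(ell), np.log(Sp), 1)[0])
    return np.array(zetas)
```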
## V Discussion and conclusion
One can argue that the exact stationary solutions obtained in this work have little to do with real turbulence, which displays chaotic spatio-temporal dynamics. This may be true, and multi-branch models with two-neighbour interactions as in [32; 33] that display chaotic dynamics should be further investigated. The present results, however, do point to a clear and instructive demonstration of how intermittency can appear in realistic flows and how it can be modeled. Furthermore, they lead to a series of clear messages, described below, that are of great use for future turbulence research and can guide measurements in numerical simulations and experiments.
First, we note that the intermittency appearing in the stationary fields found here comes in contrast with typical shell model studies in single-branch models, for which intermittency comes from the temporal dynamics alone, as only a single structure exists for each scale \(\ell_{n}\). In the latter case intermittency has been linked to the temporal dynamics through the fluctuation dissipation theorem [30]. In reality, both temporal and spatial dynamics contribute to the presence of intermittency, and their roles need to be clarified.
In the present model randomness comes from our choice of \(\theta_{n,m}\), and the resulting intermittency depends on that choice. In reality (or in more complex shell models) such randomness will come from local chaotic dynamics that need to be studied in order to clarify which processes lead to an enhanced cascade and with what probability.
Perhaps the most interesting implication of this work is that it suggests new ways to plot data from experiments and numerical simulations. One way suggested by this work is that, instead of focusing on the PDFs of velocity differences, experimental or numerical studies could focus on the PDFs of ratios of velocity differences. The latter are shown in this work to become scale independent and could lead to more precise measurements. An alternative way is to re-scale the PDFs of velocity differences using the large deviation prediction 30, as was done in figure 7. Of course, in realistic data \(n\propto\ln(L/\ell_{n})\) is not precisely defined and an optimal choice should be searched for.
A good model of a complex phenomenon, in the authors' opinion, is not one that quantitatively reproduces experimental measurements through parameter fitting, but rather one that unravels the processes involved. In that respect we believe that the present model and results are very fruitful. We only hope that this work comes close to the standards set by Jack Herring. AA met Jack Herring during his ASP post doc in 2004-2006. Jack is fondly remembered stopping by the offices of post-docs just to see if they were OK. He will be greatly missed.
This work was also supported by the Agence nationale de la recherche (ANR DYSTURB project No. ANR-17-CE30-0004).
AA would also like to acknowledge the help of the Petrelis brothers, Francois and Nicolas, for their help with the large deviation theory. We would also like to thank Annick Pouquet for inviting us to write an article in this special issue dedicated to Jack Herring.
|
2308.05193 | Motifs in earthquake networks: Romania, Italy, United States of America,
and Japan | We present a detailed description of seismic activity in Romania, Italy, and
Japan, as well as the California seismic zone in the United States of America,
based on the statistical analysis of the underlying earthquake networks used to
model the aforementioned zones. Our results on network connectivity and simple
network motifs allow for a complex description of seismic zones, while at the
same time reinforcing the current understanding of seismicity as a critical
phenomenon. The reported distributions on node connectivity, three-, and
four-event motifs are consistent with power-law, i.e., scale-free,
distributions over large intervals and are robust across earthquake networks
obtained from different discretizations of the seismic zones of interest. In
our analysis of the distributions of node connectivity and simple motifs, we
distinguish between the global distribution and the powerlaw part of it with
the help of maximum likelihood estimation (MLE) method and complementary
cumulative distribution functions (CCDF). The main message is that the
distributions reported for the aforementioned seismic zones have large
power-law components, extending over some orders of magnitude, independent of
discretization. All the results were obtained using publicly-available
databases and open-source software, as well as a new toolbox available on
GitHub, specifically designed to automatically analyze earthquake databases. | Gabriel Tiberiu Pană, Alexandru Nicolin-Żaczek | 2023-08-09T19:11:06Z | http://arxiv.org/abs/2308.05193v2 | # Motifs in seismic networks: Romania, Italy, United States of America, and Japan
###### Abstract
We present a detailed description of seismic activity in Romania, Italy, and Japan, as well as the California seismic zone in the United States of America, based on the statistical analysis of the underlying seismic networks used to model the aforementioned zones. Our results on network connectivity and simple network motifs allow for a complex description of seismic zones, while at the same time reinforcing the current understanding of seismicity as a critical phenomenon. The reported distributions on node connectivity, three-, and four-event motifs are consistent with power-law, _i.e._, scale-free, distributions over large intervals and are robust across seismic networks obtained from different discretizations of the seismic zones of interest. In our analysis of the distributions of node connectivity and simple motifs, we distinguish between the global distribution and the power-law part of it with the help of maximum likelihood estimation (MLE) method and complementary cumulative distribution functions (CCDF). The main message is that the distributions reported for the aforementioned seismic zones are not intrinsically power laws, but have large power-law components, extending over some orders of magnitude, independent of discretization. All the results were obtained using publicly-available databases and open-source software, as well as a new toolbox available on GitHub, specifically designed to automatically analyze earthquake databases.
seismic networks, network motifs, power-law distributions, cumulative distribution functions
## I Introduction
The apparent ubiquity of distributions that resemble power laws has effectively turned almost every field of research into a contributor to the science of complex critical systems. While the list of reported power-law-like distributions seems endless, with observations ranging from linguistics [1], musicology [2], and biology [3] to meteorology [4], environmental studies [5], economics [6], and computer science [7], the list of structural models which explain these distributions is considerably smaller. This asymmetry reflects, on one hand, the large amount of empirical data, while on the other hand, the difficulty of constructing models that exhibit power-law scaling which is caused by the subtle interplay between the components of a given system. On the side of the available empirical data, we recall that the main defining \(v\)s of Big Data are volume, variety, velocity, and volatility, effectively turning data into a new commodity [8; 9]. This should be contrasted with the substantially fewer available models, for the creation of which there are no clear recipes, because of the numerous challenges which stem from the treatment of the underlying physical processes across different scales of time and space.
One of the most striking properties of complex systems is the possibility to self-organize into a critical state, a process which has been initially proposed by Bak _et al._ in 1987 through a now classical sandpile model [10]. The emergence of scale-free distributions of various observables was the main fingerprint of self-organized criticality (SOC) that numerous subsequent models have tried to reproduce. Among them, we mention here the earthquake model of Olami-Feder-Christensen (OFC) [11], which exhibited SOC and was able to reproduce empirical laws such as the Gutenberg-Richter law and Omori law. It should be noted that the OFC model draws from numerous older mechanical earthquake models [12; 13; 14], but none of the previous ones was as versatile or as effective in reproducing empirical laws. The underlying lattice of the initial SOC models was regular, but now many models use networks as well, and it is commonly accepted that the topology of the underlying network can impact the observed dynamics, see Refs. [15; 16] for the OFC model over different types of networks.
Motivated by our recent findings on scale-free-like distribution of waiting times for earthquakes [17] and seismic events on the Moon and Mars [18], we investigate here the structure of seismic networks, using the available earthquake data for Romania, Italy, the California seismic zone in the United States of America, and Japan. Complementary to our previous studies where the focus was solely on the magnitude of the observed quakes, we now model the seismic region through a network, following the approach originally introduced by Abe _et al._ in Ref. [19] and later refined in Refs. [20; 21]. This formalism has been used to model different seismic processes, most recently in Refs. [22; 23; 24; 25]. In Ref. [22] the authors compared seismic activity in the Korean peninsula within two distinct network models, while in Ref. [23] we find a review of the current understanding of earthquake modeling through seismic networks. On a related topic, in Ref. [24] the authors find correlations between the \(\alpha\) exponent of the connectivity distribution and the \(b\)-value
from the Gutenberg-Richter law. Lastly, in Ref. [25] we find a report on how the time scales used for seismic networks affect the network's characteristics.
The method proposed by Abe _et al._ for constructing seismic networks differs from that proposed by Baiesi _et al._ in Ref. [26]. The main difference is that Baiesi _et al._ considered a weight associated with the edges of the network to discard weakly linked events, thus identifying correlations between arbitrary pairs of earthquakes, such that aftershocks are better identified. In the article of Abe _et al._, nodes represent the grid cells that the region is split into, and these are linked when subsequent earthquakes occur in them, each edge being treated equally. Within this framework, we show that despite being very different in terms of seismic activity, the aforementioned four seismic regions share striking similarities in terms of network properties. The seismic activity in Romania is almost entirely concentrated in the Vrancea seismic zone, which consists of both crustal and intermediate-depth earthquakes [27]. Italy has both surface and very deep earthquakes, present in the Apennine range, which runs from northern to southern Italy and contains several faults running along the entire peninsula, forming a destructive boundary between tectonic plates [28]. California has earthquakes that occur very close to the surface compared to the other regions, with its main seismic activity along the San Andreas Fault [29], Fig. 1. Japan has the most intense seismic activity, being at the confluence of four tectonic plates: the Pacific, North American, Eurasian, and Filipino [30]. The databases of the aforementioned seismic zones are publicly available, see [31; 32; 33; 34], being curated by specialized institutions.
Our article is structured as follows: we report the construction of seismic networks in Section II.1 and detail the fundamental characteristic of a network, the connectivity distribution, in Section II.1.1. In Section II.1.2, we briefly discuss previous works [35; 36] on the fitting of empirical data which is suspected to have power-law behavior, to support our choice of the maximum likelihood estimation method [37]. Following this, in Section II.1.3, we discuss the use of the Kolmogorov-Smirnov statistic [38] to quantify the quality of the fits. Network motifs [39] are introduced in Section II.2, where we show how they can be used to characterize seismic regions. In Section II.3 we present the toolbox developed for the analyses of seismic data. The results of our analyses on network connectivity are presented in Section III.1, and the distributions of three- and four-event motifs in Section III.2. Finally, in Section IV, we present our conclusions and an outlook on future research.
All the figures and visualizations are developed with original codes, in an open-source environment using Julia and Python programming languages and are available and documented on GitHub.1
Footnote 1: The seismic networks toolbox is publicly available at [https://github.com/gabipana7/seismic-networks](https://github.com/gabipana7/seismic-networks)
## II Method
### Seismic Networks
Seismic networks are constructed as follows: the considered region is divided into cubic cells in which earthquakes occur, each cube representing a vertex of the network. Two successive events define an edge between two vertices. Successive events occurring in the same cube form a loop. The network is constructed by adding vertices and edges as they result from the earthquake database. In what follows we will refer to the _cell size_, or _cell length_, or simply \(L\), as the length in kilometers (km) of a side of one small cube. This number, therefore, defines a network. For example, a network of \(L=5\,\mathrm{km}\) means that the seismic region has been split into cubes of \(5\,\mathrm{km}\times 5\,\mathrm{km}\times 5\,\mathrm{km}\).

Figure 1: Visualization of earthquakes from the California region with 784754 events from 01-01-1977 until 07-03-2023 with magnitudes ranging between 0.01 and 7.3. The color gradient represents _(left)_ the depth and _(right)_ the magnitude of the earthquakes.
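As an illustration of this construction, the following is a minimal sketch in Python; the column names, the crude latitude/longitude-to-kilometer conversion and the use of networkx are our assumptions, not the toolbox's actual implementation.

```python
# Sketch of the network construction: each event is mapped to a cubic cell of
# side L (in km) and successive events are linked by a directed edge; two
# successive events in the same cell produce a self-loop.
import networkx as nx
import numpy as np

def build_seismic_network(df, L=5.0):
    R = 6371.0  # mean Earth radius (km), for a rough lat/lon -> km conversion
    x = np.radians(df["longitude"]) * R * np.cos(np.radians(df["latitude"]))
    y = np.radians(df["latitude"]) * R
    z = df["depth"].to_numpy()
    cells = list(zip((x // L).astype(int), (y // L).astype(int),
                     (z // L).astype(int)))
    G = nx.MultiDiGraph()
    G.add_edges_from(zip(cells[:-1], cells[1:]))
    return G
```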
#### ii.1.1 Connectivity
One of the fundamental properties of a network is its node connectivity (or degree distribution), and we study it for the seismic networks in Romania, Italy, California (USA), and Japan. As scale-free networks seem to be ubiquitous, the list of examples of such networks including the World Wide Web, citation networks, movie actor collaboration networks, cellular networks, etc. [40], we investigate here if seismic networks have the same property. We look at seismic networks as static objects and disregard their growth, even though one could potentially check the validity of the Barabasi model of preferential attachment [41] for the evolution of these networks using the available seismic data.
For the purpose of our study, a scale-free network is one that has a distribution of node connectivity of the form
\[P_{k}\sim k^{-\alpha} \tag{1}\]
with \(\alpha>1\). The value of \(\alpha\) ranges between \(1\) and \(3\) in most empirical examples.
#### ii.1.2 Fitting power-law distributed data
The proper way of fitting data which is suspected to have a power-law distribution has been a topic of discussion in the literature [35, 36]. From linear binning to logarithmic binning, it is shown also in Ref. [42] that binning can lead to inaccurate data-fitting results. Due to noise in the tail and the reduction of data by binning, fitting the slope on a logarithmic plot is known to introduce systematic biases to the value of the exponent. An alternative, more accurate method relies on using cumulative distribution functions and computing the \(\alpha\) exponents via maximum likelihood estimation method.
A cumulative distribution function (CDF) of a real-valued random variable \(X\) is given by
\[F_{X}(x)=P(X\leq x), \tag{2}\]
the right side of the equation representing the probability that the variable \(X\) takes on a value _less than or equal to \(x\)_.
In the case of data suspected to follow a power-law distribution, it is useful to study the opposite phenomenon, representing the data via a complementary cumulative distribution function (CCDF)
\[\hat{F}_{X}(x)=P(X>x)=1-F_{X}(x), \tag{3}\]
where the probability that the variable \(X\)_strictly exceeds_ the value \(x\) is depicted by the right side.
For a power-law PDF
\[p(x)=Cx^{-\alpha}, \tag{4}\]
where \(C\) is the normalization constant, the probability \(P(x)\) that the variable has a value greater than \(x\) is
\[P(x)=\int_{x}^{\infty}C(x^{\prime})^{-\alpha}dx^{\prime}=\frac{C}{\alpha-1}x^{ -(\alpha-1)}. \tag{5}\]
The preferred method for determining the \(\alpha\) exponent is the maximum likelihood estimation (MLE). For MLE we follow the numerical recipe provided in Refs. [35, 36] and use the Python powerlaw package presented in Ref. [36]. The exponent is found through the formula
\[\alpha =1+n\left(\sum_{i=1}^{n}\ln\frac{x_{i}}{x_{\mathrm{min}}}\right)^ {-1}, \tag{6}\] \[\sigma =\sqrt{n}\left(\sum_{i=1}^{n}\ln\frac{x_{i}}{x_{\mathrm{min}}} \right)^{-1}=\frac{\alpha-1}{\sqrt{n}}, \tag{7}\]
where \(x_{i}\) with \(i=1,...,n\) are the observed values of \(x\) such that \(x_{i}\geq x_{\mathrm{min}}\). The minimum value of \(x\), denoted as \(x_{\mathrm{min}}\), corresponds to the threshold below which the power-law behavior is observed.
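As a concrete illustration, eqs. 6 and 7 amount to only a few lines of code; the array-based interface below is our assumption.

```python
# Sketch of the MLE estimate of the power-law exponent (eqs. 6-7), given a
# 1D array of observations and a threshold xmin.
import numpy as np

def fit_alpha(data, xmin):
    tail = data[data >= xmin]
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))
    sigma = (alpha - 1.0) / np.sqrt(n)
    return alpha, sigma
```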
Finding the normalization constant \(C\) is done by calculating
\[1 =\int_{x_{\mathrm{min}}}^{\infty}p(x)dx=C\int_{x_{\mathrm{min}}}^ {\infty}x^{-\alpha}dx \tag{8}\] \[=\frac{C}{1-\alpha}x^{-\alpha+1}\Big{|}_{x_{\mathrm{min}}}^{ \infty},\]
which only holds for \(\alpha>1\) since otherwise, the right side of the equation would diverge. When \(\alpha>1\) we find that \(C=(\alpha-1)x_{\mathrm{min}}^{\alpha-1}\) and the normalized expression for the power law is
\[p(x)=\frac{\alpha-1}{x_{\mathrm{min}}}\left(\frac{x}{x_{\mathrm{min}}}\right)^ {-\alpha}. \tag{9}\]
#### ii.1.3 Estimating the quality of fit and \(x_{\mathrm{min}}\)
There is a variety of measures quantifying the distance between two probability distributions, but for data following non-normal distributions, the most common is the Kolmogorov-Smirnov (KS) statistic [43]
\[D=\max_{x\geq x_{\mathrm{min}}}|S(x)-P(x)|, \tag{10}\]
which represents the maximum distance between the CDFs of the data and the fitted model. Here \(S(x)\) is the CDF of the data for observations with values greater than or equal to \(x_{\mathrm{min}}\) and \(P(x)\) is the CDF of the power-law model that best fits the data in the region \(x\geq x_{\mathrm{min}}\).
Our estimate of the \(x_{\min}\) will be the one that minimizes the KS statistic.
As a first step in our analysis, we calculate how two parameters of the connectivity distribution, namely the \(\alpha\) exponent and \(x_{\min}\), vary with the cell size across different networks. To this end, we sweep numerically the \(L\)-interval from 0.5 to 20 in increments of 0.5. For each value of \(L\), we calculate \(\alpha\) and \(x_{\min}\) using the Maximum Likelihood Estimation (MLE), corresponding to each degree distribution. These results are then plotted against the corresponding \(L\) values, and the color gradient on the plot represents the quality of fit computed using the Kolmogorov-Smirnov distance. These plots are essential for our subsequent analyses as they reveal which cell sizes correspond to networks having scale-free like degree distributions. We then select a few cell lengths for each of the four seismic regions in focus to construct networks and determine the relevant distributions, namely the connectivity distributions and the distributions of three- and four-event motifs.
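A sketch of this sweep with the powerlaw package is given below; `degree_sequence(L)` is a hypothetical helper returning the node degrees of the network built with cell size \(L\), and the exact options passed to `powerlaw.Fit` in our toolbox may differ.

```python
# Sketch of the parameter sweep: for each cell size L, fit a power law to the
# degree distribution; powerlaw.Fit selects xmin by minimizing the KS distance.
import numpy as np
import powerlaw

for L in np.arange(0.5, 20.5, 0.5):
    degrees = degree_sequence(L)  # hypothetical helper, not defined here
    fit = powerlaw.Fit(degrees)
    print(L, fit.power_law.alpha, fit.xmin, fit.power_law.D)
```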
### Structure of motifs
A network motif is a subgraph or a cycle of length at least three, which is found to occur in real-world complex networks much more frequently than in their corresponding randomized counterparts [39]. Network motifs are specific patterns of local interconnections with potential functional properties, and can be seen as the basic building blocks of real-world networks. For example, motifs of length three, which we refer to as triangles, are highly recurrent in social and biological networks [44]. As another example, let us mention that network motifs are used extensively in molecular biology to study the complexity of protein-protein interaction or genetic regulatory networks [45]. Motifs are also used in social sciences to uncover hidden interactions between individuals [46]. Drawing from the existing literature on network motifs, we apply the method of motif discovery and analysis to our seismic networks. In this study, we focus the analysis on three- and four-node motifs, depicting triangles and tetrahedrons in real-world three-dimensional networks. The selected motifs offer significant insight into the structure of seismic networks, while at the same time being computationally tractable using the current seismic databases.
Motif analysis is often limited either by the small number of nodes of the network under scrutiny or by the large computing load that comes with motif discovery in big networks. In our study, we have seen both limitations, as the dataset for earthquakes with epicenters in Romania is rather small and the resulting statistic is rather poor, while in the case of Japan obtaining the results was computationally very demanding due to the large number of seismic events in the database. Different tools and methods have been developed to analyze motifs in complex networks, see, for instance, MODA [47] and Grochow-Kellis (GK) [48]. Currently, one of the best tools for motif discovery is Nemomap [49], an improved motif-centric algorithm, which builds upon MODA and GK, being more computationally efficient in mapping complex motif patterns. We use the Python code for Nemomap (see footnote 2) on our seismic networks to find the three- and four-node motifs. We note that for the motifs with four nodes, we search only for squares and then construct the tetrahedron by adding the corresponding edges.
Footnote 2: The code for NemoMapPy is publicly available at [https://github.com/zicanl/NemoMapPy](https://github.com/zicanl/NemoMapPy)
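NemoMap is the tool we actually use; purely to illustrate what three-node motif discovery amounts to, a brute-force enumeration of triangles in a small networkx graph could look like the sketch below (this is not the algorithm NemoMap implements, and it scales poorly with network size).

```python
# Naive O(n^3) enumeration of triangle motifs in an undirected graph.
import networkx as nx
from itertools import combinations

def triangles(G):
    for u, v, w in combinations(G.nodes, 3):
        if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w):
            yield (u, v, w)
```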
Since the networks depict real space in the seismic zone, triangles can define an area and tetrahedrons represent a volume. For our analysis, we calculate the areas and volumes of these motifs using each node's position in real space.
For triangles, given the position of nodes, we first calculate the lengths of the three sides (\(a\), \(b\), and \(c\)) and then compute the area using Heron's formula
\[A=\sqrt{s(s-a)(s-b)(s-c)},\quad s\equiv\frac{a+b+c}{2}. \tag{11}\]
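Formula (11) translates directly into code; the sketch below assumes node positions are available as 3D numpy arrays.

```python
# Triangle area from vertex positions via Heron's formula (eq. 11).
import numpy as np

def triangle_area(p1, p2, p3):
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    s = (a + b + c) / 2.0
    return np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
```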
For tetrahedrons, given the position of nodes, we calculate the lengths of the six sides (\(U,V,W,u,v,w\)), then use the 3D Heron formula to compute the volume
\[\begin{split} V=&\frac{1}{192uvw}\times[(s+r)^{2}-( p-q)^{2}]^{1/2}\\ &\times[(p+q)^{2}-(r-s)^{2}]^{1/2}\end{split} \tag{12}\]
where \(A\equiv(w-U+v)\cdot(U+v+w),B\equiv(u-V+w)\cdot(V+w+u),C\equiv(v-W+u)\cdot(W+u+v),a\equiv(U-v+w)\cdot(v-w+U),b\equiv(V-w+u)\cdot(w-u+V),c\equiv(W-u+v)\cdot(u- v+W),p\equiv\sqrt{a\cdot B\cdot C},q\equiv\sqrt{b\cdot C\cdot A},r\equiv\sqrt{c \cdot A\cdot B},s\equiv\sqrt{a\cdot b\cdot c}\).
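Since in our case the vertex coordinates themselves are known, the same volume can also be obtained from the triple product, which is a convenient cross-check of the edge-length formula (12); the sketch below is ours.

```python
# Tetrahedron volume directly from the four vertex positions.
import numpy as np

def tetrahedron_volume(p1, p2, p3, p4):
    return abs(np.dot(p2 - p1, np.cross(p3 - p1, p4 - p1))) / 6.0
```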
After computing these quantities for each of the motifs identified by NemoMap, we weight the areas of three-event motifs and the volumes of four-event motifs with the total energy released by the earthquakes in each motif. To compute the energy release, we use the magnitudes from the existing databases [31; 32; 33; 34]; we do not distinguish between different types of magnitudes (body-wave, s-wave, local magnitude, etc.) and calculate the energy release using the Gutenberg-Richter formula.
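A minimal sketch of the energy weighting is given below; the Gutenberg-Richter magnitude-energy relation \(\log_{10}E=11.8+1.5M\) (with \(E\) in ergs) is a standard choice of constants, which we assume here since the exact constants are not spelled out above.

```python
# Total energy released by the earthquakes of one motif, from their magnitudes.
def released_energy(magnitudes):
    return sum(10.0 ** (11.8 + 1.5 * m) for m in magnitudes)
```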
### Computational framework
We have developed a seismic networks toolbox, seismic-networks, publicly available on GitHub (see footnote 1), that contains a series of codes that can be used to automatically analyze the structure of seismic networks, _i.e._, network construction, connectivity distribution, and distributions of three- and four-event motifs. The codes work
with standardized seismic databases, which means that for most analyses, some small curation of the existing datasets is required. The standardized format used by our codes requires the events' timestamp, latitude, longitude, depth, and magnitude. With the help of the codes in the toolbox, the user can generate seismic networks using different discretization lengths \(L\) of the 3D real space, and can compute the connectivity distribution of the network using the powerlaw package to analyze the parameter dependency of \(\alpha\) and \(x_{\min}\) on the discretization length \(L\). To filter out micro-earthquakes, a magnitude cut-off \(M_{\min}=2,3\) is used for the distribution functions of three- and four-event motifs, which are identified using the NemoMapPy [50] (Python version of Nemomap) software.
For interoperability and open-source availability, the toolbox is written in Julia and Python. We applied the codes to the four different regions mentioned in Section I: Romania, Italy, California (USA), and Japan. For these regions, after collecting the databases and some minor cleaning and trimming, the remaining data is described in Table 1.
## III Results
### Connectivity analysis
After creating seismic networks for different discretizations of the seismic zone, _i.e._, different values of \(L\), we perform a parameter dependency analysis to determine the cell sizes \(L\) that correspond to seismic networks which best approximate the data.
\begin{table}
\begin{tabular}{c c c c c}
 & \(L\) & \(\alpha\) & \(\beta\left(M_{\min}\right)\) & \(\gamma\left(M_{\min}\right)\) \\ \hline
Romania & 3.0 & 3.25 & 1.38 (2) & N/A \\
 & 3.5 & 3.03 & 1.41 (2) & N/A \\
 & 4.5 & 3.09 & N/A & 1.71 (3) \\
 & 5.5 & 3.01 & N/A & 1.92 (3) \\ \hline
Italy & 4.0 & 1.97 & 1.54 (2) & 1.65 (2) \\
 & 4.5 & 1.95 & 1.52 (2) & 1.68 (2) \\ \hline
California & 1.0 & 3.04 & 1.53 (2) & 1.40 (2) \\
 & 2.0 & 2.51 & 1.65 (2) & 1.77 (2) \\ \hline
Japan & 2.5 & 2.16 & 1.37 (3) & 1.26 (3) \\
 & 3.0 & 2.13 & 1.49 (2) & 1.24 (3) \\
\end{tabular}
\end{table}
Table 2: Results on selected \(\alpha\) exponents for connectivity distributions, and \(\beta,\gamma\) exponents of the motif analysis (the magnitude cut-off \(M_{\min}\) is given in parentheses).
\begin{table}
\begin{tabular}{c c c c c c c}
 & Events & Timespan & Latitude & Longitude & Depth (km) & Magnitude \\ \hline
Romania & 31378 & 1977-03-04 — 2022-12-29 & 43.59 — 48.23 & 20.19 — 29.84 & 0.0 — 218.4 & 0.1 — 7.4 \\
Italy & 410234 & 1985-01-02 — 2023-03-08 & 35.50 — 47.08 & 6.61 — 18.51 & 0.0 — 644.4 & 0.1 — 6.5 \\
California & 784754 & 1977-01-02 — 2023-03-07 & 32.00 — 37.00 & -122.0 — -114.0 & 0.0 — 51.1 & 0.01 — 7.3 \\
Japan & 3422706 & 1995-01-01 — 2020-08-31 & 17.40 — 50.42 & 118.90 — 156.68 & 0.0 — 698.4 & 0.1 — 9.0 \\
\end{tabular}
\end{table}
Table 1: Information regarding the seismic data used for our analysis.
Figure 2: Parameter dependence of \(\alpha\) and \(x_{\min}\) on the cell size \(L\) for networks constructed for Romania, Italy, California, and Japan. The color gradient represents the Kolmogorov-Smirnov distance which measures the fit quality. A low KS value depicted by the blue color means a better fit, while a high value, depicted by red, means a poor fit.
These results are presented in Fig. 2, where we notice that for Romania and Italy, \(\alpha\) varies greatly for small values of \(L\), then settles at values around \(3\) and \(2\), respectively, as \(L\) increases. In California and Japan, the values of the \(\alpha\) exponent show less variation, ranging from \(1.5\) to \(3.5\). For all regions, the value of \(x_{\min}\) is smaller for low \(L\), except for California, where we see a substantial decrease in \(x_{\min}\) at high \(L\), but these fits are poor, as shown by the KS statistic. Fit quality can be easily determined with the help of the accompanying color gradient, good fits generally occurring for values of \(L\) around \(5\,\mathrm{km}\) or smaller. Our selection of the values of \(L\) for which we report detailed results is based both on the quality of the fit and the value of \(x_{\min}\), a small value of the latter being preferred as the resulting power-law component of the distribution function is larger.
Figure 3: Complementary cumulative distribution functions for the connectivity of three networks corresponding to different cube sizes \(L\) created with data from (a) Romania, (b) Italy, (c) California, and (d) Japan. The theoretical distribution is represented as a fit through the data, starting with the value \(x_{\min}\) that minimizes the Kolmogorov-Smirnov distance. The inset presents only the CCDFs and fits of the data that is considered to behave as a power law, see Eq. (1). Some of the distributions and fits are scaled by a factor depicted on the plot for better visualization.

We present detailed connectivity results for three cube lengths \(L\) for each seismic zone in Fig. 3. For Romania, in Fig. 3(a), we notice that robust power-law behavior is seen across only two orders of magnitude. Varying the cell length \(L\) does not produce a significant change to the exponent \(\alpha\). In California, Fig. 3(c), the selected cell sizes for which we obtain good results are smaller compared to the other regions. Robust power-law behavior is observed across three orders of magnitude. Increasing the cell size \(L\) induces a decrease in the \(\alpha\) exponent. For Italy and Japan, Fig. 3(b) and Fig. 3(d), the power-law behavior is seen over many orders of magnitude.
Figure 4: Complementary cumulative distribution functions for network motifs discovered in networks corresponding to different cube sizes \(L\) and magnitude thresholds \(M_{\min}\) in Romania. The distribution of triangle areas weighted by the total energy released in the motif is presented in panel (a), with an inset presenting only the power-law part of the distribution. The analysis for \(L=3.5\) is shifted by a factor of 10 for better visualization. The distribution of tetrahedron volumes weighted by the total energy is presented in panel (b), with an inset similar to the left panel.

Figure 5: Complementary cumulative distribution functions for network motifs discovered in networks corresponding to different cube sizes \(L\) and a magnitude threshold \(M_{\min}=2\) in Italy. Details are as in Fig. 4. In both panels, for \(L=4.5\) we shifted the distribution by a factor for better visualization.
In both regions, an increase in \(L\) produces a very small decrease in \(\alpha\). The KS error indicator is also small for these regions, indicating a very good fit.
### Motifs
We proceed with the discovery and analysis of the motifs for selected cell sizes \(L\), and we also employ a magnitude cut-off \(M_{\min}\) for the data. Details of the selected parameters are in Table 2.
Figure 6: Complementary cumulative distribution functions for network motifs discovered in networks corresponding to different cube sizes \(L\) and a magnitude threshold \(M_{\min}=2\) in California. The details are as in Fig. 4. Only the inset in panel (b) presents a shift of the distribution with \(L=2.0\) for better visualization.
Figure 7: Complementary cumulative distribution functions for network motifs discovered in networks corresponding to different cube sizes \(L\) and magnitude thresholds \(M_{\min}\) in Japan. The details are as in Fig. 4. In both panels, we shifted the distribution corresponding to \(L=3.0\) by a factor for better visualization.
For Romania, the results of our analyses are presented in Fig. 4. For triangles, Fig. 4(a), an increase in \(L\) generates a slight increase in the \(\beta\) exponent. For tetrahedrons, Fig. 4(b), we require a bigger magnitude cut-off, \(M_{\text{min}}=3\), and observe a similar increase in \(\gamma\) when \(L\) is larger, similarly to triangles. Please note that the reported results for \(\beta\) and \(\gamma\) come from different discretizations of the seismic region, a feature which is specific to Romania. This reflects the small size of the seismic database, but it can also reflect some hidden features of the Vrancea seismic zone. In California, Fig. 6, for both triangles and tetrahedrons, an increase in \(L\) induces an increase in the \(\beta\) and \(\gamma\) exponents. The power-law region extends over 5 orders of magnitude in the case of triangles, Fig. 6(a), and 3 orders of magnitude in the case of tetrahedrons, Fig. 6(b). In Italy, Fig. 5, for triangles, Fig. 5(a), the fits for slightly different \(L\) almost coincide, with a \(\beta\) exponent of \(1.52\). Similarly, in the case of tetrahedrons, Fig. 5(b), a small change in \(L\) generates a small variation of the \(\gamma\) exponent. In Japan, Fig. 7, for triangles, Fig. 7(a), a small increase in \(L\) induces a small increase in \(\beta\). The power-law region extends over many orders of magnitude. In the case of tetrahedrons, Fig. 7(b), we introduce a bigger magnitude cut-off, \(M_{\text{min}}=3\), to reduce the time needed to compute the motifs and eliminate from our results the parasitic component caused by small magnitude earthquakes. Please note that the fits almost coincide when changing \(L\) from \(2.5\) to \(3\) km. Our numerical results are summarized in Table 2, where we show selected values of \(\alpha\) for the connectivity distribution, as well as selected critical exponents for the distribution of motifs across the four seismic regions under scrutiny.
## IV Conclusion
We revisit the databases pertaining to earthquakes in Romania, Italy, and Japan, as well as the California seismic zone in the USA, and show that the seismic networks used to model these regions have remarkable properties. Largely independent of the discretization of the seismic zones under scrutiny, the seismic networks used to model them exhibit distributions of node connectivity, three-, and four-event motifs consisting of large power-law components, extending over some orders of magnitude. The analysis of node connectivity is standard, but for three- and four-event motifs, _i.e._, triangles and tetrahedrons, we consider the distributions of the areas and volumes, respectively, weighted in both cases by the total energy released by the earthquakes in the motif. Our analysis relies on maximum likelihood estimation (MLE) and complementary cumulative distribution functions (CCDF) and offers an accurate and rigorous image of the distributions. Our approach is complementary to that obtained from binning methods, which have the particularity that the results depend on the selection of bins. The main message conveyed by the aforementioned distributions is that they are not intrinsically power-laws, _i.e._, scale-free distributions, but include a power-law component that extends over some orders of magnitude.
Summing up, we have first identified by numerical means the discretization lengths \(L\) with which we construct networks whose connectivity distributions are well fitted by a power law. For each seismic zone, we analyzed the distributions for the three best cell lengths and found the \(\alpha\) exponent to range from \(1\) to \(\approx 3\). We note that these three best cell lengths differ from region to region. In California, for example, our analysis holds best towards smaller cell lengths, \(L=1.0,1.5,2.0\) km, whereas in the other zones, we find that it holds at around a cell size of \(L=5\,\)km. The results for California and Japan are consistent with the findings of [19], which we extend through our distributions of three- and four-event motifs, while for Romania and Italy, our results are new.
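The network construction itself is equally compact. The sketch below assumes the common recipe in which the seismic region is tiled with cubic cells of side \(L\) and the cells of temporally successive events are linked; details such as the treatment of repeated links or event windowing may differ from the exact construction used here, and all names are ours.

```python
import numpy as np
from collections import Counter

def build_seismic_network(hypocenters, L):
    """Tile the region with cubic cells of side L (km), map each event
    (given in temporal order) to its cell, and link the cells of
    successive events.  Returns the (multigraph) degree of each node."""
    cells = [tuple(np.floor(np.asarray(p) / L).astype(int)) for p in hypocenters]
    degree = Counter()
    for a, b in zip(cells[:-1], cells[1:]):
        if a != b:                     # skip self-loops inside one cell
            degree[a] += 1
            degree[b] += 1
    return degree

# Toy usage: 1000 synthetic hypocenters in a 100 km cube, L = 5 km
rng = np.random.default_rng(1)
events = rng.random((1000, 3)) * 100.0
degrees = build_seismic_network(events, L=5.0)
print(max(degrees.values()))
```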
The novelty of our analysis relies on the structure of seismic motifs, namely the distribution of three- and four-event motifs, which includes a significant power-law component of scaling exponents \(\beta\) and \(\gamma\), respectively. We show that these results hold against different discretization lengths and are therefore robust and not an artifact of the underlying discretization. The results presented above include a weighting of the surface of three-event motifs and the volume of the four-event motifs with the total released energy, but the results hold qualitatively also without applying the weights. We also observe that the results are impacted by the size of the available seismic datasets, with the most reliable network modeling of a seismic zone corresponding to the largest available datasets.
Our results reinforce the image of a seismic zone as a self-organized critical system, while at the same time offering a straightforward way of comparing different seismic zones through the aforementioned scaling exponents \(\alpha\), \(\beta\), and \(\gamma\). Moreover, the distribution of three- and four-event motifs can be used as a test bed for new models of seismic activity.
We expect future studies into seismic networks to focus on different spatial and temporal scales, potentially connected with the applicability of the fluctuation-dissipation theorem for seismic zones, as well as the study of different complex networks characteristics, such as clustering and community structures [44], which may reveal interesting correlations within the networks.
###### Acknowledgements.
We wish to acknowledge fruitful discussions with Virgil Baran. This work was supported by the Romanian Ministry of Research, Innovation and Digitalization under Romanian National Core Program LAPLAS VII - contract no. 30N/2023. The numerical simulations reported here were performed in the computing center of the Faculty of Physics of the University of Bucharest. |
2306.01659 | Existence of a global attractor for the compressible Euler equation in a
bounded interval | In this paper, we are concerned with the one-dimensional initial boundary
value problem for isentropic gas dynamics.
Through the contribution of great researchers such as Lax, P. D., Glimm, J.,
DiPerna, R. J. and Liu, T. P., the decay theory of solutions was established.
They treated with the Cauchy problem and the corresponding initial data have
the small total variation. On the other hand, the decay for initial data with
large oscillation has been open for half a century. In addition, due to the
reflection of shock waves at the boundaries, little is known for the decay of
the boundary value problem on a bounded interval.
Our goal is to prove the existence of a global attractor, which yields a
decay of solutions for large data. To construct approximate solutions, we
introduce a modified Godunov scheme. | Yun-guang Lu, Okihiro Sawada, Naoki Tsuge | 2023-06-02T16:28:25Z | http://arxiv.org/abs/2306.01659v1 | # Existence of a global attractor for the compressible Euler equation in a bounded interval
###### Abstract.
In this paper, we are concerned with the one-dimensional initial boundary value problem for isentropic gas dynamics.
Through the contribution of great researchers such as Lax, P. D., Glimm, J., DiPerna, R. J. and Liu, T. P., the decay theory of solutions was established. They dealt with the Cauchy problem, where the corresponding initial data have small total variation. On the other hand, the decay for initial data with large oscillation has been open for half a century. In addition, due to the reflection of shock waves at the boundaries, little is known about the decay for the boundary value problem on a bounded interval.
Our goal is to prove the existence of a global attractor, which yields a decay of solutions for large data. To construct approximate solutions, we introduce a modified Godunov scheme.
Key words and phrases: the compressible Euler equation, global attractor, decay estimates, the compensated compactness, the modified Godunov scheme.
2020 Mathematics Subject Classification: Primary 35B41, 35L03, 35L65, 35Q31, 76N10, 76N15; Secondary 35A01, 35B35, 35B50, 35L60, 76M20.
Y.-G. Lu's research is partially supported by the NSFC grant No. 12071106 of China. N. Tsuge's research is partially supported by Grant-in-Aid for Scientific Research (C) 17K05315, Japan.
## 1. Introduction
_hold for any test function \(\varphi_{1},\varphi_{2}\in C^{1}_{0}([0,1]\times[0,\infty))\) satisfying \(\varphi_{2}(0,t)=\varphi_{2}(1,t)=0\) and_
\[\int_{0}^{1}\int_{0}^{\infty}\!\!\eta(u)\psi_{t}+q(u)\psi_{x}dxdt\geq 0 \tag{1.4}\]
_holds for any non-negative test function \(\psi\in C^{1}_{0}((0,1)\times(0,\infty))\), where \((\eta,q)\) is a pair of convex entropy-entropy flux of (1.1)._
We set \(\bar{\rho}=\int_{0}^{1}\rho_{0}(x)dx\). Since \(\bar{\rho}=0\) implies that the initial data becomes vacuum, we assume \(\bar{\rho}>0\). In addition, we define the mechanical energy as
\[\eta_{*}(u)=\frac{1}{2}\frac{m^{2}}{\rho}+\frac{1}{\gamma(\gamma-1)}\rho^{ \gamma}.\]
We choose a positive constant \(\mu\) small enough. We then set
\[\begin{split}&\bar{\eta}=\int_{0}^{1}\eta(u_{0}(x))dx+\mu,\quad \nu=\frac{3\gamma-1}{\gamma+1}\frac{\bar{\eta}}{\bar{\rho}},\quad K=\bar{\rho }\nu-\bar{\eta},\\ & M_{\infty}=\frac{4}{3\gamma-1}\left(\frac{2\gamma^{2}(\gamma-1 )}{3\gamma-1}\right)^{\frac{\gamma+1}{2(\gamma-1)}}\frac{\nu^{\frac{3\gamma- 1}{2(\gamma-1)}}}{\bar{\rho}\nu-\bar{\eta}}+\frac{2\left(\nu\bar{\rho}+\bar{ \eta}+K\right)}{\gamma-1}.\end{split} \tag{1.5}\]
We note that \(K>0\), choosing \(\mu\) small enough if necessary.
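The constants in (1.5) are elementary to evaluate numerically. The following sketch makes the positivity of \(K\) explicit; the values chosen for \(\gamma\), \(\bar{\rho}\), \(\int_{0}^{1}\eta(u_{0})dx\) and \(\mu\) are illustrative only, and the helper is ours.

```python
def constants(gamma, rho_bar, eta0, mu):
    """Evaluate nu, K and M_infty from (1.5); eta0 stands for the integral
    of eta(u_0) over [0, 1], and mu is the small positive constant."""
    eta_bar = eta0 + mu
    nu = (3.0*gamma - 1.0) / (gamma + 1.0) * eta_bar / rho_bar
    K = rho_bar * nu - eta_bar                     # positive for gamma > 1
    base = 2.0*gamma**2*(gamma - 1.0) / (3.0*gamma - 1.0)
    M_inf = (4.0/(3.0*gamma - 1.0)) \
        * base**((gamma + 1.0)/(2.0*(gamma - 1.0))) \
        * nu**((3.0*gamma - 1.0)/(2.0*(gamma - 1.0))) / K \
        + 2.0*(nu*rho_bar + eta_bar + K) / (gamma - 1.0)
    return nu, K, M_inf

print(constants(gamma=1.4, rho_bar=1.0, eta0=0.5, mu=1e-3))
```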
Moreover, for any fixed positive constant \(\varepsilon^{\prime}\), we define
\[\tilde{z}(x,t)=z(x,t)-\varepsilon^{\prime}\int_{0}^{x}\zeta(u(y,t))dy,\;\tilde {w}(x,t)=w(x,t)-\varepsilon^{\prime}\int_{0}^{x}\zeta(u(y,t))dy, \tag{1.6}\]
where
\[\zeta(u)=\eta_{*}(u)-\nu\rho+K. \tag{1.7}\]
From the conservation of mass and the energy inequality, we find that
\[\begin{split}\int_{0}^{x}\zeta(u(y,t))dy\leq&\int_ {0}^{x}\left\{\eta_{*}(u(y,t))+\nu\rho(y,t)+K\right\}dy\\ \leq&\int_{0}^{1}\left\{\eta_{*}(u(x,t))+\nu\rho(x,t )+K\right\}dy\\ \leq&\int_{0}^{1}\left\{\eta_{*}(u_{0}(x))+\nu\rho_{ 0}(x)+K\right\}dy.\end{split}\]
Our main theorem is as follows.
**Theorem 1.1**.: _We assume that_
\[\rho_{0}(x)\geq 0\quad a.e.\;x\in I,\quad\rho_{0}\in L^{\infty}(I),\quad\frac {m_{0}}{\rho_{0}}\in L^{\infty}(I). \tag{1.8}\]
_Then, there exists a global entropy weak solution of the initial boundary value problem (1.3). Moreover, for any positive constant \(\varepsilon\), there exists a positive constant \(t_{0}\) such that the solution satisfies_
\[\begin{split}&-M_{\infty}-\varepsilon\leq\tilde{z}(x,t),\quad \tilde{w}(x,t)\leq M_{\infty}+\varepsilon,\quad\rho(x,t)\geq 0,\\ & a.e.\;(x,t)\in I\times[t_{0},\infty),\end{split} \tag{1.9}\]
_where \(t_{0}\) depends only on \(\varepsilon,\;\varepsilon^{\prime}\) and the bound of initial data._
For simplicity, we set \(\varepsilon^{\prime}=1\) hereafter.
**Remark 1.2**.: _In this remark, we state the important conditions necessary to construct an invariant region at the boundary. This condition will be used in Section 3 (3.8)._
_We choose a positive constant \(M_{0}\) such that_
\[-M_{0}+\int_{0}^{x}\zeta(u_{0}(y))dy\leq z(u_{0}(x)),\;w(u_{0}(x)) \leq M_{0}+\int_{0}^{x}\zeta(u_{0}(y))dy. \tag{1.10}\]
_Then, in the proof of Theorem 1.1, we will observe that there exists a continuous function \(\mathcal{M}(t)\) such that \(\mathcal{M}(0)=M_{0}\), \(\mathcal{M}(t_{0})=M_{\infty}\) and_
\[-\mathcal{M}(t)+\int_{0}^{x}\zeta(u(y,t))dy\leq z(u(x,t)),\;w(u(x, t))\leq\mathcal{M}(t)+\int_{0}^{x}\zeta(u(y,t))dy. \tag{1.11}\]
_Let the lower and upper bounds in (1.11) be_
\[L(x,t;u)=-\mathcal{M}(t)+\int_{0}^{x}\zeta(u(y,t))dy,\;U(x,t;u)= \mathcal{M}(t)+\int_{0}^{x}\zeta(u(y,t))dy,\]
_respectively. Then we notice that_
\[-L(0,t;u)\leq U(0,t;u),\;-L(1,t;u)\geq U(1,t;u). \tag{1.12}\]
_In fact, the former is clear. The latter is from (1.5) and the energy inequality deduced as follows._
\[L(1,t;u)+U(1,t;u)= 2\int_{0}^{1}\left\{\eta_{*}(u(x,t))-\nu\rho(x,t)+K\right\}dx\] \[= 2\int_{0}^{1}\left\{\eta_{*}(u(x,t))-\nu\rho(x,t)+\nu\bar{\rho}- \bar{\eta}\right\}dx\] \[= 2\int_{0}^{1}\left\{(\eta_{*}(u(x,t))-\eta_{*}(u_{0}(x)))-\nu \left(\rho(x,t)-\bar{\rho}\right)-\mu\right\}dx\] \[\leq -2\mu. \tag{1.13}\]
(1.12) _is a necessary condition for (1.11) to hold with the boundary data \(m=0\)._
### Outline of the proof (formal argument)
The proof of the main theorem is somewhat involved. Therefore, before proceeding to the details, let us capture the essence of the main estimate by a formal argument. We assume that a solution is smooth and the density is nonnegative in this section.
We consider the physical region \(\rho\geq 0\) (i.e., \(w\geq z\).). Recalling Remark 1.1, it suffices to derive the lower bound of \(z(u)\) and the upper bound of \(w(u)\) to obtain the bound of \(u\). To do this, we diagonalize (1.1). If solutions are smooth, we deduce from (1.1)
\[z_{t}+\lambda_{1}z_{x}=0,\quad w_{t}+\lambda_{2}w_{x}=0, \tag{1.14}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are the characteristic speeds defined as follows
\[\lambda_{1}=v-\rho^{\theta},\quad\lambda_{2}=v+\rho^{\theta}. \tag{1.15}\]
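For numerical experiments it is convenient to pass between the conservative variables \((\rho,m)\) and the Riemann invariants \((z,w)\). The sketch below assumes the standard normalization \(\theta=(\gamma-1)/2\), so that the sound speed equals \(\rho^{\theta}\), consistently with (1.15) and with the inversion formula used in Section 2; the function names are ours.

```python
import numpy as np

GAMMA = 1.4
THETA = (GAMMA - 1.0) / 2.0   # standard normalization: sound speed = rho**THETA

def to_riemann(rho, m):
    """(rho, m) -> Riemann invariants (z, w), with v = m/rho."""
    v = m / rho
    return v - rho**THETA / THETA, v + rho**THETA / THETA

def from_riemann(z, w):
    """Invert: rho = (THETA*(w - z)/2)**(1/THETA), v = (w + z)/2."""
    rho = (THETA * (w - z) / 2.0) ** (1.0 / THETA)
    return rho, rho * (w + z) / 2.0

def char_speeds(rho, v):
    """lambda_1 = v - rho**THETA, lambda_2 = v + rho**THETA, as in (1.15)."""
    return v - rho**THETA, v + rho**THETA

# Round-trip check
z, w = to_riemann(0.8, 0.2)
print(from_riemann(z, w))     # ~ (0.8, 0.2)
```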
We introduce \(\tilde{z},\tilde{w},\tilde{\rho},\tilde{v},\tilde{\lambda}_{1},\tilde{\lambda}_{2}\) as follows.
\[\begin{split} z&=\tilde{z}+\int_{0}^{x}\left\{\eta_{* }(u)-\nu\rho+K\right\}dy,\quad w=\tilde{w}+\int_{0}^{x}\left\{\eta_{*}(u)-\nu \rho+K\right\}dy,\\ \tilde{\rho}&=\left(\frac{\theta(\tilde{w}-\tilde{z} )}{2}\right)^{1/\theta},\quad\tilde{v}=\frac{\tilde{w}+\tilde{z}}{2},\quad \tilde{\lambda}_{1}=\tilde{v}-\tilde{\rho}^{\theta},\quad\tilde{\lambda}_{2}= \tilde{v}+\tilde{\rho}^{\theta}.\end{split} \tag{1.16}\]
We denote the flux of \(\eta_{*}(u)\) by
\[q_{*}(u)=m\left(\frac{1}{2}\frac{m^{2}}{\rho^{2}}+\frac{\rho^{\gamma-1}}{ \gamma-1}\right). \tag{1.17}\]
Then, from (1.1), it holds that
\[\left(\eta_{*}(u)\right)_{t}+\left(q_{*}(u)\right)_{x}=0. \tag{1.18}\]
For \(\delta=\theta K\varepsilon/2\), we define \(\hat{z}=\tilde{z}-\delta t,\ \hat{w}=\tilde{w}+\delta t\). We then deduce from (1.14) and (1.18) that
\[\hat{z}_{t}+\lambda_{1}\hat{z}_{x}=g_{1}(x,t,u),\quad\hat{w}_{t}+\lambda_{2} \hat{w}_{x}=g_{2}(x,t,u), \tag{1.19}\]
where
\[\begin{split} g_{1}(x,t,u)=&-K\lambda_{1}+\frac{1}{ \gamma(\gamma-1)}\rho^{\gamma+\theta}+\frac{1}{\gamma}\rho^{\gamma}v+\frac{1}{ 2}\rho^{\theta+1}v^{2}-\nu\rho^{\theta+1}-\delta,\\ g_{2}(x,t,u)=&-K\lambda_{2}-\frac{1}{\gamma(\gamma- 1)}\rho^{\gamma+\theta}+\frac{1}{\gamma}\rho^{\gamma}v-\frac{1}{2}\rho^{ \theta+1}v^{2}+\nu\rho^{\theta+1}+\delta.\end{split} \tag{1.20}\]
On the other hand, we notice that
\[-M_{0}\leq\hat{z}_{0}(x),\ \hat{w}_{0}(x)\leq M_{0}.\]
Our goal is to prove that
\[\hat{S}_{inv}=\{(\hat{z},\hat{w})\in\mathbf{R}^{2};-M_{0}\leq\hat{z},\ \hat{w}\leq M_{0}\} \tag{1.21}\]
is an invariant region for \(0\leq t\leq t_{0}\), where \(t_{0}=\max\{(M_{0}-M_{\infty}-\varepsilon)/\delta,0\}\).
We consider the case where \(M_{0}>M_{\infty}+\varepsilon\). To achieve this, assuming that
\[-M_{0}<\hat{z}_{0}(x),\ \hat{w}_{0}(x)<M_{0}\]
and there exist \(x_{*}\in(0,1),\ 0<t_{*}\leq t_{0}\) such that (1.22) or (1.23) holds, we shall deduce a contradiction, where
\[\begin{split}&-M_{0}<\hat{z}(x,t),\ \hat{w}(x,t)<M_{0},\quad x\in(0,1),\ 0\leq t<t_{*}\\ &\text{and}\quad\hat{z}(x_{*},t_{*})=-M_{0},\ \hat{w}(x_{*},t_{*})\leq M_{0},\end{split} \tag{1.22}\]
\[\begin{split}&-M_{0}<\hat{z}(x,t),\ \hat{w}(x,t)<M_{0},\quad x\in(0,1),\ 0\leq t<t_{*}\\ &\text{and}\quad-M_{0}\leq\hat{z}(x_{*},t_{*}),\ \hat{w}(x_{*},t_{*})=M_{0}.\end{split} \tag{1.23}\]
To do this, we prove
\[g_{1}(x_{*},t_{*},u)>0,\text{ when (1.22) holds,} \tag{1.24}\] \[g_{2}(x_{*},t_{*},u)<0,\text{ when (1.23) holds.} \tag{1.25}\]
From (1.16), we notice \(\tilde{\rho}=\rho\). We thus obtain
\[\frac{(\rho(x,t))^{\theta}}{\theta}= \frac{(\tilde{\rho}(x,t))^{\theta}}{\theta}=\frac{\tilde{w}(x,t)- \tilde{z}(x,t)}{2}=\frac{\hat{w}(x,t)-\hat{z}(x,t)-2\delta t}{2}\leq M_{0}-\delta t \tag{1.26}\]
and observe
\[\begin{split}\lambda_{1}&=z+\frac{3-\gamma}{\gamma-1} \rho^{\theta}=\tilde{z}+\int_{0}^{x}\zeta(u)dx+\frac{3-\gamma}{\gamma-1}\rho^{ \theta},\\ \lambda_{2}&=w-\frac{3-\gamma}{\gamma-1}\rho^{\theta} =\tilde{w}+\int_{0}^{x}\zeta(u)dx-\frac{3-\gamma}{\gamma-1}\rho^{\theta}.\end{split} \tag{1.27}\]
For \((x,t)=(x_{*},t_{*})\), since \(M_{0}-\delta t_{*}\geq M_{0}-\delta t_{0}=M_{\infty}+\varepsilon\), recalling \(\delta=\theta K\varepsilon/2\) and (1.5), we deduce from (1.26) and (1.27) that
\[\begin{split} g_{1}(x,t,u)=&-K\left(\tilde{z}+\int_ {0}^{x}\zeta(u)dx+\frac{3-\gamma}{\gamma-1}\rho^{\theta}\right)+\frac{\rho^{ \theta+1}}{2}\left(v+\frac{\rho^{\theta}}{\gamma}\right)^{2}+\frac{\gamma+1}{ 2\gamma^{2}(\gamma-1)}\rho^{\gamma+\theta}\\ &-\nu\rho^{\theta+1}-\delta\\ \geq& K\left(M_{0}-\delta t_{*}\right)-K\left(\nu \bar{\rho}+\bar{\eta}+K\right)-\frac{3-\gamma}{\gamma-1}\theta K\left(M_{0}- \delta t_{*}\right)\\ &+\min_{\rho}\left\{\frac{\gamma+1}{2\gamma^{2}(\gamma-1)}\rho^{ \gamma+\theta}-\nu\rho^{\theta+1}\right\}-\delta\\ \geq&\theta K\left(M_{\infty}+\varepsilon\right)-K \left(\nu\bar{\rho}+\bar{\eta}+K\right)\\ &+\frac{2(\gamma-1)}{3\gamma-1}\left(\frac{2\gamma^{2}(\gamma-1)} {3\gamma-1}\right)^{\frac{\gamma+1}{2(\gamma-1)}}\nu^{\frac{3\gamma-1}{2( \gamma-1)}}-\delta\\ =&\delta\\ >& 0.\end{split} \tag{1.28}\]
On the other hand, since \(\hat{z}\) attains its minimum at \((x,t)=(x_{*},t_{*})\), we find that \(\hat{z}_{t}(x_{*},t_{*})\leq 0,\ \hat{z}_{x}(x_{*},t_{*})=0\). Then, from \((1.19)_{1}\), we obtain \(g_{1}(x_{*},t_{*},u)=\hat{z}_{t}(x_{*},t_{*})+\lambda_{1}\hat{z}_{x}(x_{*},t_{*})\leq 0\), which contradicts (1.24). The case (1.23) is treated similarly by using (1.25). We thus conclude that \(\hat{S}_{inv}\) in (1.21) is an invariant region for \(0\leq t\leq t_{0}\).
(1.21) implies that \((\tilde{z}(x,t_{0}),\tilde{w}(x,t_{0}))\) is contained in
\[\tilde{S}_{inv}=\{(\tilde{z},\tilde{w})\in\mathbf{R}^{2};-M_{\infty}-\varepsilon \leq\tilde{z},\ \tilde{w}\leq M_{\infty}+\varepsilon\}.\]
In addition, we find that \(\tilde{S}_{inv}\) is an invariant region in a similar manner to (1.21). Therefore, we can prove (1.9).
The present paper is organized as follows. In Section 2, we construct approximate solutions by the Godunov scheme mentioned above. In Section 3, we derive the bound and the decay estimate of our approximate solutions.
## 2. Construction of Approximate Solutions
In this section, we construct approximate solutions. In the strip \(0\leq t\leq\llbracket T\rrbracket+1\) for any fixed positive constant \(T\), we denote these approximate solutions by \(u^{\Delta}(x,t)=(\rho^{\Delta}(x,t),m^{\Delta}(x,t))\), where \(\llbracket T\rrbracket\) is the greatest integer not greater than \(T\). For \(N_{x}\in\mathbf{N}\), we define the space mesh lengths by \(\Delta x=1/(2N_{x})\). We take time mesh length \(\Delta t\) such that
\[\frac{\Delta x}{\Delta t}=2\llbracket\max\{M_{0},M_{\infty}+ \varepsilon\}+\bar{\eta}+\nu\bar{\rho}+K\rrbracket+1, \tag{2.1}\]
where \(\llbracket x\rrbracket\) is the greatest integer not greater than \(x\). Then we define \(N_{t}=(\llbracket T\rrbracket+1)/(2\Delta t)\in\mathbf{N}\). In addition, we set
\[(j,n)\in\mathbf{N}_{x}\times\mathbf{N}_{t},\]
where \(\mathbf{N}_{x}=\{1,3,5,\ldots,2N_{x}-1\}\) and \(\mathbf{N}_{t}=\{0,1,2,\ldots,2N_{t}\}\). For simplicity, we use the following terminology
\[x_{j}=j\Delta x,\ t_{n}=n\Delta t,\ t_{n.5}=\left(n+\frac{1}{2} \right)\Delta t,\ t_{n-}=n\Delta t-0,\ t_{n+}=n\Delta t+0. \tag{2.2}\]
First we define \(u^{\Delta}(x,-0)\) by \(u^{\Delta}(x,-0)=u_{0}(x)\). Then, for \(j\in\mathbf{N}_{x}\), we denote \(E_{j}^{0}(u)\) by
\[E_{j}^{0}(u)=\frac{1}{2\Delta x}\int_{x_{j-1}}^{x_{j+1}}u^{\Delta}(x,-0)dx.\]
Next, assume that \(u^{\Delta}(x,t)\) is defined for \(t<t_{n}\). Then, for \(j\in\mathbf{N}_{x}\), we denote \(E_{j}^{n}(u)\) by
\[E_{j}^{n}(u)=\frac{1}{2\Delta x}\int_{x_{j-1}}^{x_{j+1}}u^{\Delta}(x,t_{n-})dx.\]
Let \(E^{n}(x;u)\) be a piecewise constant function defined by
\[E^{n}(x;u)=E_{j}^{n}(u),\ \ x\in[x_{j-1},x_{j+1})\ \ \ (j\in\mathbf{N}_{x}).\]
To define \(u_{j}^{n}=(\rho_{j}^{n},m_{j}^{n})\) for \(j\in\mathbf{N}_{x}\), we first determine symbols \(I_{j}^{n}\) and \(L_{n}\). Let the approximation of \(\zeta(u)\) be
\[I_{j}^{n}:=\int_{0}^{x_{j-1}}\zeta(E^{n}(x;u))dx+\frac{1}{2}\int _{x_{j-1}}^{x_{j+1}}\zeta(E^{n}(x;u))dx=\int_{0}^{x_{j}}\zeta(E^{n}(x;u))dx,\]
where \(\zeta\) is defined in (1.7).
Let \(\mathcal{D}=(x(t),t)\) denote a discontinuity in \(u^{\Delta}(x,t)\), and let \([\eta_{*}]\) and \([q_{*}]\) denote the jumps of \(\eta_{*}(u^{\Delta}(x,t))\) and \(q_{*}(u^{\Delta}(x,t))\) across \(\mathcal{D}\) from left to right, respectively,
\[[\eta_{*}] =\eta_{*}(u^{\Delta}(x(t)+0,t))-\eta_{*}(u^{\Delta}(x(t)-0,t)),\] \[[q_{*}] =q_{*}(u^{\Delta}(x(t)+0,t))-q_{*}(u^{\Delta}(x(t)-0,t)),\]
where \(q_{*}(u)\) is defined in (1.17).
To measure the error in the entropy condition and the gap of the energy at \(t_{n\pm}\), we introduce the following functional.
\[\begin{split} L_{n}=&\int_{0}^{t_{n}}\sum_{0\leq x \leq 1}\left(\sigma[\eta_{*}]-[q_{*}]\right)dt+\sum_{k=0}^{n}\int_{0}^{1} \left\{\eta_{*}(u^{\Delta}(x,t_{k-0}))-\eta_{*}(E^{k}(x;u))\right\}dx\\ &+\sum_{k=0}^{n}\sum_{j\in J_{k}}\frac{1}{2\Delta x}\int_{x_{j-1 }}^{x_{j+1}}\int_{x_{j-1}}^{x}R_{j}^{k}(y)dydx,\end{split} \tag{2.3}\]
where
\[\begin{split} R_{j}^{n}(x)=&\int_{0}^{1}(1-\tau) \cdot{}^{t}\left(u^{\Delta}(x,t_{n-})-E^{n}(x;u)\right)\\ &\times\nabla^{2}\eta_{*}\left(E^{n}(x;u)+\tau\left\{u^{\Delta}(x,t_{n-})-E^{n}(x;u)\right\}\right)\left(u^{\Delta}(x,t_{n-})-E^{n}(x;u)\right) d\tau\end{split}\]
and the summation in \(\sum_{0\leq x\leq 1}\) is taken over all discontinuities in \(u^{\Delta}(x,t)\) at a fixed time \(t\) over \(x\in[0,1]\), \(\sigma\) is the propagating speed of the discontinuities.
From the entropy condition, \(\sigma[\eta_{*}]-[q_{*}]\geq 0\). From the Jensen inequality, \(\int_{0}^{1}\left\{\eta_{*}(u^{\Delta}(x,t_{n-0}))-\eta_{*}(E^{n}(x;u)) \right\}dx\geq 0\). Therefore, we find that \(L_{n}\geq 0\).
Using \(I_{j}^{n}\) and \(L_{n}\), we define \(u_{j}^{n}\) as follows. First, we define a sequence \(\left\{M_{n}\right\}_{n\in\mathbf{N}_{t}}\) with the initial term \(M_{0}\) as follows.
\[M_{n+1}=\begin{cases}M_{n}-\delta\Delta t,&\text{when }M_{n}+L_{n}\geq M_{ \infty}+\varepsilon,\\ M_{n},&\text{when }M_{n}+L_{n}<M_{\infty}+\varepsilon.\end{cases} \tag{2.4}\]
We notice that \(M_{n}+L_{n}\geq M_{\infty}+\varepsilon-\delta\Delta t\).
Next, we choose \(\beta\) such that \(1<\beta<1/(2\theta)\). If
\[E_{j}^{n}(\rho):=\frac{1}{2\Delta x}\int_{x_{j-1}}^{x_{j+1}}\rho^{\Delta}(x,t _{n-})dx<(\Delta x)^{\beta},\]
we define \(u_{j}^{n}\) by \(u_{j}^{n}=(0,0)\); otherwise, setting
\[z_{j}^{n}:=\max\left\{z(E_{j}^{n}(u)),\ -M_{n}-L_{n}+I_{j}^{n}\right\},\ w_{j}^{n }:=\min\left\{w(E_{j}^{n}(u)),\ M_{n}+L_{n}+I_{j}^{n}\right\}, \tag{2.5}\]
we define \(u_{j}^{n}\) by
\[u_{j}^{n}:=(\rho_{j}^{n},m_{j}^{n}):=(\rho_{j}^{n},\rho_{j}^{n}v_{j}^{n}):= \left(\left\{\frac{\theta(w_{j}^{n}-z_{j}^{n})}{2}\right\}^{1/\theta},\left\{ \frac{\theta(w_{j}^{n}-z_{j}^{n})}{2}\right\}^{1/\theta}\frac{w_{j}^{n}+z_{j} ^{n}}{2}\right).\]
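In code, the projection-with-cutoff step defined by (2.5) and the formula above reads roughly as follows. This is a sketch only: the arguments `lower` and `upper` stand for \(-M_{n}-L_{n}+I_{j}^{n}\) and \(M_{n}+L_{n}+I_{j}^{n}\), whose computation we do not repeat, and the numerical values are illustrative.

```python
GAMMA = 1.4
THETA = (GAMMA - 1.0) / 2.0
BETA = 1.1          # any 1 < beta < 1/(2*THETA); hypothetical choice

def project_cell(rho_avg, m_avg, lower, upper, dx):
    """One Godunov projection with the cutoff (2.5).
    (rho_avg, m_avg) is the cell average E_j^n(u)."""
    if rho_avg < dx**BETA:                       # near-vacuum cells are zeroed
        return 0.0, 0.0
    v = m_avg / rho_avg
    z = max(v - rho_avg**THETA / THETA, lower)   # cut off z from below
    w = min(v + rho_avg**THETA / THETA, upper)   # cut off w from above
    if w <= z:                                   # cutoffs crossed: vacuum
        return 0.0, 0.0
    rho = (THETA * (w - z) / 2.0) ** (1.0 / THETA)
    return rho, rho * (w + z) / 2.0

print(project_cell(0.5, 0.1, lower=-2.0, upper=2.0, dx=1e-3))
```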
**Remark 2.1**.: We find
\[-M_{n}-L_{n}+I_{j}^{n}\leq z(u_{j}^{n}),\quad w(u_{j}^{n})\leq M_{n}+L_{n}+I_ {j}^{n}. \tag{2.6}\]
This implies that we cut off the parts where \(z(E_{j}^{n}(u))<-M_{n}-L_{n}+I_{j}^{n}\) and \(w(E_{j}^{n}(u))>M_{n}+L_{n}+I_{j}^{n}\) in defining \(z(u_{j}^{n})\) and \(w(u_{j}^{n})\).
We must construct our approximate solutions \(u^{\Delta}(x,t)\) near the boundary and in an interior domain. Here we recall (1.12). This condition is necessary for the inequality (1.11) to hold even if a shock wave appears at the boundary \(x=1\). For this reason, we concentrate on the construction near the boundary \(x=1\). For the construction in the interior domain, refer to [19].
We then assume that approximate solutions \(u^{\Delta}(x,t)\) are defined in domains \(D_{1}:t<t_{n}\quad(n\in\mathbf{N}_{t})\) and \(D_{2}:x<x_{2N_{x}-1},\;t_{n}\leq t<t_{n+1}\). By using \(u_{j}^{n}\) defined above and \(u^{\Delta}(x,t)\) defined in \(D_{2}\), we construct the approximate solutions in the cell \(D:\;t_{n}\leq t<t_{n+1}\quad(n\in\mathbf{N}_{t}),\;x_{2N_{x}-1}\leq x<x_{2N_{x}}\).
We denote \(u_{2N_{x}-1}^{n}\) by \(u_{-}=(\rho_{-},m_{-})=(\rho_{-},\rho_{-}v_{-})\) and solve the Riemann initial boundary value problem (1.1) and
\[u|_{t=t_{n}}=u_{-},\quad m|_{x=1}=0 \tag{2.7}\]
in \(D\). We draw a diagram by using the wave curve of the first family and the vacuum as follows (see [11] and Figure 1):
1. If \(\rho_{-}>0\) and \(v_{-}\geq 0\), there exists \(u_{+}=(\rho_{+},m_{+})=(\rho_{+},\rho_{+}v_{+})\) with \(v_{+}=0\) to which \(u_{-}\) is connected by a 1-shock curve.
2. If \(v_{-}\leq 0\) and \(w(u_{-})\geq 0\), then there exists \(u_{+}\) with \(v_{+}=0\) to which \(u_{-}\) is connected by a 1-rarefaction curve.
3. If \(v_{-}\leq 0\) and \(w(u_{-})\leq 0\), then there exists \(u_{*}\) with \(\rho_{*}=0\) to which \(u_{-}\) is connected by a 1-rarefaction curve, and \(u_{*}\) and \(u_{+}\) with \(\rho_{+}=v_{+}=0\) are connected by the vacuum.
4. If \(v_{-}\geq 0\) and \(\rho_{-}=0\), then \(u_{+}\) with \(\rho_{+}=v_{+}=0\) is connected to \(u_{-}\) by the vacuum.
Figure 1. Solutions of the Riemann initial boundary value problem (1.1) and (2.7) in the \((z,w)\)-plane.
### Case 1: the case where a shock wave arises
In this case, the solution \(u_{\rm R}(x,t)\) of (1.1) and (2.7) is as follows.
\[u_{\rm R}(x,t)=\begin{cases}u_{-},&D\cap\left\{x-x_{2N_{x}}\leq\sigma_{s}(t-t_{ n})\right\},\\ u_{+},&D\cap\left\{x-x_{2N_{x}}>\sigma_{s}(t-t_{n})\right\},\end{cases}\]
where \(\sigma_{s}\) is the speed of 1-shock wave.
We next replace the above constant states \(u_{-},\ u_{+}\) with functions of \(x\) and \(t\) as follows:
In view of (1.16), we construct \(u_{-}^{\Delta}(x,t)\). We first determine the approximation of \(\tilde{z},\tilde{w}\) in (1.16) as follows.
\[\tilde{z}_{-}^{\Delta}= z_{-}-\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx,\ \tilde{w}_{-}^{\Delta}=w_{-}-\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx,\]
where \(u_{n,0}^{\Delta}(x)\) is a piecewise constant function defined by
\[u_{n,0}^{\Delta}(x)=u_{j}^{n},\ \ \ x\in[x_{j-1},x_{j+1})\ \ \ (j\in{\bf N}_{x}). \tag{2.8}\]
We set
\[\begin{split}\tilde{z}_{-}^{\Delta}(x,t)=&\ \tilde{z}_{-}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(u_{-})dy\\ &+\left\{g_{1}(x,t;u_{-})+V(u_{-})\right\}(t-t_{n}),\\ \tilde{w}_{-}^{\Delta}(x,t)=&\ \tilde{w}_{-}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(u_{-})dy\\ &+\left\{g_{2}(x,t;u_{-})+V(u_{-})\right\}(t-t_{n}),\end{split} \tag{2.9}\]
where \(g_{1}\) and \(g_{2}\) are defined in (1.20),
\[V(u)=q_{*}(u)-\nu m. \tag{2.10}\]
Using \(\tilde{u}_{-}^{\Delta}(x,t)\), we next define \(u_{-}^{\Delta}(x,t)\) as follows.
\[\begin{split} z_{-}^{\Delta}(x,t)=&\ \tilde{z}_{-}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(\tilde{u}_{-}^{\Delta}(y,t))dy\\ &+\left\{g_{1}(x,t;\tilde{u}_{-}^{\Delta})+V(u_{-})\right\}(t-t_{n}),\\ w_{-}^{\Delta}(x,t)=&\ \tilde{w}_{-}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(\tilde{u}_{-}^{\Delta}(y,t))dy\\ &+\left\{g_{2}(x,t;\tilde{u}_{-}^{\Delta})+V(u_{-})\right\}(t-t_{n}).\end{split} \tag{2.11}\]
**Remark 2.2**.:
* _We notice that approximate solutions_ \(z_{-}^{\Delta},w_{-}^{\Delta}\) _and_ \(\tilde{z}_{-}^{\Delta},\tilde{w}_{-}^{\Delta}\) _correspond to_ \(z,w\) _and_ \(\tilde{z},\tilde{w}\) _in (_1.16_), respectively._
* _For_ \(t_{n}<t<t_{n+1}\)_, our approximate solutions will satisfy_ \[\begin{split}\int_{0}^{x_{2N_{x}-1}}&\ \zeta(u^{\Delta}(x,t_{n+1-}))dx+\int_{t_{n}}^{t_{n+1}}\sum_{0\leq x\leq x_{2N_{x}-1}}(\sigma[\eta_{*}]-[q_{*}])dt\\ &=\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+V(u_{-})\Delta t+o(\Delta x).\end{split}\] (2.12)
* _Our construction of approximate solutions uses the iteration method twice (see (_2.16_) and (_2.17_)) to deduce (_3.2_)._
We first set
\[\tilde{z}_{+}^{\Delta}=z_{+}-\int_{0}^{x_{2N_{x}}}\zeta(u_{n,0}^{\Delta}(x))dx,\;\tilde{w}_{+}^{\Delta}=w_{+}-\int_{0}^{x_{2N_{x}}}\zeta(u_{n,0}^{\Delta}(x))dx,\]
where \(z_{+}=-\dfrac{\left(\rho_{+}\right)^{\theta}}{\theta},\;w_{+}=\dfrac{\left(\rho _{+}\right)^{\theta}}{\theta}\).
We next construct \(\tilde{u}_{+}^{\Delta}\) as follows.
\[\tilde{z}_{+}^{\Delta}(x,t)= \tilde{z}_{+}^{\Delta}+\int_{0}^{x_{2N_{x}}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}}}^{x}\zeta(u_{+})dy+g_{1}(x,t;u_{+})(t-t_{n}),\] \[\tilde{w}_{+}^{\Delta}(x,t)= \tilde{w}_{+}^{\Delta}+\int_{0}^{x_{2N_{x}}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}}}^{x}\zeta(u_{+})dy+g_{2}(x,t;u_{+})(t-t_{n}).\]
Using \(\tilde{u}_{+}^{\Delta}(x,t)\), we define \(u_{+}^{\Delta}(x,t)\) as follows.
\[z_{+}^{\Delta}(x,t)= \tilde{z}_{+}^{\Delta}+\int_{0}^{x_{2N_{x}}}\zeta(u_{n,0}^{ \Delta}(x))dx+\int_{x_{2N_{x}}}^{x}\zeta(\tilde{u}_{+}(y,t))dy+g_{1}(x,t; \tilde{u}_{+})(t-t_{n}),\] \[w_{+}^{\Delta}(x,t)= \tilde{w}_{+}^{\Delta}+\int_{0}^{x_{2N_{x}}}\zeta(u_{n,0}^{ \Delta}(x))dx+\int_{x_{2N_{x}}}^{x}\zeta(\tilde{u}_{+}(y,t))dy+g_{2}(x,t; \tilde{u}_{+})(t-t_{n}). \tag{2.13}\]
Then, we define approximate solution \(u^{\Delta}(x,t)\) in \(D\) as follows (see Figure 2).
\[u^{\Delta}(x,t)=\begin{cases}u_{-}^{\Delta}(x,t),&D\cap\left\{x-x_{2N_{x}} \leq\sigma_{s}(t-t_{n})\right\},\\ u_{+}^{\Delta}(x,t),&D\cap\left\{x-x_{2N_{x}}>\sigma_{s}(t-t_{n})\right\}. \end{cases}\]
### Case 2: the case where a rarefaction wave arises
Let \(\alpha\) be a constant satisfying \(1/2<\alpha<1\). Then we can choose a positive value \(\beta\) small enough such that \(\beta<\alpha\), \(1/2+\beta/2<\alpha<1-2\beta\), \(\beta<2/(\gamma+5)\) and \((9-3\gamma)\beta/2<\alpha\).
Figure 2. Case 1: The case where a 1-shock wave arises near the boundary.
_Step 1._
In order to approximate a 1-rarefaction wave by a piecewise constant _rarefaction fan_, we introduce the integer
\[p:=\max\left\{\left[\!\left[\!\left(z_{+}-z_{-}\right)\!/(\varDelta x)^{\alpha} \right]\!\right]+1,2\right\},\]
where \(z_{-}=z(u_{-}),z_{+}=z(u_{+})\) and \(\left[\!\left[x\right]\!\right]\) is the greatest integer not greater than \(x\). Notice that
\[p=O((\varDelta x)^{-\alpha}). \tag{2.14}\]
Define
\[z_{1}^{*}:=z_{-},\ z_{p}^{*}:=z_{+},\ w_{i}^{*}:=w_{-}\ (i=1,\ldots,p),\]
and
\[z_{i}^{*}:=z_{2N_{x}-1}+(i-1)(\varDelta x)^{\alpha}\ (i=1,\ldots,p-1).\]
We next introduce the rays \(x=1+\lambda_{1}(z_{i}^{*},z_{i+1}^{*},w_{-})(t-n\varDelta t)\) separating finite constant states \((z_{i}^{*},w_{i}^{*})\ (i=1,\ldots,p)\), where
\[\lambda_{1}(z_{i}^{*},z_{i+1}^{*},w_{-}):=v(z_{i}^{*},w_{-})-S(\rho(z_{i+1}^{ *},w_{-}),\rho(z_{i}^{*},w_{-})),\]
\[\rho_{i}^{*}:=\rho(z_{i}^{*},w_{-}):=\left(\frac{\theta(w_{-}-z_{i}^{*})}{2} \right)^{1/\theta}\,\quad v_{i}^{*}:=v(z_{i}^{*},w_{-}):=\frac{w_{-}+z_{i}^{*}}{2}\]
and
\[S(\rho,\rho_{0}):=\left\{\begin{array}{ll}\sqrt{\frac{ \rho(p(\rho)-p(\rho_{0}))}{\rho_{0}(\rho-\rho_{0})}},\quad\text{if}\ \rho\neq\rho_{0},\\ \sqrt{p^{\prime}(\rho_{0})},\quad\text{if}\ \rho=\rho_{0}.\end{array}\right. \tag{2.15}\]
We call this approximated 1-rarefaction wave a _1-rarefaction fan_.
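Numerically, Step 1 translates into the following sketch, which builds the fan states \(z_{i}^{*}\) and the ray speeds \(\lambda_{1}(z_{i}^{*},z_{i+1}^{*},w_{-})\). We assume here the usual pressure normalization \(p(\rho)=\rho^{\gamma}/\gamma\), for which \(\sqrt{p^{\prime}(\rho_{0})}=\rho_{0}^{\theta}\); the precise normalization is fixed earlier in the paper, and the helper names are ours.

```python
import numpy as np

GAMMA = 1.4
THETA = (GAMMA - 1.0) / 2.0

def p(rho):                     # assumed normalization p(rho) = rho**GAMMA / GAMMA
    return rho**GAMMA / GAMMA

def S(rho, rho0):
    """Shock-speed function (2.15); for rho == rho0 it reduces to the
    sound speed sqrt(p'(rho0)) = rho0**THETA."""
    if np.isclose(rho, rho0):
        return rho0**THETA
    return np.sqrt(rho * (p(rho) - p(rho0)) / (rho0 * (rho - rho0)))

def rarefaction_fan(z_minus, z_plus, w_minus, dxa):
    """Fan states z_i^* and ray speeds; dxa = (Delta x)**alpha is the step."""
    pnum = max(int((z_plus - z_minus) / dxa) + 1, 2)
    zs = [z_minus + (i - 1) * dxa for i in range(1, pnum)] + [z_plus]
    rho = lambda z: (THETA * (w_minus - z) / 2.0) ** (1.0 / THETA)
    v = lambda z: (w_minus + z) / 2.0
    speeds = [v(zs[i]) - S(rho(zs[i + 1]), rho(zs[i])) for i in range(pnum - 1)]
    return zs, speeds

zs, speeds = rarefaction_fan(-1.0, -0.2, 1.0, dxa=0.1)
print(len(zs), speeds[:3])
```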
_Step 2._
In this step, we replace the above constant states with functions of \(x\) and \(t\) as follows:
In view of (1.16), we construct \(u_{1}^{\varDelta}(x,t)\). We first determine the approximation of \(\tilde{z},\tilde{w}\) in (1.16) as follows.
\[\tilde{z}_{1}^{\varDelta}= z_{-}-\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\varDelta}(x))dx,\ \tilde{w}_{1}^{\varDelta}=w_{-}-\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\varDelta} (x))dx.\]
We set
\[\begin{split}\tilde{z}_{1}^{\varDelta}(x,t)=& \ \tilde{z}_{1}^{\varDelta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\varDelta}(x))dx +\int_{x_{2N_{x}-1}}^{x}\zeta(u_{-})dy\\ &+\left\{g_{1}(x,t;u_{-})+V(u_{-})\right\}(t-t_{n}),\\ \tilde{w}_{1}^{\varDelta}(x,t)=&\ \tilde{w}_{1}^{\varDelta}+\int_{0}^{x_{2N_{x}-1}} \zeta(u_{n,0}^{\varDelta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(u_{-})dy\\ &+\left\{g_{2}(x,t;u_{-})+V(u_{-})\right\}(t-t_{n}).\end{split} \tag{2.16}\]
Using \(\tilde{u}_{1}^{\Delta}(x,t)\), we next define \(u_{1}^{\Delta}(x,t)\) as follows.
\[\begin{split} z_{1}^{\Delta}(x,t)=&\,\tilde{z}_{1}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(\tilde{u}_{1}^{\Delta}(y,t))dy\\ &+\left\{g_{1}(x,t;\tilde{u}_{1}^{\Delta})+V(u_{-})\right\}(t-t_{n}),\\ w_{1}^{\Delta}(x,t)=&\,\tilde{w}_{1}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+\int_{x_{2N_{x}-1}}^{x}\zeta(\tilde{u}_{1}^{\Delta}(y,t))dy\\ &+\left\{g_{2}(x,t;\tilde{u}_{1}^{\Delta})+V(u_{-})\right\}(t-t_{n}).\end{split} \tag{2.17}\]
First, by the implicit function theorem, we determine a propagation speed \(\sigma_{2}\) and \(u_{2}=(\rho_{2},m_{2})\) such that
1. \(z_{2}:=z(u_{2})=z_{2}^{*}\)
2. the speed \(\sigma_{2}\), the left state \(u_{1}^{\Delta}(x_{2}^{\Delta}(t_{n.5}),t_{n.5})\) and the right state \(u_{2}\) satisfy the Rankine-Hugoniot conditions, i.e., \[f(u_{2})-f(u_{1}^{\Delta}(x_{2}^{\Delta}(t_{n.5}),t_{n.5}))=\sigma_{2}(u_{2}- u_{1}^{\Delta}(x_{2}^{\Delta}(t_{n.5}),t_{n.5})),\]
where \(x_{2}^{\Delta}(t)=1+\sigma_{2}(t-t_{n})\). Then we fill up by \(u_{1}^{\Delta}(x,t)\) the sector where \(t_{n}\leq t<t_{n+1},x_{2N_{x}-1}\leq x<x_{2}^{\Delta}(t)\) (see Figure 3).
Assume that \(u_{k}\), \(u_{k}^{\Delta}(x,t)\), a propagation speed \(\sigma_{k}\) and \(x_{k}^{\Delta}(t)\) are defined. Then we similarly determine \(\sigma_{k+1}\) and \(u_{k+1}=(\rho_{k+1},m_{k+1})\) such that
1. \(z_{k+1}:=z(u_{k+1})=z_{k+1}^{*}\),
2. \(\sigma_{k}<\sigma_{k+1}\),
3. the speed \(\sigma_{k+1}\), the left state \(u_{k}^{\Delta}(x_{k+1}^{\Delta}(t_{n.5}),t_{n.5})\) and the right state \(u_{k+1}\) satisfy the Rankine-Hugoniot conditions,
where \(x_{k+1}^{\Delta}(t)=1+\sigma_{k+1}(t-t_{n})\). Then we fill up by \(u_{k}^{\Delta}(x,t)\) the sector where \(t_{n}\leq t<t_{n+1},x_{k}^{\Delta}(t)\leq x<x_{k+1}^{\Delta}(t)\).
We construct \(u_{k+1}^{\Delta}(x,t)\) as follows.
Figure 3. Case 2: The case where a 1-rarefaction arises near the boundary.
We first determine
\[\tilde{z}^{\Delta}_{k+1}= z_{k+1}-\int_{0}^{x_{2N_{x}-1}}\zeta(u^{\Delta}_{n,0}(x))dx-V(u_{-})\frac{\Delta t}{2}-\sum_{l=1}^{k}\int_{x^{\Delta}_{l}(t_{n.5})}^{x^{\Delta}_{l+1}(t_{n.5})}\zeta(u^{\Delta}_{l}(x,t_{n.5}))dx,\] \[\tilde{w}^{\Delta}_{k+1}= w_{k+1}-\int_{0}^{x_{2N_{x}-1}}\zeta(u^{\Delta}_{n,0}(x))dx-V(u_{-})\frac{\Delta t}{2}-\sum_{l=1}^{k}\int_{x^{\Delta}_{l}(t_{n.5})}^{x^{\Delta}_{l+1}(t_{n.5})}\zeta(u^{\Delta}_{l}(x,t_{n.5}))dx,\]
where \(x^{\Delta}_{1}(t)=x_{2N_{x}-1},\ x^{\Delta}_{l}(t)=1+\sigma_{l}(t-t_{n})\ \ (l=2,3, \ldots,k+1)\) and \(t_{n.5}\) is defined in (2.2).
We next define \(\tilde{u}^{\Delta}_{k+1}\) as follows.
\[\tilde{z}^{\Delta}_{k+1}(x,t)= \tilde{z}^{\Delta}_{k+1}+\int_{0}^{x_{2N_{x}-1}}\zeta(u^{\Delta} _{n,0}(x))dx+V(u_{-})(t-t_{n})\] \[+\sum_{l=1}^{k}\int_{x^{\Delta}_{l}(t)}^{x^{\Delta}_{l+1}(t)} \zeta(u^{\Delta}_{l}(x,t))dx+\int_{x^{\Delta}_{k+1}(t)}^{x}\zeta(u_{k+1})dy\] \[+g_{1}(x,t;u_{k+1})(t-t_{n.5})+\int_{t_{n.5}}^{t}\sum_{x_{2N_{x}- 1}\leq y\leq x}(\sigma[\eta_{*}]-[q_{*}])ds,\]
\[\tilde{w}^{\Delta}_{k+1}(x,t)= \tilde{w}^{\Delta}_{k+1}+\int_{0}^{x_{2N_{x}-1}}\zeta(u^{\Delta}_ {n,0}(x))dx+V(u_{-})(t-t_{n})\] \[+\sum_{l=1}^{k}\int_{x^{\Delta}_{l}(t)}^{x^{\Delta}_{l+1}(t)} \zeta(u^{\Delta}_{l}(x,t))dx+\int_{x^{\Delta}_{k+1}(t)}^{x}\zeta(u_{k+1})dy\] \[+g_{2}(x,t;u_{k+1})(t-t_{n.5})+\int_{t_{n.5}}^{t}\sum_{x_{2N_{x}- 1}\leq y\leq x}(\sigma[\eta_{*}]-[q_{*}])ds.\]
Finally, using \(\tilde{u}^{\Delta}_{k+1}(x,t)\), we define \(u^{\Delta}_{k+1}(x,t)\) as follows.
\[z^{\Delta}_{k+1}(x,t)= \tilde{z}^{\Delta}_{k+1}+\int_{0}^{x_{2N_{x}-1}}\zeta(u^{\Delta}_ {n,0}(x))dx+V(u_{-})(t-t_{n})\] \[+\sum_{l=1}^{k}\int_{x^{\Delta}_{l}(t)}^{x^{\Delta}_{l+1}(t)} \zeta(u^{\Delta}_{l}(x,t))dx+\int_{x^{\Delta}_{k+1}(t)}^{x}\zeta(\tilde{u}^{ \Delta}_{k+1}(y,t))dy\] \[+g_{1}(x,t;\tilde{u}^{\Delta}_{k+1})(t-t_{n.5})+\int_{t_{n.5}}^{t }\sum_{x_{2N_{x}-1}\leq y\leq x}(\sigma[\eta_{*}]-[q_{*}])ds, \tag{2.18}\] \[w^{\Delta}_{k+1}(x,t)= \tilde{w}^{\Delta}_{k+1}+\int_{0}^{x_{2N_{x}-1}}\zeta(u^{\Delta} _{n,0}(x))dx+V(u_{-})(t-t_{n})\] \[+\sum_{l=1}^{k}\int_{x^{\Delta}_{l}(t)}^{x^{\Delta}_{l+1}(t)} \zeta(u^{\Delta}_{l}(x,t))dx+\int_{x^{\Delta}_{k+1}(t)}^{x}\zeta(\tilde{u}^{ \Delta}_{k+1}(y,t))dy\] \[+g_{2}(x,t;\tilde{u}^{\Delta}_{k+1})(t-t_{n.5})+\int_{t_{n.5}}^{t }\sum_{x_{2N_{x}-1}\leq y\leq x}(\sigma[\eta_{*}]-[q_{*}])ds.\]
By induction, we define \(u_{i}\), \(u^{\Delta}_{i}(x,t)\) and \(\sigma_{i}\) (\(i=1,\ldots,p-1\)). Finally, we determine a propagation speed \(\sigma_{p}\) and \(u_{p}=(\rho_{p},m_{p})\) such that
\((p.{\rm a})\): \(z_{p}:=z(u_{p})=z_{p}^{*}\),
\((p.{\rm b})\): the speed \(\sigma_{p}\), the left state \(u_{p-1}^{\Delta}(x_{p}^{\Delta}(t_{n.5}),t_{n.5})\) and the right state \(u_{p}\) satisfy the Rankine-Hugoniot conditions,
where \(x_{p}^{\Delta}(t)=1+\sigma_{p}(t-t_{n})\). We then fill up by \(u_{p-1}^{\Delta}(x,t)\) and \(u_{p}\) the sector where \(t_{n}\leq t<t_{n+1},x_{p-1}^{\Delta}(t)\leq x<x_{p}^{\Delta}(t)\) and the line \(t_{n}\leq t<t_{n+1},x=x_{p}^{\Delta}(t)\), respectively.
Given \(u_{-}\) and \(z_{+}\) with \(z_{-}\leq z_{+}\), we denote this 1-rarefaction wave, which is a piecewise function of \(x\) and \(t\), by \(R_{1}^{\Delta}(u_{-},z_{+},x,t)\).
Finally, we construct \(u_{p}^{\Delta}(x,t)\) with \(u_{p}^{\Delta}(1,t_{n})=u_{+}\) in a similar manner to \(u_{+}^{\Delta}(x,t)\) in Case 1 and fill up by \(u_{p}^{\Delta}(x,t)\) the sector where \(t_{n}\leq t<t_{n+1},x_{p}^{\Delta}(t)\leq x\leq 1\).
### Case 3: the case where a rarefaction wave and the vacuum arise
Here we consider the case where \(\rho_{+}\leq(\Delta x)^{\beta}\), which means that \(u_{+}\) is near the vacuum. In this case, we cannot construct approximate solutions in a similar fashion to Cases 1-2. Therefore, we must define \(u^{\Delta}(x,t)\) in a different way.
**Case 3.1**\(\rho_{-}>(\Delta x)^{\beta}\)
Let \(u_{-}^{(1)}\) be a state satisfying \(w(u_{-}^{(1)})=w(u_{-})\) and \(\rho_{-}^{(1)}=(\Delta x)^{\beta}\).
(i) \(z(u_{+})-z(u_{-}^{(1)})\leq(\Delta x)^{\alpha}\)
Notice that \(w(u_{+})=w(u_{-})=w(u_{-}^{(1)})\). Then there exists \(C>0\) such that \(\rho_{-}^{(1)}-\rho_{+}\leq C(\Delta x)^{\alpha}\). Since \(\alpha>\beta\), we then have \(\rho_{+}\geq 3(\Delta x)^{\beta}/4\). This case is reduced to Case 2.
(ii) \(z(u_{+})-z(u_{-}^{(1)})>(\Delta x)^{\alpha}\)
Set
\[\bar{z}:=-M_{n+1}-L_{n}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+V(u_{-})\Delta t+\int_{x_{2N_{x}-1}}^{x_{2N_{x}}}\left\{\eta_{*}(u_{-})+K\right\}dx.\]
Let \(u_{-}^{(2)}\) be the state connected to \(u_{-}\) on the right by \(R_{1}^{\Delta}(\max\{z_{-}^{(1)},\bar{z}\})(u_{-})\). Connecting the left and right states \(u_{-}^{(2)}\) and \(u_{+}\) with \(\rho_{+}=v_{+}=0\) by a rarefaction curve and the vacuum, we construct a Riemann solution \((u_{-}^{(2)},u_{+})\). Then, in the region where \(u^{\Delta}(x,t)\) is \(R_{1}^{\Delta}(\max\{z_{-}^{(1)},\bar{z}\})(u_{-})\), the definition of \(u^{\Delta}(x,t)\) is similar to Case 2. In the other region, we define \(u^{\Delta}(x,t)\) by the Riemann solution \((u_{-}^{(2)},u_{+})\) itself.
**Case 3.2**\(\rho_{-}\leq(\Delta x)^{\beta}\)
(i) \(z(u_{-})\geq\bar{z}\)
In this case, we define \(u^{\Delta}(x,t)\) as a Riemann solution \((u_{-},u_{+})\).
(ii) \(z(u_{-})<\bar{z}\)
Set
\[\bar{w}:=M_{n+1}+L_{n}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+V(u_ {-})\Delta t-\int_{x_{2N_{x}-1}}^{x_{2N_{x}}}\nu\rho_{-}dx.\]
Let \(\lambda_{1}(u_{-})\) be the 1-characteristic speed of \(u_{-}\). In the region where \(t_{n}\leq t<t_{n+1}\) and \(x_{2N_{x}-1}\leq x\leq 1+\lambda_{1}(u_{-})(t-t_{n})\), we define \(\bar{u}^{\Delta}(x,t)\) in a similar manner to \(\tilde{u}_{1}^{\Delta}(x,t)\) in Case 2.
We next take \(u_{-}^{(3)}\) such that \(z(u_{-}^{(3)})=\max\{z_{-},\bar{z}\}\) and \(w(u_{-}^{(3)})=\min\{w_{-},\bar{w}\}\). We then solve a Riemann problem \((u_{-}^{(3)},u_{+})\). In the region where \(t_{n}\leq t<t_{n+1}\) and \(1+\lambda_{1}(u_{-})(t-t_{n})<x\leq x_{2N_{x}}\), we define \(\bar{u}^{\Delta}(x,t)\) as this Riemann solution.
**Remark 2.3**.: The approximate solution \(u^{\Delta}(x,t)\) is piecewise smooth in each of the divided parts of the cell. Then, in the divided part, \(u^{\Delta}(x,t)\) satisfies
\[(u^{\Delta})_{t}+f(u^{\Delta})_{x}=o(1).\]
## 3. The \(L^{\infty}\) estimate of the approximate solutions
We deduce from (2.6) the following theorem:
**Theorem 3.1**.: _For \(x_{2N_{x}-1}\leq x\leq 1\),_
\[\begin{split} z^{\Delta}(x,t_{n+1-})\geq&-M_{n+1}- L_{n}+\int_{0}^{x}\zeta(u^{\Delta}(y,t_{n+1-}))dy-o(\Delta x),\\ w^{\Delta}(x,t_{n+1-})\leq& M_{n+1}+L_{n}+\int_{0}^{x}\zeta(u^{\Delta}(y,t_{n+1-}))dy+\int_{t_{n} }^{t_{n+1}}\sum_{0\leq x\leq 1}(\sigma[\eta_{*}]-[q_{*}])dt\\ &+o(\Delta x),\end{split} \tag{3.1}\]
_where \(M_{n+1}\) is defined in (2.4), \(t_{n+1-}=(n+1)\Delta t-0\) and \(o(\Delta x)\) depends only on the bound of solutions._
In the previous section, we constructed \(u^{\Delta}(x,t)\) near the boundary \(x=1\). Here we concentrate on Case 1 in particular; for Cases 2 and 3, we refer to [19] and [20].
### Estimates of \(z^{\Delta}(x,t)\) for the case where a shock arises near the boundary
We first consider \(\tilde{z}_{-}^{\Delta}\). We recall that
\[\tilde{z}_{-}^{\Delta}=z_{-}-\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x) )dx.\]
From (2.6), we have \(\tilde{z}_{-}^{\Delta}\geq-M_{n}-L_{n}\).
Since
\[\tilde{u}_{-}^{\Delta}(x,t)=u_{-}^{\Delta}(x,t)+O((\Delta x)^{2}), \tag{3.2}\]
recalling (2.12), we have
\[\begin{split} z_{-}^{\Delta}(x,t)&\!\!=\!\!\tilde{ z}_{-}^{\Delta}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^{\Delta}(x))dx+V(u_{-})(t-t_{n}) +\int_{x_{2N_{x}-1}}^{x}\zeta(\tilde{u}_{-}^{\Delta}(y,t))dy\\ &+g_{1}(x,t;\tilde{u}_{-}^{\Delta})(t-t_{n})\\ \geq&-M_{n}-L_{n}+\int_{0}^{x_{2N_{x}-1}}\zeta(u_{n,0}^ {\Delta}(x))dx+V(u_{-})(t-t_{n})\\ &+\int_{x_{2N_{x}-1}}^{x}\zeta(u_{-}^{\Delta}(y,t))dy+g_{1}(x,t; u_{-}^{\Delta})(t-t_{n})-o(\Delta x).\end{split} \tag{3.3}\]
If \(z_{-}^{\Delta}(x,t_{n+1-0})>-M_{n}-L_{n}+I_{2N_{x}-1}^{n}-\sqrt{\Delta x}\), from (2.12) and \(M_{n+1}=M_{n}+O(\Delta x)\), we obtain \((3.1)_{1}\). Otherwise, from the argument (1.28), regarding \(M_{0}-\delta t\) in (1.28) as \(M_{n}+L_{n}\), we have \(g_{1}(x,t;u_{-}^{\Delta})>\delta\). From (2.12), we conclude \((3.1)_{1}\).
Next, we consider \(z_{+}^{\Delta}\). We introduce the following lemma.
**Lemma 3.2**.: _There exists a unique piecewise smooth entropy solution \((\rho(x,t),\)\(m(x,t))\) containing the vacuum state \((\rho=0)\) on \(D\) for the problem (1.1) and (2.7) satisfying_
\[z(u(x,t))\geq\min(-w(u_{-}),z(u_{-})),\;w(u(x,t))\leq\max(w(u_{-}),0),\;\rho(x, t)\geq 0.\]
In this case, it follows from (2.6) and the above lemma that
\[z(u_{+})\geq\min\{-M_{n}-L_{n}-I^{n}_{2N_{x}-1},-M_{n}-L_{n}+I^{n}_{2N_{x}-1}\}. \tag{3.4}\]
On the other hand, we have
\[I^{n}_{2N_{x}-1}=I^{n}_{2N_{x}}+O(\Delta x), \tag{3.5}\]
where \(I^{n}_{2N_{x}}=\int_{0}^{x_{2N_{x}}}\zeta(u^{\Delta}_{n,0}(x))dx.\)
Moreover, our approximate solutions satisfy the conservation of mass:
\[\int_{0}^{1}\rho^{\Delta}(x,t_{n-})dx=\int_{0}^{1}\rho_{0}(x)dx+o(1) \tag{3.6}\]
and the energy inequality:
\[\int_{0}^{1}\eta_{*}(u^{\Delta}(x,t_{n-}))dx\leq\int_{0}^{1}\eta_{*}(u_{0}(x) )dx+o(1). \tag{3.7}\]
From (1.13), (3.5)-(3.7), we obtain
\[I^{n}_{2N_{x}}<-\mu+O(\Delta x). \tag{3.8}\]
It follows from (3.4) that
\[z(u_{+})\geq-M_{n}-L_{n}+I^{n}_{2N_{x}}\]
by choosing \(\Delta x\) small enough. Then, we have
\[\tilde{z}_{+}^{\Delta}=z_{+}-\int_{0}^{x_{2N_{x}}}\zeta(u^{\Delta}_{n,0}(x)) dx\geq-M_{n}-L_{n}.\]
Therefore, since \(\tilde{u}_{+}^{\Delta}(x,t)=u_{+}^{\Delta}(x,t)+O((\Delta x)^{2})\), we conclude that
\[z_{+}^{\Delta}(x,t)= \tilde{z}_{+}^{\Delta}+\int_{0}^{x_{2N_{x}}}\zeta(u^{\Delta}_{n, 0}(x))dx+\int_{x_{2N_{x}}}^{x}\zeta(\tilde{u}_{+}^{\Delta}(y,t))dy+g_{1}(x,t; \tilde{u}_{+}^{\Delta})(t-t_{n})\] \[\geq \tilde{z}_{+}^{\Delta}+\int_{0}^{x_{2N_{x}}}\zeta(u^{\Delta}_{n, 0}(x))dx+\int_{x_{2N_{x}}}^{x}\zeta(u^{\Delta}_{+}(y,t))dy+g_{1}(x,t;u_{+}^{ \Delta})(t-t_{n}).\]
In this case, we can obtain \((3.1)_{1}\) in a similar manner to (3.3). We can similarly obtain \((3.1)_{2}\).
Our approximate solutions satisfy the following propositions (the proofs are similar to [11]-[13], [19], [20]).
**Proposition 3.3**.: _The measure sequence_
\[\eta(u^{\Delta})_{t}+q(u^{\Delta})_{x}\]
_lies in a compact subset of \(H^{-1}_{\rm loc}(\Omega)\) for any weak entropy pair \((\eta,q)\), where \(\Omega\subset[0,1]\times[0,1]\) is any bounded and open set._
**Proposition 3.4**.: _Assume that the approximate solutions \(u^{\Delta}\) are bounded and satisfy Proposition 3.3. Then there is a convergent subsequence \(u^{\Delta_{n}}(x,t)\) in the approximate solutions \(u^{\Delta}(x,t)\) such that_
\[u^{\Delta_{n}}(x,t)\to u(x,t)\ \ \ {\rm a.e.},\ \ \ {\rm as}\ \,n\to\infty.\]
_The function \(u(x,t)\) is a global entropy solution of the initial boundary value problem (1.3)._
|
2305.00017 | Discovering the Origin of Neutrino Masses at SHiP | In $U(1)_R$ extensions of supersymmetric models, the bino and its Dirac
partner, the singlino, can play the role of right-handed neutrinos. The bino
and the singlino form a pseudo-dirac pair, dubbed the `bi$\nu$o', which can
generate Standard Model neutrino masses via the inverse seesaw mechanism. We
investigate the prospects for detecting long-lived bi$\nu$os at SHiP, where GeV
scale bi$\nu$os can be copiously produced in the decays of mesons. We show that
SHiP can probe new regions of parameter space that are complementary to
searches for the lepton flavor-violating decay $\mu \to e \gamma$. This
scenario provides a well-motivated benchmark for future experiments of a
right-handed neutrino that mixes with all Standard Model neutrinos, and is
directly related to the generation of neutrino masses. | Seyda Ipek, Douglas Tuckler | 2023-04-28T18:00:01Z | http://arxiv.org/abs/2305.00017v1 | # Discovering the Origin of Neutrino Masses at SHiP
###### Abstract
In \(U(1)_{R}\) extensions of supersymmetric models, the bino and its Dirac partner, the singlino, can play the role of right-handed neutrinos. The bino and the singlino form a pseudo-Dirac pair, dubbed the 'bi\(\nu\)o', which can generate Standard Model neutrino masses via the inverse seesaw mechanism. We investigate the prospects for detecting long-lived bi\(\nu\)os at SHiP, where GeV-scale bi\(\nu\)os can be copiously produced in the decays of mesons. We show that SHiP can probe new regions of parameter space that are complementary to searches for the lepton flavor-violating decay \(\mu\to e\gamma\). This scenario provides a well-motivated benchmark for future experiments: a right-handed neutrino that mixes with all Standard Model neutrinos and is directly related to the generation of neutrino masses.
## I Introduction
The observation of neutrino oscillations indicates that at least two of the Standard Model (SM) neutrinos have non-zero masses. Measurements of atmospheric, reactor, solar, and accelerator neutrinos have determined the neutrino mass differences and mixing angles to be [1]
\[\Delta m^{2}_{21}\simeq 7.4\times 10^{-5}\ {\rm eV}^{2},\ \ | \Delta m^{2}_{31}|\simeq 2.5\times 10^{-3}\ {\rm eV}^{2},\] \[\sin^{2}\theta_{12}\simeq 0.3,\ \sin^{2}\theta_{23}\simeq 0.45,\ \sin^{2} \theta_{13}\simeq 0.022. \tag{1}\]
In the SM, neutrinos are massless and an explanation of non-zero neutrino masses requires beyond-the-SM (BSM) physics. A simple way to generate neutrino masses is to introduce Majorana fermions that are SM gauge singlets called right-handed neutrinos which lead to a suppression of SM neutrino masses via the seesaw mechanism [2; 3; 4; 5]. In this mechanism the light neutrino masses are inversely proportional to the Majorana mass and hence, a very heavy scale, \(M\sim 10^{16}\) GeV, is needed to explain the smallness of SM neutrino masses. In comparison, in the Inverse Seesaw (ISS) mechanism right-handed neutrinos are pseudo-Dirac fermions, with both Majorana and Dirac masses [6; 7; 8]. In this case, light neutrino masses are _proportional_ to the (small) Majorana mass.
It has been shown that the ISS mechanism can be realized in a \(U(1)_{R}\)-symmetric minimal supersymmetric SM (MSSM) [9]. In \(U(1)_{R}\)-symmetric MSSM gauginos are necessarily pseudo-Dirac fermions. The Dirac gaugino masses are produced via super-soft terms while the Majorana masses, proportional to the gravitino mass \(m_{3/2}\), can be produced via anomaly mediation. The pseudo-Dirac bino can be considered as a pseudo-Dirac right-handed neutrino, generating the neutrino masses. (The model is described in more detail in the next section.) Such a bino is called "bi\(\nu\)o" to stress the neutrino connection. Light neutrino masses in this model are proportional to the ratio of the gravitino mass over the messenger scale \(\Lambda_{M}\). Low energy observables like BR(\(\mu\to e\gamma\)) give a constraint of \(\Lambda_{M}\gtrsim 35\) TeV. For \(\Lambda_{M}\sim O(100\ {\rm TeV})\), generating the correct neutrino mixing parameters requires \(m_{3/2}\sim O(10\ {\rm keV})\).
The bi\(\nu\)o model also provides rich collider phenomenology [10; 11]. Since the bi\(\nu\)o mixes with neutrinos, it decays to a mixture of leptons, quarks and missing energy. If the bi\(\nu\)o is heavier than the weak scale, \(M_{\tilde{B}}\gtrsim 90\) GeV, and the messenger scale is not too high, \(\Lambda_{M}<10^{8}\) TeV, the bi\(\nu\)o decays promptly. However, if the bi\(\nu\)o is light, it needs to decay via off-shell \(W/Z\) or Higgs to a three-body final state, making it a long-lived particle even for \(\Lambda_{M}=100\) TeV. In [11] a long-lived bi\(\nu\)o signal at MATHUSLA, FASER and CODEX-b was studied for \(M_{\tilde{B}}>1\) GeV.
These earlier collider studies relied on a certain choice of the supersymmetric mass spectrum: the bi\(\nu\)o is the next-to-lightest supersymmetric particle (NLSP) and all other gauginos are heavier than the squarks. In this scenario, the bi\(\nu\)o is produced via squark decays. Hence, the constraints are cast on a combination of the squark mass, the bi\(\nu\)o mass and the messenger scale. However, if the squark masses are beyond the reach of high-energy colliders like the LHC, producing the bi\(\nu\)o in large quantities could be challenging.
Because the bi\(\nu\)o mixes with SM neutrinos, it can be produced in any process where a SM neutrino is produced. In particular, if they are light enough, bi\(\nu\)os can be copiously produced in the decays of mesons at high energy beam dump experiments.1 In addition, bi\(\nu\)os can become long-lived even for relatively low messenger scales. In this paper, we investigate the prospects for GeV-scale bi\(\nu\)os at beam dump experiments such as SHiP. We show that the large production rate of bi\(\nu\)os from meson decays enables the probe of new regions of parameter space that are not excluded by \(\mu\to e\gamma\) and Big Bang Nucleosynthesis (BBN) constraints. Currently, long-lived particle experiments like SHiP can set the leading exclusion limits for bi\(\nu\)o masses in the \(\sim 1-5\) GeV range. Our results are summarized in Fig. 2.
Footnote 1: Neutralino decays in R-parity violating (RPV) MSSM can also be probed at experiments like SHiP, see [12]. The model we describe here is distinctly different than generic RPV.
This paper is organized as follows. In Sec. II we introduce the model and fix our notation. In Sec. III we briefly describe the SHiP experiment and discuss meson production in the proton beam dump. The production and decay phenomenology of the bi\(\nu\)o is described in Sec. IV. The sensitivity of SHiP is discussed in Sec. V. We conclude in Sec. VII.
## II Model
In this section we summarize the relevant parts of the model we study. The details can be found in [9; 10].
In \(U(1)_{R}\)-symmetric MSSM, a global \(U(1)_{R}\) is imposed on the supersymmetric sector. The SM particles are not charged under this symmetry while the supersymmetric partners carry \(+1\)\(U(1)_{R}\) charges. We work with a modified version of this model where the global symmetry is instead \(U(1)_{R-L}\), with \(L\) being the lepton number. Due to this global symmetry, gauginos cannot be Majorana fermions. In order to give Dirac masses to gauginos, adjoint partners with \(U(1)_{R-L}\) charges of \(-1\) are introduced. For example, for the bino, \(\tilde{B}\), there will be a superfield \(\Phi_{S}\) whose fermionic component \(S\), _singlino_, is a SM singlet with \(-1\)\(U(1)_{R-L}\) charge. (Similarly, there is a tripletino and an octino as Dirac partners to the weakinos and the gluino respectively.) We only focus on the bino, lepton and Higgs superfields here as they are the relevant particle content for generating the neutrino masses.
We assume supersymmetry is broken in a hidden sector, via both \(F-\) and \(D-\)terms and is mediated to the visible sector at a messenger scale \(\Lambda_{M}\). Furthermore, as any global symmetry, \(U(1)_{R-L}\) must be broken due to gravity. At the end, bino will have both a Dirac [13] and a Majorana mass [14; 15]:
\[M_{\tilde{B}}=c_{i}\frac{D}{\Lambda_{M}}\,,\quad m_{\tilde{B}}=\frac{\beta(g_ {Y})}{g_{Y}}m_{3/2}\,, \tag{2}\]
where \(c_{i}\) is an \(O(1)\) coefficient and \(m_{3/2}=\sum(F_{i}^{2}+D_{i}^{2}/2)/\sqrt{3}M_{\rm Pl}^{2}\) is the gravitino mass.
In [9] it was shown that the following dimension-5 and dimension-6 operators
\[\frac{f_{i}}{\Lambda_{M}^{2}}\int d^{2}\theta W_{\alpha}^{\prime}W_{\tilde{B} }^{\alpha}H_{u}L_{i}\,\ \frac{d_{i}}{\Lambda_{M}}\int d^{4}\theta\phi^{\dagger}\Phi_{S}H_{u}L_{i}\,, \tag{3}\]
can generate two non-zero neutrino masses via the inverse seesaw mechanism. (Here \(\phi=1+\theta^{2}m_{3/2}\) is the conformal compensator.) Together with the mass terms for the bino, these interactions lead to the following Lagrangian
\[\mathcal{L}\supset M_{\tilde{B}}\tilde{B}S+M_{\tilde{B}}\tilde{B}\tilde{B}+f_ {i}\frac{M_{\tilde{B}}}{\Lambda_{M}}\ell_{i}h_{u}\tilde{B}+d_{i}\frac{m_{3/2} }{\Lambda_{M}}\ell_{i}h_{u}S\,, \tag{4}\]
where \(f_{i}\) and \(d_{i}\) are determined by the neutrino mass differences as
\[f_{i}\simeq\begin{pmatrix}0.35\\ 0.85\\ 0.35\end{pmatrix}\,,\quad d_{i}\simeq\begin{pmatrix}-0.06\\ 0.44\\ 0.89\end{pmatrix}. \tag{5}\]
After electroweak symmetry breaking (EWSB) the light neutrino masses are given by
\[m_{1}=0,\ m_{2}=\frac{m_{3/2}v^{2}}{\Lambda_{M}^{2}}(1-\rho),\ m_{3}=\frac{m_{ 3/2}v^{2}}{\Lambda_{M}^{2}}(1+\rho), \tag{6}\]
where \(\rho\simeq 0.7\) is determined by the neutrino mass splittings in Eq. (1). Note that the couplings of \(S\) to SM particles induced after electroweak symmetry breaking are proportional to the small gravitino mass \(m_{3/2}\sim\mathcal{O}\)(1-10 keV) and will not play a phenomenological role. Therefore, we only focus on \(\tilde{B}\), which we will call the "bi\(\nu\)o" for the rest of this paper.
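As a quick numerical cross-check of Eq. (6) and of the scales quoted in the introduction, one can verify that \(\Lambda_{M}\sim 100\) TeV indeed forces \(m_{3/2}\) into the \(\sim 10\) keV range. The sketch below takes \(v\simeq 174\) GeV (a convention we assume here) together with \(\rho\simeq 0.7\), and is illustrative only.

```python
import numpy as np

v = 174.0e9                    # Higgs vev in eV (v = 174 GeV; convention assumed)
rho = 0.7                      # fixed by the measured mass splittings
m3_target = np.sqrt(2.5e-3)    # m_3 = sqrt(|Delta m^2_31|) in eV, since m_1 = 0

def m32_required(Lambda_M_eV):
    """Gravitino mass (eV) for which m_3 = m_{3/2} v^2/Lambda_M^2 (1 + rho)
    reproduces the atmospheric splitting, per Eq. (6)."""
    return m3_target * Lambda_M_eV**2 / (v**2 * (1.0 + rho))

print(m32_required(100.0e12) / 1e3, "keV")   # Lambda_M = 100 TeV -> ~10 keV
```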
## III The SHiP experiment
The Search for Hidden Particles (SHiP) experiment is a proposed proton beam dump experiment that uses the 400 GeV proton beam at the CERN Super Proton Synchrotron (SPS) accelerator to search for long-lived particles [16; 17; 18]. The proton beam extracted from the SPS will be dumped onto a high density target and provides 2\(\times 10^{20}\) protons-on-target in 5 years of operation. The entire SHiP experiment consists of a high density target followed by a hadron absorber, a muon shield, and a neutrino detector which all together have a length of \(\ell_{\rm sh}=64\) m. Immediately following the neutrino detector is a decay volume with a length \(\ell_{\rm decay}=50\) m, and a Hidden Sector Decay Spectrometer (HSDS) to detect the decay products of long-lived hidden sector particles.
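For the sensitivity estimates discussed later, the key geometric quantity is the probability that a long-lived particle produced in the target decays inside the decay volume after traversing the shield. Below is a minimal sketch of the standard exponential decay-in-volume formula using the lengths quoted above; the full analysis additionally folds in angular acceptance and reconstruction efficiencies, which we omit, and the helper is ours.

```python
import numpy as np

L_SHIELD = 64.0   # m: target, hadron absorber, muon shield, neutrino detector
L_DECAY = 50.0    # m: decay volume immediately downstream

def decay_probability(ctau_m, p_GeV, mass_GeV):
    """Probability that a particle with proper decay length c*tau (m),
    momentum p (GeV) and mass m (GeV) decays inside the decay volume."""
    lab_length = (p_GeV / mass_GeV) * ctau_m    # boosted decay length
    return np.exp(-L_SHIELD / lab_length) - np.exp(-(L_SHIELD + L_DECAY) / lab_length)

# e.g. a 1 GeV particle with c*tau = 100 m carrying 25 GeV of momentum
print(decay_probability(ctau_m=100.0, p_GeV=25.0, mass_GeV=1.0))
```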
Long-lived particles with MeV-GeV masses can be produced from the decays of mesons. Because of the high energy of the proton beam and high density of the target, a large number of mesons will be produced in the beam dump when the incoming proton beam collides with the target. The production of kaons and heavy flavor mesons at SHiP has been previously studied in [16; 19; 20].
To determine the number of charged kaons produced at SHiP, we use the kaon production fractions from [20]. With a GEANT4 simulation that takes into account production and propagation of kaons, it was found that \(\sim 8\)\(K^{+}\)s and \(\sim 3.5\)\(K^{-}\)s are produced per proton-on-target. About half of these are absorbed in the target, and a large fraction of the remaining kaons decay at rest. The resulting final states are isotropic, and only a small number of the decay products will have trajectories in the direction of the HSDS. Therefore, kaons that are stopped and decay at rest are neglected. The usable kaons are those that decay in flight: it was found that 0.29 \(K^{+}\) and 0.07 \(K^{-}\) per proton-on-target decay in flight and can be used for the production of BSM particles.
The number of heavy flavor mesons produced at SHiP is given by [16; 19]
\[N_{M}=N_{\rm POT}\times(2\times X_{\bar{q}q}\times f^{q}_{\rm cascade})\times f(q \to M)\,, \tag{7}\]
where \(N_{\rm POT}=2\times 10^{20}\) is the number of protons-on-target, \(X_{\bar{q}q}\) is the quark-antiquark production fraction, i.e. the probability of producing a \(q\bar{q}\) pair in \(pp\) collisions, \(f^{q}_{\rm cascade}\) (\(q=c,b\)) is a cascade enhancement factor that takes into account secondary meson production in the target, and \(f(q\to M)\) is the meson production fraction - the probability that a quark \(q\) will hadronize into a meson \(M\). Values for these parameters are given in Tab. 1 and detailed discussions can be found in [19; 21]. Note that we only show values for \(D^{\pm}_{(s)}\) and \(B^{\pm}_{(c)}\) since these have the largest branching ratios to bi\(\nu\)os above the kaon mass.
The total number of charged mesons produced at SHiP with \(N_{\rm POT}=2\times 10^{20}\) are
\[N_{K^{+}}= 5.8\times 10^{19},\quad N_{K^{-}}=1.4\times 10^{19}\,,\] \[N_{D^{\pm}}= 3.2\times 10^{17},\quad N_{D^{\pm}_{\bar{\nu}}}=1.4\times 10^{17}\,, \tag{8}\] \[N_{B^{\pm}}= 4.5\times 10^{13},\quad N_{B^{\pm}_{\bar{\nu}}}=2.8\times 10^{11}\,,\]
where we have saturated the upper bound of \(f(b\to B^{\pm}_{c})\) to calculate \(N_{B^{\pm}_{c}}\). Because of the large number of mesons produced, SHiP can have remarkable sensitivity to MeV-GeV-scale BSM particles.
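Equation (7) with the inputs of Tab. 1 reproduces these yields directly; the short script below (our own) makes the arithmetic explicit and appends the in-flight kaon rates quoted above.

```python
N_POT = 2e20   # protons-on-target over 5 years

def n_mesons(X_qq, f_cascade, f_frag):
    """Eq. (7): N_M = N_POT * (2 * X_qq * f_cascade) * f(q -> M)."""
    return N_POT * 2 * X_qq * f_cascade * f_frag

print(f"N_D+-  = {n_mesons(1.7e-3, 2.3, 0.207):.1e}")   # ~ 3.2e17
print(f"N_Ds+- = {n_mesons(1.7e-3, 2.3, 0.088):.1e}")   # ~ 1.4e17
print(f"N_B+-  = {n_mesons(1.6e-7, 1.7, 0.417):.1e}")   # ~ 4.5e13
print(f"N_Bc+- = {n_mesons(1.6e-7, 1.7, 2.6e-3):.1e}")  # ~ 2.8e11 (upper bound)
print(f"N_K+   = {0.29 * N_POT:.1e}")                   # in-flight K+
print(f"N_K-   = {0.07 * N_POT:.1e}")                   # in-flight K-
```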
## IV Bl\(\nu\)O Phenomenology at SHiP
The bi\(\nu\)o mixes with the SM neutrinos via interactions given in (4). After EWSB, this mixing will induce couplings of the bi\(\nu\)o to the EW gauge bosons \(W^{\pm},Z\), given by
\[\mathcal{L}\supset\frac{g_{2}}{\sqrt{2}}f_{i}\frac{M_{\tilde{B}}}{\Lambda_{M} }W^{+}_{\mu}\ell_{i}\bar{\sigma}^{\mu}\tilde{B}+\frac{g_{2}}{2\cos\theta_{W}} f_{i}\frac{M_{\tilde{B}}}{\Lambda_{M}}Z_{\mu}\nu_{i}\bar{\sigma}^{\mu}\tilde{B}\,, \tag{9}\]
where \(g_{2}\) is the \(SU(2)_{L}\) coupling constant and \(\theta_{W}\) is the Weinberg angle. These interactions will allow the bi\(\nu\)o to be produced in any process where a SM neutrino is produced, and to decay directly to EW gauge bosons if \(M_{\tilde{B}}>M_{W,Z}\) or to SM fermions via off-shell gauge bosons when \(M_{\tilde{B}}<M_{W,Z}\). Since the mixing parameters \(f_{i}\) are fixed by the observed neutrino mass differences, unlike minimal heavy neutral lepton (HNL) scenarios, the bi\(\nu\)o phenomenology is completely determined once the bi\(\nu\)o mass \(M_{\tilde{B}}\) and the messenger scale \(\Lambda_{M}\) are fixed.
### Bi\(\nu\)o Production
In proton beam dump experiments, bi\(\nu\)os will be copiously produced in two- or three-body decays of mesons in addition to secondary production from the decays of \(\tau^{\pm}\) produced in the decay of \(D^{\pm}_{s}\) meson. The number of bi\(\nu\)os produced is given by
\[N_{\tilde{B}}=N_{i}{\rm BR}(i\to\tilde{B}+X)\,, \tag{10}\]
where \(N_{i}\) is the number of mesons, \(N_{M}\), or \(\tau\) leptons, \(N_{\tau}\), produced and \({\rm BR}(i\to\tilde{B}+X)\) is the branching ratio for the meson or \(\tau^{\pm}\) decay to a bi\(\nu\)o and final states \(X\). The full expressions for the branching ratios of meson and \(\tau\) decays can be found in [25; 26].
In the left plot of Fig. 1 we show the number of bi\(\nu\)os produced at SHiP assuming 5 years of operation and a messenger scale \(\Lambda_{M}=1\) TeV. Note that, even though it is excluded by low energy observables, this value of \(\Lambda_{M}\) is chosen for illustration purposes and for ease of translating into other new physics scales. We see that the production rate of the bi\(\nu\)o in association with muons (dashed curves) is typically larger than that of electrons (solid curves) and tau leptons (dot-dashed curves) since \(f_{\mu}>f_{e,\tau}\).
### Bi\(\nu\)o Decays
Once produced, bi\(\nu\)os will decay via the weak interactions into two-body final states, \(\tilde{B}\to\ell^{\pm}M\), \(\tilde{B}\to\nu M\), where \(M\) is a meson, and three-body final states, \(\tilde{B}\to ff^{\prime}\nu_{i}\) for final state fermions \(f,f^{\prime}\). The partial widths for the various decays are given in [25; 26; 27; 28; 29]. Note that above \(M_{\tilde{B}}\sim 1\) GeV, the decay to hadrons is more appropriately described by quark production in the final state. Thus, for \(M_{\tilde{B}}<1\) GeV we determine the total hadronic decay rate by summing partial widths for exclusive decays to mesons. Above 1 GeV we switch to the inclusive decay rates to quarks \(\tilde{B}\to qq^{\prime}\nu\), following the approach of [25].
In the right plot of Fig. 1 we show the branching ratios (BRs) as a function of the bi\(\nu\)o mass \(M_{\tilde{B}}\). The most promising channels to search for are those involving charged particles in the final state. The BRs are independent of the ratio \(M_{\tilde{B}}/\Lambda_{M}\) and depend only on the mixing parameters \(f_{i}\), which are fixed by the neutrino mixing observables. When \(M_{\tilde{B}}\lesssim m_{\pi}\) the most important decay is \(\tilde{B}\to ee\nu\), while in the region \(m_{\pi}\lesssim M_{\tilde{B}}\lesssim 1\) GeV the decays \(\tilde{B}\to e^{\pm}\pi^{\mp}\) and \(\tilde{B}\to\mu^{\pm}\pi^{\mp}\) become dominant. The decay \(\tilde{B}\to\mu^{\pm}\rho^{\mp}\) becomes important between 1-2 GeV, while for \(M_{\tilde{B}}\gtrsim 2\) GeV the decay \(\tilde{B}\to e\mu\nu\) is the most relevant.
\begin{table}
\begin{tabular}{l l l l} \(f(c\to D^{\pm})\) & 0.207 & \(X_{\bar{c}c}\) & \(1.7\times 10^{-3}\) \\ \(f(c\to D^{\pm}_{s})\) & 0.088 & \(X_{\bar{b}b}\) & \(1.6\times 10^{-7}\) \\ \(f(b\to B^{\pm})\) & 0.417 & \(f^{c}_{\rm cascade}\) & 2.3 \\ \(f(b\to B^{\pm}_{c})\) & \(\leq 2.6\times 10^{-3}\) & \(f^{b}_{\rm cascade}\) & 1.7 \\ \end{tabular}
\end{table}
Table 1: Meson production fractions [22], \(q\bar{q}\) production fractions [23; 24], and cascade enhancement factors [21] for the heavy flavor mesons that are most relevant for bi\(\nu\)o production.
It is important to emphasize that the bi\(\nu\)o is directly related to neutrino mass generation, unlike minimal HNL scenarios that can be searched for at SHiP.2 As a result, it is possible to gain insight into the mechanism responsible for neutrino mass generation if a positive signal involving, for example, both electrons and muons is observed. Because the BRs are completely determined by the neutrino mixing parameters, ratios of BRs could tell us whether the new particle is involved in neutrino mass generation and would be a smoking-gun signal for the model described in Sec. II. We also emphasize that, unlike RPV SUSY models, the relevant bi\(\nu\)o phenomenology described here does not depend on sfermion masses. In this way, SHiP will be able to probe this model even if the sfermions are too heavy to be within reach of the LHC.
Footnote 2: By minimal HNLs we mean the existence of a single HNL that mixes with only one SM neutrino flavor, which is an important benchmark for future experimental searches [30].
## V SHiP Sensitivity to Bi\(\nu\)o
Given the SHiP experimental setup described in Sec. III, we can determine the geometrical acceptance and efficiency for detecting the bi\(\nu\)o decay products via Monte Carlo simulation. To determine the geometric acceptance, we require that the visible final states reach the detector after the bi\(\nu\)o has decayed anywhere in the decay volume. For a given position \(z\) along the decay volume, we require that the final states enter the detector, which has \(x\) and \(y\) dimensions of 5 m and 10 m, respectively [18]. These conditions can be quantified by the following inequalities [33]:
\[\begin{split} x_{f}=&\bigg{|}\frac{p_{x}^{\tilde{B}}}{p_{z}^{\tilde{B}}}z+\frac{p_{x}^{f}}{p_{z}^{f}}(\ell_{\rm sh}+\ell_{\rm decay}-z)\bigg{|}<2.5~{\rm m}\,,\\ y_{f}=&\bigg{|}\frac{p_{y}^{\tilde{B}}}{p_{z}^{\tilde{B}}}z+\frac{p_{y}^{f}}{p_{z}^{f}}(\ell_{\rm sh}+\ell_{\rm decay}-z)\bigg{|}<5~{\rm m}\,,\end{split} \tag{11}\]
where \(p_{x,y,z}^{\tilde{B}}\) are the bi\(\nu\)o momentum components and \(p_{x,y,z}^{f}\) are the final state fermion momentum components. Signal events are chosen to be those in which the bi\(\nu\)o decays into two charged particles in the fiducial decay volume enclosed by \(z_{\rm min}=\ell_{\rm sh}=64\) m and \(z_{\rm max}=\ell_{\rm sh}+\ell_{\rm decay}=114\) m, and satisfy the conditions in (11).
The geometric acceptance depends on the location where the bi\(\nu\)o decays. Therefore, the total efficiency is an integral over the length of the decay volume given by [33]
\[\text{eff}=M_{\tilde{B}}\Gamma\int_{z_{\rm min}}^{z_{\rm max}}dz\sum_{\text{events }\in\text{ geom.}}\frac{e^{-zM_{\tilde{B}}\Gamma/p_{z}}}{N_{\rm MC}\,p_{z}}\,, \tag{12}\]
where \(M_{\tilde{B}}\), \(\Gamma\), and \(p_{z}\) are the mass, decay width, and \(z\)-component of the momentum of the bi\(\nu\)o, respectively. The sum is over the events that fall within the geometric acceptance of the detector, and \(N_{\rm MC}\) is the total number of simulated events. Efficiency plots for minimal HNLs can be found in [33; 34], for example.
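A minimal Monte Carlo sketch of (11)-(12) is given below; the momentum arrays are placeholders (in the full analysis they come from the simulation chain described below), and \(M_{\tilde{B}}\Gamma\) is assumed to be pre-converted so that \(M_{\tilde{B}}\Gamma/p_{z}\) carries units of 1/m:

```python
import numpy as np

# Sketch of the acceptance/efficiency estimate of Eqs. (11)-(12).
L_SH, L_DECAY = 64.0, 50.0            # shielding and decay-volume lengths [m]
Z_MIN, Z_MAX = L_SH, L_SH + L_DECAY   # fiducial decay region [m]

def in_acceptance(p_bino, p_final, z):
    """Eq. (11): the daughter must hit the 5 m x 10 m detector face;
    p_bino and p_final are lab-frame (px, py, pz) momenta."""
    x = abs(p_bino[0] / p_bino[2] * z + p_final[0] / p_final[2] * (Z_MAX - z))
    y = abs(p_bino[1] / p_bino[2] * z + p_final[1] / p_final[2] * (Z_MAX - z))
    return x < 2.5 and y < 5.0

def efficiency(p_binos, p_finals, m_gamma, n_z=200):
    """Eq. (12) with the z-integral done on a grid; m_gamma is M*Gamma
    expressed in GeV/m, so that m_gamma / p_z has units of 1/m."""
    zs = np.linspace(Z_MIN, Z_MAX, n_z)
    dz = zs[1] - zs[0]
    eff = 0.0
    for p_b, p_f in zip(p_binos, p_finals):
        pz = p_b[2]
        for z in zs:
            if in_acceptance(p_b, p_f, z):
                eff += dz * m_gamma * np.exp(-z * m_gamma / pz) / pz
    return eff / len(p_binos)  # divide by N_MC
```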
Given the production, decay, and efficiency information discussed above, the total number of signal events is given by
\[N_{\rm sig}=N_{\tilde{B}}\times\text{BR}(\tilde{B}\to i)\times\text{eff}_{i}\,, \tag{13}\]
Figure 1: _Left_: Number of bi\(\nu\)os produced at SHiP assuming 5 years of operation and \(\Lambda_{M}=1\) TeV. (This messenger scale is taken as a benchmark for comparison to other relevant models.) _Right_: Branching ratios of the bi\(\nu\)o into different final states as a function of the bi\(\nu\)o mass \(M_{\tilde{B}}\).
where \(N_{\tilde{B}}\) is the number of bi\(\nu\)os produced in a particular production mode, BR(\(\tilde{B}\to i\)) is the branching ratio for \(\tilde{B}\) decaying to the final state \(i\), and eff\({}_{i}\) is the efficiency for detecting the final states. Here we assume that the detector efficiency for reconstructing the charged final states is 100%.
To determine the reach of SHiP to GeV-scale bi\(\nu\)os we use the median expected exclusion significance [35, 36]:
\[Z_{\text{excl}}=\sqrt{2[s-b\ln(1+s/b)]}\,, \tag{14}\]
where \(s\) and \(b\) are the number of signal and background events, respectively. For 5 years of operation \(b=0.1\) background events are expected at SHiP [37]. To set 95% C.L. exclusion limits we require \(Z_{\text{excl}}>1.645\), which corresponds to \(s\approx 2\) signal events.
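For reference, evaluating (14) numerically confirms the quoted threshold; a short sketch:

```python
import numpy as np

def z_excl(s, b):
    """Median expected exclusion significance, Eq. (14)."""
    return np.sqrt(2.0 * (s - b * np.log(1.0 + s / b)))

b = 0.1  # expected background events at SHiP over 5 years
for s in (1.0, 1.5, 2.0):
    print(f"s = {s}: Z_excl = {z_excl(s, b):.3f}")
# s = 2 gives Z_excl ~ 1.84 > 1.645, so roughly two signal events
# suffice for a 95% C.L. exclusion.
```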
We use two methods to determine the SHiP sensitivity to GeV-scale long-lived bi\(\nu\)os. **(i)** In the first method, we perform a Monte Carlo simulation to simulate the production and decay of the bi\(\nu\)o. For the production, we use Pythia8[38, 39] to simulate a 400 GeV proton beam striking a proton at rest, and extract the four momenta of mesons that are produced. We use this output to determine the lab frame momentum of the bi\(\nu\)o.
For two-body decays of the bi\(\nu\)o, we can analytically solve for the final state momenta of its decay products in the rest frame of the bi\(\nu\)o. For three-body decays, we use the publicly available code muBHNL which uses the differential decay distributions of HNLs to generate a weighted sample of final state momenta in the rest frame of the HNL [40, 41, 42]. Note that, for this purpose, a bi\(\nu\)o with GeV-scale mass is a type of HNL with generic mixing with SM neutrinos, and the kinematic output of muBHNL is directly applicable.
For both two- and three-body decays, we then use the lab-frame momentum of the bi\(\nu\)o to boost the final state momenta into the lab frame. With the lab-frame momenta of the bi\(\nu\)o and its decay products, we can determine the geometric acceptance and efficiency using (12), calculate the signal rate using (13), and find the 95% C.L. exclusion limits from (14).
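The boost step is standard; a self-contained sketch (four-momenta ordered as (E, px, py, pz), with `boost_to_lab` a hypothetical helper name) reads:

```python
import numpy as np

def boost_to_lab(p4_rest, p4_parent_lab):
    """Boost a daughter four-momentum from the parent rest frame to the lab
    frame, given the parent's lab-frame four-momentum (E, px, py, pz)."""
    E_p = p4_parent_lab[0]
    p_vec = np.asarray(p4_parent_lab[1:], dtype=float)
    beta = p_vec / E_p                     # parent velocity
    b2 = beta @ beta
    if b2 == 0.0:                          # parent already at rest
        return np.asarray(p4_rest, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - b2)
    E = p4_rest[0]
    p = np.asarray(p4_rest[1:], dtype=float)
    bp = beta @ p
    E_lab = gamma * (E + bp)
    p_lab = p + ((gamma - 1.0) * bp / b2 + gamma * E) * beta
    return np.concatenate(([E_lab], p_lab))
```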
For each bi\(\nu\)o mass, we consider only the production and decay mode which leads to the best sensitivity, rather than a combination of all possible production and decay modes. These exclusion limits can thus be considered to be relatively conservative.
**(ii)** The second method we employ to obtain exclusion limits is by using the Mathematica-based package HNL@SHiP[43] that determines the SHiP sensitivity to HNLs with arbitrary mixings with SM neutrinos; see [19] for more details. The HNL@SHiP package combines all production and decay modes for a given bi\(\nu\)o mass, and the exclusion limits with this method can be considered to be much more aggressive compared to the first method described above.
Figure 2: SHiP sensitivity to bi\(\nu\)os assuming 5 years of operation using our conservative method (solid black curve) and the HNL@SHiP method (dashed black curve). The greater sensitivity with the conservative method below \(M_{\tilde{B}}\sim 400\) MeV is due to the bi\(\nu\)o production from kaons, which is not included in the HNL@SHiP package. Existing limits from \(\mu\to e\gamma\)[31] are depicted by the gray shaded region below the dashed gray line labelled “\(\mu\to e\gamma\)”. We also show the projected exclusion limit from the proposed Mu2e experiment [32], which will look for \(\mu\to e\) conversion in nuclei. BBN constraints are shown by the orange shaded regions. See text for further details.
Another important difference between the two methods is the omission of kaons in the HNL@SHiP package. Initially, it was expected that the interaction length of kaons in the SHiP beam dump is much shorter than their decay length and that the kaons would be absorbed before decaying. However, it was shown in [20] that the kaon production rate can be substantial. We have included bi\(\nu\)o production from kaons in method **(i)**, which sets the leading SHiP constraints below \(M_{\tilde{B}}\simeq 400\) MeV.
## VI Results and Discussion
The exclusion limits derived using these two analyses are depicted in Fig. 2. The solid curve corresponds to the more conservative analysis, where we only consider the channels that lead to the best sensitivity rather than a combination of all production and decay channels. The dashed black curve depicts the exclusion limits derived using the HNL@SHiP code.
Additionally, we show current and future constraints from \(\mu\to e\gamma\)[31] and \(\mu\to e\) conversion in nuclei [32] by the horizontal dashed gray lines labeled "\(\mu\to e\gamma\)" and "Mu2e Projected", respectively. The current bound of \(\text{BR}(\mu\to e\gamma)<4.2\times 10^{-13}\) results in a lower bound on the messenger scale \(\Lambda_{M}\gtrsim 35\) TeV, while the projected sensitivity of the Mu2e experiment to \(\mu\to e\) conversion in nuclei is expected to probe messenger scales up to \(\Lambda_{M}\sim 100\) TeV. These constraints are independent of the bi\(\nu\)o mass.
Finally, the shaded orange regions are constraints from BBN. To obtain these bounds, we assume that the bi\(\nu\)o mixes with a single neutrino flavor \(\nu_{i}\) with a mixing parameter \(U_{i}\sim f_{i}M_{\tilde{B}}/\Lambda_{M}\), with \(f_{i}\) given by (5). We then use the results of [44; 45] for BBN constraints on HNLs that mix with a single neutrino flavor to constrain the bi\(\nu\)o parameter space. This is depicted by the three orange shaded regions. The regions enclosed by the solid, dashed, and dot-dashed orange curves are BBN constraints for electron-, muon-, and tau-mixed HNLs, respectively. For the bi\(\nu\)o, which mixes with all three neutrino flavors, BBN constraints will generally lie in between the dashed and the dot-dashed curves enclosing the upper-left and lower-right orange shaded region.
We observe that already with our conservative analysis (the solid black curve in Fig. 2) SHiP can begin to probe the \(M_{\tilde{B}}=1-2\) GeV window that is not excluded by the current \(\mu\to e\gamma\) and BBN limits, with messenger scales up to \(\Lambda_{M}\sim 60\) TeV. Combining all possible production and decay modes is expected to lead to better exclusion limits, allowing SHiP to probe the mass region above the \(D_{s}^{\pm}\) meson mass for messenger scales not excluded by current limits. We also see that SHiP can even probe messenger scales beyond the projected sensitivity of the Mu2e experiment.
These results show that SHiP is highly complementary to experiments looking for charged lepton flavor violation, as well as constraints from BBN for the bi\(\nu\)o model. SHiP is able to probe new parameter space for bi\(\nu\)o masses in the \(M_{\tilde{B}}\sim 1-5\) GeV range, corresponding to a messenger scale in the \(\Lambda_{M}\sim 60-200\) TeV range.
## VII Conclusion
In this paper, we investigated the sensitivity of the SHiP experiment to the bi\(\nu\)o mass and the SUSY messenger scale in a \(U(1)_{R-L}\) extension of the MSSM which explains the origin of neutrino masses. (In this scenario, the bino and its Dirac partner singlino act like right-handed neutrinos and are responsible for generating the neutrino masses.) Previous studies have considered the production of bi\(\nu\)os at the LHC from the decays of squarks, and constrained a combination of the squark mass, the bi\(\nu\)o mass, and the messenger scale \(\Lambda_{M}\). However, if the squarks are too heavy to be produced at the LHC, the purely electroweak production rate of bi\(\nu\)os at the LHC may be too small to be observable.
On the other hand, if the bi\(\nu\)o mass is in the MeV-GeV range, a large production rate can be achieved in high energy beam dump experiments. Mixing with SM neutrinos allows the bi\(\nu\)o to be produced from meson decays, which have large production rates at SHiP. We showed that SHiP can probe messenger scales up to 200 TeV and probe a parameter region complementary to experiments looking for charged-lepton flavor violation.
The model we investigated directly ties neutrino mass generation to signals that can be discovered at SHiP. If final states involving both muons and electrons are observed, their relative widths are fully predicted by neutrino mass and mixing measurements. This would provide a remarkable opportunity to search for the explanation of one of the most pressing problems of the SM.
## Acknowledgements
This work is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. The work of DT is supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute.
|
2307.01570 | Machine Learning-Based Intrusion Detection: Feature Selection versus
Feature Extraction | Internet of things (IoT) has been playing an important role in many sectors,
such as smart cities, smart agriculture, smart healthcare, and smart
manufacturing. However, IoT devices are highly vulnerable to cyber-attacks,
which may result in security breaches and data leakages. To effectively prevent
these attacks, a variety of machine learning-based network intrusion detection
methods for IoT networks have been developed, which often rely on either
feature extraction or feature selection techniques for reducing the dimension
of input data before being fed into machine learning models. This aims to make
the detection complexity low enough for real-time operations, which is
particularly vital in any intrusion detection systems. This paper provides a
comprehensive comparison between these two feature reduction methods of
intrusion detection in terms of various performance metrics, namely, precision
rate, recall rate, detection accuracy, as well as runtime complexity, in the
presence of the modern UNSW-NB15 dataset as well as both binary and multiclass
classification. For example, in general, the feature selection method not only
provides better detection performance but also lower training and inference
time compared to its feature extraction counterpart, especially when the number
of reduced features K increases. However, the feature extraction method is much
more reliable than its selection counterpart, particularly when K is very
small, such as K = 4. Additionally, feature extraction is less sensitive to
changing the number of reduced features K than feature selection, and this
holds true for both binary and multiclass classifications. Based on this
comparison, we provide a useful guideline for selecting a suitable intrusion
detection type for each specific scenario, as detailed in Tab. 14 at the end of
Section IV. | Vu-Duc Ngo, Tuan-Cuong Vuong, Thien Van Luong, Hung Tran | 2023-07-04T08:48:01Z | http://arxiv.org/abs/2307.01570v1 | # Machine Learning-Based Intrusion Detection: Feature Selection versus Feature Extraction
###### Abstract
Internet of things (IoT) has been playing an important role in many sectors, such as smart cities, smart agriculture, smart healthcare, and smart manufacturing. However, IoT devices are highly vulnerable to cyber-attacks, which may result in security breaches and data leakages. To effectively prevent these attacks, a variety of machine learning-based network intrusion detection methods for IoT networks have been developed, which often rely on either feature extraction or feature selection techniques for reducing the dimension of input data before being fed into machine learning models. This aims to make the detection complexity low enough for real-time operations, which is particularly vital in any intrusion detection system. This paper provides a comprehensive comparison between these two feature reduction methods of intrusion detection in terms of various performance metrics, namely, precision rate, recall rate, detection accuracy, as well as runtime complexity, in the presence of the modern UNSW-NB15 dataset as well as both binary and multiclass classification. For example, in general, the feature selection method not only provides better detection performance but also lower training and inference time compared to its feature extraction counterpart, especially when the number of reduced features \(K\) increases. However, the feature extraction method is much more reliable than its selection counterpart, particularly when \(K\) is very small, such as \(K=4\). Additionally, feature extraction is less sensitive to changing the number of reduced features \(K\) than feature selection, and this holds true for both binary and multiclass classifications. Based on this comparison, we provide a useful guideline for selecting a suitable intrusion detection type for each specific scenario, as detailed in Tab. 14 at the end of Section IV. Note that such a comparison between feature selection and feature extraction over UNSW-NB15, as well as the resulting guideline, has been overlooked in the literature.
Intrusion detection, UNSW-NB15, feature selection, feature extraction, PCA, machine learning, internet of things, runtime, binary/multiclass classification, NIDS, IoT.
## I Introduction
Internet of Things (IoT) has recently witnessed an explosive expansion in a broad range of daily life and industrial applications [1, 2, 3], such as healthcare, smart homes, smart cities, smart energy, smart agriculture, and intelligent transportation. IoT networks aim to provide internet connections for transferring data among massive numbers of IoT devices, such as interconnected sensors, drones, actuators, smart vehicles and smart home appliances [2], using either wired or wireless communications. However, most of these IoT devices are low-cost, low-power and resource-limited, making them highly vulnerable to cyber attacks as well as intrusive activities. Therefore, it is vital to develop network intrusion detection systems (NIDS) that can promptly and reliably identify and prevent malicious attacks on IoT networks. For this, a wide range of machine learning-based intrusion detection techniques have been designed for IoT, along with a number of public network traffic datasets [2, 4]. These datasets often contain a large number of features, many of which are irrelevant or redundant, adversely affecting both the complexity and accuracy of machine learning algorithms. Thus, many feature reduction methods have been developed for NIDS, among which feature selection and feature extraction are two of the most popular [2, 4], as discussed next.1
Footnote 1: Note that several recent works that apply deep learning and blockchain to secure IoT networks can be found in [5, 6, 7], in the fields of healthcare system, unmanned aerial vehicle and Android malware.
In NIDS, feature selection has been widely used for reducing the dimensionality of original traffic data. For example, in [8], a mutual information (MI)-based feature selection algorithm was proposed in combination with a least-squares support vector machine classifier, which achieves higher accuracy and lower runtime complexity than the existing schemes over three datasets, namely, KDD99 [9], NSLKDD [10] and Kyoto 2006+ [11]. Before that, an MI-based scheme was also proposed for NIDS in [12], which, however, suffers from higher computational complexity than the approach in [8]. Additionally, several approaches that rely on a genetic algorithm (GA) as a search strategy to select the best subset of features can be found in [13, 14]. These methods provide lower false alarm rates than the baselines, where the UNSW-NB15 [15] and KDD99 [9] datasets are used. In [16], a hybrid feature selection approach, which relies on association rule mining and the central points of attribute values, was developed, showing that the UNSW-NB15 dataset achieves a better evaluation than NSLKDD. In [17], another hybrid feature selection method that comprises particle swarm optimization (PSO), an ant colony algorithm, and GA was proposed, leading to better detection performance than baselines such as GA [13], in the presence of both the NSLKDD and UNSW-NB15 datasets. In [18], a pigeon-inspired optimizer was used for selecting features of NIDS, which achieves higher accuracy than the PSO [13] and hybrid association rules methods [16]. Note that the aforementioned feature selection schemes often suffer from high computational cost, especially those relying on GA, PSO or machine learning-based classifiers. For this reason, a correlation-based feature selection method that offers low computational cost was investigated for NIDS over the KDD99 and UNSW-NB15 datasets in [19], taking the correlation level among features into account. Recently, this correlation-based method was combined with ensemble-based machine learning classifiers to significantly improve the accuracy of NIDS [20],
at the cost of higher complexity. Hence, aiming at real-time and low-latency attack detection solutions, this work focuses on the correlation-based feature selection method.2
Footnote 2: Note that several matrix factorization-based dimensionality reduction methods were developed for gene expression analysis in [21, 22].
Unlike feature selection, which retains a subset of the original features in NIDS, feature extraction attempts to compress a large number of original features into a low-dimensional vector so that most of the information is retained. A number of feature extraction techniques have been applied for reducing data dimension in NIDS, such as principal component analysis (PCA), linear discriminant analysis (LDA), and neural network-based autoencoders (AE). For instance, in [23], PCA was applied to significantly reduce the dimension of the KDD99 dataset, improving both the accuracy and speed of NIDS, where a support vector machine was used for attack classification. Then, several variants of PCA were adapted to intrusion detection, such as hierarchical PCA neural networks [24] and kernel PCA with GA [25], which can enhance the detection precision for low-frequent attacks. Some applications of PCA to recent network traffic datasets such as UNSW-NB15 and CICIDS2017 [26] can be found in [27, 28]. In addition to PCA, LDA was also employed as a feature reduction method for NIDS in [29], which remarkably reduces the computational complexity of NIDS. Then, in [30, 31], both PCA and LDA were combined into a two-layer dimension reduction, which is capable of reliably detecting low-frequency malicious activities, such as User to Root and Remote to Local, over the NSLKDD dataset. To further improve the efficiency of feature extraction in NIDS, AE-based neural networks were used in a range of research works [32, 33, 34, 35, 36, 37]. In particular, a stacked sparse AE approach was developed in [32] to conduct a non-linear mapping between high-dimensional data and low-dimensional data over the NSLKDD dataset. In [33], a deep stacked AE was used to noticeably reduce the number of features to 5 and 10 for binary and multiclass classification, respectively, leading to better accuracy than the previous methods. Additionally, a number of AE architectures based on long short-term memory (LSTM) were developed for dimensionality reduction in NIDS, such as variational LSTM [35] and bidirectional LSTM [34], which can efficiently address imbalance and high-dimensionality problems. Note that these AE-based methods suffer from a high computational cost compared to PCA and LDA, in both the training and testing phases. To address this issue, a network pruning algorithm has recently been proposed in [36] to considerably lower the complexity of AE structures in extracting features for NIDS. In [37], an autoencoder based on convolutional and recurrent neural networks was used to extract spatial and temporal features without manual feature engineering.
It is worth noting that most of the aforementioned papers have focused on either improving the detection accuracy or reducing the computational complexity of NIDS, by using machine learning-based classifiers in combination with feature engineering methods such as feature selection and feature extraction for reducing data dimensionality. However, a comprehensive comparison between these two feature reduction methods has been overlooked in the literature. Our paper addresses this gap. In particular, we first provide an overview of NIDS, with a focus on the phase of feature reduction, where feature extraction with PCA and feature selection with a correlation matrix are the two promising candidates for realistic low-latency operations of NIDS. Then, using the modern UNSW-NB15 dataset, we thoroughly compare the detection performance (precision, recall, F1-score) as well as runtime complexity (training time and inference time) of these two methods, taking into account both binary and multiclass classifications as well as the same number of selected/extracted features denoted as \(K\). Based on our extensive experiments, we found that feature selection generally achieves higher detection accuracy and requires less training and inference time when the number of reduced features \(K\) is large enough, while feature extraction outperforms feature selection when \(K\) gets smaller, such as \(K=4\) or less. Furthermore, in order to gain a deeper insight into the detection behaviors of both methods, we investigate and compare their accuracy for each attack class when varying \(K\), based on their best machine learning classifiers, which reveals that feature extraction is not only less sensitive to varying the number of reduced features but also capable of detecting more diverse attack types than feature selection. Additionally, both tend to be able to detect more attacks, i.e., Abnormal classes, when having more features selected or extracted. Relying on such comprehensive observations, we provide a theoretical guideline for selecting an appropriate intrusion detection type for each specific scenario, as detailed in Tab. 14 at the end of Section IV, which is, to the best of our knowledge, not available in the literature.
The rest of this paper is organized as follows. Section II discusses machine learning-based network intrusion detection methods for IoT networks. The overview of UNSW-NB15 dataset and data pre-processing are explained in Section III. Section IV provides the experimental results and discussion. Finally, Section V concludes this paper.
## II Machine Learning-based Network Intrusion Detection Methods
In this section, we provide an overview of a network intrusion detection system (NIDS) based on machine learning, followed by details on the two major feature reduction methods, namely, feature selection and feature extraction.
### _Overview of NIDS_
A NIDS consists of three major components, namely, data pre-processing, feature reduction, and attack classification, as illustrated in Fig. 1. In particular, in the first phase, the raw data is denoted as the dataframe \(\mathbf{Z}\), whose features may include unexpected or non-numeric values, such as null or nominal. \(\mathbf{Z}\) is pre-processed in order to either replace these unexpected values with valid ones or transform them to the numeric format using one-hot encoding. Several features that do not affect detection performance, such as the source IP address and the source port number, are dropped out. Furthermore, depending on the classifier we use for identifying attacks, we may use the normalization technique, for example, to constrain the values of all features, i.e., the elements of the output vector of the first
phase \(\mathbf{X}\) in Fig. 1, to range from 0 to 1. We will discuss this in detail in Section III when presenting UNSW-NB15 dataset.
As such, after the first phase, the pre-processed data \(\mathbf{X}\in\mathbb{R}^{D\times N}\) is likely to have many more features than the original data \(\mathbf{Z}\), particularly due to the use of one-hot encoding, where \(D\) is the number of dimensions, or equivalently, the number of features of \(\mathbf{X}\), and \(N\) is the number of data samples. For example, when the UNSW-NB15 dataset is used, the dimension of data increases from 45 to nearly 200, which is too large for classification techniques to quickly recognize the attack type. In order to address this fundamental issue, in the second phase, we need to reduce the number of features that will be used for the attack classification phase (the last phase in Fig. 1). For this, two feature reduction methods called feature selection and feature extraction are widely used to either select or extract a small number of the most important features from pre-processed traffic data. This procedure also helps to remove a large number of unnecessary features, which not only increase the complexity of the NIDS but also degrade its detection performance, as will be illustrated by the experimental results in Section IV. Herein, the output data of the feature reduction block is denoted as vector \(\mathbf{U}\in\mathbb{R}^{K\times N}\) in Fig. 1, which is expected to have a much lower dimension than \(\mathbf{X}\), i.e., \(K\ll D\), while retaining its most important information.
Finally, in the third phase of NIDS, a number of binary and multiclass classification approaches based on machine learning, such as decision tree, random forest and multilayer perception neural networks, are employed to detect the attack type. Relying on attack detection results, the system administrators can promptly make a decision to prevent malicious activities, ensuring the security of IoT networks. Here, note that the detection performance and latency of a NIDS strongly depend on which classifier and which feature reduction method it employs. Therefore, in this contribution, we comprehensively investigate detection performance (in terms of recall, precision, F1-score) and latency (in terms of training time and inference time) of different detection methods in presence of both feature selection and feature extraction as well as different machine learning classifiers. We also focus more on the comparison between these two feature reduction methods, which will be described in detail in the following subsections.
### _Feature selection_
There are a number of feature selection techniques used in intrusion detection, namely, information gain (IG) [8] and feature correlation [19, 20, 38]. In this work, we focus on using feature correlation for selecting important features, since this method has been shown to achieve competitive detection accuracy and complexity compared to other selection counterparts. Using this correlation-based method, we aim to select features that are most correlated to other features based on the correlation matrix calculated from the training dataset. More specifically, the correlation coefficient between feature \(\Omega_{1}\) and feature \(\Omega_{2}\) is calculated based on the numeric pre-processed training dataset \(\mathbf{X}\) as follows [38]:
\[\mathcal{C}_{\Omega_{1},\Omega_{2}}=\frac{\sum_{i=1}^{N}\left(\alpha_{i}-E_{ \Omega_{1}}\right)\left(\beta_{i}-E_{\Omega_{2}}\right)}{\sqrt{\sum_{i=1}^{N} \left(\alpha_{i}-E_{\Omega_{1}}\right)^{2}}.\sqrt{\sum_{i=1}^{N}\left(\beta_{i }-E_{\Omega_{2}}\right)^{2}}}, \tag{1}\]
where \(\alpha_{i}\) and \(\beta_{i}\) are the values of these two features, \(E_{\Omega_{1}}=\sum_{i=1}^{N}\alpha_{i}/N\) and \(E_{\Omega_{2}}=\sum_{i=1}^{N}\beta_{i}/N\) are their means over \(N\) training data samples. Note that after pre-processing the raw data \(\mathbf{Z}\) to obtain \(\mathbf{X}\), all features of \(\mathbf{X}\) are numeric, i.e., \(\alpha_{i}\) and \(\beta_{i}\) are numeric, making (1) directly applicable. By doing this, we obtain a \(D\times D\) correlation matrix \(\mathbf{C}\), whose elements are given by \(c_{ij}=\mathcal{C}_{\Omega_{i},\Omega_{j}}\) for \(i,j=1,2,...,D\). The average correlation of feature \(\Omega_{i}\) to other features is computed as follows:
\[C_{i}=\frac{\sum_{j=1}^{D}c_{ij}}{D}, \tag{2}\]
where \(c_{ii}=1\) for \(j=i\) and \(c_{ij}\in[-1;1]\) for \(j\neq i\). Note that the self-correlation coefficient \(c_{ii}\) does not affect selection results, since it contributes the same amount to all \(C_{i}\) for \(i=1,2,...,D\). Then, using a suitable threshold, as will be detailed in Section IV, we are able to select \(K\) most important features corresponding to \(K\) largest elements \(C_{i}\).
It is worth noting that we only need to calculate such feature correlation in the training phase, while in the testing phase, we simply pick up \(K\) features from the high-dimensional data \(\mathbf{X}\) to form the reduced-dimensional data \(\mathbf{U}\) in Fig. 1. This does not require much computational resource when compared with the feature extraction method, which is presented next.
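To make the procedure concrete, a minimal Python sketch of the training-time selection step based on (1) and (2) is given below; `X_train` is a placeholder DataFrame of pre-processed numeric features, and the top-\(K\) rule stands in for the correlation threshold discussed in Section IV:

```python
import pandas as pd

def select_top_k_features(X_train: pd.DataFrame, k: int) -> list:
    """Correlation-based feature selection, Eqs. (1)-(2)."""
    C = X_train.corr()            # D x D Pearson correlation matrix, Eq. (1)
    avg_corr = C.mean(axis=1)     # average correlation per feature, Eq. (2)
    return avg_corr.nlargest(k).index.tolist()

# Testing phase: selection reduces to simple column indexing, e.g.
#   selected = select_top_k_features(X_train, K)
#   U_test = X_test[selected]
```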
### _Feature extraction_
Principal component analysis (PCA) [23] and autoencoder (AE) [36] are the two major feature extraction methods used in
Fig. 1: Block diagram of a network intrusion detection system.
the NIDS. Different from feature selection, whose selected features are identical to those appearing in the original data, these feature extraction techniques compress the high-dimensional data \(\mathbf{X}\) into the low-dimensional data \(\mathbf{U}\) using either a projection matrix or an AE-based neural network learned from training dataset. Note that the AE approach usually suffers from high computational complexity of a deep neural network (DNN), leading to higher latency than the PCA. Thus, in this work, we concentrate on the PCA-based feature extraction approach in order to fulfill a strict requirement on the latency of the NIDS for promptly preventing severe cyber attacks.
In what follows, we introduce the procedure of producing the \(D\times K\) projection matrix \(\mathbf{W}\) in the training phase, and how to utilize this matrix in the testing phase. In particular, based on the pre-processed training data \(\mathbf{X}\) of \(N\) samples, we normalize it by subtracting all samples of \(\mathbf{X}\) by its mean over all training samples, i.e., the normalized data is given as \(\hat{\mathbf{X}}=\mathbf{X}-\bar{\mathbf{X}}\), where \(\bar{\mathbf{X}}\) is the mean vector. Then, we compute the \(D\times D\) covariance matrix of training data as follows: \(\mathbf{R}=\frac{1}{N}\hat{\mathbf{X}}\hat{\mathbf{X}}^{T}\). Based on this, we determine its eigenvalues and eigenvectors, from which we select the \(K\) eigenvectors corresponding to the \(K\) largest eigenvalues for constructing the \(D\times K\) projection matrix \(\mathbf{W}\). Herein, these \(K\) eigenvectors are regarded as the principal components that create a subspace, which is expected to be significantly close to the normalized high-dimensional data \(\hat{\mathbf{X}}\). Finally, the compressed data is determined by \(\mathbf{U}=\mathbf{W}^{T}\hat{\mathbf{X}}\), which now has the size of \(K\times N\) instead of \(D\times N\) of the original data.
In the testing phase, for each new data point \(\mathbf{x}_{i}\in\mathbb{R}^{D}\), its dimension is reduced using PCA according to \(\mathbf{u}_{i}=\mathbf{W}^{T}\left(\mathbf{x}_{i}-\bar{\mathbf{X}}\right)\). This indicates that the output of the training phase of PCA includes both the projection matrix \(\mathbf{W}\) and the mean vector of all training samples \(\bar{\mathbf{X}}\). It should be noted that such projection matrix calculation would be computationally expensive, particularly when \(D\) and \(K\) are large.
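A minimal NumPy sketch of this PCA pipeline, with \(\mathbf{X}\) stored as a \(D\times N\) array as above, might look as follows:

```python
import numpy as np

def fit_pca(X: np.ndarray, K: int):
    """Training phase: learn the D x K projection matrix W and the mean."""
    mean = X.mean(axis=1, keepdims=True)           # D x 1 mean vector
    Xc = X - mean                                  # centered data
    R = Xc @ Xc.T / X.shape[1]                     # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)           # ascending eigenvalues
    W = eigvecs[:, np.argsort(eigvals)[::-1][:K]]  # top-K principal components
    return W, mean

def transform_pca(x: np.ndarray, W: np.ndarray, mean: np.ndarray):
    """Testing phase: u = W^T (x - mean), reducing D features to K."""
    return W.T @ (x - mean)
```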
## III Overview of UNSW-NB15 dataset
We now present some key information about UNSW-NB15 dataset, which will be used in our experiments in Section IV to compare between feature selection and feature extraction. Then, the data pre-processing for this dataset is also discussed.
### _Key information of UNSW-NB15 dataset_
UNSW-NB15 dataset was first introduced in [15]; it offers more realistic modern normal and attack network traffic than previous NIDS datasets such as KDD99 [9] and NSLKDD [10]. A total of 2.5 million records of data are included in the UNSW-NB15 dataset, in which there are one normal class and nine attack classes: Analysis, Backdoor, DoS, Exploits, Fuzzers, Generic, Reconnaissance, Shellcode, and Worms. The original data contains a total of 49 features organized into six groups: flow features, basic features, content features, time features, additional generated features, and labeled features [15]. However, in this work, we use a 10% cleaned dataset of UNSW-NB15, which includes a training set of 175,341 records and a test set of 82,332 records. There are a few minority classes with proportions of less than 2%, including Analysis, Backdoor, Shellcode, and Worms (see Fig. 2 and Fig. 3). In the 10% dataset, some irrelevant features were removed, such as _srcip_ (source IP address), _sport_ (source port number), _dstip_ (destination IP address), and _dsport_ (destination port number). Therefore, the number of features was reduced to 45, including 41 numerical features and 4 nominal features.
### _Pre-processing dataset_
As mentioned above, the 10% dataset of UNSW-NB15 has 45 features, including 41 numerical features and 4 nominal features. We remove the _id_ feature in numerical features, since it does not affect the detection performance. The _attack_cat_ nominal feature that contains the names of attack categories is also removed. Thus, there are 3 remaining helpful nominal features, namely, _proto, service, state_. In addition, null values
Fig. 3: Proportions of 10 classes in testing dataset of UNSW-NB15.
Fig. 2: Proportions of 10 classes in training dataset of UNSW-NB15.
appearing in the _service_ feature are treated as 'other' type of service.
One-hot encoding is used for transforming the nominal features, i.e., _proto_, _service_, _state_, to numerical values. For example, assume that the _proto_ feature has a total of 3 different values, namely, A, B, C; then its one-hot encoding will result in 3 numerical features, namely, _proto_A_, _proto_B_, _proto_C_, whose values are 0 or 1, as illustrated in Tab. 1. As a result, after pre-processing the data, the number of features will increase from 45 features in \(\mathbf{Z}\) to approximately 200 features in \(\mathbf{X}\) (see Fig. 1), where many of them are not really helpful in classifying attacks. Therefore, it is necessary to reduce such a large number of features to a few of the most important features, which allows to reduce the complexity of machine learning models in the classification phase. Finally, we note that when feature extraction is used, we normalize the input features with the min-max normalization method [39] to improve the classification accuracy, while we do not use that data normalization for feature selection, since it does not improve the performance.
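The steps above can be sketched in a few lines of Python; the file name, and the use of "-" to mark missing _service_ values, follow the commonly distributed 10% CSV files and should be treated as assumptions:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Load the 10% training split (placeholder file name).
df = pd.read_csv("UNSW_NB15_training-set.csv")

# Treat missing service values as an 'other' type of service.
df["service"] = df["service"].replace("-", "other")

# Drop id and attack_cat; keep the binary label as the target.
y = df["label"]
X = df.drop(columns=["id", "attack_cat", "label"])

# One-hot encode the three nominal features (~200 columns afterwards).
X = pd.get_dummies(X, columns=["proto", "service", "state"])

# Min-max normalization: used only for the feature extraction pipeline.
X_scaled = MinMaxScaler().fit_transform(X)
```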
## IV Experimental results and discussion
We now present extensive experimental results for investigating the performance of the NIDS using both feature selection and feature extraction methods described in Section II, in combination with a range of machine learning-based classification models. More particularly, the performance metrics used for comparison include recall (R), precision (P), F1-score, training time and inference time, which will be explained in detail in Subsection IV-A. Both binary and multiclass classifications are considered. We also investigate the accuracy for each attack class to provide an insight into the behaviors of different detection methods. Last but not least, based on our extensive comparison between feature selection and feature extraction, we provide a helpful guideline on how to choose an appropriate detection technique for each specific scenario.
### _Implementation setting_
#### IV-A1 Computer configuration
The configuration of our computer, its operation system as well as a range of Python packages used for implementing intrusion detection algorithms in this work are detailed in Tab. 2.
#### IV-A2 Evaluation Metrics
We consider the following performance metrics: precision, recall, and F1-score, as well as training time and inference time. In particular, the F1-score is calculated based on precision and recall as follows:
\[\text{F1-score}=2\times\frac{\text{precision}\times\text{recall}}{\text{ precision}+\text{recall}}, \tag{3}\]
which is regarded as a harmonic mean of precision and recall.
As shown in Fig. 1, the two feature reduction methods considered in this work go through the same pre-processing data step, so we do not take the time required for this step into account when estimating their training and inference time. Particularly, the training time consists of the training time of classification models and the time duration consumed by feature reduction in training (FR_train), as follows:
\[\text{training time}=\text{time}_{train}+\text{time}_{FR\_train}. \tag{4}\]
Meanwhile, the inference time consists of the prediction time of machine learning classifiers and the time duration required for feature reduction in the testing phase, given by
\[\text{inference time}=\text{time}_{predict}+\text{time}_{FR\_test}. \tag{5}\]
#### IV-A3 Classification models
We use five machine learning models for both binary and multiclass classification tasks, which are available in the Python Scikit-learn library, namely, Decision Tree, Random Forest (max_depth = 5), K-nearest Neighbors (n_neighbors = 5), Multi-layer Perceptron (MLP) (max_iter = 100, hidden_layer_sizes = 200), and Bernoulli Naive Bayes. Additionally, for better insight into feature selection, we provide lists of 4, 8 and 16 selected features in Tab. III, as well as the corresponding thresholds of the average correlation used to achieve those numbers of selected features.
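For reproducibility, a minimal sketch instantiating these five classifiers and timing them as in (4) and (5) is shown below; the random arrays merely stand in for the reduced-feature data \(\mathbf{U}\) produced by the selection or extraction step:

```python
import time
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import BernoulliNB

# Placeholder reduced-feature data standing in for U (here K = 8 features).
rng = np.random.default_rng(0)
U_train, y_train = rng.random((1000, 8)), rng.integers(0, 2, 1000)
U_test = rng.random((200, 8))

# The five classifiers with the hyperparameters listed above; everything
# else is left at its Scikit-learn default.
models = {
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(max_depth=5),
    "KNeighbors": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(max_iter=100, hidden_layer_sizes=(200,)),
    "Naive Bayes": BernoulliNB(),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(U_train, y_train)
    train_time = time.perf_counter() - t0                   # time_train, Eq. (4)
    t0 = time.perf_counter()
    model.predict(U_test)
    infer_time = (time.perf_counter() - t0) / len(U_test)   # per sample, Eq. (5)
    print(f"{name}: train {train_time:.3f}s, inference {infer_time*1e6:.2f} us")
```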
### _Binary classification_
We first investigate the detection performance and runtime of the feature selection and feature extraction methods when using binary classification in Tabs. 4, 5 and 6 for 4, 8, and 16 selected/extracted features, respectively. In these tables, the best values (i.e., the maximum values of precision, recall, and F1-score, and the minimum values of training and inference times in each column of the tables) are highlighted in bold, and the best values across both feature selection and feature extraction are highlighted in both bold and red color. The training time is measured in seconds (s), while the inference time for each data sample is measured in microseconds (\(\mu\)s).
In terms of detection performance, it is shown in Tabs. 4, 5 and 6 that when the number of reduced features (i.e., extracted or selected) \(K\) increases, the detection performance of feature extraction generally improves, while that of feature selection
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline _proto_ & _proto\_A_ & _proto\_B_ & _proto\_C_ \\ \hline \hline A & 1 & 0 & 0 \\ \hline B & 0 & 1 & 0 \\ \hline C & 0 & 0 & 1 \\ \hline \end{tabular}
\end{table} TABLE I: An example of one-hot encoding for a nominal feature
\begin{table}
\begin{tabular}{c|c} \hline
**Unit** & **Description** \\ \hline Processor & Intel Core i5-10400F (2.9 GHz, 6 cores 12 threads, 12MB Cache, 65W) \\ \hline RAM & 16GB \\ \hline GPU & Nvidia GTX 1650 OC-4G \\ \hline Operating System & Ubuntu 20.04.4 LTS \\ \hline Packages & Numpy, Matplotlib, Pandas, Scipy, \\ & Scikit-learn, Scikit-plot and Time \\ \hline
\end{table} TABLE II: Hardware and Environment Specification
does not improve when we increase \(K\) from 8 to 16. In fact, the precision, recall and F1-score of feature selection even slightly degrade from Tab. 5 to Tab. 6. This phenomenon is understandable due to the fact that if the number of selected features gets larger, it is likely that more noisy or unimportant features appear among the selected ones, which is expected to deteriorate the detection performance. Moreover, comparing the two feature reduction methods, we find that when the number of reduced features is small, i.e., \(K=4\), the detection performance of feature extraction is much better than that of feature selection. For instance, in Tab. 4, the highest F1-score of feature extraction is 85.42% when the KNeighbors classifier is used, while that of feature selection is lower with 81.94% when the Decision Tree classifier is used. However, for larger \(K\) such as 8 and 16 in Tab. 5 and Tab. 6, the feature selection method achieves better accuracy than its extraction counterpart, especially when using Decision Tree for classification. For example, when Decision Tree is employed in Tab. 5 to achieve the lowest inference time, the F1-score of feature selection is 87.47%, which is higher than that of feature extraction with 85.69%. It is also shown from Tabs. 4, 5 and 6 that when using feature selection, the Decision Tree classification method always provides the best precision, recall as well as F1-score. By contrast, the feature extraction method would enjoy the KNeighbors classifier when \(K\) is small, i.e., 4 or 8, while Decision Tree is only its best classifier when \(K\) becomes larger, such as \(K=16\).
In terms of the runtime performance, Tabs. 4, 5 and 6 demonstrate that both the training time and the inference time of feature selection are lower than those of feature extraction. This is due to the fact that the feature extraction method requires additional computational resources to compress the high-dimensional data into low-dimensional data, as explained in Section II, while feature selection requires almost no computing resources, since it simply picks \(K\) out of \(D\) features. More particularly, in Tab. 5, the best inference time of feature selection is 0.11 \(\mu\)s, which is 36 times lower than that of feature extraction with 3.95 \(\mu\)s, where the Decision Tree classifier is the best choice for both feature reduction methods for minimizing the inference time. Again, Decision Tree is one of the best classifiers for minimizing both training and inference times, in addition to the Naive Bayes classifier, which, however, does not achieve good accuracy.
Finally, in order to better understand the attack detection performance of feature selection and feature extraction, in Tab. 7, we provide the accuracy comparison for each class in
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Models} & \multicolumn{8}{c|}{Feature Extraction} & \multicolumn{8}{c|}{Feature Selection} \\ \cline{2-10} & P & R & F1 & training (s) & inference (\(\mu\)s) & P & R & F1 & training (s) & inference (\(\mu\)s) \\ \hline Decision Tree & **86.43** & **85.47** & **85.95** & 37.38 & **5.38** & **87.41** & **86.61** & **87.01** & 15.18 & **0.17** \\ \hline Random Forest & 85.85 & 81.08 & 83.4 & 58.07 & 19.26 & 85.85 & 81.08 & 83.39 & 52.92 & 19.01 \\ \hline KNeighbors & 86.09 & 84.5 & 85.29 & 38.74 & 1421.15 & 86.09 & 84.5 & 85.29 & 37.68 & 1426.22 \\ \hline MLP & 86.36 & 84.67 & 83.04 & 1344.91 & 31.59 & 81.75 & 74.27 & 77.83 & 661.67 & 40.64 \\ \hline Naive Bayes & 78.2 & 75.59 & 76.87 & **36.64** & 5.82 & 75.66 & 73.88 & 74.76 & **14.46** & 0.55 \\ \hline \end{tabular}
\end{table} TABLE VI: Feature Selection versus Feature Extraction: 16 selected/extracted features and binary classification
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Models} & \multicolumn{8}{c|}{Feature Extraction} & \multicolumn{8}{c|}{Feature Selection} \\ \cline{2-10} & P & R & F1 & training (s) & inference (\(\mu\)s) & P & R & F1 & training (s) & inference (\(\mu\)s) \\ \hline Decision Tree & 85.38 & 84.35 & 84.84 & 22.73 & 3.73 & **84.09** & **79.89** & **81.94** & 14.53 & **0.07** \\ \hline Random Forest & 85.76 & 81.22 & 83.42 & 32.32 & 18.12 & 77.86 & 75.11 & 76.46 & 17.13 & 3.29 \\ \hline
**KNeighbors** & **86.19** & **84.67** & **85.42** & 23.04 & 38.47 & 52.38 & 47.98 & 50.08 & 14.75 & 259.03 \\ \hline MLP & 85.91 & 81.75 & 83.78 & 1011.31 & 39.37 & 75.76 & 74.75 & 75.25 & 1278.61 & 37.08 \\ \hline Naive Bayes & 72.62 & 71.95 & 72.28 & **20.37** & **3.62** & 75.47 & 73.63 & 74.54 & **14.3** & 0.24 \\ \hline \end{tabular}
\end{table} TABLE IV: Feature Selection versus Feature Extraction: 4 selected/extracted features and binary classification
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Class} & \multicolumn{8}{c|}{Feature Extraction (MLP/KNeighbors)} & \multicolumn{8}{c|}{Feature Selection (Decision Tree)} \\ \cline{2-10} & \(K=4\) & \(K=8\) & \(K=16\) & \(K=4\) & \(K=8\) & \(K=16\) \\ \hline Normal & 70.64 & 71.55 & **73.79** & 57.21 & **76.67** & 76.09 \\ \hline Abnormal & **96.14** & 96.02 & 94.99 & **98.4** & 95.55 & 95.21 \\ \hline Average & 84.68 & 85.02 & **85.47** & 79.89 & **87.07** & 86.61 \\ \hline \end{tabular}
\end{table} TABLE VII: Accuracy comparison for each class between feature selection and feature extraction using binary classification
binary classification, namely, Normal and Abnormal. Similar to Tabs. 4, 5 and 6, we consider the number of reduced features \(K\) being 4, 8 and 16. Besides, based on the results obtained from these three tables, we only include the classifiers that offer the highest F1-scores for the accuracy comparison for each class in Tab. 7, namely, MLP and KNeighbors for feature extraction and Decision Tree for feature selection. Herein, the highest accuracy for each class with respect to \(K\) is highlighted in bold, while the highest values across both feature selection and feature extraction are highlighted in bold and red color. It is worth noting from this table that in both feature reduction methods, while the accuracy of detecting the Normal class steadily improves when increasing \(K\), that of detecting the Abnormal class gradually degrades. This interestingly indicates that in order to detect more attacks, we should select a small \(K\) rather than a large \(K\). In addition, Tab. 7 shows that for both feature reduction methods, the accuracy of the Abnormal class is much higher than that of the Normal class. Observing the average accuracy from this table, we find that the accuracy of feature extraction is less sensitive to varying \(K\) than that of feature selection, which varies significantly with respect to \(K\).
### _Multiclass classification_
We compare both the detection performance and runtime of feature selection and feature extraction in Tabs. 8, 9, and 10 for 4, 8, and 16 selected/extracted features, respectively, when multiclass classification is considered. Here, we still employ five machine learning models as in binary classification. As shown in these three tables, similar to the binary case, the precision, recall and F1-score of both methods generally improve when increasing the number of reduced features \(K\). For example, the highest F1-scores of feature extraction are 74.11%, 75.39%, and 75.52%, while those of feature selection are 65.43%, 78.36% and 77.64%, for \(K\) = 4, 8, and 16 reduced features, respectively. As such, feature extraction outperforms its counterpart when \(K\) is small such as \(K=4\); however, this is no longer true when \(K\) gets larger such as \(K=8\) and \(16\), where feature selection performs much better than feature extraction. Again, akin to the binary classification, it is shown from Tabs. 9 and 10 that the detection performance of feature selection degrades when \(K\) increases from 8 to 16, mostly due to the impact of noisy or irrelevant features when more features are selected.
Besides, unlike the binary case, where KNeighbors is the best classifier for feature extraction when \(K\) is small such as 4 and 8, with multiclass classification, MLP now provides the best detection performance of feature extraction for all values of \(K\), as shown in Tabs. 8, 9, and 10. Meanwhile, feature selection still enjoys the Decision Tree classifier to achieve the highest detection performance, similar to the binary classification analyzed in the previous subsection, while MLP does not offer a good detection performance for feature selection. Additionally, the Naive Bayes classifier achieves the worst accuracy for both feature reduction methods.
With regard to the runtime comparison, again, Tabs. 8, 9, and 10 demonstrate that the training and inference times of feature selection are significantly lower than that of feature extraction. For example, using the same Decision Tree model for achieving the lowest runtime, in Tab. 9 when \(K=8\), the inference time of feature selection is 0.19 \(\mu\)s, which is 26 times lower than that of feature extraction with 5.04 \(\mu\)s. Similarly, it is shown from this table that the training time of feature selection is also 2 times lower than that of its extraction counterpart. In addition, the Decision Tree model provides the lowest inference time for both feature reduction methods, while the neural network-based MLP classifier exhibits both the highest inference and training times for them.
Finally, we compare the accuracy for detecting each attack type (including 9 attack classes and 1 normal class, as described in Section III) between feature selection and feature extraction in Tab. 11, where the values of \(K\) are 4, 8, and 16 reduced features. Herein, we employ the MLP and Decision Tree classifiers for feature extraction and feature selection, respectively, in order to achieve the best detection performance, as analyzed in the previous discussions. It is observed from Tab. 11 that feature selection performs better than feature extraction in most classes, except for the Exploits and Fuzzers classes. This table also shows that both methods
are capable of achieving higher accuracy for the Exploits, Generic, Normal and Reconnaissance classes than the remaining ones. Additionally, similar to the binary classification discussed in Tab. 7, the multiclass classification accuracy of feature extraction is less sensitive to the number of reduced features \(K\) than that of its selection counterpart. More importantly, feature extraction with MLP is unable to correctly detect any samples of Analysis and Backdoor, even for all three values of \(K\). By contrast, feature selection with the Decision Tree classifier is capable of correctly detecting samples from all classes. We found that this is mainly due to the machine learning classifier rather than the feature reduction method we choose. In order to clarify this issue, we compare the accuracy for each class between the two feature reduction methods using the same Decision Tree and MLP classifiers in Tab. 12 and Tab. 13, respectively. It is shown in Tab. 13 that using the same MLP, similar to feature extraction, feature selection is unable to detect any samples of Analysis and Backdoor correctly. Observing these two tables, we find that if the same classifier is employed, feature extraction tends to be able to detect more diverse attack types than feature selection. This is due to the fact that feature extraction can extract key information from all available features, leading to more diverse attack types, instead of relying solely on a subset of selected features as in the feature selection approach. In other words, feature selection tends to detect only the attack types that are highly correlated with the features it selects.
In summary, considering both binary and multiclass classification for the NIDS, the feature selection method not only provides better detection performance but also lower training and inference times than its feature extraction counterpart, especially when the number of reduced features \(K\) increases. However, the feature extraction method is much more reliable than its selection counterpart, particularly when \(K\) is very small, such as \(K=4\). Additionally, among the five considered classifiers, while Decision Tree is the best classifier for improving the accuracy of feature selection, the neural network-based MLP is the best one for feature extraction. Last but not least, feature extraction is less sensitive to changing the number of reduced features \(K\) than feature selection, and this
holds true for both binary and multiclass classifications. For more details, we provide a comprehensive comparison between feature selection and feature extraction in intrusion detection systems in Tab. 14.
## V Conclusions
We have compared two typical machine learning-based intrusion detection methods, namely feature selection and feature extraction, on the modern UNSW-NB15 dataset, where both binary and multiclass classifications were considered. Our extensive comparison showed that when the number of reduced features is large enough, such as 8 or 16, feature selection not only achieves higher detection accuracy but also requires less training and inference time than feature extraction. However, when the number of reduced features is very small, such as 4 or fewer, feature extraction notably outperforms its selection counterpart. Besides, the detection performance of feature selection tends to degrade when the number of selected features becomes too large, while that of feature extraction steadily improves. We also found that while MLP is the best classifier for feature extraction, Decision Tree is the best one for feature selection for achieving the highest attack detection accuracy. Finally, our accuracy analysis for each attack class demonstrated that feature extraction is not only less sensitive to varying the number of reduced features but also capable of detecting more diverse attack types than feature selection. Both methods tend to detect more attacks, i.e., Abnormal classes, when more features are selected or extracted. We believe that such insightful observations about the performance comparison between the two feature reduction methods provide a helpful guideline for choosing a suitable intrusion detection method for each specific scenario. Finally, note that our study evaluated the effectiveness of feature reduction methods only on the UNSW-NB15 dataset. In the future, we intend to explore whether our observations on UNSW-NB15 are applicable to other intrusion detection datasets, such as NSL-KDD, KDD99, CICIDS2017, and DARPA1998. We also plan to thoroughly investigate the performance of various deep learning classification models for NIDS and compare them with existing machine learning models.
**Declarations**
**Author Contribution** Vu-Duc Ngo and Tuan-Cuong Vuong wrote the main manuscript. Thien Van Luong and Hung Tran reviewed and corrected the manuscript.
**Ethical Approval** Not applicable.
**Acknowledgement** This work was supported in part by the SSF Framework Grant Serendipity and R&D project of Brighter Gates AB, Sweden.
**Data Availability** The paper does not include any supporting data.
**Conflicts of interests** All authors declare that they do not have any conflict of interest.

**Funding** Not applicable.

TABLE XIII: Accuracy comparison for each class between feature selection and feature extraction using multiclass classification and the same MLP classifier (FE: feature extraction; FS: feature selection)

| Class | FE, \(K=4\) | FE, \(K=8\) | FE, \(K=16\) | FS, \(K=4\) | FS, \(K=8\) | FS, \(K=16\) |
| --- | --- | --- | --- | --- | --- | --- |
| Analysis | 0 | 0 | 0 | 0 | 0 | 0 |
| Backdoor | 0 | 0 | 0 | 0 | 0 | 0 |
| DoS | 0.81 | 10.61 | **11.2** | **6.6** | 0.83 | 0.42 |
| Exploits | **86.79** | 85.92 | 85.59 | 19.16 | 25 | **26.23** |
| Fuzzers | **67.6** | 66.46 | 58 | **30.86** | 18.51 | 11.86 |
| Generic | 96.22 | 96.24 | **96.3** | **96.96** | 96.2 | 96.21 |
| Normal | 62.47 | 62.5 | **68.79** | 68.66 | 75.5 | **93.08** |
| Reconnaissance | 50.69 | 60.55 | **66.5** | 0.11 | **73.31** | 36.61 |
| Shellcode | 0 | 7.67 | **15.08** | 0 | **10.6** | 0 |
| Worms | 0 | 2.27 | **9.09** | 0 | 0 | 0 |
| Average | 69.03 | 69.79 | **72.29** | 58.38 | 63.88 | **69.88** |
|
2305.07778 | Accelerator-Aware Training for Transducer-Based Speech Recognition | Machine learning model weights and activations are represented in
full-precision during training. This leads to performance degradation in
runtime when deployed on neural network accelerator (NNA) chips, which leverage
highly parallelized fixed-point arithmetic to improve runtime memory and
latency. In this work, we replicate the NNA operators during the training
phase, accounting for the degradation due to low-precision inference on the NNA
in back-propagation. Our proposed method efficiently emulates NNA operations,
thus foregoing the need to transfer quantization error-prone data to the
Central Processing Unit (CPU), ultimately reducing the user perceived latency
(UPL). We apply our approach to Recurrent Neural Network-Transducer (RNN-T), an
attractive architecture for on-device streaming speech recognition tasks. We
train and evaluate models on 270K hours of English data and show a 5-7%
improvement in engine latency while saving up to 10% relative degradation in
WER. | Suhaila M. Shakiah, Rupak Vignesh Swaminathan, Hieu Duy Nguyen, Raviteja Chinta, Tariq Afzal, Nathan Susanj, Athanasios Mouchtaris, Grant P. Strimel, Ariya Rastrow | 2023-05-12T21:49:51Z | http://arxiv.org/abs/2305.07778v1 | # Accelerator-Aware Training for Transducer-Based Speech Recognition
###### Abstract
Machine learning model weights and activations are represented in full-precision during training. This leads to performance degradation in runtime when deployed on neural network accelerator (NNA) chips, which leverage highly parallelized fixed-point arithmetic to improve runtime memory and latency. In this work, we replicate the NNA operators during the training phase, accounting for the degradation due to low-precision inference on the NNA in back-propagation. Our proposed method efficiently emulates NNA operations, thus foregoing the need to transfer quantization error-prone data to the Central Processing Unit (CPU), ultimately reducing the user perceived latency (UPL). We apply our approach to Recurrent Neural Network-Transducer (RNN-T), an attractive architecture for on-device streaming speech recognition tasks. We train and evaluate models on 270K hours of English data and show a 5-7% improvement in engine latency while saving up to 10% relative degradation in WER.
Suhaila M. Shakiah, Rupak Vignesh Swaminathan, Hieu Duy Nguyen, Raviteja Chinta\({}^{\dagger}\), Tariq Afzal\({}^{\dagger}\), Nathan Susanj, Athanasios Mouchtaris, Grant P. Strimel, Ariya Rastrow Alexa Machine Learning, Amazon, USA; \({}^{\dagger}\)Hardware Compute Group, Amazon, USA
Accelerator-aware training, model compression, automatic speech recognition (ASR), recurrent neural network transducer (RNN-T).
## 1 Introduction
The task of transcribing audio into the corresponding text transcription constitutes the Automatic Speech Recognition (ASR) component of voice assistants such as Alexa, Google Home or Siri. ASR solutions have evolved from traditional hybrid Deep Neural Network (DNN) - Hidden Markov Model (HMM) systems to modern end-to-end neural architectures including various transducer-based systems [1, 2, 3, 4, 5]. While many of these approaches have demonstrated high accuracy, an important differentiation comes from their streaming capability, which reduces the user perceived latency (UPL). A streaming ASR system is able to start transcribing the audio even before the user has finished speaking an utterance, i.e., the system does not require future context to inform the transcription results. Among streaming architectures, the Recurrent Neural Network-Transducer (RNN-T) model (see Figure 1) [6] stands out as an all-neural, end-to-end (E2E) method with both low latency and high accuracy, and it is widely adopted in modern speech recognition systems.
Apart from the current trend of moving to E2E architectures for machine learning (ML) applications, leading ML solution providers also utilize on-device processing to improve user experience and reduce UPL. As a result, hardware consisting of NNA chips has been gradually deployed to support on-device computer vision, natural language understanding (NLU), and ASR tasks. However, this comes with additional challenges when moving from the floating point to the fixed point operations supported by NNAs. Since computers are finite state machines, real numbers are represented and manipulated in floating point format in computer memory. Floating point numbers are characterized by a mantissa, an exponent and a sign bit, which enable the representation of a wide range of values with a floating decimal point. For example, a 32-bit floating point (FP-32) number can represent a total of \(2^{32}\) unique values with a higher precision than its fixed point counterpart. In the context of performing machine learning on the edge, floating point computations are time-consuming and memory expensive, especially for deep learning models used for E2E speech recognition, which involve millions (and sometimes billions) of multiply-and-accumulate (MAC) operations for a single inference cycle. To address the time and memory complexity, neural accelerator chips thus employ fixed point operations, in which each value is represented by a reduced number of bits.
It is worth noting that neural ML models are highly sensitive to such reductions in the precision of weights and activations, and this effect is even more pronounced in recurrent architectures since the errors accumulate across multiple time steps. To address these problems, while also being cognizant of the low latency, power and memory requirements of on-device systems, on-device chips adopt a hybrid architecture which includes general-purpose central processing units (CPUs) as well as specialized neural processing unit (NPU) cores, integrated into a single System-On-Chip (SoC) design (see Figure 2). In order to achieve the best performance, certain computations are performed on highly efficient NPUs whereas others requiring a higher precision are computed on CPUs. This trade-off results in additional on-chip computing and data transfer latency between the NPU and CPU, creating bottlenecks during inference. For example, on-device ASR models need to use the CPU to compute tanh and sigmoid activations, which are not only quantization error-prone on the NPU, but also computation and memory intensive. In this work, we aim to reduce the latency incurred due to moving data between the NPU and CPU to perform non-linear activations. Using an ASR task and the RNN-T architecture as a proof of concept, we show that we are able to improve model inference speed by 20% on-device with negligible (less than 1% relative) accuracy degradation by performing accelerator-aware training (AAT), thus making the models more robust to the NNA and the hardware activation functions.
The rest of the paper is structured as follows. Section 2 describes the related work for latency reduction in RNN-T speech models, followed by an overview of quantization in neural networks and NNAs in Section 3, as well as details of accelerator quantization schemes in Section 4. Section 5 describes our proposed accelerator-aware training technique. We then dive into the experimental details in Section 6 and the performance and latency results in Section 7. Section 8 concludes our paper with some remarks on the advantages of the proposed approach.
## 2 Related Work
There is a large body of work focusing on improving the Word Error Rate (WER) and runtime latency of on-device RNN-T models. Architectural modifications to recurrent neural networks, such as CIFG-LSTM [7] and Simple Recurrent Units [8], have been used in RNN-T, saving 30-40% compute while having negligible impact on the WER. Knowledge distillation techniques specific to RNN-T have also been studied [9, 10, 11]. Sparsity-based pruning on the weight matrices of LSTMs has been studied for both structured and unstructured sparse matrices [12, 5]. In [13], the authors proposed an additional regularization term in the RNN-T loss to penalize blank token prediction so that the model emits the labels faster, leading to a reduction in latency while maintaining the WER. In [14], the authors introduced a bifocal encoder architecture for RNN-T to improve streaming latency, where the low-entropy wakeword segment of an utterance is processed by a small encoder, allowing the larger encoder to catch up during decoding of the rest of the utterance as the frames become available. As a more general approach to switching between encoders of different compute capacity, [15] proposed an arbitrator network that could dynamically choose the encoder network on a per-frame basis.
In addition to modified architectures and loss functions, there have also been many advances in training methods that emulate inference; quantization-aware training (QAT) and sparse pruning methods have been proposed to improve on-device runtime latency without hurting accuracy [16, 17, 18, 19, 20, 21]. In [22], the authors propose an accelerator-aware neural design where the architecture search space is explored to optimize performance on the NNA. More relevant to this work is [23], in which the authors describe a quantization strategy for RNN-T based speech recognition systems using 16-bit activations. This work focuses on accelerators that use 8-bit activations, demonstrating on-par performance with unquantized baselines even for models with a smaller number of parameters, which are more susceptible to quantization.
## 3 Overview of Quantization
In this section, we provide a brief overview of the quantization schemes, notations and definitions used in this paper. Quantization is the process of converting floating point values to a smaller set of discrete fixed point values, effectively reducing the number of bits used to represent the numbers. We will use the \(Q\)-notation to define the parameters of a signed fixed point number. A fixed point number denoted by \(Q_{m.n}\) has the following properties:
* Total bit width is \(m+n\), including \(m\) integer and sign bits plus \(n\) fractional bits.
* Consider a signed representation, i.e., the most significant bit (MSB) is \(1\) for negative values. The minimum possible fractional value that can be represented, \(f_{min}\), is \(-2^{m-1}\), while the maximum representable fractional value, \(f_{max}\), is \(2^{m-1}-2^{-n}\).
* The resolution is \(2^{-n}\), yielding a maximum quantization error of \(2^{-(n+1)}\) between a number and its quantized counterpart.

Figure 1: Model diagram of Recurrent Neural Network Transducer (RNN-T).

Figure 2: A representative SoC showing CPU and NPU cores with local memory and data buffers.
For example, \(Q3.2\) has 5 bits in total: 3 integer bits, with the MSB indicating the sign, and 2 fractional bits. It can represent fractional numbers in the interval [-4, 3.75] with a resolution of \(0.25\), i.e., a total of \(2^{5}\) unique values.
### Static Quantization
In static quantization, the input value \(R_{i}\) is first clipped to be within the quantizable range, thus accumulating clipping error. Afterwards, the resulting FP-32 value is scaled and rounded to the nearest integer, which is then scaled back to its floating point equivalent. This ensures that the numbers will have the lowest quantization (rounding) errors during inference on-device. For a \(Q_{m.n}\) fixed point quantization scheme, the quantized floating point equivalent, \(R_{q}\), of an input FP-32 value \(R_{i}\) is given by,
\[R_{q}=R\left(C\left(R_{i},f_{min},f_{max}\right)2^{n}\right)2^{-n} \tag{1}\]
in which \(R\) denotes the \(Round\) function, which rounds values to the nearest integer or towards zero depending on the implementation, and \(C\) denotes the \(Clip\) function, which clips values at \(f_{min}\) and \(f_{max}\). In this work, we round the weights to their nearest values and round the inputs and hidden states towards zero, as implemented on the accelerator.
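As a concrete reference, Eq. (1) can be sketched in a few lines of NumPy; the function and its `round_fn` switch are ours, not part of the accelerator toolchain:

```python
import numpy as np

def quantize_static(r, m, n, round_fn=np.round):
    # Eq. (1): clip to the representable Q_{m.n} range, then snap to the
    # 2^-n grid. Weights use round-to-nearest; inputs and hidden states
    # would pass a truncation-based round_fn (rounding toward zero).
    f_min, f_max = -2.0 ** (m - 1), 2.0 ** (m - 1) - 2.0 ** (-n)
    r = np.clip(r, f_min, f_max)
    return round_fn(r * 2 ** n) / 2 ** n
```

For instance, `quantize_static(x, 1, 7)` emulates the \(Q1.7\) scheme used throughout Section 4, and passing `round_fn=np.trunc` gives rounding toward zero.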
### Dynamic Quantization
In dynamic quantization, there is an additional degree of freedom: the input values can be scaled by a scaling factor so that their range fits into the quantizable range of integers. This scaling factor needs to be a power of 2 and is calculated dynamically for every sample and time step during NNA inference. Simply put, the scaling factor, \(S\), is the smallest power of 2 that scales the incoming tensor such that all values fit within the quantizable range of the given target \(Q_{m.n}\) scheme. A clipping error is incurred if a larger scaling factor would be required. After the quantization, the outputs are scaled back up. The modified equation is as follows:
\[R_{q}=S\times R\left(C\left(\frac{R_{i}}{S},f_{min},f_{max}\right)2^{n}\right)2 ^{-n} \tag{2}\]
where \(S\in\{1,2,4,8,16\}\), such that \(f_{min}\leq\frac{R_{i}}{S}\leq f_{max}\).
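Similarly, a sketch of Eq. (2), reusing `quantize_static` from the static case (the linear scan over scales is written naively for clarity):

```python
import numpy as np

def quantize_dynamic(r, m, n, scales=(1, 2, 4, 8, 16)):
    # Eq. (2): pick the smallest power-of-2 scale S that fits all values
    # of the tensor into [f_min, f_max]; clipping occurs inside
    # quantize_static if even the largest scale is insufficient.
    f_min, f_max = -2.0 ** (m - 1), 2.0 ** (m - 1) - 2.0 ** (-n)
    s = next((s for s in scales
              if f_min <= np.min(r / s) and np.max(r / s) <= f_max),
             scales[-1])
    return s * quantize_static(r / s, m, n)
```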
## 4 Neural Network Accelerator
In this section, we briefly discuss how neural network weights and activations are quantized in our NNA experiments. The number of bits for the quantization of weights and activations, the type of quantizations performed, supported layers, the specific data paths between the various on-chip components, etc., differ from one chip version to another. In our experiments, we use the following schemes:
* **LSTM Cells**: The hidden states are statically quantized to \(Q1.7\) in all LSTM cells in the encoder and decoder.
* The inputs to the first LSTM layer in the encoder and decoder are dynamically quantized to \(Q1.7\) (because they directly follow a CPU data path and hence do not incur additional latency to compute dynamic scaling factors). The inputs to all other LSTM layers in the encoder and decoder are statically quantized to \(Q1.7\).
* The sigmoid and tanh activations for calculating the gate values \(i\), \(f\), \(g\) and \(o\) in the LSTM cells are non-uniformly quantized 8-bit values, as shown in Figs. 3(a) and 3(b). As sigmoid and tanh are non-linear, non-uniform quantization is more suitable for reducing the mean squared error (MSE) than uniform quantization.
* **Dense Layers**: The inputs to all dense layers in the encoder, decoder and joint network are dynamically quantized to \(Q1.7\).
* **Weights**: All weights in the embedding layer, encoder, decoder and joint network are statically quantized to \(Q1.7\).
### Hidden States and Inputs Quantization
The NNA uses a symmetric linear quantization scheme to map 32-bit floating point numbers to 8-bit integers, rounded toward zero. Considering our previous example of \(Q3.2\), the range of representable fractional values is [-4, 3.75], which on the hardware is represented as integers in the range [-16, +15]. Any value less than -4 or greater than 3.75 is clipped to these limiting values, thus accumulating clipping error during static quantization. For dynamic quantization, the allowable dynamic scales are 1, 2, 4, and 16, which are calculated on the CPU and used to scale the original values into the quantizable range. The outputs are uniformly quantized with equal step sizes and quantization intervals, as illustrated in Fig. 3(c).
### Non-linear Activation Quantization
The hyperbolic tangent and sigmoid functions are the most commonly used non-linear activation functions in neural networks and are integral components in learning long-range memory. They are expensive to compute even on CPUs, and thus are approximated on the NNA hardware through careful digital circuit design and error analysis. An efficient hardware implementation of activation functions is required to meet the performance, area, power and cost targets of neural accelerators. Multiple designs and approximation algorithms have been proposed to balance this trade-off [24, 25, 26]. Usually, a combination of linear interpolators, shifting operations, look-up tables and multiplexers is used in the digital circuit to approximate the activation values. We are interested in translating the on-device operators into the model training workflow to train accelerator-aware models.

For the accelerator considered in this work, activation functions are implemented as a piece-wise linear approximator, which gives non-uniformly quantized 8-bit values as outputs. The design, modeling and analysis of these digital circuits are outside the scope of this work (please refer to [24, 25, 26]). The approximation algorithm is optimized to balance the trade-off between accuracy and on-chip area, and leverages the fact that, mathematically, the tanh and sigmoid functions are shifted and scaled versions of one another. It also reduces quantization error by carefully choosing non-uniform quantization centers, packing more quantization bins into the parts of the functions with higher gradients. Despite the careful design, it is not possible to circumvent the information loss due to quantization. Given that the activation functions play a key role in learning long-term memory, switching from high precision to 8-bit values leads to a large performance degradation, as we show later in the results. To alleviate this performance degradation, these values need to be computed on the CPU, which requires constant transfer of data from the NNA to the CPU at every time step of the LSTM cell to perform the activation functions, and back to the NNA to perform MAC operations, incurring additional processing latency. A major contribution of this paper is to incorporate the bit-exact hardware operators into the training workflow with meaningful gradient backpropagation and to further fine-tune the model to non-uniformly quantized activations, yielding a 5-7% latency reduction and negligible (less than 1% relative) WER degradation.
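For illustration only, the following sketch mimics such a piece-wise linear approximator with assumed non-uniform knots; the breakpoints and 8-bit output centers of the actual circuit are not public:

```python
import numpy as np

def pwl_tanh(x, num_segments=16):
    # Non-uniform knots, denser near zero where |tanh'(x)| is largest;
    # np.interp clamps outside [-4, 4], reproducing saturation.
    t = np.linspace(-1.0, 1.0, num_segments + 1)
    knots = 4.0 * np.sign(t) * t ** 2
    return np.interp(x, knots, np.tanh(knots))
```

Since \(sigmoid(x)=(1+tanh(x/2))/2\), a single approximator of this kind can serve both activation functions, as noted above.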
## 5 Accelerator-Aware Training
We propose a two-stage method to address the errors propagated by the activation and intermediate quantizations in the model. Before diving into the implementation details, we first discuss the motivation for choosing such an approach.
The sigmoid and tanh functions saturate at approximately +/-7 and +/-4, respectively (Figs. 3a and 3b). However, in the process of quantizing the activations, a large clipping and precision error is accumulated between +/-4 and +/-7 when compared to the FP-32 (full precision) values. This matters because a significant portion of the inputs to the activation functions in the encoder and decoder LSTMs of a fully trained RNN-T model lies in this error-prone range. Due to the large number of values present in the saturation range of these functions, the vanishing gradient problem hinders learning through backpropagated gradients, which is further aggravated by quantization. To overcome this issue, we add an activity regularization loss on the inputs to the activation functions, as described in Section 5.1.
For tuning the inputs, hidden states and activation function values into the quantization levels, we perform a bit-exact quantization in the forward pass and approximate the gradients in the backward pass using straight-through estimation. This is detailed for the various components in Section 5.2.
### Activity Regularization
In Stage I of training, we initialize the model with random weights and train it from scratch with an activity regularization loss on the outputs \(z_{0}\), \(z_{1}\), \(z_{2}\), and \(z_{3}\) of the LSTM cell [27] to alleviate the quantization errors discussed above. The additional loss term aims to restrict the range of the inputs to the activation functions by penalizing values in proportion to their distance outside the allowable range \([z_{min},z_{max}]\). We achieve this by using a regularization loss, \(L_{activity}\), defined by a shifted ReLU function.
\[L_{activity}=ReLU(z_{min}-z)+ReLU(z-z_{max}) \tag{3}\]
\[L_{total}=L_{model}+\lambda L_{activity} \tag{4}\]
where \(ReLU(x)=max(0,x)\) is the Rectified Linear Unit and \(z\) is the concatenated array [\(z_{0}\), \(z_{1}\), \(z_{2}\), \(z_{3}\)]. For values of \(z\) outside the allowable range, the loss adds a proportional penalizing term, thus restricting their values. The loss is differentiable with respect to the input except at the range extrema; the derivative at these extrema is approximated as 0, making the loss function compatible with backpropagation. We train the model with \(L_{activity}\) added as an additional term to the total loss \(L_{total}\) (see Eq. 4), where \(\lambda\) weighs the activity regularization term.
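A minimal PyTorch sketch of this regularizer (the bounds shown are placeholders; \(z\) collects the concatenated pre-activation values):

```python
import torch.nn.functional as F

def activity_regularization(z, z_min=-4.0, z_max=4.0):
    # Shifted-ReLU penalty: grows linearly with the distance of each
    # value of z outside [z_min, z_max]; the bounds here are placeholders.
    return (F.relu(z_min - z) + F.relu(z - z_max)).mean()

# Total loss per Eq. (4), with lam playing the role of lambda:
# loss_total = loss_model + lam * activity_regularization(z)
```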
### Quantization of Inputs, Hidden States and Activations
In Stage II of training, we initialize the model with the trained weights from Stage I and apply accelerator-aware training for the linear quantization of inputs and hidden states, and for the non-linear quantization of the tanh and sigmoid activation functions.
#### 5.2.1 Bit-exact Quantization Replication during Forward Pass
* For the linear quantization of inputs and hidden states, we use Eq. 1 in the forward pass.
* For the non-linear activation functions, the on-device linear interpolator in Fig. 3 is replicated during the forward pass. We denote this bit-exact quantization function implemented in the training workflow as \(\mathcal{Q}_{act}\) (see Algorithm 1). Although we demonstrate it here for the \(sigmoid\) and \(tanh\) functions for a particular hardware implementation, this method can be generally applied to any other similar linearly or non-linearly quantized functions.
#### 5.2.2 Backpropagation Using Meaningful Gradients
Since quantization is not a differentiable process, we employ the straight-through estimation method [28] for the forward pass and backpropagate meaningful gradients through the various quantization nodes in the backward pass.
* For the inputs and hidden states, we use clipped cosine gradients, as illustrated in Fig. 3c. Instead of passing a unity gradient throughout the range of the inputs at the quantization node as proposed in [28], we backpropagate a periodic clipped cosine gradient through the quantization node to disincentivize values from occupying unwanted areas that lie outside the quantized levels. The clipped cosine gradients drive the values into the surrounding quantization bins, while leaving values around the center of the bins unchanged (see the sketch after this list).
* For the activations, we use the full precision \(sigmoid\) and \(tanh\) gradients.
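Putting these pieces together, a minimal PyTorch sketch of the input/hidden-state quantization node might look as follows; since the exact waveform of Fig. 3c is not reproduced here, its period, the zero gradient at bin centers, and the clipping of negative lobes are our assumptions:

```python
import math
import torch

class ClippedCosineQuant(torch.autograd.Function):
    # Forward: bit-exact snap to the 2^-n grid (round-to-nearest shown;
    # range clipping omitted for brevity).
    @staticmethod
    def forward(ctx, x, n):
        ctx.save_for_backward(x)
        ctx.step = 2.0 ** (-n)
        return torch.round(x / ctx.step) * ctx.step

    # Backward: assumed clipped-cosine waveform, zero at bin centers
    # (centered values stay put) and largest near bin edges, so off-center
    # values are pushed into the surrounding quantization bins.
    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        g = torch.clamp(-torch.cos(2 * math.pi * x / ctx.step), min=0.0)
        return grad_out * g, None

x_q = ClippedCosineQuant.apply(torch.randn(8), 7)  # e.g. Q1.7 inputs
```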
## 6 Experimental Setup
We conduct our experiments under two setups, A and B. In Setup A, we train a small RNN-T model with a 5-layer encoder, a 2-layer decoder, and an additive, feedforward joint network. The number of hidden units in the LSTM cells for Setup A is 640, yielding a model with 26M parameters. Setup B is a larger RNN-T variant with the same number of encoder, decoder and joint network layers, in which the LSTMs have 1024 hidden units, yielding a model with 66M parameters. For both setups, the baseline models are trained for a total of 600k steps, with 5k steps of warmup to a learning rate (LR) of 5e-4, which was held constant for 150k steps, followed by an exponential decay to a learning rate of 1e-5 for the remaining steps. The accelerator-aware training (AAT) models are trained in Stage I for 500k steps with \(\lambda=2\) (see Eq. 4) and the same LR schedule as the baseline. This was followed by 5k-10k steps of training in Stage II with a constant learning rate of 1e-4. The acoustic features are 64-dimensional Log Mel Filterbank Energies (LFBE) with a window size of 25 ms and 10 ms overlap between frames.
All models are trained with absolute cosine quantization-aware regularization for 8-bit on-device quantization of weights [16]. To train models under both setups, we use an in-house far-field English training dataset with 270k hours of audio. For evaluation, we use 6 test sets: 3
for Setup A and 3 for Setup B. Since Setup A has a smaller model, we evaluate it on a smaller number of intents. The number of utterances in each test set is available in Table 1.

Figure 3: Quantization of Sigmoid, Tanh, Hidden States and Inputs
## 7 Results and Discussion
In Table 1, we compare the relative WER reduction (WERR) between the baseline and AAT models. Note that a positive WERR number indicates a WER improvement, whereas negative numbers signify WER degradation. Furthermore, in order to evaluate the effectiveness of each proposed stage, we present the WER after each stage in Table 1. Here, Q denotes quantized WER numbers. Models A-I and A-II represent the model in Setup A after Stage I and Stage II training, respectively. All WERR numbers are computed between the quantized (Q) version of the respective model and the un-quantized version of the baseline model. For example, the WERR of quantized Model A-II with respect to the un-quantized Baseline-A is +0.3%.
As shown in Table 1, quantizing the inputs, hidden states and activations for the baseline models without the proposed AAT approach leads to a 6-10% relative WER degradation for Setup A, and 3-7% for Setup B, across our test datasets. This degradation is reduced to within 1% relative to the baseline performance after the two-stage AAT. In particular, as expected, we observe a larger degradation for the model in Setup A than for the model in Setup B, since the former has a smaller number of parameters and is more susceptible to quantization. It is observed that by using the two-stage approach, we can effectively reduce the WER gap between the quantized and unquantized versions for small and large models alike.
To demonstrate the latency gains, we provide the ASR engine latency (EL) and UPL numbers in Table 2. Here, EL measures the time elapsed between the user completing the utterance and the ASR recognition result being available in the ASR engine. UPL denotes the time elapsed between the user completing the utterance and the system responding with an appropriate dialogue or action. Thus, UPL includes EL plus the other server- and device-side processing required to fulfill the user's request. We conduct on-device tests with 500 utterances for both baseline and AAT models in Setups A and B, and report both EL and UPL numbers at the 50th, 90th and 99th percentiles. As before, positive relative numbers signify reductions in latency, whereas negative numbers signify degradation. We see that by compiling the AAT models, we obtain approximately 4-9% EL and UPL reductions at p50. AAT enables this by allowing us to shift to a more quantized, faster NNA data path during inference with negligible degradation in performance.
## 8 Conclusions
In this work, we introduce a two-stage AAT approach to alleviate the performance degradation due to performing post-training model conversion and quantized inference on the NNA. In particular, we incorporate the NNA quantization operators into model training and apply regularization as well as cosine gradients at the quantization nodes to reduce the performance gap between the in-training and post-training models. It is observed that with our proposed approach, there is little to no WER performance degradation between the unquantized baseline models and the quantized AAT counterparts. Compared to the original quantized baseline, the proposed two-stage quantized AAT models save 3-10% relative WER degradation. Furthermore, AAT enables NPU/hardware execution of activation functions in runtime, leading to 5-7% relative latency reductions.
Table 2: Normalized latency measurements for Setups A and B. All values are normalized with respect to the baseline latencies at p50 for the respective setups.

| Setup | Statistic | Baseline EL | Baseline UPL | AAT EL | AAT UPL |
| --- | --- | --- | --- | --- | --- |
| Setup A | p50 | 1.0 | 1.0 | 0.94 | 0.95 |
| Setup A | p90 | 1.17 | 1.49 | 1.11 | 1.35 |
| Setup A | p99 | 1.34 | 7.21 | 1.28 | 7.21 |
| Setup B | p50 | 1.0 | 1.0 | 0.94 | 0.94 |
| Setup B | p90 | 1.53 | 1.43 | 1.42 | 1.35 |
| Setup B | p99 | 2.41 | 1.90 | 2.16 | 2.36 |
Table 1: Relative WER reductions (WERR) for Setups A and B. WERR numbers are computed between the quantized (Q) version of the respective model and the un-quantized version of the baseline model. For example, the WERR of quantized Model A-II with respect to the un-quantized Baseline-A is +0.3%.

| Datasets (Num. Utts) | WERR (%), Baseline (Q) | WERR (%), B-I (Q) |
| --- | --- | --- |
| B-D1 (155936) | -5.4 | -0.4 |
| B-D2 (46530) | -7.2 | -0.6 |
| B-D3 (20279) | -3.0 | +1.3 |
| **Stage I** | ✗ | ✓ |
| **Stage II** | ✗ | ✓ |
 |
2308.09971 | Disposable Transfer Learning for Selective Source Task Unlearning | Transfer learning is widely used for training deep neural networks (DNN) for
building a powerful representation. Even after the pre-trained model is adapted
for the target task, the representation performance of the feature extractor is
retained to some extent. As the performance of the pre-trained model can be
considered the private property of the owner, it is natural to seek the
exclusive right of the generalized performance of the pre-trained weight. To
address this issue, we suggest a new paradigm of transfer learning called
disposable transfer learning (DTL), which disposes of only the source task
without degrading the performance of the target task. To achieve knowledge
disposal, we propose a novel loss named Gradient Collision loss (GC loss). GC
loss selectively unlearns the source knowledge by leading the gradient vectors
of mini-batches in different directions. Whether the model successfully
unlearns the source task is measured by piggyback learning accuracy (PL
accuracy). PL accuracy estimates the vulnerability of knowledge leakage by
retraining the scrubbed model on a subset of source data or new downstream
data. We demonstrate that GC loss is an effective approach to the DTL problem
by showing that the model trained with GC loss retains the performance on the
target task with a significantly reduced PL accuracy. | Seunghee Koh, Hyounguk Shon, Janghyeon Lee, Hyeong Gwon Hong, Junmo Kim | 2023-08-19T10:13:17Z | http://arxiv.org/abs/2308.09971v1 | # Disposable Transfer Learning for Selective Source Task Unlearning
###### Abstract
Transfer learning is widely used for training deep neural networks (DNN) for building a powerful representation. Even after the pre-trained model is adapted for the target task, the representation performance of the feature extractor is retained to some extent. As the performance of the pre-trained model can be considered the private property of the owner, it is natural to seek the exclusive right of the generalized performance of the pre-trained weight. To address this issue, we suggest a new paradigm of transfer learning called disposable transfer learning (DTL), which disposes of only the source task without degrading the performance of the target task. To achieve knowledge disposal, we propose a novel loss named Gradient Collision loss (GC loss). GC loss selectively unlearns the source knowledge by leading the gradient vectors of mini-batches in different directions. Whether the model successfully unlearns the source task is measured by piggyback learning accuracy (PL accuracy). PL accuracy estimates the vulnerability of knowledge leakage by retraining the scrubbed model on a subset of source data or new downstream data. We demonstrate that GC loss is an effective approach to the DTL problem by showing that the model trained with GC loss retains the performance on the target task with a significantly reduced PL accuracy.
## 1 Introduction
Transfer learning (TL) [28] is one of the bedrocks of the success of deep neural networks (DNNs). The core idea of TL is to build a strong generic model that can adapt to a broad range of downstream tasks with a much smaller amount of data. The scale of data collection for pre-training has become the key factor in building a model with competitive performance on target tasks [15]. Nowadays, many organizations are interested in collecting internal datasets to build their own generic models.
For example, JFT-300M [6, 26, 27], a large-scale private dataset exclusively available to Google, is used for pre-training to reach state-of-the-art performance on downstream target tasks [13, 29] with relatively small target datasets. IG-3.5B-17k [19], an internal Facebook AI Research dataset, is also used to train a weakly supervised model. Such datasets and trained models have high economic value, in that collecting data and training a model is time-consuming and costly, while a proficient model is readily adaptable to various commercial services.
However, these private properties are exposed to unauthorized customization once released. Specifically, we denote by piggyback learning (PL) (Figure 1) a kind of extra fine-tuning on other downstream tasks that leverages the benefits of the transfer-learned model with much less effort. As shown in Figure 2, the performance of TL (blue) and PL (green) on the downstream tasks is comparable, and is considerably improved over the model trained from scratch (red). In other words, PL enables anyone to exploit the pre-text knowledge, even without access to the pre-trained model, by accessing the released transfer-learned model. PL is profitable to such free riders, as they may launch a new service or product by simply exploiting the proficient model. This conflicts with the model owner's interest.

Figure 1: Illustration of the proposed Disposable Transfer Learning (DTL) framework. DTL extends the existing Transfer Learning (TL) paradigm with an additional knowledge disposal stage that scrubs off the prior knowledge irrelevant to the target task. The goal of DTL is to prevent Piggyback Learning (PL) which maliciously exploits the representation performance of a pre-trained model for a piggyback task by simply performing an extra fine-tuning step on top of the published transfer-learned model.
To alleviate this potential risk, we propose a novel TL paradigm that temporarily utilizes and then _disposes of_ the source task knowledge after transfer learning, coined _Disposable Transfer Learning_ (DTL). DTL aims to protect the exclusive license of generic performance on the internal pre-training data while achieving a powerful downstream performance.
To address the DTL problem, we propose a novel loss function that scrubs the source task knowledge, named _Gradient Collision loss_ (GC loss). GC loss guides the model towards abnormal convergence on the source task by minimizing the inner products between sample gradients. GC loss addresses a non-typical unlearning problem, in which the scale of the data to be unlearned is much larger and the dataset to be unlearned and the dataset to be retained are heterogeneous, whereas the existing unlearning literature [2, 3, 8, 9] mainly focuses on forgetting only a small portion of a single kind of training data.
After DTL, we measure the model's susceptibility to unwanted PL using _Piggyback Learning accuracy_ (PL accuracy). We define the PL accuracy of a model as the test accuracy measured after learning an additional piggyback task. A low PL accuracy indicates that the model has successfully unlearned the source knowledge, so that it is resistant to re-training on a small portion of the source data or to fine-tuning on other downstream tasks. We will show the importance of PL accuracy as a measurement of unlearning for validating knowledge disposal.
We demonstrate that the model scrubbed with GC loss retains the target performance while effectively preventing exploitation of the pre-trained model's performance. To the best of our knowledge, DTL with GC loss is the first work to make transfer learning and unlearning compatible.
Our main contributions are summarized as follows:
* We propose a novel forgetting problem, DTL, in which we try to dispose of the generalization power of a pre-trained model while adapting the model only to a specific target downstream task.
* We propose GC loss, which is a novel loss that achieves knowledge disposal of the source task. We also provide an extremely efficient implementation of GC loss that also allows for distributed training.
* We propose PL accuracy as an evaluation metric that can estimate the performance of a DTL model.
## 2 Related Work
### Unlearning in deep learning
Unlearning is a training mechanism to controllably forget specific data from the knowledge of a DNN and is gaining attention in the deep learning community owing to increased public awareness of digital privacy. Existing methods adopt settings that work around the non-convex and stochastic nature of DNNs, which makes perfect unlearning almost impossible. [3] combined distributed training and ensemble training to reduce the retraining time when excluding samples to be unlearned. Class-wise unlearning has been attempted [2], but it only removed a class from the classifier output, not from the model parameters. [9] tried to scrub a subset of a class or an entire class under the assumption that the dataset to be forgotten is much smaller than the dataset to be retained, and it uses the mean squared error loss to make the model partially convex to guarantee perfect forgetting. [10] approximated the weight difference using Neural Tangent Kernel theory [14]. [8] approximated the model with a linear model, pre-determining a subset of the training dataset not to be unlearned and using it in pre-training.
### Gradient-based learning methods
The gradient vector of a learned model on specific input data characterizes the relationship between the model and the input. In representation learning, the gradient of a learned model can be used as a feature for downstream tasks [20]. In continual learning, [4, 18] take advantage of gradient vectors to reduce catastrophic forgetting of a previously learned task. Gradients have also been used to select samples for episodic memory [1]. The algorithm for composing the episodic memory aims to minimize the cosine similarity of the gradients for each sample pair in the candidate memory, and the authors have shown that the solution of this minimization problem consequently maximizes the variance of the gradients of the selected samples.

Figure 2: Performance comparison between models trained from scratch, models trained by transfer learning (TL), and models trained by piggyback learning (PL). The horizontal axis indicates the datasets used for performance evaluations. The PL model (green) achieves performance comparable to the TL model (blue), and both models perform much better than the models trained from scratch (red). For both TL and PL, CIFAR-100 is used as the source task. For PL, CIFAR-10-1% is additionally used as the target task before the model is piggybacked.
### Readout function
Readout functions [2, 8, 9] are used to test whether a model has been successfully unlearned. Entropy, retraining time, or the success rate of membership inference attacks (MIAs) are usually used as readout functions. Entropy measures the increase in the uncertainty of the model after unlearning, based on the assumption that the predictions of a model become less confident as it forgets the knowledge of interest. Retraining time quantifies the number of training steps required to restore the previous performance. MIA is a kind of attack that detects whether a given sample was used in training a model, exploiting the leakage of training data information from a trained model. Black-box MIAs only observe the input-output relationship, while white-box attacks fully exploit the model's architecture, weights, and gradients. We report the success rate of MIAs against unlearned models using the strategies reported and implemented in [11]. Note that we use PL accuracy as the readout function for knowledge disposal.
## 3 Method
**Notations** Let \(\mathcal{D}\) be a dataset for a classification task with input space \(\mathcal{X}\) and label space \(\mathcal{Y}\). We consider a DNN model \(P(y|x;\theta)\) parameterized by \(\theta\) that takes in an input \(x\) and predicts a categorical probability distribution. We denote a training procedure as \(\theta^{out}=\textsc{Train}_{\mathcal{A}}\left(\theta^{in},\mathcal{L}( \theta;\mathcal{D})\right)\) which represents a model learned from an initialization \(\theta^{in}\), an objective function \(\mathcal{L}(\theta;\mathcal{D})\), and a training scheme \(\mathcal{A}\). For example, \(\mathcal{A}\) can be a stochastic gradient descent optimizer with decaying learning rate scheduling.
### Disposable transfer learning
Disposable transfer learning (DTL) is a training paradigm for selective unlearning of the source task upon completion of transfer learning. As described in Figure 1, it consists of two stages: the transfer learning (TL) stage and the knowledge disposal stage of the source data.
DTL is conducted by the owner of a private source data \(\mathcal{D}_{s}\), so \(\mathcal{D}_{s}\) is to be unlearned and accessible during unlearning. Also, the size of the target dataset \(\mathcal{D}_{t}\) is much smaller, _i.e._, \(|\mathcal{D}_{s}|\gg|\mathcal{D}_{t}|\), such that transfer learning is required to achieve competitive target task performance.
**Transfer learning stage** The transfer-learned model \(\theta^{tl}\) is obtained by pre-training and fine-tuning. The model is initially pre-trained on a source dataset \(\mathcal{D}_{s}\) from scratch, and the output model \(\theta^{pre}\) is then easily adaptable to the target task. Then \(\theta^{pre}\) is fine-tuned to get \(\theta^{tl}=\textsc{Train}_{\mathcal{A}}\left(\theta^{pre},\mathcal{L}( \theta;\mathcal{D}_{t})\right)\). Due to the small scale of \(\mathcal{D}_{t}\), fine-tuning has a reasonable training cost and the fine-tuned weight \(\theta^{tl}\) is not perturbed significantly from \(\theta^{pre}\).
**Knowledge disposal stage** The knowledge disposal stage is the last stage of DTL, building \(\theta^{dtl}\) by disposing of the source task from the transfer-learned model \(\theta^{tl}\) with the DTL loss \(\mathcal{L}_{DTL}\), which we denote as \(\theta^{dtl}=\textsc{Train}_{\mathcal{A}}\left(\theta^{tl},\mathcal{L}_{DTL}(\theta;\mathcal{D})\right)\).
For successful disposable _transfer learning_, where good generalization to the target task is one of the key factors, \(\theta^{dtl}\) has to retain the target-task performance of \(\theta^{tl}\). Simultaneously, it is essential to dispose of the source performance so that it cannot be recovered through PL, as discussed in Section 3.3. We formulate these factors as a retaining loss and an unlearning loss, and combine them into the single objective of Equation (1). Here, \(\lambda\) is a scalar hyperparameter that controls the level of unlearning.
\[\mathcal{L}_{DTL}(\theta)=(1-\lambda)\cdot\mathcal{L}_{retain}(\theta)+ \lambda\cdot\mathcal{L}_{unlearn}(\theta) \tag{1}\]
### Retaining loss and unlearning loss
#### 3.2.1 Retaining by knowledge distillation loss
To effectively retain the transferred knowledge on the downstream task, we adopt the knowledge distillation loss (KD loss) [13]. KD loss transfers knowledge between different models by setting the output of a teacher model as a soft target and minimizing the KL divergence between the soft target and the output of a student model, as formulated in Equation (2). In this paper, we choose \(\mathcal{D}=\mathcal{D}_{s}\) to prevent the risk of over-fitting due to the small size of \(\mathcal{D}_{t}\).
\[\mathcal{L}_{retain}(\theta)=\mathop{\mathbb{E}}_{x\sim\mathcal{D}}\left[D_{ KL}\left(P(y|x;\theta^{tl})||P(y|x;\theta)\right)\right] \tag{2}\]
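A minimal PyTorch sketch of this retaining loss, with the transfer-learned teacher \(\theta^{tl}\) frozen, might look as follows:

```python
import torch.nn.functional as F

def retaining_loss(student_logits, teacher_logits):
    # KL( P(y|x; theta_tl) || P(y|x; theta) ): teacher soft targets are
    # detached so no gradient flows into the frozen transfer-learned model.
    p_teacher = F.softmax(teacher_logits.detach(), dim=-1)
    log_p_student = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```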
Figure 3: A conceptual diagram of gradients after transfer learning (left) and knowledge disposal (right). The gradients from the source data examples are marked as blue arrows. As depicted in Figure 1, \(\theta^{pre}\), \(\theta^{tl}\) and \(\theta^{dtl}\) correspond to the pre-trained model, transfer-learned model, and disposable transfer-learned model, respectively.
#### 3.2.2 Unlearning by gradient collision loss
Consider a dataset of training examples \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\). The cross-entropy loss of a model \(\theta\) on an example \((x_{i},y_{i})\) is denoted by \(\ell(\theta;x_{i},y_{i})\), which is abbreviated to \(\ell_{i}(\theta)\) when clear from the context.
In the context of unlearning, a key concept is how the value of \(\ell_{n}(\theta)\) changes when the model weights are updated to reduce \(\ell_{m}(\theta)\). In gradient descent, the model weight is optimized to minimize the loss function by updating \(\theta\) by \(\Delta\theta\), which is \(-\nabla\ell(\theta)\) scaled by the learning rate \(\eta\):
\[\Delta\theta=-\eta\nabla\ell(\theta;x_{m},y_{m}) \tag{3}\]
The change of \(\ell_{n}(\theta)\) over \(\Delta\theta\) is approximated by its first-order Taylor expansion,
\[\Delta\ell_{n}(\theta) =\ell_{n}(\theta+\Delta\theta)-\ell_{n}(\theta) \tag{4}\] \[\simeq\nabla\ell_{n}(\theta)^{\top}\Delta\theta. \tag{5}\]
We get the following relationship between a pair of gradients on the loss by combining Equation (3) and Equation (5).
\[\Delta\ell_{n}(\theta)\simeq-\eta\nabla\ell_{m}(\theta)^{\top}\nabla\ell_{n}(\theta) \tag{6}\]
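This first-order relationship can be verified numerically on a toy quadratic loss (a self-contained sketch, not part of the method):

```python
import torch

# Numerical check of Eq. (6): a gradient step on example m changes the
# loss on example n by approximately -eta * <grad_m, grad_n>.
torch.manual_seed(0)
x_m, x_n = torch.randn(5), torch.randn(5)
w = torch.randn(5, requires_grad=True)

def loss(weights, x):
    return (weights @ x - 1.0) ** 2  # toy per-example squared error

g_m = torch.autograd.grad(loss(w, x_m), w)[0]
g_n = torch.autograd.grad(loss(w, x_n), w)[0]

eta = 1e-4
with torch.no_grad():
    actual = loss(w - eta * g_m, x_n) - loss(w, x_n)
predicted = -eta * g_m.dot(g_n)
print(actual.item(), predicted.item())  # nearly identical values
```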
Equation (6) states that the value of \(\ell_{n}(\theta)\) is affected by the relationship between \(\nabla\ell_{m}\) and \(\nabla\ell_{n}\). As illustrated on the left side of Figure 3, when the gradients of a pair of examples point in similar directions, i.e., the inner product of the gradients is positive, the model update for reducing \(\ell_{m}(\theta)\) also causes a decrease in \(\ell_{n}(\theta)\). On the other hand, as on the right side of Figure 3, an update that reduces the loss on one sample does not imply learning on the other, and can even hinder it. Hence, we claim that the model's ability to learn the knowledge can be hindered by colliding the gradients of the dataset's examples, which corresponds to reducing the inner products of the gradient vectors. Finally, we propose the gradient collision loss (GC loss).
**Definition 3.1** (Gradient Collision loss).: The full gradient collision loss defined for a dataset \(\mathcal{D}\) of size \(N\) is given by,
\[\mathcal{L}_{gc}(\theta,\mathcal{D})=\frac{1}{{N\choose 2}}\sum_{m\neq n} \nabla\ell_{m}(\theta)^{\top}\nabla\ell_{n}(\theta). \tag{7}\]
When the model is unlearned, each gradient vector changes direction, as seen in Figure 3, and the vectors converge towards perpendicular angles, which results in suppressed inner-product values. Note that GC loss does not have a zero minimum: the value is negative when the per-sample gradients point in opposing directions. In this case, a gradient descent step on one sample leads to a loss increase on another sample.
This ideal form aggregates every possible pair of \(\ell_{i}\); however, the computation quickly becomes intractable as the size \(N\) grows. Therefore, we calculate the inner products within a stochastically sampled mini-batch by dividing the mini-batch into \(c\) chunks and colliding the gradients of the chunks. We calculate the unlearning loss for every chunk pair, i.e., \({c\choose 2}\) pairs per mini-batch.
**Definition 3.2** (Stochastic Gradient Collision loss).: We define a variant of GC loss for mini-batch training. The stochastic GC loss is the sum of inner products between pairs of chunk-averaged gradients. Here, \(\nabla\bar{\ell}_{i}\) is the gradient averaged over the \(i\)-th chunk in a mini-batch, and \(c\) is the number of chunks per mini-batch.
\[\mathcal{L}_{gc}(\theta,\mathcal{D})=\frac{1}{{c\choose 2}}\sum_{m\neq n} \nabla\bar{\ell}_{m}(\theta)^{\top}\nabla\bar{\ell}_{n}(\theta) \tag{8}\]
We compute GC loss on \(\mathcal{D}_{s}\) to dispose of the source knowledge by guiding the gradients of the source data to collide. We use Equation (8) as our unlearning objective, along with a custom gradient computation for efficient training.
**Efficient training of GC loss** Training with a naive implementation of GC loss is expensive because it requires calculating the gradient of each chunk in series, summing up the inner products of all pairs of the gradient vectors, and then conducting back-propagation.
For computational efficiency, we re-formulate the derivative of GC loss as a sum of Hessian-vector products (HVPs), as in Equation (10).
Figure 4: Illustration of distributed computation for GC loss. The computation of GC loss and its gradient can be optimized by rearranging Equation (9). The rearrangement makes the HVP computation parallelized and enables distributed processing of the backward pass. The pseudo-code is reported in Algorithm 1.
\[\nabla\mathcal{L}_{gc}=\frac{1}{\binom{c}{2}}\sum_{m\neq n}\nabla\left(\nabla\bar{\ell}_{m}^{\top}\nabla\bar{\ell}_{n}\right) \tag{9}\] \[=\frac{1}{\binom{c}{2}}\sum_{m=1}^{c}\nabla^{2}\bar{\ell}_{m}^{\top}\sum_{n\neq m}\nabla\bar{\ell}_{n} \tag{10}\]
This avoids repeated computation of intermediate vectors and parallelizes the computation of each chunk's derivative and each HVP along the chunk axis for distributed processing across multiple GPUs, as depicted in Figure 4.
A detailed description of the algorithm is provided in Algorithm 1. We use the DistributedDataParallel [24] primitives of PyTorch, which provide communication for multiprocessing. First, each process calculates the chunk-wise gradient of the cross-entropy loss (Line 3). The gradients are aggregated across processes using a gather operation to compute the GC loss (Line 4). detach allows for calculating the partial derivative of the gradient, which is required for computing the HVP. Finally, the GC loss is partially back-propagated, and we reduce the values to obtain the gradient of GC loss as in Equation (10) to update the model parameters.
While a naive combinatorial implementation of GC loss has \(\mathcal{O}(c^{2})\) complexity, our re-formulation greatly reduces the cost to \(\mathcal{O}(c)\). Note that this matches the complexity of typical loss functions. This is because our method only needs products of individual vectors with an aggregated vector, rather than pair-wise products, which enables data parallelism for multiprocessing. An extra cost is the backward-on-backward step for propagating the gradient of GC loss in Line 6. Fortunately, this only requires a computational cost and memory footprint similar to a typical backpropagation.
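A single-process PyTorch sketch of this computation is given below (an illustration, not the distributed Algorithm 1: the multi-GPU gather/reduce is omitted, and the pairwise sum of Eq. (8) is rewritten as \(\big(\|\sum_{m}\nabla\bar{\ell}_{m}\|^{2}-\sum_{m}\|\nabla\bar{\ell}_{m}\|^{2}\big)/2\)). Because the per-chunk gradients are built with `create_graph=True`, calling `.backward()` on the returned loss performs exactly the backward-on-backward (HVP) step of Eq. (10).

```python
import torch

def stochastic_gc_loss(model, loss_fn, chunks):
    """Stochastic GC loss (Eq. 8) over the chunks of one mini-batch."""
    params = [p for p in model.parameters() if p.requires_grad]
    flat = []
    for x, y in chunks:  # c chunks of source-data examples
        g = torch.autograd.grad(loss_fn(model(x), y), params,
                                create_graph=True)
        flat.append(torch.cat([t.reshape(-1) for t in g]))
    c = len(flat)
    total = torch.stack(flat).sum(dim=0)
    # sum_{m<n} <g_m, g_n> = (||sum_m g_m||^2 - sum_m ||g_m||^2) / 2
    pair_sum = 0.5 * (total.dot(total) - sum(g.dot(g) for g in flat))
    return pair_sum / (c * (c - 1) / 2)
```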
#### 3.2.3 Baseline unlearning methods
As baselines, we consider three additional naive unlearning losses adopted from [9]. Random target fooling loss updates the model using cross-entropy loss with \(\bar{\mathcal{D}}\), a dataset constructed from \(\mathcal{D}\) with random target labels. This leads the model to memorize wrong answers so that the model unlearns the related knowledge.
\[\mathcal{L}_{rand}(\theta;\mathcal{D})=\operatorname*{\mathbb{E}}_{(x,y) \sim\bar{\mathcal{D}}}[-\log P(y|x;\theta)] \tag{11}\]
Uniform target fooling loss guides the model to output a uniform distribution; therefore, the model loses the ability to make any prediction. Here, \(\mathcal{U}(y)\) is a uniform distribution over \(\mathcal{Y}\).
\[\mathcal{L}_{unif}(\theta;\mathcal{D})=\operatorname*{\mathbb{E}}_{x\sim \mathcal{D}}[D_{KL}\left(\mathcal{U}(y)||P(y|x;\theta)\right)] \tag{12}\]
Negative cross-entropy loss flips the learning signal by increasing the cross-entropy. The concept is that increasing the loss through gradient ascent steps makes the model forget.
\[\mathcal{L}_{neg}(\theta;\mathcal{D})=\operatorname*{\mathbb{E}}_{(x,y)\sim \mathcal{D}}[\log P(y|x;\theta)] \tag{13}\]
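For completeness, minimal PyTorch sketches of the three baseline losses might look as follows (the uniform-target loss drops the constant \(-\log K\) term of the KL divergence, which does not affect the gradients):

```python
import torch
import torch.nn.functional as F

def random_target_loss(logits, num_classes):
    # Eq. (11): cross-entropy against uniformly drawn random labels
    y_rand = torch.randint(num_classes, (logits.size(0),), device=logits.device)
    return F.cross_entropy(logits, y_rand)

def uniform_target_loss(logits):
    # Eq. (12): KL(U || P) up to the constant -log K
    return -F.log_softmax(logits, dim=-1).mean()

def negative_ce_loss(logits, targets):
    # Eq. (13): gradient ascent on the cross-entropy
    return -F.cross_entropy(logits, targets)
```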
We will compare the proposed unlearning loss \(\mathcal{L}_{gc}\) against \(\mathcal{L}_{rand}\), \(\mathcal{L}_{unif}\), and \(\mathcal{L}_{neg}\) on \(\mathcal{D}_{s}\) in Section 4.
### Evaluation by piggyback learning accuracy
In this section, we establish an evaluation protocol for DTL performance using piggyback learning accuracy (PL accuracy). While measuring the source task accuracy may seem like the most direct approach for evaluating knowledge disposal, we have found that it is a less effective measure due to trivial solutions such as last-layer fooling. Specifically, if the source task accuracy is used as an unlearning metric, a trivial solution is to simply collapse the source classifier, since there are separate classifiers for the unlearned task and the retained task in DTL. Designing a proper evaluation protocol for DTL is crucial, as the ultimate goal is to prevent the model from learning an unknown piggyback task after transfer learning is completed. To address these challenges, we propose to benchmark DTL using PL accuracy.
**Definition 3.3** (Piggyback learning and piggyback learning accuracy).: We define Piggyback Learning (PL) as adapting a model \(\theta^{0}\) on a piggyback task \(\mathcal{D}_{p}\) using a fine-tuning scheme \(\mathcal{A}\).
\[\theta^{pl}=\textsc{Train}_{\mathcal{A}}\left(\theta^{0},\mathcal{L}(\theta; \mathcal{D}_{p}^{train})\right) \tag{14}\]
where \(\mathcal{D}_{p}^{train}\) is the train split of \(\mathcal{D}_{p}\). Piggyback learning accuracy \(Acc_{pl}(\theta^{0})\) of a model \(\theta^{0}\) is the test accuracy of \(\theta^{pl}\) on the piggyback task. \(\theta^{0}\) is set to either \(\theta^{tl}\) or \(\theta^{dtl}\) in our main experiments.
We use multiple datasets for measuring PL accuracy. Unlike a typical test accuracy, which is measured on a single dataset, our protocol estimates performance by fine-tuning the model obtained from DTL and testing it on multiple datasets.
The PL accuracy quantifies the susceptibility of a model to various kinds of possible downstream tasks. In other words, it measures the model's transferability as a pre-trained weight.
When \(\mathcal{D}_{p}\) is substituted with the source dataset \(\mathcal{D}_{s}\), PL accuracy can also be used to estimate the recoverability of the scrubbed model. Lower PL accuracy implies a lower risk of catastrophic leakage of generic performance, because the difficulty of relearning depends on the amount of knowledge remaining in the unlearned model.
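Schematically, the protocol amounts to the following sketch, where `finetune` plays the role of the training scheme \(\mathcal{A}\) in Equation (14) and `evaluate` is a standard test-set evaluation; all names are illustrative.

```python
def piggyback_learning_accuracy(theta0, piggyback_tasks, finetune, evaluate):
    """PL accuracy of a (possibly unlearned) model theta0 (Definition 3.3)."""
    scores = {}
    for name, (train_split, test_split) in piggyback_tasks.items():
        # Eq. (14): theta_pl = Train_A(theta0, L(theta; D_p^train))
        theta_pl = finetune(theta0, train_split)
        scores[name] = evaluate(theta_pl, test_split)
    return scores
```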
## 4 Experiment
### Experimental setting
We denote a DTL training scheme as \(\mathcal{D}_{s}\xrightarrow{DTL}\mathcal{D}_{t}\), where \(\mathcal{D}_{s}\) is the source task and \(\mathcal{D}_{t}\) is the target task. We used CIFAR-10/100, STL-10, SVHN, and TinyImageNet [5, 16, 17, 23] to construct the benchmarks. Additionally, we reduced the scale of the target training sets by class-balanced random sub-sampling. Sub-sampled datasets have their names suffixed by "-\(\gamma\)%", where \(\gamma\) is the sampling ratio. Reducing the dataset size places heavier emphasis on transfer learning, as TL then has a larger impact on the target task performance (\(Acc_{t}\)). The unlearned models are evaluated by the target accuracy and PL accuracy (\(Acc_{pl}\)); for measuring the PL accuracy, the datasets are sub-sampled. Columns marked with an up-arrow symbol (\(\uparrow\)) indicate that higher is better, while a down-arrow symbol (\(\downarrow\)) indicates the opposite. We used the ResNet-18 [12] architecture for all experiments.
### Baseline methods
The most naive baseline is the TGT model, which is trained only with \(\mathcal{D}_{t}\) from scratch. In addition, we compared the effect of the baseline unlearning losses in Section 3.2.3 by conducting the knowledge disposal stage from the transferred model (TL). We name each model after its unlearning loss. The model unlearned using the proposed GC loss (Equation (8)) is marked GC; we also use random target fooling loss (RAND, Equation (11)), uniform target fooling loss (UNIF, Equation (12)), and negative cross-entropy loss (NEG, Equation (13)). Refer to the supplementary materials for the hyperparameter \(\lambda\) in Equation (1) used in our main experiments. For a fair comparison of unlearning methods, we adjusted \(\lambda\) so that the differently unlearned models are compared at similar target accuracy.
### Piggyback learning accuracy of unlearned models
Table 1 shows the comparison of four DTL models, which have similar target accuracy (\(Acc_{t}\)). In the CIFAR-100 \(\xrightarrow{DTL}\) CIFAR-10-1% experiment, \(Acc_{t}\) is significantly higher than that of the TGT model, with negligible performance degradation compared to the TL model. A similar trend is observed in the CIFAR-100 \(\xrightarrow{DTL}\) STL-10-10% case, where \(Acc_{t}\) is significantly higher than for the TGT model with only a negligible penalty. Likewise, GC loss is superior to the others in the TinyImageNet \(\xrightarrow{DTL}\) CIFAR-100 experiment.
A model with lower PL accuracy (\(Acc_{pl}\)) has more successfully disposed of the source knowledge and is thus less susceptible to other piggyback tasks. Our experiments show that the proposed method (GC model) is the most effective in preventing piggyback learning. Notably, when the pre-training task is CIFAR-100 and the piggyback task is CIFAR-100-10%, the GC model successfully unlearns the source task knowledge with a significant PL-accuracy gap over the other baselines: the PL accuracy of the GC model is 24.53 percentage points lower than that of the TL model, whereas the best-performing baseline (RAND) is only 14.16 percentage points lower.
In addition, when the SVHN-1% dataset is the piggyback task, only the GC model successfully prevents the full recovery of the PL accuracy of the TL model, whereas the others surpass the PL accuracy of the TL model. The proposed GC model significantly outperforms the baselines, all while achieving target task performance comparable with the TL model across all benchmarks, including the TinyImageNet experiments.

Table 1: Comparison of four DTL models on CIFAR-100 \(\xrightarrow{DTL}\) CIFAR-10-1\%, CIFAR-100 \(\xrightarrow{DTL}\) STL-10-10\%, and TinyImageNet \(\xrightarrow{DTL}\) CIFAR-100-100\%, reporting \(\Delta Acc_{t}\) vs TGT (\(\uparrow\)), \(\Delta Acc_{t}\) vs TL (\(\uparrow\)), and \(\Delta Acc_{pl}\) vs TL (\(\downarrow\)) for each experiment.
### Robustness to membership inference attacks
We investigated the potential security issue of membership information leakage on the private source data by MIA strategies, as mentioned in Section 2.3. The success rates of white-box MIAs are reported in Table 2 to evaluate the robustness of the DTL models. The results show that all unlearned models are significantly more robust to MIAs than the TL model, which has not been unlearned. Among them, the GC model is the most resistant to MIA attacks, demonstrating the lowest success rate across most attack strategies. Notably, in the case of the WB attack [22], which fits the attack model on intermediate features, the GC model remains considerably robust while the other baselines are shown to be more vulnerable. This is because, while the other baselines focus on perturbing the output layer, GC loss directly perturbs the hidden-layer representations through the gradient vectors, resulting in significantly better resilience against MIA-based privacy attacks.
### Effectiveness of piggyback learning accuracy
We inspected the effectiveness of PL accuracy as an evaluation metric of knowledge disposal from two perspectives. First, our results show that the source accuracy of the DTL model cannot be used as a representative measure of knowledge disposal. As shown in Table 3, the UNIF model and GC model exhibit similar source accuracy (\(Acc_{s}\)), yet their PL accuracy on source data (\(Acc_{pl}\)) differs significantly in both the CIFAR-100\(\xrightarrow{DTL}\)CIFAR-10-1% and CIFAR-100\(\xrightarrow{DTL}\)STL-10-10% experiments. This is consistent with our observation that trivial unlearning can occur by simply disrupting the classification layer, which degrades source accuracy while leaving the feature extractor intact.
Furthermore, we observed that PL accuracy remains an effective metric across arbitrary sizes of the piggyback dataset. In Figure 5, we examined the PL accuracy on the source data, CIFAR-100 (Figure 5(a)), and on new downstream data, SVHN (Figure 5(b)), in the CIFAR-100\(\xrightarrow{DTL}\) CIFAR-10-1% experiment. We simulated arbitrary piggyback dataset sizes by sampling \(\gamma\)% of the original training data.
Interestingly, we found that the ranking of the PL accuracy on source data stays consistent across the whole range of sampling ratios (Figure 5(a)), which means that the data size has little effect on the validation of knowledge disposal. Measuring PL accuracy on another downstream task (Figure 5(b)) similarly measures the vulnerability of unlearned models to extra fine-tuning. In both experiments, the TGT model (orange) shows the weakest transfer for PL since it has not learned \(\mathcal{D}_{s}\). The GC model (blue) behaves relatively similarly to TGT, while the TL model is much more susceptible to PL because it has not forgotten the source task knowledge. Note that when the sampling ratio \(\gamma\) approaches 100%, the PL accuracy metrics converge and become less discriminative.
### Sensitivity analysis for \(\lambda\)
In this section, we discuss how the trade-off between target accuracy and PL accuracy behaves across varying values of \(\lambda\) in Equation (1) and how it characterizes the models. In Figure 6, we show the PL accuracy vs. target accuracy curve obtained by varying \(\lambda\). In Figure 6(a), we vary the unlearning losses while the KD loss is fixed as the source knowledge retaining loss. In Figure 6(b), the unlearning loss is fixed to GC loss while we vary the knowledge retaining loss. Both experiments are conducted on the CIFAR-100\(\xrightarrow{DTL}\)CIFAR-10-1% setting with CIFAR-100-10% as the piggyback dataset.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{5}{c}{MIA strategy \(\downarrow\)} \\ \cline{2-6} & Adv. Dist & \(\dagger\)Grad \(w\) & \(\dagger\)Grad \(x\) & \(\dagger\)WB & Avg. \\ \hline TL & 63.63 & 65.33 & 65.09 & 65.29 & 64.84 \\ RAND & 50.23 & 50.90 & 50.58 & 51.49 & 50.80 \\ UNIF & 50.40 & 51.10 & 50.64 & 53.14 & 51.32 \\ NEG & **50.00** & 51.91 & 50.74 & 56.93 & 52.40 \\ GC & 50.11 & **50.69** & **50.45** & **50.52** & **50.44** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Membership inference attack (MIA) accuracy on CIFAR-100\(\xrightarrow{DTL}\)CIFAR-10 experiment. A lower value indicates better robustness to MIA, therefore more success in unlearning. MIA strategies used are [11, 22, 25]. \(\dagger\) additionally involves training an attacker model.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{CIFAR-100 \(\xrightarrow{DTL}\) CIFAR-10-1\%} & \multicolumn{3}{c}{CIFAR-100 \(\xrightarrow{DTL}\) STL-10-10\%} \\ \cline{2-7} & \(Acc_{s}\) & \(Acc_{t}\) & \(Acc_{pl}\) & \(Acc_{s}\) & \(Acc_{t}\) & \(Acc_{pl}\) \\ \hline TL & 67.46 & 70.93 & 68.10 & 65.41 & 63.45 & 68.15 \\ RAND & 1.64 & 69.00 & 53.94 & 2.02 & 62.47 & 56.88 \\ UNIF & 2.74 & 71.41 & 55.06 & 7.03 & 63.04 & 55.67 \\ NEG & 0.02 & 71.17 & 60.92 & 0.02 & 60.86 & 57.75 \\ GC & 2.41 & 68.96 & 43.57 & 3.14 & 61.53 & 45.16 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of source task accuracy (\(Acc_{s}\)) and PL accuracy (\(Acc_{pl}\)) after DTL. \(Acc_{s}\) does not serve as a proxy for estimating the degree of knowledge disposal (\(Acc_{pl}\)).
Our results show that the strong DTL performance of the combined KD loss on the source data and GC loss is not a coincidence of a single value of \(\lambda\): it holds across different levels of target and PL accuracy. In both experiments, a model that fits the unlearning goal of DTL has high target accuracy and low PL accuracy, i.e., it is plotted in the upper-left region. As \(\lambda\) increases, the target accuracy and PL accuracy decrease as the unlearning loss becomes dominant. In Figure 6(a), it is observed that GC loss (blue) performs much better on the target task at the same level of PL accuracy. The GC loss retains target performance at significantly lower PL accuracy, whereas the others show catastrophically degraded target performance even at high PL accuracy.
Meanwhile, retaining the target task knowledge is another important factor in successful knowledge disposal. In Figure 6(b), we compare different knowledge retaining losses. We compare the adopted KD loss with \(\mathcal{D}_{s}\) and the fine-tuned model for retaining the target knowledge (SRC-KD), as in Equation (2), against three other knowledge-retaining baselines: KD loss with the target dataset (TGT-KD), joint training with the typical cross-entropy loss on the target dataset (TGT-CE), and cross-entropy replaced with A-GEM (TGT-A-GEM) [4, 18], a constrained optimization technique used for continual learning. See Section C.1 in the supplementary materials for details on TGT-A-GEM.
We found that the KD-based knowledge retaining losses (SRC-KD, TGT-KD) outperform the non-KD methods: they show significantly higher target accuracy than the other baselines at the same level of PL accuracy. This is because KD not only retains the target knowledge but also transfers dark knowledge from the TL model. The results also justify using SRC-KD over TGT-KD for knowledge retention. As discussed in Section 3.1 and Section 3.2.1, the amount of target data is insufficient to represent the whole target distribution, whereas the larger source data facilitates better knowledge distillation from the TL model.
## 5 Conclusion
We propose a novel transfer learning scheme, named disposable transfer learning (DTL), which is designed to address the risk of piggybacking on a transfer-learned model when the model is released to the public. Our results highlight the promising potential of DTL for preventing unauthorized exploitation of pre-trained weights for performance gain once the target task has been adapted through transfer learning. To selectively dispose of the source knowledge, we propose a novel unlearning loss, coined gradient collision loss (GC loss). We have demonstrated that a combination of KD loss and GC loss successfully achieves DTL. Further, we propose an evaluation protocol named piggyback learning accuracy (PL accuracy), which quantifies susceptibility to piggyback learning. We have demonstrated that GC loss unlearns a model so that it is less susceptible to malicious piggybacking, as reflected in low PL accuracy, emphasizing the effectiveness of our method.
### Limitations and Future Works
Our work focused on establishing the DTL paradigm and its implementation. While our GC loss has shown superiority over the basic methods, there remains considerable room for improving the unlearning objective. Specifically, better integration of the knowledge retaining and unlearning objectives needs to be explored further; in our approach, we chose to simply optimize the sum of the two separate objectives. Additionally, our studies were confined to relatively small-to-medium-scale datasets and model architectures. We believe that our work provides a foundation for future research on DTL. We encourage subsequent studies to design a better-integrated objective and to investigate DTL within larger, more realistic contexts.
Figure 5: PL accuracy in CIFAR-100\(\xrightarrow{DTL}\)CIFAR-10-1% experiment under varying dataset sampling rate \(\gamma\).
Figure 6: Trade-off of target accuracy and PL accuracy with varying \(\lambda\in[0,1]\). We experimented with different knowledge-retaining losses and unlearning losses.
## Acknowledgements
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00951, Development of Uncertainty-Aware Agents Learning by Asking Questions).
|
2301.04786 | Distinct topological configurations of equatorial timelike circular
orbit for spherically symmetric (hairy) black holes | Topology is a promising approach toward to the light ring in a generic black
hole background, and equatorial timelike circular orbit in a stationary black
hole background. In this paper, we consider the distinct topological
configurations of the timelike circular orbits in static, spherically
symmetric, and asymptotic flat black holes. By making use of the equation of
motion of the massive particles, we construct a vector with its zero points
exactly relating with the timelike circular orbits. Since each zero point of
the vector can be endowed with a winding number, the topology of the timelike
circular orbits is well established. Stable and unstable timelike circular
orbits respectively have winding number +1 and -1. In particular, for given
angular momentum, the topological number of the timelike circular orbits also
vanishes whether they are rotating or not. Moreover, we apply the study to the
Schwarzschild, scalarized Einstein-Maxwell, and dyonic black holes, which have
three distinct topological configurations, representations of the radius and
angular momentum relationship, with at most one or two pairs of timelike circular
orbits. It is shown that although the existence of scalar hair and
quasi-topological term leads to richer topological configurations of the
timelike circular orbits, they have no influence on the total topological
number. These results indicate that the topological approach indeed provides us
with a novel way to understand the timelike circular orbits. Significantly,
different topological configurations can share the same topology number, and
hence belong to the same topological class. More information is expected to be
disclosed when other different topological configurations are present. | Xu Ye, Shao-Wen Wei | 2023-01-12T02:31:43Z | http://arxiv.org/abs/2301.04786v2 | Topological study of equatorial timelike circular orbit for spherically symmetric (hairy) black holes
###### Abstract
The topological approach has been shown to be a promising way of studying the light ring in a generic black hole background. In this paper, we generalize it to the study of timelike circular orbits in static, spherically symmetric, and asymptotically flat black holes. By making use of the equation of motion of massive particles, we construct a vector whose zero points are exactly related to the timelike circular orbits. Since each zero point of the vector can be endowed with a winding number, the topology of the timelike circular orbits is established. Stable and unstable timelike circular orbits have winding number +1 and -1, respectively. In particular, unlike the light ring, the timelike circular orbits possess a vanishing topological number and appear in pairs for a given angular momentum. Moreover, we apply the study to the Schwarzschild, scalarized Einstein-Maxwell, and dyonic black holes, which have distinct topological configurations. It is shown that although the existence of scalar hair and a quasi-topological term leads to richer topological configurations of the timelike circular orbits, they have no influence on the topological number. These results indicate that the topological approach indeed provides us with a novel way to understand the timelike circular orbits. More information is expected to be disclosed when other different topological configurations are present.
Keywords: classical black hole, timelike circular orbit, topology. PACS numbers: 04.20.-q, 04.25.dg, 04.70.Bw
## I Introduction
The gravitational wave detections by the LIGO and Virgo Collaborations [1; 2; 3] provide strong evidence that astrophysical black holes exist and merge. Via such binary black hole mergers, the nature of the black hole can be well studied with the inspiral, merger, and ringdown waveforms. On the other hand, through the observations of shadow imaging [4; 5; 6], the geometry near the black hole horizon can be probed.
Extensive studies have shown that the ringdown and shadow observables are both intimately connected to a special set of null circular orbits, known as the light rings (LRs) [7; 8]. Apart from the null geodesics, the timelike geodesics of massive particles can also form circular orbits around black holes. Such timelike circular orbits (TCOs) are also one kind of fundamental characteristic orbit. Massive particles dropped from far away from the black hole accumulate on the stable TCOs and form an accretion disk, with its inner edge measured by the innermost stable circular orbit (ISCO) [9].
Since these characteristic orbits are directly related to the motion of particles, which carries valuable information about the spacetime background, several different methods have been developed to deal with them. The most common one is to solve the geodesic equations via the Lagrangian and obtain the circular orbits by formulating the effective potential. This treatment has a wide range of applications, such as studying the photon sphere in static, stationary, or dynamical spacetimes [10; 11; 12]. The other is the quasi-local approach, through which the first quasi-local definition of the photon surface was given by Claudel, Virbhadra, and Ellis [13]; it was later extended to the trapped surface [14].
Quite differently, the topological method can also be applied to the analysis of circular orbits, whether the central object is a black hole or a horizonless ultracompact object. In Ref. [15], Cunha, Berti, and Herdeiro first proposed a topological approach and proved a theorem that if an axisymmetric and stationary solution of the Einstein field equation obeys the null energy condition, the ultracompact objects formed from the classical gravitational collapse of matter must have at least two LRs, one of which is stable and the other unstable. This study demonstrated the great success of the topological approach, which requires no knowledge of the specific locations of the LRs. Subsequently, such a treatment was generalized to stationary, axisymmetric, asymptotically flat black holes [16]. The result states that there is at least one standard unstable LR outside the black hole horizon for each rotation sense. For static, spherically symmetric black holes in asymptotically flat, dS, and AdS spacetimes, this property still holds [17]. Even when more LRs or photon spheres are present, there is always one more unstable LR or photon sphere. Other relevant studies can also be found in Refs. [18; 19; 20; 21].
On the other hand, the equatorial circular orbits of photons and massive particles are closely related to each other. In Refs. [22; 23], it was found that an unstable (stable) LR delimits a region of unstable (stable) TCOs radially above (below) it. Moreover, the corresponding corollary was discussed for both horizonless ultracompact objects and black holes. However, one significant difference of a TCO from an LR is that it depends not only on the black hole parameters, but also on the angular momentum and energy of the particle. It seems that such a feature makes it impossible to establish the topology for the TCOs.
However, very recently, it was first noted in our previous work, Ref. [24], that by constructing an appropriate vector, a well-behaved topology can be defined for the TCOs. Although the angular momentum and energy of the particles modify the location of the TCO, i.e., of the zero points of the vector, they do not alter the asymptotic behaviors at the boundaries of the parameter space, and thus the corresponding topological argument remains meaningful. Considering a stationary black hole background, we found that the topological number of TCOs is \(W=0\), quite different from \(W=-1\) for LRs. This suggests that if TCOs exist, they must appear in pairs for a given angular momentum. In particular, stable and unstable TCOs have positive and negative winding numbers, respectively. These results were exactly confirmed by applying the topology to the Kerr black holes, where there is no TCO for small angular momentum, while a pair of TCOs emerges as the angular momentum increases.
As pointed out in Ref. [17], static black holes have richer LR structures than stationary black holes. Here it is interesting to examine whether these topological results for TCOs still hold for static, spherically symmetric black holes when different topological configurations are present.
An outline of the present paper is as follows. In Sec. II, we start with a general spherically symmetric, asymptotically flat black hole and obtain the effective potential for the motion of massive particles via its Lagrangian. Then, we construct a vector \(\phi\) whose zero points are exactly related to the TCOs, which allows us to establish a corresponding topology. After examining the asymptotic behavior of the vector, the topological number is found to vanish, implying that the TCOs always come in pairs for a given angular momentum. Furthermore, the relation between the stability and the winding number is analyzed. In Sec. III, this generic study is applied to the Schwarzschild black holes. As expected, the result is consistent with the general analysis. In Sec. IV, the topological study is carried out for the scalarized Einstein-Maxwell black holes, which exhibit five different topological configurations of TCOs. For small angular momentum, the TCO does not exist, whereas for large angular momentum, two pairs of TCOs can be observed. Nevertheless, the topological number always remains zero. In Sec. V, the dyonic black hole with a quasi-topological term is investigated. Although the quasi-topological term leads to the existence of TCOs for arbitrarily small angular momentum, the total topological charge still vanishes. Finally, we summarize and discuss our results in Sec. VI. Throughout this paper, we adopt the geometrized unit system \(c=\hbar=G=1\).
## II General setup
In four-dimensional asymptotically flat spacetime, static and spherically symmetric black holes can be described by the following line element
\[ds^{2}=g_{tt}dt^{2}+g_{rr}dr^{2}+r^{2}d\Omega_{2}^{2}, \tag{1}\]
where \(d\Omega_{2}^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\) describes the unit 2-sphere. In this black hole background, the Lagrangian of a free test particle reads
\[\mathcal{L}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=-\frac{1}{2}\mu^{ 2}, \tag{2}\]
where the dot denotes the derivative with respect to an affine parameter, and \(\mu^{2}=1,\,0,\,-1\) are for the timelike, null, and spacelike geodesics, respectively. Via the Legendre transformation, the Hamiltonian of the test particle can be
obtained
\[\mathcal{H} = \pi_{\mu}\dot{x}^{\mu}-\mathcal{L} \tag{3}\] \[= \frac{1}{2}\left(g_{tt}\dot{t}^{2}+g_{rr}\dot{r}^{2}+g_{\theta \theta}\dot{\theta}^{2}+g_{\phi\phi}\dot{\phi}^{2}\right),\]
where \(\pi_{\mu}\equiv\frac{\partial\mathcal{L}}{\partial\dot{x}^{\mu}}=g_{\mu\nu} \dot{x}^{\nu}\) is the corresponding conjugate momentum of the canonical coordinate \(x^{\mu}\).
After a simple rearrangement, the Lagrangian (2) is reexpressed as
\[g_{rr}\dot{r}^{2}+g_{\theta\theta}\dot{\theta}^{2}=-g_{tt}\dot{t}^{2}-g_{\phi \phi}\dot{\phi}^{2}-\mu^{2}. \tag{4}\]
Since \(\dot{r}\) and \(\dot{\theta}\) in (4) are related to the radial and polar motions, we can regard the left-hand side of the equation as the kinetic energy term of the test particle. Outside the horizon, both \(g_{rr}\) and \(g_{\theta\theta}\) are positive, leading to a non-negative kinetic energy term as expected. At the same time, the right-hand side can be defined as the effective potential
\[\mathcal{V}=-g_{tt}\dot{t}^{2}-g_{\phi\phi}\dot{\phi}^{2}-\mu^{2}. \tag{5}\]
It should be emphasized that the motion of particles is completely governed by the effective potential. On the other hand, there are two Killing vectors \(\xi^{\mu}=(\partial_{t})^{\mu}\) and \(\psi^{\mu}=(\partial_{\phi})^{\mu}\) associated with two conserved quantities, the energy \(E\) and angular momentum \(l\) of the test particle,
\[-E =g_{\mu\nu}u^{\mu}\xi^{\nu}=g_{tt}\dot{t}, \tag{6}\] \[l =g_{\mu\nu}u^{\mu}\psi^{\nu}=g_{\phi\phi}\dot{\phi}, \tag{7}\]
where \(u^{\mu}\) is the four-velocity of a particle with respect to the affine parameter.
In terms of energy (6) and angular momentum (7), the effective potential becomes
\[\mathcal{V}=-\frac{E^{2}}{g_{tt}}-\frac{l^{2}}{g_{\phi\phi}}-\mu^{2}. \tag{8}\]
If \(\mu^{2}=0\), this expression precisely reduces to the corresponding effective potential of a photon. Another feature is that the effective potential is symmetric under \(l\rightarrow-l\). Without loss of generality, we only focus on positive angular momentum \(l\). On the other hand, the formula (8) is quadratic in the energy \(E\), and can be factorized as [24]
\[\mathcal{V}=-\frac{1}{g_{tt}}(E-e_{1})(E-e_{2}), \tag{9}\]
with
\[e_{1}=\sqrt{\frac{-g_{tt}\left(l^{2}+g_{\phi\phi}\mu^{2}\right)}{g_{\phi\phi} }},\quad e_{2}=-\sqrt{\frac{-g_{tt}\left(l^{2}+g_{\phi\phi}\mu^{2}\right)}{g_ {\phi\phi}}}. \tag{10}\]
The timelike circular orbit of a massive particle (\(\mu^{2}=1\)) requires
\[\mathcal{V}(r)=0,\qquad\frac{\partial\mathcal{V}(r)}{\partial r}=0. \tag{11}\]
For given \(l\), one can obtain the radius of the TCOs through the second condition. The energy of the particle on the TCO is then calculated via the first condition. Considering that \(e_{2}\) is negative, we abandon it here. As a result, the conditions (11) become
\[E=e_{1},\quad\partial_{r}\,e_{1}=0. \tag{12}\]
### Vector and asymptotic behaviors
As previously stated, the TCOs are completely controlled by \(e_{1}\). In this subsection, by making use of \(e_{1}\), we would like to construct a vector relating to the topology of the TCOs. Then the asymptotic behaviors of the vector will be examined at the boundaries of the \((r,\theta)\) plane.
Following Ref. [16], it is convenient to introduce the following vector
\[\phi^{r}=\frac{\partial_{r}e_{1}}{\sqrt{g_{rr}}},\qquad\phi^{\theta}=\frac{ \partial_{\theta}e_{1}}{\sqrt{g_{\theta\theta}}}. \tag{13}\]
In a spacetime with \(\mathcal{Z}_{2}\) symmetry in \(\theta\), the zero points of the vector \(\phi\) are located at \(\theta=\pi/2\) with \(\partial_{r}e_{1}=0\), which exactly correspond to the equatorial TCOs.
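Since the construction is purely mechanical, it can be reproduced with a computer algebra system. The following sympy sketch (our own illustration, with the metric functions kept symbolic and \(\mu^{2}=1\)) builds \(e_{1}\) of (10) and the vector of (13), and checks that \(\phi^{\theta}\) vanishes identically on the equatorial plane.

```python
import sympy as sp

r, th, l = sp.symbols('r theta l', positive=True)
g_tt = sp.Function('g_tt')(r)      # g_tt < 0 outside the horizon
g_rr = sp.Function('g_rr')(r)
g_phph = r**2 * sp.sin(th)**2      # g_{phi phi}; g_{theta theta} = r**2

# e_1 of Eq. (10) with mu^2 = 1 (timelike geodesics)
e1 = sp.sqrt(-g_tt * (l**2 + g_phph) / g_phph)

# The vector of Eq. (13); its zero points are the equatorial TCOs
phi_r = sp.diff(e1, r) / sp.sqrt(g_rr)
phi_th = sp.diff(e1, th) / sp.sqrt(r**2)

# phi^theta vanishes identically on the equatorial plane (Z_2 symmetry)
print(sp.simplify(phi_th.subs(th, sp.pi/2)))   # -> 0
```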
Here, we consider a closed loop \(\mathcal{I}\), displayed in Fig. 1(a), in the \((r,\theta)\) plane, which consists of four line segments, \(I_{1}\), \(I_{2}\), \(I_{3}\), and \(I_{4}\). Under the infinity limit (\(r\rightarrow\infty\)), axis limit (\(\theta\to 0\), \(\pi\)), and horizon limit (\(r\to r_{h}\)), the loop \(\mathcal{I}\) tends to the boundary \(\partial\Sigma\), where \(\Sigma\) denotes the complete parameter space in the \((r,\theta)\) plane.
**Case 1: axis-limit** (\(\theta\to 0\) and \(\pi\), or \(\delta\to 0\))
In cylindrical coordinates (\(\rho\), \(\phi\), \(z\)), \(z=r\cos\theta\) is the coordinate along the axis perpendicular to the equatorial plane and \(\rho=r\sin\theta\) is the orthogonal distance from that axis. Performing the expansion of the metric functions near \(\rho=0\), one has [16]
\[g_{tt}\sim g_{tt}^{0}+\mathcal{O}(\rho),\quad g_{rr}\sim g_{rr}^{0}+\mathcal{ O}(\rho),\quad g_{\theta\theta}=r^{2},\quad g_{\phi\phi}=r^{2}\sin^{2}\theta, \tag{14}\]
where \(g_{tt}^{0}\) and \(g_{rr}^{0}\) are the zeroth terms and are constants. Using (10), we obtain
\[e_{1}\sim\frac{|l|\sqrt{-g_{tt}^{0}}}{r\sin\theta}. \tag{15}\]
The ratio \(\phi^{\theta}/\phi^{r}\) is
\[\frac{\phi^{\theta}}{\phi^{r}}=\frac{\sqrt{g_{rr}}}{\sqrt{g_{\theta\theta}}} \frac{\partial_{\theta}e_{1}}{\partial_{r}e_{1}}=\frac{\sqrt{g_{rr}}}{\sqrt{g _{\theta\theta}}}\frac{\partial_{\rho}e_{1}}{\partial_{\rho}e_{1}}\frac{ \partial_{\theta}\rho}{\partial_{r}\rho}=\frac{\sqrt{g_{rr}}}{\sqrt{g_{\theta \theta}}}\;r\cot\theta\propto\sqrt{g_{rr}^{0}}\cot\theta. \tag{16}\]
Since \(|\cot\theta|=\infty\) for \(\theta=0\) or \(\pi\), it is straightforward to confirm \(\phi^{\theta}\gg\phi^{r}\). The argument of \(\phi\) will rely on the sign of \(\phi^{\theta}\)
\[\arg\phi\propto\arctan\phi^{\theta}\propto\arctan(-\frac{|l|\sqrt{-g_{tt}^{0}} \cot\theta\csc\theta}{r^{2}})=\begin{cases}-\frac{\pi}{2},&\theta\to 0,\\ +\frac{\pi}{2},&\theta\rightarrow\pi.\end{cases} \tag{17}\]
Therefore, the orientation of the vector \(\phi\) is always downward along the path \(I_{4}\) and upward along path \(I_{2}\) for axis limit \(\delta\to 0\).
**Case 2: horizon-limit** (\(r\to r_{h}\))
Now, let us move to the horizon limit. In this case, the path \(I_{3}\) approaches the horizon when \(r_{0}\to r_{h}\), as shown in Fig. 1(b). The parametric equation \(\vec{R}(\theta)\) corresponding to the path \(I_{3}\) is given by
\[\vec{R}(\theta)=\frac{\vec{x}}{\sin(\theta)}. \tag{18}\]
Figure 1: (a) Closed loop \(\mathcal{I}\) and its asymptotic boundary \(\partial\Sigma\) in spherically symmetric and asymptotically flat spacetime. (b) Path \(I_{3}\) for the horizon limit in spherical coordinates. The vector \(\vec{R}\) is used to describe path \(I_{3}\).
Taking \(\vec{x}\to 0\), the horizon limit will be realized. The argument of \(\phi\) is
\[\arg\phi=\arctan\frac{\phi^{\theta}}{\phi^{r}}=\arctan\left(\frac{ \sqrt{g_{rr}}}{\sqrt{g_{\theta\theta}}}\frac{d\vec{r}}{d\theta}\right)\propto \arctan\left(\frac{\sqrt{g_{rr}^{0}}}{r}\frac{d\vec{R}(\theta)}{d\theta}\right), \tag{19}\]
where we have used \(\vec{r}=\vec{r}_{h}+\vec{R}\). Performing the derivative of \(\vec{R}(\theta)\), one gets
\[\arg\phi=\arctan\left(-\frac{\sqrt{g_{rr}^{0}}}{r}x\cot\theta \csc\theta\right)\bigg{|}_{\vec{x}\to 0}=0, \tag{20}\]
which obviously implies that \(\phi^{\theta}\ll\phi^{r}\). In order to determine the direction of \(\phi\), we explicitly write out the component \(\phi^{r}\) by using (13)
\[\phi^{r}=\frac{2l^{2}g_{tt}\csc^{2}\theta-rg_{tt}^{\prime}(r) \left(l^{2}\csc^{2}\theta+\mu^{2}r^{2}\right)}{2r^{2}\sqrt{-g_{tt}\;g_{rr} \left(l^{2}\csc^{2}\theta+\mu^{2}r^{2}\right)}}. \tag{21}\]
Noticing that \(g_{tt}>0\) inside and \(g_{tt}<0\) outside the horizon, it is reasonable to infer that \(g_{tt}(r_{h})=0\) and \(g_{tt}^{\prime}(r_{h})<0\) in the neighborhood of the horizon. So \(\phi^{r}\) becomes
\[\phi^{r}\propto\frac{-rg_{tt}^{\prime}(r)}{\sqrt{-g_{tt}\;g_{rr} }}\bigg{|}_{r\to r_{h}}>0. \tag{22}\]
Combining (20) and (22), it is easy to see that the vector \(\phi\) points to the right on the path \(I_{3}\).
**Case 3: infinity limit \((r\rightarrow\infty)\)**
Finally, let us examine the infinity limit corresponding to the path \(I_{1}\). Given that the spacetime is asymptotically flat, the metric can be expanded as
\[g_{tt}\sim-1+\frac{2m}{r}+\mathcal{O}(\frac{1}{r^{2}}),\quad g _{rr}\sim 1+\frac{2m}{r}+\mathcal{O}(\frac{1}{r^{2}}),\quad g_{\theta\theta}=r^ {2},\quad g_{\phi\phi}=r^{2}\sin^{2}\theta, \tag{23}\]
when \(r\rightarrow\infty\). The positive constant \(m\) corresponds to the mass of the spherically symmetric black hole. Employing (10) and (13), we obtain
\[e_{1}\sim\sqrt{\frac{(r-2m)(r^{2}+l^{2}\csc^{2}\theta)}{r^{3}}}, \tag{24}\] \[\phi^{\theta}\sim\frac{-l^{2}\cot\theta\csc^{2}\theta}{r^{3}}+ \mathcal{O}(\frac{1}{r^{4}}),\qquad\phi^{r}\sim\frac{m}{r^{2}}+\mathcal{O}( \frac{1}{r^{3}}). \tag{25}\]
The argument of the vector \(\phi\) is evaluated as
\[\arg\phi\propto\arctan\left(\frac{-l^{2}\cot\theta\csc^{2}\theta}{ m\,r}\right)\bigg{|}_{r\rightarrow\infty}\sim 0. \tag{26}\]
Considering that \(\phi^{r}>0\) and \(\arg\phi=0\), the direction of the vector \(\phi\) on the path \(I_{1}\) is to the right, the same as in the horizon limit.
In summary, we have obtained the direction of the vector \(\phi\) at the boundaries \(I_{1-4}\) of the \((r,\,\theta)\) plane, sketched in Fig. 2(a), for spherically symmetric spacetime. As we shall show, this result has an important application in the topological approach to the TCOs.
### Topology, stability, and bifurcation point
In this subsection, we will study the topological properties of TCOs by utilizing Duan's \(\phi\)-mapping topological current theory, and then examine the relationship between the stability and the topological charge of TCOs. Furthermore, the marginally stable circular orbits (MSCOs) acting as bifurcation points will be explored.
In Duan's \(\phi\)-mapping topological current theory [25], a moving point-like particle corresponds to a zero point of the vector \(\phi\). Moreover, the conservation of the particle number is guaranteed by the total topological charge. As shown above, the TCOs are located exactly at the zero points of the vector \(\phi\). Thus, we can endow each TCO with a topological charge. This allows us to establish the topology for the TCOs, and the topological properties can then be uncovered as expected. Following Ref. [25], the topological current relating to the topological charge reads
\[j^{\mu}=\frac{1}{2\pi}\epsilon^{\mu\nu\rho}\epsilon_{ab}\frac{\partial n^{a}}{ \partial x^{\nu}}\frac{\partial n^{b}}{\partial x^{\rho}}, \tag{27}\]
where \(x^{\mu}=(t,r,\theta)\) and \(n^{a}=(\frac{\phi^{r}}{|\phi|},\frac{\phi^{\theta}}{|\phi|})\) is the unit vector of \(\phi\). It is easy to check that this current is conserved, i.e., \(\partial_{\mu}j^{\mu}=0\). After some simple algebra, one reaches
\[j^{\mu}=\delta^{2}(\phi)J^{\mu}\left(\frac{\phi}{x}\right), \tag{28}\]
with Jacobi tensor \(\epsilon^{ab}J^{\mu}\left(\frac{\phi}{x}\right)=\epsilon^{\mu\nu\rho}\partial _{\nu}\phi^{a}\partial_{\rho}\phi^{b}\). Significantly, \(j^{\mu}\) is nonzero only at the zero points of the vector \(\phi\). Denoting the \(i\)-th zero point as \(\vec{x}=\vec{z}_{i}\), we have the density \(j^{0}\) of the topological current [25]
\[j^{0}=\sum_{i=1}^{N}\beta_{i}\eta_{i}\delta^{2}(\vec{x}-\vec{z}_{i}), \tag{29}\]
where \(\beta_{i}\) and \(\eta_{i}\) are the Hopf index and Brouwer degree of the \(i\)-th zero point. By integrating the density \(j^{0}\) of the topological current over the giving region \(\Sigma\), one obtains the topological number
\[W=\int_{\Sigma}j^{0}d^{2}x=\sum_{i=1}^{N}\beta_{i}\eta_{i}=\sum_{i=1}^{N}w_{i}. \tag{30}\]
Here \(w_{i}\) denotes the winding number of the \(i\)-th zero point of the vector \(\phi\) enclosed in \(\Sigma\). For each zero point, there is a winding number acting as a topological charge. Employing it, the zero points can be classified by the values of their winding numbers, which gives the local topological property of each zero point. If instead \(\Sigma\) covers the whole region, the global topological property is disclosed by the number \(W\).
Considering the change of the vector direction, the topological number can also be calculated by
\[W=\frac{1}{2\pi}\oint_{\mathcal{I}}\mathrm{d}\Omega, \tag{31}\]
where \(\Omega\) is the deflection angle of the vector \(\phi\) along the counterclockwise closed path \(\mathcal{I}=\partial\Sigma\). Taking \(\Sigma\) to cover the whole region of the \((r,\theta)\) plane, we easily have
\[W=0, \tag{32}\]
Figure 2: (a) The direction (purple arrows) of the vector \(\phi\) at the asymptotic boundaries. (b) Function \(\phi^{r}(r)\). The blue dots stand for the locations of TCOs. One is unstable with the winding number \(-1\) and the other is stable with winding number \(+1\).
by making use of the asymptotic behaviors of the vector \(\phi\) sketched in Fig. 2(a). This result strongly implies that, for each angular momentum \(l\), the TCOs always come in pairs for static, spherically symmetric, and asymptotically flat black holes. This is quite different from the topology of LRs, which have topological number \(W=-1\), indicating that there exists at least one standard LR.
Now we turn to investigate the local stability of the TCOs. Compared with an LR, a TCO depends not only on the black hole parameters, but also on the energy and angular momentum of the particle. Solving \(\mathcal{V}(r)=\partial_{r}\mathcal{V}(r)=0\), one obtains the energy and angular momentum of the TCOs on the equatorial plane [26]
\[l_{t}=\sqrt{\frac{r^{3}g^{\prime}_{tt}(r)}{rg^{\prime}_{tt}(r)-2g_{tt}(r)}}, \qquad E_{t}=\sqrt{\frac{2g_{tt}(r)^{2}}{rg^{\prime}_{tt}(r)-2g_{tt}(r)}}. \tag{33}\]
More importantly, the stability of TCOs can be determined by the second derivative of the effective potential
\[\mathcal{V}^{\prime\prime}(r)=\frac{(e_{1}(r)-e_{2}(r))}{g_{tt}(r)}e_{1}^{ \prime\prime}(r)=\frac{(e_{1}(r)-e_{2}(r))\sqrt{g_{rr}}}{g_{tt}(r)}\partial_{ r}\phi^{r}(r). \tag{34}\]
Since \(e_{1}>0\), \(e_{2}<0\), and \(g_{tt}<0\) outside the horizon, we have
\[\begin{cases}\mathcal{V}^{\prime\prime}(r)>0\Leftrightarrow\text{unstable},& \text{if }\partial_{r}\phi^{r}(r)<0,\\ \mathcal{V}^{\prime\prime}(r)<0\Leftrightarrow\text{stable},&\text{if } \partial_{r}\phi^{r}(r)>0.\end{cases} \tag{35}\]
This states that stable and unstable TCOs correspond to \(\partial_{r}\phi^{r}>0\) and \(\partial_{r}\phi^{r}<0\), respectively. So one can read off the local stability of the TCOs from this result.
Next, let us examine the relationship between the stability and the topological charge of the TCOs. As we have shown, for each angular momentum, the TCOs always come in pairs. For convenience, we consider two TCOs, denoted by \(\text{TCO}_{1}\) and \(\text{TCO}_{2}\), with radii \(r_{1}\) and \(r_{2}\) in Fig. 2(b). Without loss of generality, we assume \(r_{1}<r_{2}\). Further considering that \(\phi^{r}(r)\) is positive at the black hole horizon and at infinity, one finds that \(\phi^{r}\) turns from positive to negative at \(r_{1}\) and from negative to positive at \(r_{2}\). As a result, the following holds
\[\partial_{r}\phi^{r}(r_{1})<0\quad\text{and}\quad\partial_{r}\phi^{r}(r_{2})>0. \tag{36}\]
Combining with (35), it is easy to see that \(\text{TCO}_{1}\) is unstable while \(\text{TCO}_{2}\) is stable. In particular, following the study of Ref. [24], one finds that the winding numbers of \(\text{TCO}_{1}\) and \(\text{TCO}_{2}\) are \(-1\) and \(+1\), respectively. Therefore, a TCO with positive or negative winding number is locally stable or unstable.
If more than one pair of TCOs is present, we can further obtain an interesting generalization: the TCOs nearest to and farthest from the horizon have negative and positive winding numbers, and thus are locally unstable and stable, respectively. More interestingly, any two adjacent TCOs have opposite winding numbers.
Among the TCOs, there is a special kind, the MSCO, which satisfies [22]
\[\mathcal{V}(r)=0,\quad\mathcal{V}^{\prime}(r)=0,\quad\mathcal{V}^{\prime \prime}(r)=0, \tag{37}\]
or equivalently,
\[E=e_{1},\quad\partial_{r}e_{1}(r)=0,\quad\partial_{r,r}e_{1}(r)=0. \tag{38}\]
Solving the second condition, we get \(l=l_{t}(r)\) corresponding to the zero points of \(\phi\), whose explicit form is given in (33). Formulating the third condition, one has \(\partial_{r}l_{t}(r)=0\), which exactly gives the MSCO at \((l_{t}^{*},r^{*})\). As shown in Ref. [24], the MSCOs can be treated as bifurcation points [27]. Near such a point, we can carry out a Taylor expansion of \(l_{t}(r)\)
\[l_{t}=l_{t}^{*}+\frac{1}{2}\frac{d^{2}l_{t}}{d\,r^{2}}\bigg{|}_{r^{*}}(r-r^{*} )^{2}+\mathcal{O}\left((r-r^{*})^{3}\right). \tag{39}\]
Supposing that the expansion coefficient \(l_{t}^{\prime\prime}(r^{*})>0\), one has
\[l>l_{t}^{*}\Leftrightarrow\text{Exist TCOs},\quad l<l_{t}^{*}\Leftrightarrow \text{No TCOs}, \tag{40}\]
which implies that the TCOs exist when \(l>l_{t}^{*}\), while they are absent when \(l<l_{t}^{*}\). Such an MSCO thus corresponds to a generated point. Conversely, a negative \(l_{t}^{\prime\prime}(r^{*})<0\) leads to an annihilated point.
Here, we would like to add some comments on the MSCO and ISCO. Both of them are characteristic kinds of TCOs. The ISCO is the stable TCO with the smallest radius among all stable TCOs, while the MSCO denotes the stable TCO with the smallest radius that can be continuously connected to spatial infinity by a set of stable TCOs. In general, the MSCO and ISCO coincide with each other, for example in the Schwarzschild or Kerr black holes. However, for hairy black holes or generic ultracompact objects, they need not coincide [22].
To summarize, the study of the topological charge of TCOs reveals that, for each angular momentum, the TCOs come in pairs with opposite winding numbers. A positive or negative winding number corresponds to a locally stable or unstable TCO. Moreover, whether TCOs are present or not, the topological number \(W\) always vanishes. Of particular interest, MSCOs act as generated points for positive \(l_{t}^{\prime\prime}(r^{*})\), or as annihilated points for negative \(l_{t}^{\prime\prime}(r^{*})\).
In the following sections, we would like to examine the topology for three characteristic black hole examples, and to see whether our general results hold.
## III Schwarzschild black holes
In this section, we shall carry out the topological study of the TCOs for the Schwarzschild black hole.
### Effective potential and asymptotic behaviors
The Schwarzschild black hole is a static, spherically symmetric vacuum solution of the Einstein field equation, and it can be described by the line element (1) with
\[g_{tt}=-\left(1-\frac{2M}{r}\right),\quad g_{rr}=\left(1-\frac{2M}{r}\right)^{ -1}, \tag{41}\]
where the black hole mass \(M\) determines the event horizon radius \(r_{h}=2M\).
From (8), the effective potential \(\mathcal{V}(r)\) reads
\[\mathcal{V}(r)=-\frac{l^{2}\csc^{2}\theta}{r^{2}}-\frac{rE^{2}}{2M-r}-\mu^{2}. \tag{42}\]
Reformulating it, one gets \(e_{1}\) and \(e_{2}\) via the equation (10)
\[e_{1,2}=\pm\sqrt{\frac{(r-2M)(r^{2}\mu^{2}+l^{2}\csc^{2}\theta)}{r^{3}}}. \tag{43}\]
Following the definition (13), the vector \(\phi\) is
\[\phi^{r}=\frac{Mr^{2}+(3M-r)l^{2}\csc^{2}\theta}{r^{3}\sqrt{r^{2}+l^{2}\csc^{2}\theta}},\quad\phi^{\theta}=-\frac{l^{2}\cot\theta\csc^{2}\theta\sqrt{r-2M}}{r^{5/2}\sqrt{r^{2}+l^{2}\csc^{2}\theta}}, \tag{44}\]
where we have taken \(\mu^{2}=1\) for the timelike geodesics.
Here we would like to examine the asymptotic behavior of the vector \(\phi\) in the Schwarzschild black hole background and confirm that it is consistent with our analysis in Sec. II.
**Case 1: axis-limit \((\theta\to 0\) and \(\pi)\)**
First, let us consider the limit \(\theta\to 0\). Expanding the vector near \(\theta=0\), we have
\[\phi^{r} =\frac{l(3M-r)}{\theta r^{3}}+\frac{\theta\left(-3Mr^{2}+3Ml^{2} +3r^{3}-rl^{2}\right)}{6r^{3}l}+\mathcal{O}\left(\theta^{3}\right), \tag{45}\] \[\phi^{\theta} =-\frac{l\sqrt{r-2M}}{\theta^{2}r^{5/2}}+\frac{\sqrt{r-2M}\left( 3r^{2}+l^{2}\right)}{6r^{5/2}l}-\frac{\theta^{2}\sqrt{r-2M}\left(45r^{4}+30r^ {2}l^{2}-7l^{4}\right)}{120r^{5/2}l^{3}}+\mathcal{O}\left(\theta^{3}\right). \tag{46}\]
Discarding the higher-order \(\theta\) terms, the ratio \(\phi^{\theta}/\phi^{r}\) can be calculated as
\[\frac{\phi^{\theta}}{\phi^{r}}=-\frac{\sqrt{r(r-2M)}}{\theta(3M-r)}\bigg{|}_{ \theta\to 0}\rightarrow\infty. \tag{47}\]
Combining this with \(\phi^{\theta}<0\), one can see that the direction of the vector \(\phi\) when \(\theta\to 0\) is downward, which exactly agrees with the result (17) above.
On the other hand, expanding the vector \(\phi\) near \(\theta=\pi\), one gets
\[\phi^{r} =-\frac{l(3M-r)}{(\theta-\pi)r^{3}}+\frac{(\theta-\pi)\left(3Mr^{2} -3Ml^{2}-3r^{3}+rl^{2}\right)}{6r^{3}l}+\mathcal{O}\left((\theta-\pi)^{3} \right), \tag{48}\] \[\phi^{\theta} =\frac{l\sqrt{r-2M}}{(\theta-\pi)^{2}r^{5/2}}-\frac{\sqrt{r-2M} \left(3r^{2}+l^{2}\right)}{6r^{5/2}l}+\frac{(\theta-\pi)^{2}\sqrt{r-2M}\left(4 5r^{4}+30r^{2}l^{2}-7l^{4}\right)}{120r^{5/2}l^{3}}+\mathcal{O}\left((\theta -\pi)^{3}\right). \tag{49}\]
The ratio reads
\[\frac{\phi^{\theta}}{\phi^{r}}=\frac{\sqrt{r(r-2M)}}{(\pi-\theta)(3M-r)} \bigg{|}_{\theta\rightarrow\pi}\rightarrow\infty. \tag{50}\]
As a result, considering \(\phi^{\theta}>0\), the direction of the vector \(\phi\) is upward when \(\theta=\pi\).
**Case 2: horizon-limit \((r\to r_{h})\)**
Taking \(r=r_{h}+\epsilon\), the horizon limit becomes \(\epsilon\to 0\). Under this limit, the vector reads
\[\phi^{r} =\frac{\sqrt{4M^{2}+l^{2}\csc^{2}\theta}}{8M^{2}}+\frac{\epsilon \left(-8M^{2}-5l^{2}\csc^{2}\theta\right)}{16M^{3}\sqrt{4M^{2}+l^{2}\csc^{2} \theta}}+\mathcal{O}(\epsilon^{2}), \tag{51}\] \[\phi^{\theta} =\frac{-\epsilon^{1/2}\left(l^{2}\cot\theta\csc^{2}\theta\right) }{4M^{5/2}\sqrt{8M^{2}+2l^{2}\csc^{2}\theta}}+\mathcal{O}(\epsilon^{3/2}). \tag{52}\]
The corresponding ratio is
\[\frac{\phi^{\theta}}{\phi^{r}}=-\frac{\sqrt{2}l^{2}\sqrt{\epsilon}\cot(\theta )\csc^{2}(\theta)}{\sqrt{M}\left(l^{2}\csc^{2}(\theta)+4M^{2}\right)}\bigg{|} _{\epsilon\to 0}=0,\quad\text{and}\quad\phi^{r}>0, \tag{53}\]
leading to the result that the direction of vector \(\phi\) at the event horizon is to the right.
**Case 3: infinity limit \((r\rightarrow\infty)\)**
When \(r\rightarrow\infty\), one has
\[\phi^{r} =\frac{M}{r^{2}}-\frac{l^{2}\csc\theta^{2}}{r^{3}}+\mathcal{O} \left(\frac{1}{r^{4}}\right), \tag{54}\] \[\phi^{\theta} =\frac{-l^{2}\cot\theta\csc\theta^{2}}{r^{3}}+\mathcal{O}\left( \frac{1}{r^{4}}\right). \tag{55}\]
After a simple algebraic calculation, we have
\[\frac{\phi^{\theta}}{\phi^{r}}=\frac{-l^{2}\cot\theta\csc^{2}\theta}{Mr} \bigg{|}_{r\rightarrow\infty}=0,\quad\text{and}\quad\phi^{r}>0, \tag{56}\]
indicating that the direction of vector \(\phi\) at infinity is to the right.
In summary, we see that the asymptotic behaviors of the vector are the same as those obtained above for general black hole backgrounds. Therefore, the topological number of TCOs for the Schwarzschild black hole is
\[W=0. \tag{57}\]
So, for a certain angular momentum, if there are TCOs, they must appear in pairs.
### Topology of TCOs and evolution of control parameter
Solving for the zero points of the vector, we obtain the angular momentum of the TCOs
\[l_{t}=\sqrt{\frac{r^{2}M}{r-3M}}, \tag{58}\]
for the Schwarzschild black holes. Further solving \(\partial_{r}l_{t}=0\), one obtains the radius and angular momentum of the ISCO or MSCO
\[r_{ISCO}=6M\quad\text{and}\quad l_{ISCO}=2\sqrt{3}M. \tag{59}\]
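Equation (59) can be cross-checked symbolically; a quick sympy sketch:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
l_t = sp.sqrt(r**2 * M / (r - 3*M))       # Eq. (58)

print(sp.solve(sp.diff(l_t, r), r))       # -> [6*M], the ISCO radius
print(sp.simplify(l_t.subs(r, 6*M)))      # -> 2*sqrt(3)*M, the ISCO angular momentum
```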
Note that for the Schwarzschild black hole the ISCO and MSCO coincide, and thus we will not distinguish them here. In what follows, we consider three characteristic cases according to the angular momentum:
* \(0\leq l<l_{ISCO}\),
* \(l=l_{ISCO}\),
* \(l_{ISCO}<l<\infty\).
For simplicity, we shall take \(M=1\) for our following study.
First, let us consider \(0<l=3.2<l_{ISCO}\). The effective potential \(\mathcal{V}(r)\) is plotted in Fig. 3(a) with \(E=0.9\), \(0.92\), \(0.94\), and \(0.96\) from bottom to top. For each curve, we find that \(\mathcal{V}(r)\) decreases monotonically with \(r\). So in this case, there is no TCO. We also show the unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane in Fig. 3(b). Obviously, the vector points outwards at \(\theta=0\) and \(\pi\). On the equatorial plane \(\theta=\pi/2\), the direction of the vector is always toward the right, and no zero point can be found. So the topological number must vanish, i.e., \(W=0\).
For the second characteristic case, we take \(l=l_{ISCO}=3.4641\). The effective potential and the unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane are shown in Figs. 4(a) and 4(c). The ISCO is located exactly at \(r=6\) with \(E=2\sqrt{2}/3\). From Fig. 4(a), it can be found that no extremal point of the effective potential \(\mathcal{V}(r)\) is present for \(E>2\sqrt{2}/3\), while two extremal points are present for \(E<2\sqrt{2}/3\). In Fig. 4(c), it is easy to find that the unit vector field \(n\) behaves similarly to that in Fig. 3(b). Although the direction of \(n\) seems to be always toward the right on the equatorial plane, the vector indeed vanishes at the point marked with the dot, where \(\partial_{r,r}\mathcal{V}=0\) is also satisfied. Here we wonder whether the winding number still vanishes, as in the case of \(l=3.2\). In order to answer this question, we evaluate the winding number by constructing a closed loop \(C_{1}\) with the following parameterized form [28]
\[\begin{cases}r=c_{1}\cos\psi+c_{0},\\ \theta=c_{2}\sin\psi+\frac{\pi}{2},\end{cases} \tag{60}\]
where (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(6, 0.4, 0.5). Note that all the closed loops will be parameterized in this form, while with different values of (\(c_{0}\), \(c_{1}\), \(c_{2}\)). Then along the closed loop, the deflection angle \(\Omega(\psi)\) can be calculated by
\[\Omega(\psi)=\int_{C}\epsilon_{ab}\,n^{a}\,\mathrm{d}n^{b}. \tag{61}\]
The winding number is then \(w=\Omega(2\pi)/2\pi\). We plot the deflection angle \(\Omega(\psi)\) along \(C_{1}\) in Fig. 4(e). With the increase of \(\psi\) from \(0\) to \(2\pi\), we see that \(\Omega(\psi)\) first increases, then decreases, and finally increases. Nevertheless, \(\Omega(2\pi)\) vanishes, strongly implying that the winding number \(w=0\) for the ISCO.
Figure 3: The effective potential \(\mathcal{V}(r)\) and unit vector field of \(\phi\) for the Schwarzschild black holes for the case one with \(l=3.2\). (a) \(\mathcal{V}(r)\) as a function of \(r\) with \(E=0.9\), \(0.92\), \(0.94\), and \(0.96\) from the bottom to top. (b) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane.
Next we turn our attention to \(l=3.7>l_{ISCO}\). The effective potential \(\mathcal{V}\) is plotted in Fig. 4(b). For different values of the energy, two extremal points can be observed on each curve. However, they do not denote TCOs unless the potential vanishes there. According to this criterion, we find that there are two TCOs, marked with dots at \(r=4.44\) and \(9.25\), which correspond to \(E=0.9535\) and \(0.9649\), respectively. Also, from the behavior of the effective potential, one easily sees that the TCO at \(r=4.44\) is locally unstable, while the other one is stable. As we shall see, this result is also confirmed by their winding numbers.
The unit vector field \(n\) is also shown on a portion of the \(\theta\)-\(r\) plane in Fig. 4(d). Obviously, there are two zero points, which are exactly consistent with those shown in Fig. 4(b). In order to calculate their winding numbers, we construct two closed loops \(C_{2}\) and \(C_{3}\) parametrized by the form (60) with (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(4.44, 0.4, 0.5) and (9.25, 1.2,
Figure 4: The effective potential \(\mathcal{V}\), unit vector field \(n\), and deflection angle \(\Omega(\psi)\) for the Schwarzschild black holes. (a) Effective potential \(\mathcal{V}\) for case two with \(l=l_{ISCO}\). The energy \(E=0.91\), 0.925, 0.9428, and 0.96 from bottom to top. (b) Effective potential \(\mathcal{V}\) for case three with \(l=3.7\). The energy \(E=0.942\), 0.9535, 0.9649, and 0.978 from bottom to top. (c) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane with \(l=l_{ISCO}\). “IP\({}_{1}\)” denotes the ISCO at \(r_{ISCO}\)=6. The closed loop \(C_{1}\) has parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(\(r_{ISCO}\), 0.4, 0.5). (d) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane with \(l=3.7\). “TP\({}_{1}\)” and “TP\({}_{2}\)” are two TCOs located at \(r_{t}\)=4.44 and 9.25. The closed loops \(C_{2}\) and \(C_{3}\) have parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(4.44, 0.4, 0.5) and (9.25, 1.2, 0.2). (e) Deflection angle \(\Omega(\psi)\) along \(C_{1}\). (f) Deflection angle \(\Omega(\psi)\) along \(C_{2}\) and \(C_{3}\).
0.2). Then we show \(\Omega(\psi)\) for them in Fig. 4(f). With the increase of \(\psi\), \(\Omega(\psi)\) increases along \(C_{3}\), while it decreases along \(C_{2}\). The winding numbers are easily obtained: \(w=+1\) for \(TP_{2}\) and \(w=-1\) for \(TP_{1}\), which implies that a positive or negative winding number corresponds to a locally stable or unstable TCO, as expected. As a result, the total topological number \(W=+1-1=0\), the same as in the first two cases.
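The contour evaluation of (61) is straightforward to reproduce numerically. The following numpy sketch (our own, with \(M=1\) and the loop parameters quoted above) integrates the deflection angle of the vector (44) along loops of the form (60):

```python
import numpy as np

M = 1.0

def phi_vec(r, th, l):
    """The Schwarzschild vector of Eq. (44), with mu^2 = 1."""
    csc2 = 1.0 / np.sin(th) ** 2
    root = np.sqrt(r**2 + l**2 * csc2)
    phi_r = (M * r**2 + (3.0 * M - r) * l**2 * csc2) / (r**3 * root)
    phi_th = -l**2 * csc2 * np.sqrt(r - 2.0 * M) / (np.tan(th) * r**2.5 * root)
    return phi_r, phi_th

def winding(c0, c1, c2, l, n=100_000):
    """w = Omega(2*pi)/(2*pi) along the loop (60) with coefficients (c0, c1, c2)."""
    psi = np.linspace(0.0, 2.0 * np.pi, n)
    fr, fth = phi_vec(c0 + c1 * np.cos(psi), np.pi / 2 + c2 * np.sin(psi), l)
    ang = np.unwrap(np.arctan2(fth, fr))
    return (ang[-1] - ang[0]) / (2.0 * np.pi)

print(winding(4.44, 0.4, 0.5, l=3.7))                 # ~ -1: unstable TCO (TP_1)
print(winding(9.25, 1.2, 0.2, l=3.7))                 # ~ +1: stable TCO (TP_2)
print(winding(6.0, 0.4, 0.5, l=2.0 * np.sqrt(3.0)))   # ~  0: loop C_1 around the ISCO
```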
After studying the topological charge of the ISCO and TCOs, we concentrate on the evolution of the TCO radius \(r_{t}(l)\), following the idea that the angular momentum can be treated as a control parameter playing the role of time [24]. Expanding the angular momentum \(l_{t}\) of the zero points of the vector \(\phi\) at \(r_{ISCO}\), one has
\[l_{t}=2\sqrt{3}+\frac{1}{12\sqrt{3}}(r-6)^{2}+\mathcal{O}\left((r-6)^{3} \right). \tag{62}\]
Since the quadratic coefficient in (62), \(l_{t}^{\prime\prime}(r_{ISCO})/2=1/(12\sqrt{3})\), is positive, the bifurcation point must be a generated point. To make this more clear, we display the radius \(r_{t}\) of the TCOs as a function of \(l\) in Fig. 5. For small \(l\), no TCO branch can be found, while beyond the ISCO point, two TCO branches with opposite winding numbers emerge from the ISCO. Such behavior clearly shows that the ISCO is a generated point with respect to the angular momentum. More interestingly, whether there are TCOs or not, the total topological number always vanishes for arbitrary values of the angular momentum. Furthermore, we sketch the behavior of the winding number and topological number in Fig. 5.
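The expansion (62) can be cross-checked symbolically: for Schwarzschild, the angular momentum of a circular orbit at radius \(r\) is the standard result \(l_{t}(r)=r\sqrt{M/(r-3M)}\), and expanding it around \(r_{ISCO}=6M\) (with \(M=1\)) reproduces the coefficients above, since \(\sqrt{3}/36=1/(12\sqrt{3})\).

```python
import sympy as sp

r = sp.symbols('r', positive=True)
l_t = r * sp.sqrt(1 / (r - 3))  # Schwarzschild circular-orbit angular momentum, M = 1
print(sp.series(l_t, r, 6, 3))  # -> 2*sqrt(3) + sqrt(3)*(r - 6)**2/36 + O((r - 6)**3)
```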
In summary, we have shown in this section that the topological number of the TCOs is \(W\)=0 in the Schwarzschild black hole background. A positive or negative winding number indeed indicates that the TCO is locally stable or unstable, respectively. These results exactly support our general conclusion in Sec. II.
## IV Scalarized Einstein-Maxwell black holes
In this section, we would like to consider another characteristic example, the scalarized Einstein-Maxwell black holes. Even though their total topological number is the same as that of the Schwarzschild black holes, the topological configurations of the TCOs are quite different.
### Scalarized Einstein-Maxwell black holes
The Einstein-Maxwell-scalar model describes a real scalar field \(\phi\) coupled to Einstein gravity and Maxwell electromagnetism. The scalarized Einstein-Maxwell black hole can be described by the following action [29; 30]
\[\mathcal{S}=\int d^{4}x\sqrt{-g}(R-2g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu }\phi-f(\phi)F_{\mu\nu}F^{\mu\nu}), \tag{63}\]
where \(R\) and \(\phi\) are the Ricci scalar and the scalar field, and the Maxwell tensor is \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). The last term represents a non-minimal coupling between the scalar field and the Maxwell electric field.
Figure 5: (a) The evolution of the TCO radius \(r_{t}\) as a function of the angular momentum \(l\) for the Schwarzschild black holes. Obviously, the point \(\text{IP}_{1}\) is a generated point. The signs “\(\pm\)” denote that the winding numbers are \(\pm 1\) for these TCO branches. (b) The topological number (black line) and winding number of the TCO branches (green lines). For different values of \(l\), we see that the topological number \(W\) always vanishes.
The line element of the spherically symmetric scalarized Einstein-Maxwell black holes is assumed to be
\[ds^{2}=-N(r)e^{-2\delta(r)}dt^{2}+\frac{dr^{2}}{N(r)}+r^{2}d\Omega^{2}, \tag{64}\]
where the metric functions \(N(r)\) and \(\delta(r)\) depend only on the radial coordinate. The four-potential \(A_{\mu}\) of the electromagnetic field is
\[A_{\mu}(x)dx^{\mu}=V(r)dt. \tag{65}\]
The effective Lagrangian of the Einstein-Maxwell-scalar model is [31]
\[\mathcal{L}_{eff}=-\frac{1}{2}e^{-\delta}\left(rN^{\prime}+N-1 \right)-\frac{1}{2}e^{-\delta}r^{2}N\phi^{\prime}(r)^{2}+\frac{1}{2}e^{\delta }f(\phi)r^{2}V^{\prime}(r)^{2}. \tag{66}\]
Here the radial dependence is omitted for notational simplicity. By making use of the Lagrangian, the equations of motion are [31]
\[N^{\prime}-\frac{1-N}{r}=-\frac{Q^{2}}{r^{3}f(\phi)}-r(\phi^{ \prime})^{2}N,\qquad\delta^{\prime}=-r(\phi^{\prime})^{2}, \tag{67}\] \[(r^{2}N\phi^{\prime})^{\prime}=-\frac{f^{\prime}(\phi)Q^{2}}{2f ^{2}(\phi)r^{2}}-r^{3}(\phi^{\prime})^{3}N,\qquad V^{\prime}=\frac{Q}{f(\phi) r^{2}}e^{-\delta}. \tag{68}\]
Further, we assume this black hole spacetime is asymptotically flat,
\[\lim_{r\rightarrow\infty}N(r)=1,\qquad\lim_{r\rightarrow\infty} \delta(r)=0. \tag{69}\]
To numerically solve the differential equations (67) and (68), the exponential coupling is chosen as [32]
\[f(\phi)=e^{\alpha\phi^{2}},\quad\text{where }\alpha=0.9, \tag{70}\]
and the value of the scalar field at the event horizon is taken to be
\[\phi(r_{h})=2.25859. \tag{71}\]
The numerical results for the metric functions \(N(r)\), \(\delta(r)\), and \(\phi(r)\) are exhibited in Fig. 6 by taking \(r_{h}=1\). We observe that \(\delta(r)\) and \(\phi(r)\) decrease with \(r\), while \(N(r)\) shows a nonmonotonic behavior. After obtaining these functions, the explicit forms of \(\mathcal{V}(r)\) and the energy \(e_{1}\), as well as of the vector \(\phi\), can be given. However, we cannot express them analytically, so we will not show them here. Other studies concerning these black hole solutions and potential observations of multiple photon spheres can be found in Refs. [32; 33; 34; 35; 36].
Figure 6: The numerical results of functions \(N(r)\), \(\phi(r)\), and \(\delta(r)\) for scalarized Einstein-Maxwell black holes.
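For completeness, the sketch below outlines one way to generate such profiles numerically. It integrates Eqs. (67) and (68) outward from just outside the horizon, using the regularity conditions that follow from evaluating the field equations at \(r_{h}\): \(N(r_{h})=0\), \(N^{\prime}(r_{h})=1/r_{h}-Q^{2}/(r_{h}^{3}f(\phi_{h}))\), and \(\phi^{\prime}(r_{h})=-f^{\prime}(\phi_{h})Q^{2}/\big(2f^{2}(\phi_{h})r_{h}^{4}N^{\prime}(r_{h})\big)\). The electric charge \(Q\) is not quoted in the text above, so the value below is a placeholder; in practice \(Q\) (or \(\phi_{h}\)) must be tuned by a shooting method until \(\phi\to 0\) at large \(r\), enforcing the asymptotics (69).

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, rh, phi_h = 0.9, 1.0, 2.25859
Q = 0.5  # placeholder charge: tune by shooting until phi -> 0 as r -> infinity

f  = lambda p: np.exp(alpha * p**2)    # coupling f(phi), Eq. (70)
fp = lambda p: 2.0 * alpha * p * f(p)  # df/dphi

def rhs(r, y):
    N, phi, psi, delta = y             # psi = dphi/dr
    dN = (1.0 - N) / r - Q**2 / (r**3 * f(phi)) - r * psi**2 * N
    # Eq. (68) rewritten as a second-order ODE for phi:
    dpsi = (-fp(phi) * Q**2 / (2.0 * f(phi)**2 * r**2)
            - r**3 * psi**3 * N
            - (2.0 * r * N + r**2 * dN) * psi) / (r**2 * N)
    ddelta = -r * psi**2               # Eq. (67)
    return [dN, psi, dpsi, ddelta]

# Horizon-regularity initial data, evaluated slightly outside r_h:
Nh_p  = 1.0 / rh - Q**2 / (rh**3 * f(phi_h))
psi_h = -fp(phi_h) * Q**2 / (2.0 * f(phi_h)**2 * rh**4 * Nh_p)
eps = 1e-6
y0 = [Nh_p * eps, phi_h + psi_h * eps, psi_h, 0.0]

sol = solve_ivp(rhs, [rh + eps, 100.0], y0, rtol=1e-10, atol=1e-12)
delta_shift = sol.y[3, -1]  # shift delta so that delta(infinity) = 0
print(sol.y[1, -1])         # phi at large r; tune Q until this -> 0
```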
### Topology of TCOs and winding number
Now, let us turn to examine the topology of the TCO for the scalarized Einstein-Maxwell black holes.
As we have shown in the last section, ISCOs or MSCOs, acting as bifurcation points, have an important impact on the topological configurations of the TCOs, so it is key to determine them first. By solving the conditions \(\mathcal{V}(r)=\mathcal{V}^{\prime}(r)=\mathcal{V}^{\prime\prime}(r)=0\), or alternatively (38),
\[\phi^{r}(r)=0\quad\text{and}\quad\phi^{r\prime}(r)=0, \tag{72}\]
we obtain the locations of ISCO and MSCO, which are given by
\[r_{ISCO} =2.3294\quad\text{and}\quad l_{ISCO}=6.4043,\] \[r_{MSCO} =15.753\quad\text{and}\quad l_{MSCO}=12.314. \tag{73}\]
Further, they must satisfy
\[\mathcal{V}^{\prime\prime}(r)=0\quad\text{and}\quad\mathcal{V}^{ \prime\prime\prime}(r)<0, \tag{74}\]
or,
\[\phi^{r\prime}(r)=0\quad\text{and}\quad\phi^{r\prime\prime}(r)>0. \tag{75}\]
Simple algebra gives

\[\mathcal{V}^{\prime\prime\prime}(r)=\frac{\left(e_{1}(r)-e_{2}(r)\right)\sqrt{g_{rr}(r)}\,\phi^{r\prime\prime}(r)}{g_{tt}}, \tag{76}\]

which implies that \(\mathcal{V}^{\prime\prime\prime}(r)\sim-\phi^{r\prime\prime}(r)\). Note that the ISCO also satisfies condition (74), but it has the smallest radius among all stable TCOs. Besides, the significant difference between the ISCO and the MSCO is that the MSCO can be continuously connected to spatial infinity by a set of stable TCOs, whereas the ISCO cannot. If only one such orbit occurs, as for a Schwarzschild black hole, the ISCO coincides with the MSCO.
To check whether the ISCO and MSCO satisfy condition (75), we calculate the derivatives of \(\phi^{r}(r)\) at \(r_{ISCO}\) and \(r_{MSCO}\):
\[\phi^{r\prime}(r_{ISCO}) =0,\quad\phi^{r\prime\prime}(r_{ISCO})=0.0455786, \tag{77}\] \[\phi^{r\prime}(r_{MSCO}) =0,\quad\phi^{r\prime\prime}(r_{MSCO})=0.00007001.\]
To show this result more clearly, we show \(\phi^{r}(r)\) and its first and second derivatives in Fig. 7.
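Numerically, the ISCO and MSCO of Eq. (73) can be located as simultaneous zeros of \(\phi^{r}\) and \(\phi^{r\prime}\), cf. Eq. (72). The sketch below solves this two-dimensional root problem for any callable \(\phi^{r}(r;l)\), e.g. one built by interpolating the numerical metric functions of Fig. 6; the function names, finite-difference step, and initial guesses are ours.

```python
from scipy.optimize import fsolve

def find_marginal_orbit(phi_r, r0, l0, h=1e-5):
    # Solve phi^r(r; l) = 0 and d_r phi^r(r; l) = 0 simultaneously for (r, l),
    # cf. Eq. (72); (r0, l0) is an initial guess for the root.
    def eqs(x):
        r, l = x
        dphi = (phi_r(r + h, l) - phi_r(r - h, l)) / (2.0 * h)
        return [phi_r(r, l), dphi]
    return fsolve(eqs, [r0, l0])

# With guesses near (2.3, 6.4) and (15.8, 12.3), this should reproduce the
# (r_ISCO, l_ISCO) and (r_MSCO, l_MSCO) values quoted in Eq. (73).
```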
According to the values of \(l_{ISCO}\) and \(l_{MSCO}\), we divide the parameter region of the angular momentum into the following five types:
* \(0\leq l<l_{ISCO}\),
* \(l=l_{ISCO}\),
* \(l_{ISCO}<l<l_{MSCO}\),
* \(l=l_{MSCO}\),
* \(l_{MSCO}<l\).
For the first case, \(0\leq l<l_{ISCO}\), it is easy to find that there is no TCO, quite similar to the Schwarzschild black hole with small angular momentum, see Fig. 3. Therefore the topological number \(W=0\). For the second case, we set the angular momentum \(l=l_{ISCO}\). Then the unit vector field \(n\) and deflection angle \(\Omega(\psi)\) are shown in Figs. 8(a) and 8(b). From Fig. 8(a), one can find that the direction of the vector does not change when it crosses the ISCO point. Further, by constructing the closed loop \(C_{4}\), the winding number \(w=\Omega(2\pi)/2\pi\)=0. When increasing the angular momentum such that \(l_{ISCO}<l<l_{MSCO}\), two TCOs emerge from the ISCO. As an example, we take \(l=10\). The unit vector \(n\) is displayed in Fig. 9(a). Two zero points, \(TP_{3}\) and \(TP_{4}\), corresponding to the TCOs are easily observed at \(r\)= 1.944 and 3.189. By constructing two closed loops \(C_{5}\) and \(C_{6}\), respectively, we find that the smaller-radius TCO has \(w\)=\(-1\) and the larger-radius TCO has \(w\)=+1, see Fig. 9(b). As expected, such a pattern gives the vanishing topological number \(W=-1+1=0\), the same as for the Schwarzschild black hole.
Now we consider the fourth case with \(l=l_{MSCO}\). The unit vector \(n\) is plotted near the TCOs and the MSCO in Figs. 10(a) and 10(b). Obviously, these orbits are the zero points of \(n\), marked with black dots in the figures. Further constructing the closed loops \(C_{7}\), \(C_{8}\), and \(C_{9}\), we show the deflection angle \(\Omega(\psi)\) along them in Figs. 10(c)
Figure 8: Case two with \(l=l_{ISCO}\) for the scalarized Einstein-Maxwell black holes. (a) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane. “\(IP_{2}\)” denotes the ISCO of the black hole at \(r\)=2.329. The closed loop \(C_{4}\) has parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(2.329, 0.6, 0.3). (b) Deflection angle \(\Omega(\psi)\) along \(C_{4}\).
Figure 9: Case three with \(l_{ISCO}<l=10<l_{MSCO}\) for the scalarized Einstein-Maxwell black holes. (a) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane. “\(TP_{3}\)” and “\(TP_{4}\)” denote the TCOs at \(r\)=1.944 and 3.189. The closed loops \(C_{5}\) and \(C_{6}\) have parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(1.944, 0.5, 0.2) and (3.189, 0.3, 0.4). (b) Deflection angle \(\Omega(\psi)\) along \(C_{5}\) and \(C_{6}\).
and 10(d). These show that the small- and large-radius TCOs, denoted by \(TP_{5}\) and \(TP_{6}\), respectively, have \(w\)=\(-1\) and +1, while the winding number of the MSCO vanishes. Summing up, we have the topological number \(W=-1+1+0=0\), which is still the same as in the previous cases.
Finally, we take \(l=12.42>l_{MSCO}\) for the fifth case. Similarly, we exhibit the unit vector \(n\) and the deflection angle \(\Omega(\psi)\) in Fig. 11. More clearly, there are four zero points, located at \(r\)=1.911, 3.405, 13.813, and 18.253. Taking advantage of \(\Omega(\psi)\), we see that the winding number \(w\)=\(-1\), +1, \(-1\), and +1 for these zero points, ordered from small to large \(r\). Summing these winding numbers, we have the topological number \(W=-1+1-1+1=0\). This result is the same as for the Schwarzschild black hole and agrees with our analysis in Sec. II.
### Evolution of TCOs
Here, by considering the angular momentum as a control parameter, we depict the evolution of the zero points of the vector corresponding to the TCOs in Fig. 12(a). The ISCO and MSCO are marked with dots. Near them, we expand the angular momenta as follows:
\[\begin{split} l_{t}&=l_{ISCO}+7.26668(r-r_{ISCO})^{ 2}+\mathcal{O}\left((r-r_{ISCO})^{3}\right),\\ l_{t}&=l_{MSCO}+0.021906(r-r_{MSCO})^{2}+\mathcal{ O}\left((r-r_{MSCO})^{3}\right).\end{split} \tag{78}\]
Obviously, both \(l_{t}^{\prime\prime}(r_{ISCO})\) and \(l_{t}^{\prime\prime}(r_{MSCO})\) are positive, which suggests that both bifurcation points are generated points. It can also be seen that, for each of them, two TCO branches emerge, one of which has positive winding number \(w=1\) while the other has negative winding number \(w=-1\), marked with “+” and “\(-\)” in the figure. Nevertheless, the topological number \(W\) always vanishes.
Meanwhile, the winding number with respect to the angular momentum is illustrated in Fig. 12(b). For \(l_{ISCO}<l<l_{MSCO}\), two TCOs with opposite winding numbers are described by a green line, resulting in \(W=0\). When
Figure 10: Case four with \(l=l_{MSCO}\) for the scalarized Einstein-Maxwell black holes. (a) and (b) show the unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane near the TCOs, marked with \(TP_{5}\) and \(TP_{6}\), and the MSCO, marked with \(MP_{1}\). The closed loops \(C_{7}\), \(C_{8}\), and \(C_{9}\), respectively, surround them. Their parametric coefficients are (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(1.911, 0.4, 0.25), (3.394, 0.3, 0.4), and (15.753, 0.5, 0.2). (c) and (d) show the deflection angle \(\Omega(\psi)\) along these closed loops.
\(l>l_{MSCO}\), four TCOs appear; the two stable ones contribute a summed winding number of +2, whereas the two unstable ones contribute \(-2\), so the total winding number is still zero.
In total, despite the existence of the MSCO and ISCO, the topological number \(W\)=0 always holds for the scalarized Einstein-Maxwell black holes.
Figure 11: Case five with \(l=12.42\) for the scalarized Einstein-Maxwell black holes. (a) and (b) show the unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane, and (c) and (d) the deflection angle \(\Omega(\psi)\). For this case, there are four TCOs at \(TP_{7}\), \(TP_{8}\), \(TP_{9}\), and \(TP_{10}\). The closed loops \(C_{10-13}\) have parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(1.910, 0.3, 0.4), (3.404, 0.5, 0.25), (13.813, 0.6, 0.25), and (18.283, 0.3, 0.4).
Figure 12: (a) The evolution of TCO radius \(r_{t}\) vs. the angular momentum \(l\) for the scalarized Einstein-Maxwell black holes. IP\({}_{2}\) and MP\({}_{1}\) are two generated points. The “\(\pm\)” denotes that the winding number is \(\pm 1\) for these TCO branches. (b) The total topological number (black line) and winding number of TCO branches (green and purple lines).
## V Dyonic black holes
As shown above, for the scalarized Einstein-Maxwell black holes, both the MSCO and the ISCO are present, which gives a topological configuration of TCOs different from that of the Schwarzschild black holes, see Figs. 12(a) and 5(a). In this section, we would like to exhibit another characteristic topological configuration of TCOs, in which the ISCO need not satisfy the condition \(\partial_{r,r}e_{1}(r)=0\). For this characteristic case, we also check whether the topological number \(W\) still vanishes.
### Dyonic black holes
Here we focus on the dyonic black holes of quasi-topological electromagnetism. The quasi-topological term owes its name to the fact that, for purely electric or purely magnetic configurations, it contributes neither to the Maxwell equation nor to the energy-momentum tensor, while it does affect dyonic ones.
The Lagrangian with the quasi-topological electromagnetism is written as [37]
\[\mathcal{L}=\sqrt{-g}(R+\alpha_{1}U^{(1)}-\alpha_{2}U^{(2)}), \tag{79}\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are coupling constants. \(U^{(1)}=-F^{2}\) is the conventional Maxwell Lagrangian and \(U^{(2)}=-2F^{4}+(F^{2})^{2}\) is a quasi-topological electromagnetism term. Here \(F^{2}=F^{\mu\nu}F_{\mu\nu}\) and \(F^{4}=F^{\mu}_{\ \nu}F^{\nu}_{\ \ \ \rho}F^{\rho}_{\ \ \ \sigma}F^{\sigma}_{\ \mu}\). From the Lagrangian (79), the Bianchi identity and Maxwell equation of motion read
\[\begin{split}&\text{BI:}\ \nabla_{[\mu}F_{\nu\rho]}=0,\qquad\text{EOM:}\ \nabla_{\mu}\tilde{F}^{\mu\nu}=0,\\ &\tilde{F}^{\mu\nu}=4\alpha_{1}F^{\mu\nu}+8\alpha_{2}\left(F^{2}F^{\mu\nu}-2F^{\mu\rho}F_{\rho\sigma}F^{\sigma\nu}\right).\end{split} \tag{80}\]
Simultaneously, the Einstein field equations are
\[\begin{split}& R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=T_{\mu\nu},\\ & T_{\mu\nu}=\alpha_{1}\left(2F_{\mu\rho}F_{\nu}^{\ \rho}-\frac{1}{2}F^{2}g_{\mu\nu}\right)+\alpha_{2}\left(4F^{2}F_{\mu\rho}F_{\nu}^{\ \rho}-8F_{\mu\rho}F^{\rho}_{\ \sigma}F^{\sigma}_{\ \lambda}F^{\lambda}_{\ \nu}-\frac{1}{2}\left((F^{2})^{2}-2F^{4}\right)g_{\mu\nu}\right).\end{split} \tag{81}\]
The quasi-topological electromagnetism admits a spherically symmetric dyonic black hole solution described by the following line element [37]:
\[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}d\Omega^{2}, \tag{82}\] \[f(r)=1-\frac{2M}{r}+\frac{\alpha_{1}p^{2}}{r^{2}}+\frac{q^{2}}{ \alpha_{1}r^{2}}\ _{2}F_{1}\left(\frac{1}{4},1;\frac{5}{4};-\frac{4p^{2}\,\alpha_{2}}{r^{4}\, \alpha_{1}}\right), \tag{83}\]
where \(p\) and \(q\) correspond to the magnetic charge and electric charge of the black holes. \(\,{}_{2}F_{1}\) is the hypergeometric function. Coupling constants \(\alpha_{1}\) and \(\alpha_{2}\) are associated with Maxwell theory and quasi-topological electromagnetism term, respectively. For a characteristic case, we set \(\alpha_{1}=1\), \(q=6.85\), \(p=\sqrt{\frac{396}{443}}\), \(\alpha_{2}=\frac{196249}{1584}\), and \(M=6.7\). Accordingly, the effective potential reads
\[\mathcal{V}(r)=\frac{\alpha_{1}E^{2}r^{2}}{q^{2}\,{}_{2}F_{1}\left(\frac{1}{4 },1;\frac{5}{4};-\frac{4p^{2}\,\alpha_{2}}{r^{4}\alpha_{1}}\right)+\alpha_{1} \left(-2Mr+\alpha_{1}p^{2}+r^{2}\right)}-\frac{l^{2}\csc^{2}\theta}{r^{2}}- \mu^{2}, \tag{84}\]
by making use of Eq. (8).
### Topology of TCOs and winding number
In order to investigate the topology of the TCO for the dyonic black hole, we evaluate \(e_{1}\) and \(e_{2}\) via Eq. (10)
\[e_{1,2}=\pm\sqrt{\frac{(l^{2}\csc^{2}\theta+\mu^{2}r^{2})\left(q^{2}\,{}_{2}F_ {1}\left(\frac{1}{4},1;\frac{5}{4};-\frac{4p^{2}\alpha_{2}}{r^{4}\alpha_{1}} \right)+\alpha_{1}\left(-2Mr+\alpha_{1}p^{2}+r^{2}\right)\right)}{\alpha_{1}r ^{4}}}. \tag{85}\]
Based on them, the components \(\phi^{r}\) and \(\phi^{\theta}\) of the vector can be easily calculated. However, they take complicated forms and we will not show them here. Adopting Eq. (72), the radius and angular momentum of the MSCO for the dyonic black hole are
\[r_{MSCO}=25.3799,\quad\text{and}\quad l_{MSCO}=18.52. \tag{86}\]
Note that this MSCO is not the innermost TCO, so the ISCO does not coincide with this MSCO. The actual ISCO is located at \(r_{ISCO}=6.0928\) with \(l_{ISCO}=0\) and \(E_{ISCO}=0.1393\).
According to the angular momenta of the MSCO and ISCO, we can divide the parameter region into the following three characteristic cases,
* \(0\leq l<l_{MSCO}\),
* \(l=l_{MSCO}\),
* \(l_{MSCO}<l<\infty\).
For the first case with small angular momentum, the study shows that there are two TCOs, denoted by “\(TP_{11}\)” and “\(TP_{12}\)”. This is quite different from the Schwarzschild and scalarized Einstein-Maxwell black holes, for which no TCOs can be found at small angular momentum. So the existence of this pair of TCOs is mainly caused by the quasi-topological electromagnetism term. More interestingly, they are not generated from an ISCO or MSCO. Taking \(0<l=2<l_{MSCO}\) as an example, the TCOs are located at
\[r_{TP_{11}}=2.63492,\qquad r_{TP_{12}}=6.11503. \tag{87}\]
Their stability is evaluated via the first derivative
\[\partial_{r}\phi^{r}(r_{TP_{11}})=-0.07678,\qquad\partial_{r}\phi^{r}(r_{TP_{ 12}})=0.01462, \tag{88}\]
which implies that \(TP_{11}\) is unstable while \(TP_{12}\) is stable.
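These radii can be reproduced directly from Eq. (85): on the equator, \(e_{1}=\sqrt{(l^{2}+\mu^{2}r^{2})f(r)}/r\) with \(f(r)\) from Eq. (83), and the TCO radii are the extrema of \(e_{1}(r)\). The following is a minimal numerical check (our construction; it assumes \(\mu=1\) and scans a radial window outside the horizon):

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.optimize import brentq

a1, a2 = 1.0, 196249.0 / 1584.0
q, p, M, mu, l = 6.85, np.sqrt(396.0 / 443.0), 6.7, 1.0, 2.0

def f(r):
    # Metric function of Eq. (83)
    return (1.0 - 2.0 * M / r + a1 * p**2 / r**2
            + q**2 / (a1 * r**2) * hyp2f1(0.25, 1.0, 1.25, -4.0 * p**2 * a2 / (r**4 * a1)))

def e1(r):
    # Equatorial circular-orbit energy from Eq. (85)
    return np.sqrt((l**2 + mu**2 * r**2) * f(r)) / r

def de1(r, h=1e-6):
    # Numerical derivative of e1; its zeros are the TCO radii
    return (e1(r + h) - e1(r - h)) / (2.0 * h)

r = np.linspace(2.4, 40.0, 8000)
d = de1(r)
for i in np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0]:
    print(brentq(de1, r[i], r[i + 1]))  # expect ~2.635 and ~6.115, cf. Eq. (87)
```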
For the second case, \(l=l_{MSCO}\), we find that the vector satisfies
\[\phi^{r\prime}(r_{MSCO})=0,\quad\text{and}\quad\phi^{r\prime\prime}(r_{MSCO})=0.00001835 \tag{89}\]
at \(r_{MSCO}\), which indicates that the MSCO obeys condition (75). For the third case, taking \(l=18.58\) as an example, four TCOs are observed, two of which are stable and the other two unstable.
Next, we turn to the topology of the TCOs. For the first case with \(l=2\), we observe that there are two zero points of the unit vector field \(n=(n^{r},n^{\theta})\), marked with black dots in Fig. 13(a). By constructing two closed loops \(C_{14}\) and \(C_{15}\), we calculate the deflection angle \(\Omega(\psi)\) and plot it in Fig. 13(b) for these two loops. Along \(C_{15}\), \(\Omega(\psi)\) increases from \(0\) to \(2\pi\), while along \(C_{14}\) it decreases from \(0\) to \(-2\pi\). This result shows that the winding numbers of \(TP_{11}\) and \(TP_{12}\) are \(-1\) and +1, respectively. Summing them, one obtains the topological number \(W=-1+1=0\). This indicates that although the quasi-topological electromagnetism term produces two new TCOs, the topological number stays unchanged.
Concentrating on the other two cases, we plot the unit vector \(n\) and the deflection angle \(\Omega(\psi)\) in Figs. 14 and 15. When \(l=l_{MSCO}\), the vector \(n\) admits three zero points, two of which are TCOs while the largest one is the MSCO, see Figs. 14(a) and 14(b). By constructing three loops enclosing these three zero points, we find via \(\Omega(\psi)\), shown in Fig. 14(c), that the two smaller TCOs have winding numbers \(-1\) and +1, respectively, while the MSCO does not contribute to the winding number, see Fig. 14(d). As a result, the topological number \(W=-1+1+0=0\) as expected. When taking \(l=18.58>l_{MSCO}\), we observe four zero points of the vector \(n\), located at \(r\)= 2.3125, 6.3071, 23.4766, and 27.6136, as displayed in Figs. 15(a) and 15(b). After constructing the closed loops, we find from the deflection angle \(\Omega(\psi)\) in Figs. 15(c) and 15(d) that their winding numbers are \(-1\), +1, \(-1\), +1. Summing them, we obtain the topological number \(W=-1+1-1+1=0\), which remarkably indicates that \(W\) does not change for large angular momentum.
### Evolution of TCOs
In the previous subsection, we examined the topology of the TCOs for the dyonic black holes. For different angular momenta \(l\), the number of TCOs changes. So here we turn to study the evolution of the TCOs as a function of \(l\).
First, we perform a Taylor expansion near \(r_{MSCO}\), which denotes a bifurcation point,
\[l_{t}=l_{MSCO}+0.0140657(r-r_{MSCO})^{2}+\mathcal{O}\left((r-r_{MSCO})^{3}\right). \tag{90}\]
Since the quadratic coefficient in (90), \(l_{t}^{\prime\prime}(r_{MSCO})/2=0.0140657\), is positive, the MSCO acts as a generated point. This behavior can also be clearly observed in Fig. 16. Beyond \(l_{MSCO}\), two TCO branches originate. A simple calculation shows that the upper and lower branches have positive and negative winding numbers, respectively. The sum of their winding numbers remains zero, the same as for the MSCO.
On the other hand, there are two extra TCOs at small \(r\) caused by the quasi-topological electromagnetism term.
Figure 14: Case two with \(l=l_{MSCO}\) for the dyonic black holes. (a) and (b) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane. “\(TP_{13}\)” and “\(TP_{14}\)” are two TCOs, and “\(MP_{2}\)” is the MSCO. (c) and (d) Deflection angle \(\Omega(\psi)\). The closed loops \(C_{16}\), \(C_{17}\), and \(C_{18}\) have parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(2.313, 0.32, 0.51), (6.307, 0.3, 0.5), and (25.380, 0.9, 0.4).
Figure 13: Case one with \(l=2.0\) for the dyonic black holes. (a) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane. “\(TP_{11}\)” and “\(TP_{12}\)” are two TCOs at \(r\)=2.635 and 6.115. The closed loops \(C_{14}\) and \(C_{15}\) have parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(2.635, 0.9, 0.15) and (6.115, 0.45, 0.4). (b) Deflection angle \(\Omega(\psi)\) along \(C_{14}\) and \(C_{15}\).
Both of them start at \(l=0\) and extend to large \(l\), forming two new TCO branches described by the green curves in Fig. 16(a). Interestingly, they have opposite values of the winding number.
Such characteristic behavior of the TCOs is significantly different from that of the Schwarzschild and scalarized Einstein-Maxwell black holes. For convenience, we sum the winding numbers of the TCOs in Fig. 16(b). For \(l<l_{MSCO}\), there is one pair of TCOs, with \(w\)=1 and \(-1\), respectively. When the angular momentum exceeds \(l_{MSCO}\), a new pair of TCOs emerges. However, the topological number \(W\) always vanishes.
In summary, for a spherically symmetric dyonic black hole with the quasi-topological term, there is a topological configuration different from those of the other black holes. However, the topological number still vanishes. This result again clearly supports our general result given in Sec. II.
Figure 16: (a) The evolution of TCO radius \(r_{t}\) vs. the angular momentum \(l\) for the dyonic black holes. The “\(\pm\)” denote the positive or negative winding numbers for the TCO branches. (b) The topological number (black line) and winding numbers of the TCO branches (green and purple lines).
Figure 15: Case three with \(l=18.58\) for the dyonic black holes. (a) and (b) The unit vector field \(n\) on a portion of the \(\theta\)-\(r\) plane. (c) and (d) Deflection angle \(\Omega(\psi)\). “\(TP_{15-18}\)” are four TCOs. The closed loops \(C_{19-22}\) have parametric coefficients (\(c_{0}\), \(c_{1}\), \(c_{2}\))=(2.313, 0.22, 0.5), (6.307, 0.22, 0.48), (23.477, 0.6, 0.35), and (27.614, 0.6, 0.35).
## VI Conclusions
In this work, we studied the topology of the TCOs for generic spherically symmetric and asymptotic flat black holes. The results suggest that the total topological charge of TCOs is zero for each given angular momentum. Then we extended the study to the Schwarzschild black holes, scalarized Einstein-Maxwell black holes, and dyonic black holes, which exhibit three characteristic topological configurations.
First, we considered a static, spherically symmetric, and asymptotically flat black hole. Starting with the Lagrangian of a massive test particle, we obtained the corresponding effective potential, through which the TCOs can be well determined. By making use of the effective potential, we constructed a vector \(\phi\) in the \(r\)-\(\theta\) plane whose zero points exactly denote the TCOs. Employing this property, we established the topology of the TCOs. Each TCO is endowed with a winding number, and the stability of the TCO is reflected by it. Globally, the topological number \(W\), defined as the sum of all the winding numbers, gives us information on the number of the TCOs. By examining the asymptotic behaviors of the vector, we observed that, for each angular momentum, the vector \(\phi\) points outwards at \(\theta=0\) and \(\pi\), and to the right at the horizon and at infinity. Such a pattern leads to a vanishing total topological number. Consequently, the TCOs always come in pairs for fixed angular momentum. Locally, we also found that the stable and unstable TCOs have positive and negative winding numbers, respectively. Meanwhile, the MSCO can be treated as a bifurcation point with vanishing winding number.
Then we generalized the results to three kinds of black holes, which have three different topological configurations.
For the Schwarzschild black holes, we divided the parameter space into three cases, \(0\leq l<l_{ISCO}\), \(l=l_{ISCO}\), and \(l_{ISCO}<l<\infty\), according to the angular momentum of the ISCO. For the first case, no TCO can be observed. For the second case, the ISCO exists. When the angular momentum exceeds this value, a pair of TCOs appears, one of which is stable and the other unstable. For each case, we obtained the topological number \(W\). The results indicate that \(W=0\), independent of the angular momentum, which confirms our generic result given in Sec. II.
Further, we took the scalarized Einstein-Maxwell black hole as an example. Different from the Schwarzschild black holes, the MSCO and ISCO do not coincide due to the presence of the scalar hair, which allows an interesting topological configuration of the TCOs. With the increase of the angular momentum, we observed that there may be no TCOs, one pair of TCOs, or two pairs of TCOs. Nevertheless, the topological number \(W\) still vanishes, the same as for the Schwarzschild black holes.
As a third example, we considered the dyonic black holes with a quasi-topological term. Due to this term, the pattern of the TCOs is modified. In the previous two cases, TCOs are absent for small angular momentum; for the dyonic black holes, however, there is always at least one pair of TCOs, providing a novel topological configuration. As a result, there is one pair or two pairs of TCOs as the angular momentum increases. More interestingly, the innermost and outermost TCOs have \(w=-1\) and 1, respectively. Further combining the winding numbers of the individual TCOs, we obtained that the topological number \(W\) still vanishes, independent of the angular momentum. This further confirms our general results.
In conclusion, in this paper we established the topology of the TCOs for spherically symmetric and asymptotically flat black holes. For each angular momentum, the TCOs always come in pairs, independent of the particular topological configuration. Moreover, stable and unstable TCOs have winding numbers \(w\)=1 and \(-1\), respectively. We expect to generalize this study to other black hole backgrounds and to disclose more information about the TCOs from the viewpoint of topology.
## Acknowledgements
We thank Prof. Peng Wang for sharing their Mathematica file of hairy black holes. This work was supported by the National Natural Science Foundation of China (Grants No. 12075103 and No. 12247101).
|
2304.01020 | Investigating starburst-driven neutrino emission from galaxies in the
Great Observatories All-Sky LIRG Survey | We present a phenomenological framework for starburst-driven neutrino
production via proton-proton collisions and apply it to (ultra)luminous
infrared galaxies (U/LIRGs) in the Great Observatories All-Sky LIRG Survey
(GOALS). The framework relates the infrared luminosity of a GOALS galaxy,
derived from consistently available Herschel Space Observatory data, to the
expected starburst-driven neutrino flux. The model parameters that define this
relation can be estimated from multiwavelength data. We apply the framework in
a case study to the LIRG NGC 3690 (Arp 299, Mrk 171) and compare the obtained
neutrino fluxes to the current sensitivity of the IceCube Neutrino Observatory.
Using our framework, we also conclude that the neutrino emission in the LIRG
NGC 1068, recently presented as the first steady IceCube neutrino point source,
cannot be explained by a starburst-driven scenario and is therefore likely
dominated by the active galactic nucleus in this galaxy. In addition to the
single-source investigations, we also estimate the diffuse starburst-driven
neutrino flux from GOALS galaxies and the total LIRG population over cosmic
history. | Yarno Merckx, Pablo Correa, Krijn D. de Vries, Kumiko Kotera, George C. Privon, Nick van Eijndhoven | 2023-04-03T14:17:04Z | http://arxiv.org/abs/2304.01020v2 | Investigating starburst-driven neutrino emission from galaxies in the Great Observatories All-Sky LIRG Survey
###### Abstract
We present a phenomenological framework for starburst-driven neutrino production via proton-proton collisions and apply it to (ultra-)luminous infrared galaxies (U/LIRGs) in the Great Observatories All-Sky LIRG Survey (GOALS). The framework relates the infrared luminosity of a GOALS galaxy, derived from consistently available _Herschel Space Observatory_ data, to the expected starburst-driven neutrino flux. The model parameters that define this relation can be estimated from multi-wavelength data. We apply the framework in a case study to the LIRG NGC 3690 (Arp 299, Mrk 171) and compare the obtained neutrino fluxes to the current sensitivity of the IceCube Neutrino Observatory. Using our framework, we also conclude that the neutrino emission in the LIRG NGC 1068, recently presented as the first steady IceCube neutrino point source, cannot be explained by a starburst-driven scenario and is therefore likely dominated by the active galactic nucleus in this galaxy. In addition to the single-source investigations, we also estimate the diffuse starburst-driven neutrino flux from GOALS galaxies and the total LIRG population over cosmic history.
## I Introduction
In 1983, the _Infrared Astronomical Satellite_ (IRAS) was the first space-borne telescope to perform an all-sky survey at infrared (IR) wavelengths [1]. This novel view on the extragalactic sky revealed the existence of galaxies that emit most of their electromagnetic luminosity in the IR frequency range. Amongst these sources, so-called Luminous Infrared Galaxies (LIRGs; \(10^{11}L_{\odot}\leq L_{\rm IR}\equiv L_{\rm IR[8-1000\mu m]}<10^{12}L_{\odot}\)) and Ultra Luminous Infrared Galaxies (ULIRGs; \(L_{\rm IR}\geq 10^{12}L_{\odot}\)) were discovered. Although these objects are relatively rare in the local Universe (\(z<0.3\))1, deep-sky IR observations show that the comoving number density of both LIRGs and ULIRGs has a positive redshift evolution. Furthermore, LIRGs appear to be more numerous than ULIRGs up to at least \(z\sim 2\)[3, 4, 5].
Footnote 1: The number density of LIRGs, however, is larger than that of optically selected starburst galaxies and Seyfert galaxies at comparable redshift and bolometric luminosity (e.g. [2]).
Follow-up surveys revealed that LIRGs are systems of galaxies covering the entire evolutionary merger sequence, ranging from isolated galaxies, to early interacting systems, to advanced mergers [6, 7]. This is opposed to ULIRGs, which are nearly always involved in the final stages of a merger between two gas-rich galaxies [8, 9, 10]. The extreme IR output observed in U/LIRGs is a result of the dynamical nature of these objects. In the merger process, gas and dust are funneled towards the central \(\sim\)100 pc of the interacting galaxies, thereby triggering intense star formation (\(\sim\)10\(-\)100 M\({}_{\odot}\) yr\({}^{-1}\)) [11, 12]. The strong radiation fields emerging from the newly formed stars heat thick layers of dust accumulated from active star formation, which re-radiate the energy in the IR regime. This mechanism generally explains the elevated IR output of U/LIRGs. However, an additional contribution is expected from gas accretion onto a central supermassive black hole, with growing evidence suggesting that such black holes inhabit all massive galaxies (e.g. [13]). This accretion can result in relativistic outflows of matter perpendicular to the plane of accretion. This elevated state of activity is known as an Active Galactic Nucleus (AGN). Reprocessed high-energy emission from AGN activity can (significantly) contribute to the IR output of U/LIRGs (e.g. [14, 15]).
The Great Observatories All-Sky LIRG Survey2 (GOALS) aims to fully characterize the diversity of properties observed in a large, statistically significant sample of the nearest (\(z<0.088\)) U/LIRGs (Section II) [16].
This sample covers all galaxy-interaction stages [17; 7]. Moreover, GOALS galaxies span the full range of nuclear types, i.e. type-1 and type-2 AGN, LINERs, and pure starbursts.
GOALS combines data from space-borne facilities such as the _Spitzer Space Telescope_[18; 19; 20; 21; 22; 23] and _Herschel Space Observatory_[24; 25; 26; 27; 28; 29; 30; 31] at mid-IR and far-IR wavelengths, the _Hubble Space Telescope_, which observes near-IR and optical emission from the Universe [6; 31], the _Galaxy Evolution Explorer_ (GALEX) UV telescope [32; 33], and the _Chandra X-ray Observatory_ operating in the X-ray frequency band [34; 35]. Recently, the _James Webb Space Telescope_ (JWST) was used for the first time to observe GOALS LIRGs with unprecedented resolution [36; 37; 38; 39; 40; 41; 42; 43]. In addition, GOALS objects are also targeted by ground-based observatories such as the radio and submillimeter telescopes _Very Large Array_ (VLA) [44] and the _Atacama Large Millimeter/submillimeter Array_ (ALMA) [45], and large optical-IR facilities such as the Keck Telescopes [46; 47; 48]. The multi-wavelength data from these space-borne and ground-based observatories are combined in comprehensive imaging and spectroscopic surveys. In this work, we take the first step towards expanding GOALS from a multi-wavelength to a multi-messenger survey by investigating high-energy neutrino emission from these galaxies.
High-energy cosmic neutrinos were first discovered in 2013 with the 1-km\({}^{3}\) IceCube Neutrino Observatory buried deep within the ice at the South Pole [49]. To date, a diffuse astrophysical neutrino flux has been observed via various independent IceCube analyses [50; 51; 52; 53; 54]. However, the sources of these cosmic neutrinos remain largely unknown. The IceCube Collaboration has performed several searches in order to identify the origin of the astrophysical neutrino flux [55]. Such analyses typically target astrophysical muon neutrinos (\(\nu_{\mu}\)) and antineutrinos (\(\bar{\nu}_{\mu}\))3. Upon collision with an ice nucleus, muon neutrinos can produce muons via a charged-current interaction. These muons leave track-like Cherenkov signatures in the detector, allowing the incoming direction of the neutrino to be reconstructed with an angular resolution \(\lesssim 1^{\circ}\) for a muon energy \(\gtrsim 1\) TeV [57]. One of the more generic analyses aims to identify steady point sources in a time-integrated sky scan by looking for spatial clustering of neutrino events on the sky. Recently, such a scan of the Northern Hemisphere, combined with a dedicated source-catalog search, revealed significant evidence (\(4.2\sigma\)) for neutrinos originating from the direction of the GOALS LIRG NGC 1068 [58], which contains an enshrouded AGN surrounded by a starburst ring. Other evidence for a specific source was reported by IceCube after the observation of a spatial and temporal correlation between an IceCube neutrino event and the gamma-ray flaring blazar TXS 0506+056 [59; 60]. Both sources contribute no more than about 1% to the diffuse neutrino flux in the energy ranges within which they were observed. As such, the origin of the diffuse flux remains largely unidentified. Nevertheless, diffuse multi-messenger observations of both neutrinos and gamma rays hint towards gamma-ray opaque neutrino sources (e.g. [61; 62; 63]).
Footnote 3: IceCube cannot distinguish neutrinos from antineutrinos, except in the specific case of the Glashow resonance [56]. Therefore, the term “neutrinos” is used in this work to refer to both neutrinos and antineutrinos.
Galaxies in the GOALS sample are characterised by a large amount of enshrouding matter and an enormous energy budget, driven by vigorous star formation and AGN activity. These two key features, in combination with the proximity of the sources, make U/LIRGs excellent candidate neutrino sources. A recent IceCube study searched for neutrinos from the population of ULIRGs in particular, but reported null results [64]. This allowed the authors to set upper limits on the contribution of the entire ULIRG population to the diffuse neutrino flux observed by IceCube. However, LIRGs show star-forming properties similar to those of ULIRGs and are \(\sim\)10-50 times more numerous than ULIRGs at any given redshift (e.g. [3]). Therefore, it is crucial to investigate the contribution of _both_ LIRGs and ULIRGs to the IceCube neutrino flux.
In this work, we present a phenomenological framework for starburst-driven neutrino production in starburst galaxies and apply this framework to the GOALS sample. In Section II the GOALS sample is introduced and Section III motivates these galaxies as candidate high-energy neutrino sources. Then, we construct a starburst-driven neutrino production framework in Section IV. Subsequently, this framework is applied to the LIRG NGC 3690 (also known as Arp 299 and Mrk 171) in Section V. Finally, in Section VI, we use the framework to estimate the diffuse neutrino flux expected from the GOALS sample and the total LIRG population over cosmic history.
## II The GOALS sample
The GOALS sample consists of 180 LIRGs and 22 ULIRGs with a median redshift of \(\langle z\rangle=0.0212\)[16]. The closest source in the sample is located at \(z_{\rm min}=0.0030\) and the most distant one at \(z_{\rm max}=0.0876\). The distribution of the GOALS sample on the sky is shown in Figure 1. GOALS objects were originally selected from the IRAS Revised Bright Galaxy Sample (RBGS [65]) as sources with a luminosity threshold of \(L_{\rm IR,IRAS}\geq 10^{11}L_{\odot}\). The RBGS consists of a complete flux-limited sample of 629 galaxies that have an IRAS 60-\(\mu\)m flux density \(S_{60\mu{\rm m,IRAS}}>5.24\) Jy and Galactic Latitude \(|b|>5^{{}^{\circ}}\)[65]. This cut on the Galactic Latitude is shown by the dashed lines in Figure 1.
GOALS is fundamentally based on observations of IRAS, which had a relatively low angular resolution, between
\(\sim\)0.5' at 12 \(\mu\)m and \(\sim\)2' at 100 \(\mu\)m [66]. Therefore, the IRAS emission for a single GOALS object may correspond to the cumulative emission of individual galaxies in an interacting system. However, the framework presented in this work (Section IV) models neutrino production in the cores of U/LIRGs, based on electromagnetic emission from those regions. Therefore, the IR luminosity of each galaxy in the interacting system is of interest, rather than the total IRAS IR luminosity of the system4. In what follows, we describe how these individual IR luminosities were obtained by GOALS, as they will be used to trace starburst-driven neutrino production. We also discuss the contribution of AGN to the IR luminosity of the targeted galaxies.
Footnote 4: Note that neutrino telescopes such as IceCube, with an angular resolution of the order of \(1^{\circ}\), cannot resolve individual galaxies in interacting systems.
### Individual IR luminosity
The _Spitzer Space Telescope_ (2003), one of IRAS' successors with a higher angular resolution, made it possible to spatially disentangle galaxies within the same U/LIRG system. This yields more than 290 individual galaxies for the GOALS sample. Only a fraction of these galaxies were targeted by the _Photodetecting Array Camera and Spectrometer_ (PACS) onboard the _Herschel Space Observatory_ (2009). For U/LIRGs consisting of two or more galaxies, _Herschel_ only targeted the dimmer companion galaxies if their contribution to the total 24-\(\mu\)m flux-density ratio in the _Multiband Imaging Photometer for Spitzer_ (MIPS) exceeded 1:5 with respect to the brightest galaxy in the system. For those U/LIRG constituents that were targeted by _Herschel_, the individual IR luminosity of a galaxy (\(L_{\rm IR,individual}\)) is obtained by applying a scaling factor to the IRAS luminosity of the system in which that galaxy resides (\(L_{\rm IR,IRAS}\)). The scaling factor is computed by taking the ratio between the continuum flux density detected for the individual galaxy, evaluated at 63 \(\mu\)m in the PACS spectrum (\(S_{63\mu{\rm m,PACS}}\)), and the IRAS 60-\(\mu\)m flux density of the whole system (\(S_{60\mu{\rm m,IRAS}}\)). The IR luminosity of a disentangled component in a U/LIRG is then computed as
\[L_{\rm IR,individual}=\frac{S_{63\mu{\rm m,PACS}}}{S_{60\mu{\rm m,IRAS}}}\cdot L _{\rm IR,IRAS}. \tag{1}\]
This yields individual IR luminosities for 229 GOALS galaxies, consisting of 40 galaxies with \(10^{10.08}L_{\odot}\leq L_{\rm IR}<10^{11}L_{\odot}\), 167 galaxies with \(10^{11}L_{\odot}\leq L_{\rm IR}<10^{12}L_{\odot}\), and 22 galaxies with \(L_{\rm IR}\geq 10^{12}L_{\odot}\)[67]. These individual IR luminosities will be used in Section V and Section VI to trace the starburst-driven neutrino production in the respective sources. The redshift distributions of the three galaxy groups are shown in Figure 2. Note that the LIRGs are observed over the whole redshift range, while the closest ULIRG, Arp 220, is located at \(z\sim 0.018\).
### AGN contribution to the IR luminosity
In this work we focus on the starburst-driven neutrino emission of U/LIRGs, which we trace via the observed IR luminosity. However, a significant fraction of the IR luminosity could be generated by AGN activity. This should be taken into account so as not to overestimate the starburst-driven neutrino flux.
In [67], the average AGN contribution to the total bolometric luminosity, \(\langle\alpha_{\rm AGN}\rangle\in[0,1]\), is presented for each of the galaxies introduced in Section II.1. By making use of low-resolution spectral measurements obtained with the _InfraRed Spectrograph_ (IRS) onboard _Spitzer_, the \(\langle\alpha_{\rm AGN}\rangle\)-values were computed from a number of independent estimators such as the line ratios [Ne V]/[Ne II] and [O IV]/[Ne II], the mid-IR continuum slope, and the equivalent width of polycyclic aromatic hydrocarbon emission bands [67]. Since for U/LIRGs the total IR luminosity approximates the total bolometric luminosity (\(L_{\rm bol}\)) [68], it follows that the IR luminosity can be corrected for AGN activity by multiplying it with the factor \(1-\langle\alpha_{\rm AGN}\rangle\). This is used in Section V and Section VI to properly estimate the starburst-driven neutrino flux expected from GOALS galaxies.
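As a small illustration of how Eq. (1) and this AGN correction combine in practice (the function and variable names below are ours, and the input numbers are purely hypothetical):

```python
def starburst_ir_luminosity(L_IR_IRAS, S63_PACS, S60_IRAS, alpha_AGN):
    """AGN-corrected IR luminosity of one galaxy in a U/LIRG system.

    L_IR_IRAS : total IRAS IR luminosity of the system [L_sun]
    S63_PACS  : PACS 63-micron continuum flux density of the galaxy [Jy]
    S60_IRAS  : IRAS 60-micron flux density of the whole system [Jy]
    alpha_AGN : average AGN fraction of the bolometric luminosity, in [0, 1]
    """
    L_individual = (S63_PACS / S60_IRAS) * L_IR_IRAS  # Eq. (1)
    return (1.0 - alpha_AGN) * L_individual           # AGN correction

# Hypothetical numbers for illustration only:
print(starburst_ir_luminosity(L_IR_IRAS=10**11.5, S63_PACS=20.0,
                              S60_IRAS=50.0, alpha_AGN=0.09))
```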
The physical area of a galaxy probed by IRS observations depends on the angular scale covered by the short-low slit, \(\sim 4"\times 4"\)[7]. Because of this limitation, the estimated \(\langle\alpha_{\rm AGN}\rangle\)-values are representative of an entire galaxy only for sources at luminosity distances \(D_{L}\gtrsim 50{-}100\) Mpc. Therefore, as noted by GOALS in [67], the galaxy-wide \(\langle\alpha_{\rm AGN}\rangle\)-values are to be interpreted as upper limits for more nearby sources. The most prominent example of this is NGC 1068 with a reported value \(\langle\alpha_{\rm AGN}\rangle=1\). This galaxy is the closest Seyfert II galaxy to Earth, located at \(D_{L}\sim 15.9\) Mpc. Only two other sources within 50 Mpc have \(\langle\alpha_{\rm AGN}\rangle>0.3\), i.e. the LIRGs
Figure 1: Sky distribution of the GOALS sample. The solid line indicates the Galactic plane and the dashed lines indicates a band with \(|b|>5^{\circ}\).
NGC 1365 and NGC 4418. As the \(\langle\alpha_{\rm AGN}\rangle\)-values of these sources only trace the innermost central part of the galaxy, it follows that, when the full system is considered, these sources potentially have a smaller \(\langle\alpha_{\rm AGN}\rangle\).
The distribution of \(\langle\alpha_{\rm AGN}\rangle\)-values in the GOALS sample is shown in Figure 3. This distribution shows that in the majority of GOALS galaxies the AGN has a secondary contribution to the total bolometric luminosity. However, some of the sources show large \(\langle\alpha_{\rm AGN}\rangle\)-values: 14% have \(\langle\alpha_{\rm AGN}\rangle>0.2\) and 3% have \(\langle\alpha_{\rm AGN}\rangle>0.5\) (i.e., the AGN dominates over star formation). This implies, with good consistency among the different estimators, that an AGN is the dominant power source in only 3% of the local U/LIRGs.
## III Motivating GOALS galaxies as candidate neutrino sources
The majority of GOALS objects are galaxies participating in a dynamical interaction. Such interactions allow for large amounts of dust and gas to be funneled from kpc scales to the innermost regions of the merging galaxies. This generates pressure waves in the central region and thereby triggers intense star formation. Such starburst regions consist of short-lived, hot, massive stars that emit strong UV radiation fields. This radiation heats the enshrouding matter in which the stars were formed, and this heat is subsequently re-radiated as thermal IR emission. Therefore, IR luminosity and starburst activity are intimately connected in dust-obscured environments, such as the nuclei of U/LIRGs. The massive stars in the starburst region burn through their hydrogen phase significantly faster than low-mass stars, resulting in an increased rate of core-collapse supernovae, which arise from stars with masses \(\gtrsim 8\ M_{\odot}\). The supernova rate can be \(\sim\)10\(-\)100 times larger than in normal star-forming galaxies such as the Milky Way. During a supernova explosion, the outer layer of the star is ejected with a kinetic energy of \(\sim\)10\({}^{51}\) erg (e.g. [69]). Upon collision with the surrounding medium, these supersonic ejecta drive strong shock waves with large Mach numbers. Particles can be accelerated along these shocks via diffusive shock acceleration [70; 71], which is based on the first-order Fermi mechanism [72]. A fraction of the accelerated hadrons are expected to reach the threshold energy to interact with the radiation fields and merger-enhanced matter in the starburst region via photohadronic and inelastic hadronuclear interactions, respectively. Both interactions produce, along with other particles, charged (\(\pi^{\pm}\)) and neutral (\(\pi^{0}\)) pions. The charged pions decay to high-energy neutrinos, \(\pi^{\pm}\to\mu\ \nu_{\mu}\to e\ \nu_{e}\nu_{\mu}\nu_{\mu}\) (e.g. [73; 74]), and the neutral pions to gamma rays, \(\pi^{0}\to\gamma\gamma\) (e.g. [75; 76]). Thus far, eleven star-forming galaxies, including four GOALS U/LIRGs, have been identified as gamma-ray sources [77].
As already noted, AGN activity is found in several of the GOALS galaxies. The unified AGN model (see e.g. [78] for a review) states that the AGN is powered by accretion of matter onto a supermassive black hole. In U/LIRGs this is triggered and sustained via the flow of gas and dust towards the central region as the merger progresses. This forms an accretion disk which emits high-energy UV/optical radiation and can result in relativistic outflows of ionized matter perpendicular to this disk. Particle acceleration is possible both in the relativistic jets and in the thermal plasma above the accretion disk (see e.g. [79]). The hadrons accelerated in this way can interact with the strong radiation fields in the AGN vicinity via photohadronic collisions [79]. Moreover, the accelerated hadrons can inelastically
Figure 3: Distribution of the average AGN contribution to the bolometric luminosity for individual galaxies in the GOALS sample [67]. The median of the sample is \(\langle\alpha_{\rm AGN}\rangle\) = 0.09.
collide with thermal hadrons in, for example, the accretion disk, the dusty torus surrounding it, or a cloud in the line of sight of the outflowing jet [63]. Much as in the starburst-driven scenario, these collisions produce high-energy neutrinos.
A subset of the GOALS U/LIRGs host extremely compact and dusty nuclei in the central 100 pc, known as Compact Obscured Nuclei (CONs, e.g. [80; 81]). These CONs can generate a significant fraction of the total IR output of the galaxy. The high column density (\(N_{\rm H}\gtrsim 10^{25}\) cm\({}^{-2}\)) in CONs implies a dust optical thickness above unity up to at least far-IR wavelengths. The most obscured systems only become optically thin at submillimeter and radio wavelengths, e.g. the ULIRG Arp 220 [82], and the LIRGs IC 860 [83] and NGC 4418 [84]. Consequently, the source powering these CONs remains unknown. They could be driven by hidden AGN activity, a nuclear starburst with a top-heavy initial mass function, or a combination of both. If AGN activity is at the origin, then CONs could be the result of rapid accretion onto a supermassive black hole, surrounded by extreme column densities. In any case, if hadronic acceleration occurs, a large cosmic-ray density is expected in a compact and obscured region. Such an extreme environment provides favorable conditions for high-energy neutrino production, whilst also significantly attenuating gamma rays. The latter is of interest as diffuse observations of both neutrinos and gamma rays hint towards gamma-ray opaque neutrino sources (e.g. [85]).
As outlined in this section, obscured star formation and AGN activity, traced by strong IR emission, provide favorable conditions for high-energy neutrino production. In this work, the focus lies solely on high-energy neutrino production driven by supernova activity. This does not exclude other starburst-driven activity, e.g. newborn pulsars [86], and AGN-related processes as interesting neutrino sources. The recent detection of high-energy neutrinos from the direction of the LIRG NGC 1068, for example, points towards the AGN being the dominant source of neutrinos in this galaxy ([58] and see Section VI.1). Additionally, we note that Tidal Disruption Events (TDEs), i.e. when a star is tidally disrupted by the gravitational pull of a supermassive black hole (see [87] for a review), are candidate sources of high-energy neutrinos [88]. U/LIRGs could have an increased rate of TDEs as a result of the amplified star-formation rate in their nuclear regions.
In terms of neutrino-production channels, our starburst-driven model does not take into account photohadronic interactions. For U/LIRGs, the target radiation field is likely dominated by IR emission. The threshold energy for a cosmic ray to interact with a near-IR background photon (\(\sim\)1 eV) is of the order of 100 PeV. The threshold energy for the cosmic ray is even larger for a target field dominated by far-IR radiation. Such extreme cosmic-ray energies are unlikely to be produced efficiently by the starburst activity considered in this work. Finally, as the fraction of elements heavier than protons is subdominant both in the acceleration region and the target interstellar medium (ISM), we opt to only consider high-energy neutrino production in proton-proton (pp) collisions.
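The quoted threshold follows from requiring the proton-photon center-of-mass energy to reach the photopion threshold, \(s\geq(m_{p}+m_{\pi})^{2}\). For a head-on collision with a photon of energy \(\varepsilon_{\gamma}\),

\[E_{p}^{\rm th}=\frac{m_{\pi}\left(2m_{p}+m_{\pi}\right)}{4\,\varepsilon_{\gamma}}\simeq 7\times 10^{7}\left(\frac{\varepsilon_{\gamma}}{1\text{ eV}}\right)^{-1}\text{ GeV},\]

i.e., of the order of 100 PeV for near-IR target photons, consistent with the statement above.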
## IV Neutrino production framework
Each (circum-)nuclear starburst region in the GOALS sample has a related core-collapse supernova rate (\(\mathcal{R}_{\rm SN}\), typically in units [yr\({}^{-1}\)]). This rate drives the high-energy proton injection rate (\(Q_{\rm p}\) [(GeV/c)\({}^{-3}\) cm\({}^{-3}\) s\({}^{-1}\)]) in the starburst volume (\(V_{\rm SBR}\) [pc\({}^{3}\)]). After injection, these cosmic rays reside for an average time (\(\tau\) [s]) in the volume. The interplay between the proton injection rate and the residence time determines the distribution of high-energy proton momenta (\(\mathcal{F}_{\rm p}\) [(GeV/c)\({}^{-3}\) cm\({}^{-3}\)]). The latter provides information on the available energy budget for charged pion production, which is required to compute the neutrino production rate (\(q_{\nu}\) [GeV\({}^{-1}\) cm\({}^{-3}\) s\({}^{-1}\)]). Finally, the expected neutrino flux (\(\Phi_{\nu}\) [GeV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\)]) is found by integrating the neutrino production rate over the starburst volume and taking into account the luminosity distance to the source (\(D_{L}\) [Mpc]).
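Schematically, the last step of this chain can be written as \(\Phi_{\nu}(E_{\nu})=\int_{V_{\rm SBR}}q_{\nu}\,dV/(4\pi D_{L}^{2})\), up to redshift corrections that are small for the local GOALS sample. A minimal sketch of this bookkeeping, with our own function names and assuming a uniform \(q_{\nu}\) throughout the starburst volume:

```python
import numpy as np

PC_CM = 3.0857e18   # parsec in cm
MPC_CM = 3.0857e24  # megaparsec in cm

def neutrino_flux(q_nu, V_sbr_pc3, D_L_Mpc):
    """Volume-integrate the production rate and dilute it over D_L.

    q_nu      : neutrino production rate [GeV^-1 cm^-3 s^-1], assumed uniform
    V_sbr_pc3 : starburst volume [pc^3]
    D_L_Mpc   : luminosity distance [Mpc]
    returns   : neutrino flux [GeV^-1 cm^-2 s^-1]
    """
    V_cm3 = V_sbr_pc3 * PC_CM**3
    return q_nu * V_cm3 / (4.0 * np.pi * (D_L_Mpc * MPC_CM)**2)
```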
In the following, we construct a phenomenological framework based on the above-mentioned parameters to compute a per-source starburst-driven neutrino flux for all GOALS galaxies. Our framework builds on the model of cosmic-ray transport in starburst nuclei presented in [89]. We contribute to this model by placing it in the context of local U/LIRGs as candidate neutrino sources. Moreover, our framework provides an approach to estimate the cosmic-ray luminosity per source via the IR luminosity, the AGN contribution to the IR luminosity, and the initial mass function of the studied region.
### Supernova rate in the starburst region
Optical emission from a supernova explosion is known to outshine entire galaxies. Given the large amounts of obscuring matter in GOALS galaxies, one cannot rely on optical counting to compute supernova rates in these galaxies. Nevertheless, there are numerous electromagnetic tracers (nearly) unaffected by obscuring matter that can be related to star-forming activity. There have been individual supernova counting experiments in U/LIRGs using near-IR emission (e.g. [90]) and radio emission (e.g. [91]). It is, however, not feasible to do this for the entire GOALS sample. As such, we opt to relate the total IR luminosity of a galaxy to the star-formation rate and subsequently relate this star-formation rate to the core-collapse supernova rate via scaling relations. We also take into account that part of the total IR luminosity
could be generated by AGN activity and by regions outside the central \(\sim\)100 pc of interest. This allows us to estimate the AGN-corrected nuclear supernova rate.
We use the IR emission as it is available for all GOALS galaxies (Section II), which in turn allows us to estimate the diffuse neutrino flux for the GOALS sample in Section VI. Radio emission is also an interesting tracer in this context, as it is a direct tracer of particle acceleration. However, such data are not uniformly available for all GOALS U/LIRGs. Moreover, it is not straightforward to connect the relativistic electron population, traced by synchrotron emission, with any associated proton population.
#### iv.1.1 Calibrating the supernova rate to the IR luminosity
The bolometric luminosity of young stellar populations is dominated by massive, short-lived, UV-bright stars. Therefore, the UV luminosity is a sensitive probe for recent star formation. The presence of obscuring matter can lead to severe attenuation of UV photons, which are reprocessed into thermal emission. IR and UV emission can therefore be used to trace obscured and unobscured star formation, respectively. A study of 135 GOALS U/LIRGs shows that the far-UV measured by GALEX contributes an average of \(\sim\)4 % to the overall star-formation rate [33]. Therefore, we opt to only use the IR luminosity to trace the star-formation rate in GOALS U/LIRGs.
In general, each tracer of the star-formation rate is mapped back to star formation via two main relations, i.e. the Initial Mass Function (IMF) and the Star-Formation History (SFH). The IMF describes the mass distribution of a population of stars at formation time within a volume of space. The IMF is typically well-described by a power law of the form \(\zeta(m)\propto m^{-\beta}\), with \(m\) the stellar mass and \(\beta\) the power-law index (see e.g. [92] for a review). The latter can take different values in different stellar-mass ranges. The SFH describes how the star-formation rate evolved over time. Given an IMF, an SFH, and a stellar-evolution model, it is possible to determine from simulations the calibration factor (\(A_{\rm IR}\)) that relates the star-formation rate to the IR luminosity.
To obtain the calibration factor \(A_{\rm IR}\) for different IMFs, we use the web-based software _Starburst99_ (SB99; available at www.stsci.edu/science/starburst99/docs/default.htm) [93; 94; 95; 96]. This software can model spectrophotometric properties of star-forming galaxies, such as the time-dependent Spectral Energy Distribution (SED) of a stellar population. Following the procedure outlined in [97] (M11 from here on), we assume that the entire Balmer continuum, i.e. stellar UV emission between 912 Å \(<\lambda<\) 3646 Å, is absorbed by dust and re-radiated as optically-thin thermal IR emission. This implies that the IR luminosity due to re-processed stellar emission, \(L_{\rm IR,SED}\), is obtained by integrating the Balmer range of the simulated SED. As such, the calibration factor is defined as
\[\left(\frac{\rm SFR_{SB99}}{M_{\odot}~{}{\rm yr}^{-1}}\right)=A_{\rm IR} \cdot\left(\frac{L_{\rm IR,SED}}{{\rm erg~{}s}^{-1}}\right)\,, \tag{2}\]
with \(\rm SFR_{SB99}\) the star-formation rate used as input to run the SB99 simulation. The value of the calibration factor in M11, assuming a Kroupa IMF (see Section IV.1.2), solar metallicity, and a constant SFH, is \(A_{\rm IR}=3.88\times 10^{-44}\). In a follow-up study ([98], M12 from here on), an empirical approach found a linear relation between the star-formation rate and \(L_{\rm IR}\) resulting in \(A_{\rm IR}=3.15\times 10^{-44}\). This empirical relation is quoted to be reliable within a factor of two. The calibration factor obtained in M11 via SB99 is therefore consistent with the empirical calibration factor in M12. In this work, \(A_{\rm IR}\) is computed in Section IV.1.2 for various types of IMFs using SB99, including the IMF of M11 for comparison.
SB99 also provides the total supernova rate as a function of time for the same stellar population. This allows us to compute a calibration between the supernova rate and the star-formation rate as
\[\left(\frac{\mathcal{R}_{\rm SN,SB99}}{{\rm yr}^{-1}}\right)=A_{\rm SFR}\cdot \left(\frac{\rm SFR_{SB99}}{M_{\odot}~{}{\rm yr}^{-1}}\right)\,, \tag{3}\]
with \(\mathcal{R}_{\rm SN,SB99}\) provided by SB99. Both calibration factors, \(A_{\rm IR}\) and \(A_{\rm SFR}\), are computed in a regime where the SED and supernova rate reach an equilibrium.
Combining both calibration factors gives \(\Lambda_{\rm IR}=A_{\rm SFR}\cdot A_{\rm IR}\), such that the total supernova rate in the whole galaxy is estimated as
\[\left(\frac{\mathcal{R}_{\rm SN}}{{\rm yr}^{-1}}\right)=\Lambda_{\rm IR}\cdot \left(\frac{L_{\rm IR}}{{\rm erg~{}s}^{-1}}\right)\,, \tag{4}\]
with \(L_{\rm IR}\) the IR luminosity of that galaxy.
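To make the unit handling of Eq. 4 explicit, the following minimal sketch evaluates the supernova rate for the fixed luminosity \(L_{\rm IR}=10^{11}L_{\odot}\) used in Table 1, with the Kroupa-IMF calibration \(\Lambda_{\rm IR}\) derived below; it reproduces the corresponding \(\mathcal{R}_{\rm SN}=0.21\) yr\({}^{-1}\) entry.

```python
# A minimal sketch of Eq. 4: total supernova rate from the IR luminosity.
L_SUN = 3.828e33        # solar luminosity [erg s^-1]
LAMBDA_IR = 5.43e-46    # Kroupa-IMF calibration (Table 1) [yr^-1 / (erg s^-1)]

def supernova_rate(L_IR, Lambda_IR=LAMBDA_IR):
    """Total supernova rate [yr^-1] for a galaxy with IR luminosity L_IR [erg s^-1]."""
    return Lambda_IR * L_IR

print(f"{supernova_rate(1e11 * L_SUN):.2f} yr^-1")   # -> 0.21 yr^-1, as in Table 1
```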
#### iv.1.2 Computing the calibration factors
To compute the value of the calibration factors \(A_{\rm IR}\) and \(A_{\rm SFR}\), the SB99 input parameters must be fixed. We consider a constant SFH and solar metallicity. As the parametrization of the IMF and its universality remain uncertain, it is not straightforward to select an appropriate IMF. We therefore investigate the effect of two different classes of IMFs. The first class consists of the Salpeter IMF (1955) [99], which has a single power-law exponent \(\beta=2.35\), and a Kroupa IMF (2001) [100], with \(\beta_{\rm low}=1.3\) for \(0.1<m/M_{\odot}<0.5\) and \(\beta_{\rm high}=2.3\) for \(0.5<m/M_{\odot}<100\). These so-called canonical IMFs are based on resolved stellar populations in the Milky Way and nearby galaxies. For the second class, we consider two top-heavy IMFs. Such IMFs predict relatively more heavy-mass stars than expected from canonical IMFs. The interest in a top-heavy IMF for starburst regions
is justified from both a theoretical and data-driven point of view. Theoretically, this is argued by the increased temperature in star-forming clouds due to the enhanced cosmic-ray density in starburst regions. This increase in temperature leads to a larger Jeans mass in the star-forming clouds, which suppresses the formation of low-mass stars [101]. This implies a change of the IMF shape towards a top-heavy IMF. In addition, high-resolution ALMA observations of nearby U/LIRGs suggest unusually low \({}^{13}\)C/\({}^{18}\)O isotope abundance ratios [102; 103; 104]. Short-lived massive stars (\(\gtrsim 8~{}M_{\odot}\)) are the predominant source of \({}^{18}\)O in the ISM, while the \({}^{13}\)C atom is convected into envelopes of long-lived, low-mass stars (\(\lesssim 8~{}M_{\odot}\)). Therefore, unusually small values of the abundance ratio hints towards relatively more short-lived massive stars than expected.
The SB99 simulations show that \(\sim\)60 Myr after the onset of star formation the supernova rate stabilizes, assuming a constant SFH. Therefore, all calibrations were computed beyond this timestamp. Table 1 shows the computed calibration factors at 100 Myr for a Salpeter, Kroupa, and two top-heavy IMFs. The latter have the same low-mass exponent \(\beta_{\rm low}=1.3\) as the canonical Kroupa IMF discussed earlier, but the high-mass exponent is taken to be \(\beta_{\rm high}=1.0\) and \(\beta_{\rm high}=1.5\). An exponent such as the latter has been suggested to explain the reionization of the intergalactic medium at \(z\lesssim 11\)[105]. The value \(\beta_{\rm high}=1.0\) is chosen as an arbitrary extreme case. First, it is concluded that \(A_{\rm IR}\) obtained from the Kroupa IMF is a factor 1.27 larger than the value obtained in M11, which uses the same IMF. This increase is still consistent with the empirical calibration between star-formation rate and total IR luminosity presented in M12. Second, comparing the \(A_{\rm IR}\) values obtained from all the investigated IMFs, it is concluded that \(A_{\rm IR}\) is significantly lower for top-heavy IMFs as compared to the canonical IMFs. For a fixed IR luminosity, this results in a lower star-formation rate for top-heavy IMFs as opposed to the star-formation rate obtained from the canonical IMFs. This difference is, however, less prominent for the \(\Lambda_{\rm IR}\) calibration factor. Compared to the canonical Kroupa IMF, the supernova rate for a fixed \(L_{\rm IR}=10^{11}L_{\odot}\) is a factor 1.31 lower for the top-heavy IMF with \(\beta_{\rm high}=1.5\) and a factor 1.75 lower for a top-heavy IMF with \(\beta_{\rm high}=1.0\). Although the total predicted supernova rate decreases for a top-heavy IMF, the average progenitor mass per supernova event is larger, and as a consequence also the average explosion energy per supernova event \(E_{\rm SN}\) (see e.g. [106]). This increase affects the supernova luminosity, i.e. \({\cal L}_{\rm SN}=E_{\rm SN}\cdot{\cal R}_{\rm SN}\), which is required to compute the high-energy particle budget available for neutrino production. In the following section, it is discussed how this effect is taken into account in this work.
#### iv.1.3 The effect of the average supernova progenitor mass on the supernova luminosity
To account for the increase in average progenitor mass when considering top-heavy IMFs, we take the normal Kroupa IMF as a benchmark and fix the average energy per supernova event to \(E_{\rm SN,bm}=10^{51}\) erg. We assume in this work that the explosion energy scales linearly with the average progenitor mass. Then, the average energy per supernova event for a different IMF is found by the scaling relation
\[E_{\rm SN}=\frac{\langle M_{\rm SN,IMF}\rangle}{\langle M_{\rm SN,bm}\rangle} \cdot E_{\rm SN,bm}={\cal M}\cdot E_{\rm SN,bm}. \tag{5}\]
\({\cal M}\) is the fraction of the typical mass per supernova event for the chosen IMF, \(\langle M_{\rm SN,IMF}\rangle\), over the typical mass per supernova event for the benchmark case, \(\langle M_{\rm SN,bm}\rangle\). Based on our SB99 simulations, we find \({\cal M}=1.37\) for a top-heavy IMF with \(\beta_{\rm high}=1.5\) and \({\cal M}=2.01\) for the top-heavy IMF with \(\beta_{\rm high}=1.0\). Taking this factor into account, it is found that the supernova luminosity \({\cal L}_{\rm SN}\) for the top-heavy IMF with \(\beta_{\rm high}=1.5\) and the benchmark case differ by 5%. For the top-heavy IMF with \(\beta_{\rm high}=1.0\) this correction gives a supernova luminosity which is \(\sim\)15 % larger than found for the canonical Kroupa IMF.
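As a rough cross-check of these SB99-derived factors, the mean CCSN progenitor mass can be estimated analytically by averaging the stellar mass over the high-mass tail of the IMF, assuming (as a simplification) that all stars between 8 and 100 \(M_{\odot}\) explode and ignoring stellar evolution. This crude estimate lands within \(\sim\)10 % of the SB99 values \(\mathcal{M}=1.37\) and \(\mathcal{M}=2.01\):

```python
# A crude analytic estimate of the mass-scaling factor M in Eq. 5: the mean CCSN
# progenitor mass for a power-law IMF zeta(m) ~ m^-beta over 8-100 M_sun.
# This ignores stellar evolution, so it only approximates the SB99-based values.
from scipy.integrate import quad

def mean_progenitor_mass(beta_high, m_lo=8.0, m_hi=100.0):
    num, _ = quad(lambda m: m**(1.0 - beta_high), m_lo, m_hi)   # integral of m * zeta(m)
    den, _ = quad(lambda m: m**(-beta_high), m_lo, m_hi)        # integral of zeta(m)
    return num / den

m_benchmark = mean_progenitor_mass(2.3)   # canonical Kroupa high-mass slope
for beta in (1.5, 1.0):
    print(f"beta_high={beta}: M ~ {mean_progenitor_mass(beta) / m_benchmark:.2f}")
# -> ~1.47 and ~1.90, versus 1.37 and 2.01 from the SB99 simulations
```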
#### iv.1.4 Correcting the IR luminosity for AGN contamination and extended IR emission
The SB99 simulations do not take into account AGN activity. However, strong AGN activity in U/LIRGs could heat the matter in the (circum-)nuclear region around the supermassive black hole. This heating can significantly contribute to the observed IR luminosity of the host galaxy. Therefore, using the total IR luminosity as a tracer for the star-formation rate in the presence of a strong AGN can significantly overestimate the actual supernova rate. To correct the IR luminosity for this, the relative AGN contribution to the bolometric luminosity \(\langle\alpha_{\rm AGN}\rangle\) (see Section II) is used. This is justified since, by definition of U/LIRGs, \(L_{\rm bol}\sim L_{\rm IR}\) (e.g. [68]). Furthermore, in this work, we are interested in the nuclear supernova rate in the central \(\sim\)100 pc. Therefore, the AGN-corrected IR luminosity of the nuclear region
\begin{table}
\begin{tabular}{c c c c c} & \(A_{\rm IR}\times 10^{44}\) & \(A_{\rm SFR}\) & \(\Lambda_{\rm IR}\times 10^{46}\) & \(\mathcal{R}_{\rm SN}\) [yr\({}^{-1}\)] \\ \hline Salpeter & 7.48 & 0.008 & 5.98 & 0.23 \\ Kroupa & 4.93 & 0.012 & 5.43 & 0.21 \\ TH \(\beta_{\rm high}=1.5\) & 1.54 & 0.027 & 4.15 & 0.16 \\ TH \(\beta_{\rm high}=1.0\) & 1.24 & 0.026 & 3.24 & 0.12 \\ \end{tabular}
\end{table}
Table 1: Calibration factors for different IMFs at 100 Myr, using a solar metallicity and a constant SFH. The supernova rate \({\cal R}_{\rm SN}\) is computed for a fixed IR luminosity of \(L_{\rm IR}=10^{11}\)\(L_{\odot}\) via Eq. 4.
(\(L_{\rm IR,nuclear}\)) is required rather than the total IR luminosity of the galaxy. To take this into account, the factor \({\cal G}\in(0,1]\) is introduced, which describes the fraction of the IR luminosity generated by nuclear starburst activity. As such, \(L_{\rm IR,nuclear}={\cal G}\cdot([1-\langle\alpha_{\rm AGN}\rangle]\cdot L_{\rm IR})\). IR observations of U/LIRGs show that systems with larger \(L_{\rm IR}\) tend to have more centrally concentrated emission (e.g. [19; 20]). Therefore, \({\cal G}\) is likely to be closer to unity for systems with larger \(L_{\rm IR}\). Targeted observations of four GOALS LIRGs show that \({\cal G}\gtrsim 0.5\) for these galaxies [107].
The nuclear AGN-corrected supernova rate per resolved galaxy in the GOALS sample is therefore calculated as
\[\left(\frac{{\cal R}_{\rm SN}}{{\rm yr}^{-1}}\right)=\Lambda_{\rm IR}\cdot \left(\frac{{\cal G}\cdot[1-\langle\alpha_{\rm AGN}\rangle]\cdot L_{\rm IR}}{ {\rm erg~{}s}^{-1}}\right)\,. \tag{6}\]
Using Eq. 6, we can estimate the supernova rates in each of the 229 individual GOALS galaxies targeted in this work (see Section II). To do so, we use the IR luminosity of the galaxy and its corresponding \(\langle\alpha_{\rm AGN}\rangle\)-value, discussed in Section II. Then, for \(\Lambda_{\rm IR}\) = \(5.43\times 10^{-46}\) (Table 1) and \({\cal G}=1\) in all galaxies, we find a median supernova rate of \({\cal R}_{\rm SN}\) = 0.39 yr\({}^{-1}\), a minimum supernova rate of \({\cal R}_{\rm SN}\) = 0.02 yr\({}^{-1}\), and a maximum supernova rate of \({\cal R}_{\rm SN}\) = 6.84 yr\({}^{-1}\).
### Proton injection rate
Cosmic-ray acceleration via diffusive shock acceleration is expected along the forward shock in Core-Collapse Supernova (CCSN) remnants. This mechanism gives rise to a power-law differential momentum distribution of accelerated particles. Therefore, a \(p^{-\gamma_{\rm SN}}\) power-law relation between the proton injection rate (\(Q_{\rm p}\)) and the injected momentum \(p\) is adopted. In addition, an exponential cutoff is considered at the maximum momentum \(p_{\rm max}\) achieved in the acceleration process. The values of both \(\gamma_{\rm SN}\) and \(p_{\rm max}\) are discussed below. The total injection rate of high-energy protons per unit volume due to all CCSN in a (circum-)nuclear starburst region is then expressed as
\[Q_{\rm p}=\frac{N_{C}}{V_{\rm SBR}}\left[\frac{p}{m_{p}c}\right]^{-\gamma_{ \rm SN}}e^{\frac{-p}{p_{\rm max}}}. \tag{7}\]
\(N_{C}\) is the normalization constant to be fixed by the supernova rate in the starburst region, \(V_{\rm SBR}\) is the volume of the region under consideration, and \(m_{\rm p}\) is the proton mass. In the following sections, each of the parameters of Eq. 7 is discussed in more detail.
#### iv.2.1 Geometry of the starburst region
Hydrodynamic simulations of mergers between gas-rich galaxies predict formation of nuclear gas disks on scales of \(\sim\)10\(-\)100 pc (e.g. [108]). Observational evidence for such gas disks in the nuclear region of GOALS U/LIRGs is provided by a survey targeting 17 nearby U/LIRGs [46]. Within this gas-disk configuration, stars are formed, which eventually explode as supernovae and thereby inject cosmic rays into the nuclear ISM. Based on these simulations and observations, we opt for a disk geometry to model the volume in which cosmic rays propagate. This disk is parameterized by a radius \(R_{\rm SBR}\) and a scale height \(H_{\rm SBR}\). This implies that the volume of the starburst region is computed as \(V_{\rm SBR}=2H_{\rm SBR}\pi R_{\rm SBR}^{2}\), with \(2H_{\rm SBR}\) the total thickness of the nuclear disk.
#### iv.2.2 Normalizing the injection rate to the cosmic-ray luminosity
The normalization constant \(N_{C}\) of the proton injection rate in Eq. 7 is determined by imposing
\[{\cal L}_{\rm CR}=\int_{p_{\rm min}}^{p_{\rm max}}4\pi p^{2}\cdot N_{C}\cdot \left(\frac{p}{m_{\rm p}c}\right)^{-\gamma_{\rm SN}}\cdot\,e^{\frac{-p}{p_{ \rm max}}}\cdot{\cal T}(p)\ {\rm d}p. \tag{8}\]
\({\cal T}(p)=\sqrt{p^{2}c^{2}+m_{\rm p}^{2}c^{4}}-m_{\rm p}c^{2}\) is the kinetic energy of a single cosmic-ray particle, and \({\cal L}_{\rm CR}\) is the total cosmic-ray luminosity due to CCSN activity in the nuclear starburst region. The minimum proton momentum is fixed to \(p_{\rm min}=0.1\) GeV/\(c\) (the value of \(N_{C}\) is only weakly dependent on this choice for \(p_{\rm min}\lesssim 0.1\) GeV/\(c\)) and the cosmic-ray luminosity \({\cal L}_{\rm CR}\) is computed as
\[{\cal L}_{\rm CR}=\eta_{\rm SN}\cdot{\cal R}_{\rm SN}\cdot E_{\rm SN}=\eta_{\rm tot }\cdot L_{\rm IR}. \tag{9}\]
The total CCSN rate \({\cal R}_{\rm SN}\) and the kinetic energy output per supernova \(E_{\rm SN}\) are computed as discussed in Section IV.1. The conversion factor \(\eta_{\rm SN}\) determines the fraction of the kinetic energy of the outflow that goes into the acceleration of cosmic-ray particles. The observed cosmic-ray spectrum at Earth up to \(\sim\)3 PeV can be explained with \(\eta_{\rm SN}\simeq 0.10{-}0.30\) for the bulk of the supernovae in the Milky Way (e.g. [109; 110] for reviews). Moreover, kinetic simulations show that this energy transfer can be as much as \(\eta_{\rm SN}\simeq 0.10{-}0.20\) [111]. This indicates that the conversion factor can lie anywhere in the range \(\eta_{\rm SN}\simeq 0.10{-}0.30\), as long as predictions relying on it remain compatible with observations. The factor \(\eta_{\rm tot}\) describes the fraction of the IR luminosity which is related to the cosmic-ray luminosity due to starburst activity.
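As an illustration of how Eqs. 8 and 9 tie the spectral normalization to the supernova activity, the sketch below solves Eq. 8 numerically for \(N_{C}\) on a logarithmic momentum grid; the input values for \(\eta_{\rm SN}\), \(\mathcal{R}_{\rm SN}\), and \(\gamma_{\rm SN}\) are illustrative rather than fits to any particular source.

```python
# A minimal sketch of Eqs. 8-9: fix N_C so the injected proton spectrum carries
# the cosmic-ray luminosity L_CR = eta_SN * R_SN * E_SN. Momenta are expressed
# in units of m_p*c; all constants are in CGS.
import numpy as np

M_P_C = 5.006e-14     # m_p * c [g cm s^-1]
M_P_C2 = 1.503e-3     # m_p * c^2 [erg]
YR = 3.156e7          # one year [s]

def fix_N_C(L_CR, gamma_SN, p_max_GeV=1e8, p_min_GeV=0.1):
    """Return N_C [CGS] such that Eq. 8 integrates to L_CR [erg s^-1]."""
    x_min, x_max = p_min_GeV / 0.938, p_max_GeV / 0.938   # p / (m_p c)
    x = np.logspace(np.log10(x_min), np.log10(x_max), 4000)
    kinetic = M_P_C2 * (np.sqrt(1.0 + x**2) - 1.0)        # T(p) [erg]
    integrand = (4.0 * np.pi * (M_P_C * x)**2 * x**(-gamma_SN)
                 * np.exp(-x / x_max) * kinetic * M_P_C)  # trailing M_P_C: dp = m_p*c dx
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return L_CR / integral

# e.g. eta_SN = 0.10, R_SN = 0.5 yr^-1, E_SN = 1e51 erg -> L_CR ~ 1.6e42 erg s^-1
N_C = fix_N_C(0.10 * (0.5 / YR) * 1e51, gamma_SN=4.11)
```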
#### iv.2.3 Spectral index of proton injection
Diffusive shock acceleration in the presence of strong shock waves, such as those driven by supernova ejecta, predicts \(Q_{\rm p}\propto p^{-4}\). However, observations of Galactic supernova events typically require softer spectra to model their gamma-ray spectra (e.g. [112; 113]). The value of \(\gamma_{\rm SN}\) for a GOALS galaxy can be estimated from the spectral index (\(\Gamma\)) of the starburst-driven hadronic gamma-ray spectrum of that galaxy (see e.g. [114; 89]). The value of \(\Gamma\) can in turn be obtained by fitting the observed gamma-ray flux \(\Phi_{\gamma}\) of a galaxy with a function of the form \(\Phi_{\gamma}\propto E^{-\Gamma}\). Both spectral indices are related as \(\Gamma=\gamma_{\rm SN}-2\), because \(\gamma_{\rm SN}\) is associated with momentum space and \(\Gamma\) with energy space. As both gamma rays and neutrinos are expected to follow the same spectral shape, the spectral index \(\gamma\) of the neutrino flux \(\Phi_{\nu}\) can also be estimated from \(\Gamma\). Note that \(\Phi_{\nu}\propto E^{-\gamma}\), with \(\gamma=\gamma_{\rm SN}-2\). However, only eleven star-forming galaxies are identified as gamma-ray sources at this time, including three LIRGs and one ULIRG, all four in GOALS [77]. It is therefore not possible to constrain \(\gamma_{\rm SN}\) systematically for individual galaxies in the GOALS sample.
#### iv.2.4 Maximum cosmic-ray momentum
The maximum proton energy reached in the acceleration process (\(E_{\rm max}=p_{\rm max}c\)) determines up to which energy neutrinos are significantly produced. About 5 % of the primary proton energy is transferred to the high-energy neutrino in an inelastic collision. As such, to produce neutrinos of \(\sim\)1 PeV, as observed with IceCube, a cosmic accelerator should be able to accelerate particles up to \(E_{\rm max}\sim 100\) PeV. This reduces to \(E_{\rm max}\sim 1\)\(-\)10 PeV to produce neutrinos of \(\sim\)100 TeV.
Observed cosmic rays with energies up to \(\sim\)3 PeV are generally attributed to galactic supernovae. This is based on energy considerations and GeV\(-\)TeV gamma-ray observations of supernova remnants [115; 113]. Modelling efforts are also in favor of particles reaching energies of \(\sim\)10\(-\)100 PeV in supernova acceleration. This relies on the presence of a sufficiently strong magnetic field and/or the presence of a magnetic-plasma wind of the progenitor star [116; 117; 118; 119]. In (circum-)nuclear starburst regions of U/LIRGs the magnetic field strength is significantly amplified (e.g. [120]) and the newly formed stars could on average be more massive than in normal star-forming regions. The latter implies an increase in the average explosion energy per supernova and an enhancement of the stellar-mass loss via stellar winds. This, in combination with the amplified magnetic field, indicates on average larger \(p_{\rm max}\) values than in star-forming galaxies such as the Milky Way.
### Cosmic-ray propagation and calorimetric conditions
To model the confinement of the supernova-injected particles in the nuclear disk, a leaky-box model is assumed. This model dictates that the injected cosmic rays are allowed to move freely in the starburst volume and have a non-zero chance to escape the boundaries. The rate at which particles escape these boundaries is defined as the inverse of the average escape time \(\tau_{\rm esc}\). We consider advection via a galactic-scale outflow and spatial diffusion as cosmic-ray removing processes. The average escape time \(\tau_{\rm esc}\) in a particular starburst region is thus computed as
\[\tau_{\rm esc}=\left[\tau_{\rm diff}^{-1}+\tau_{\rm adv}^{-1}\right]^{-1}\, \tag{10}\]
with \(\tau_{\rm diff}\) and \(\tau_{\rm adv}\) the average timescales over which diffusion and advection occur, respectively. In addition, cosmic rays can also participate in inelastic pp-interactions before being removed from the starburst volume. These catastrophic collisions result in energy loss over an average timescale \(\tau_{\rm pp}\). Continuous energy losses such as Coulomb interactions and ionization also affect the propagation of cosmic rays. However, for particle energies larger than 1 GeV, such energy losses are negligible as opposed to the catastrophic interactions (e.g. [121]). Therefore, the continuous energy-loss processes can be safely neglected for the purposes of this work as a primary cosmic-ray energy of \(E\gtrsim 1\) PeV is required to produce neutrinos at the level of IceCube observations. Moreover, for typical magnetic-field strengths at the scale of the starburst region, proton-synchrotron losses are negligible.
The average total time a particle spends in the starburst region \(\tau\) is then computed as
\[\tau=\left[\tau_{\rm diff}^{-1}+\tau_{\rm adv}^{-1}+\tau_{\rm pp}^{-1}\right] ^{-1}. \tag{11}\]
The diffusion timescale, advection timescale, and the energy-loss timescale due to inelastic pp-collisions are discussed in more detail in the following sections.
#### iv.3.1 Diffusion
Cosmic rays injected by supernova activity will interact with the turbulent magnetic field in the starburst region. This leads to a random walk driven by the Larmor radius \(r_{\rm L}\) of the particle. Eventually, the random walk leads to diffusion from the central starburst region, assuming no other processes are affecting the propagation. The timescale over which diffusion happens is therefore conservatively approximated as
\[\tau_{\rm diff}=\frac{H_{\rm SBR}^{2}}{D}\, \tag{12}\]
with \(D\) the diffusion coefficient, which depends on the magnetic field strength \(B\) in the central starburst region,
and \(H_{\rm{SBR}}\) the scale height of the nuclear disk introduced in Section IV.2.1.
Following [89] we choose a Kolmogorov-type diffusion in the starburst volume. As such, the diffusion coefficient in Eq. 12 is parameterized as
\[D=\frac{r_{L}c}{3\mathcal{F}(k)}\, \tag{13}\]
which is based on the quasi-linear formalism. The value of the diffusion coefficient \(D\) scales with the relativistic gyroradius \(r_{\rm L}=p/(qB)\) of the cosmic ray, with \(q\) the charge, \(p\) the momentum, and \(B\) the magnetic field strength in the central starburst region. The strength of the magnetic field in the central region of a starburst galaxy is typically \(\gtrsim 100\)\(\mu\)G and can even reach a few mG [120]. For all U/LIRGs in this work, we fix \(B=250\)\(\mu\)G, consistent with targeted observations of NGC 3690 (see Section V). Furthermore, the diffusion coefficient is also affected by the speed of the cosmic ray, which is fixed to the speed of light \(c\). \(\mathcal{F}(k)\) is the normalized energy density per unit logarithmic wave number \(k\) in the turbulent magnetic field. This parameter is expressed as \(\mathcal{F}(k)=k\cdot W(k)=k\cdot W_{0}\cdot(k/k_{0})^{-d}\) and normalized as
\[\int_{k_{0}}^{\infty}\mathcal{F}(k)\mathrm{d}(\ln k)=\int_{k_{0}}^{\infty}W_{0 }\cdot\left(\frac{k}{k_{0}}\right)^{-d}\mathrm{d}k=\eta_{B}. \tag{14}\]
Here \(\eta_{B}=(\delta B/B)^{2}\) is the turbulence ratio with \(\delta B\) the turbulent component of the magnetic field, and \(k_{0}^{-1}=1\) pc is the characteristic length scale at which turbulence is injected. In this work, we consider cosmic-ray interactions with large-scale Kolmogorov turbulence such that \(d=5/3\) and \(\eta_{B}=1\).
To evaluate the diffusion timescale \(\tau_{\rm{diff}}\), it is assumed that cosmic rays predominantly interact with the resonant mode \(k_{\rm{res}}=1/r_{L}\). Then, \(\mathcal{F}(k_{\rm{res}})\propto k_{\rm{res}}^{-\frac{2}{3}}=r_{L}^{\frac{2}{3}}\propto p^{\frac{2}{3}}\). As a result, the diffusion coefficient scales with momentum as \(D(p)\propto p^{\frac{1}{3}}\) such that \(\tau_{\rm{diff}}\propto p^{-\frac{1}{3}}\).
#### iv.3.2 Advection
Galactic-scale outflows in starburst galaxies are commonly observed (see e.g. [122] for a review). A possible driving mechanism for such an outflow is the mechanical energy transfer to the nuclear ISM via stellar winds and supernova explosions. These interactions induce strong shocks that heat and pressurize the ISM. In addition, AGN activity and cosmic rays are also proposed as driving mechanisms (e.g. [122]). As a result of the energy transfer to the ISM, a cavity of very hot gas is formed. Due to the pressure imbalance between the nuclear region and the ISM of the host galaxy, this gas starts expanding above and below the galactic disk. Once the scale height of the galactic disk is reached, the wind breaks out into the galactic halo [123] and thereby advects part of the cosmic-ray population out of the nuclear region. As such, these cosmic rays will not contribute to the high-energy neutrino production in the nuclear region. It should be noted that advected cosmic rays could be accelerated and converted to high-energy neutrinos within the wind [124]. This contribution is not considered within our framework.
The velocity profile of the expanding bubble is such that the wind speed increases as the edge of the nuclear region is reached. As the expanding wind breaks out into the galactic halo, the terminal velocity (\(v_{\infty}\)) is quickly reached [123]. The wind speed at the point of cosmic-ray advection (\(v_{\rm{adv}}\)) is therefore bound by the terminal velocity, i.e. \(v_{\rm{adv}}<v_{\infty}\).
The velocity of galactic-scale outflows is inferred from spectral line emission of the wind. Strong winds with speeds of 500\(-\)1500 km s\({}^{-1}\) have been detected by _Herschel_ in ULIRGs [125]. These winds are also observed in LIRGs. ALMA observations of the LIRG NGC 3256, for example, reveal a molecular outflow from the northern nuclear disk. This outflow is part of a starburst-driven superwind with a maximum velocity \(>\) 750 km s\({}^{-1}\)[126]. As indicated above, the advection speed is likely smaller than these terminal velocities.
The advection timescale \(\tau_{\rm{adv}}\) is approximated as the ratio of the scale height of the nuclear disk \(H_{\rm{SBR}}\) and the advection speed \(v_{\rm{adv}}\),
\[\tau_{\rm{adv}}=\frac{H_{\rm{SBR}}}{v_{\rm{adv}}}. \tag{15}\]
#### iv.3.3 Energy-loss timescale
To evaluate the rate at which cosmic rays lose their energy by inelastically colliding with the nuclear ISM, we make the assumption that the cosmic rays encounter the average ISM proton density in the nuclear region (\(n\)). The rate at which high-energy protons interact in the starburst region via inelastic pp-collisions then scales with the average ISM proton density \(n\), the cross-section of inelastic pp-collisons (\(\sigma_{\rm{pp}}\)), and the velocity of the cosmic ray. As the cosmic-ray protons of interest are highly relativistic, the speed of the cosmic rays is fixed to the speed of light \(c\). The inelasticity of a collision is fixed to \(\zeta=0.5\)[127]. As such, the timescale for energy loss via inelastic pp-scattering can be expressed as
\[\tau_{\rm{pp}}=\frac{1}{n\cdot\sigma_{\rm{pp}}(E)\cdot c\cdot\zeta}. \tag{16}\]
For the cross-section we use the parameterization given in [128], such that \(\sigma_{\rm{pp}}=(34.3+1.88L+0.25L^{2})\) mb with \(L=\ln(E/1\ {\rm TeV})\). This expression is constructed from accelerator and simulation data.
#### iv.3.4 Calorimeter conditions
A starburst region efficiently converts high-energy protons into neutrinos if the energy-loss timescale is significantly shorter than the timescale over which diffusion and advection occur. In that case, the starburst region acts as a calorimeter. To quantify the calorimeter conditions, the parameter \(\mathcal{C}_{\rm pp}\in[0,1]\) is introduced as
\[\mathcal{C}_{\rm pp}=\frac{\tau}{\tau_{\rm pp}}=\frac{f_{\rm pp}}{1+f_{\rm pp }}. \tag{17}\]
The parameter \(f_{\rm pp}\) is the effective optical depth for pp-interactions, also known as the pp-collision efficiency, and is defined as the ratio of \(\tau_{\rm esc}\) to \(\tau_{\rm pp}\). As such, if the pp efficiency is large, secondary particle production will dominate over particle escape, which corresponds to \(\mathcal{C}_{\rm pp}\to 1\). Conversely, if particle escape dominates, then \(\mathcal{C}_{\rm pp}\ll 1\). Between these two extremes, \(\mathcal{C}_{\rm pp}>0.5\) indicates the conditions for which \(\tau_{\rm pp}\) is on average the shortest timescale in the system.
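The interplay of Eqs. 10–17 can be made concrete with a short numerical sketch. Evaluating the three timescales for a 10 PeV proton with the parameter values later adopted in the NGC 3690E case study (Section V) yields \(\mathcal{C}_{\rm pp}\approx 0.94\), consistent with the value \(\mathcal{C}_{\rm pp}=0.95\) quoted there (the small difference traces back to rounding of the physical constants):

```python
# A minimal sketch of Eqs. 10-17 for a single proton energy, assuming Kolmogorov
# turbulence (d = 5/3, eta_B = 1, k0^-1 = 1 pc) as in the text. Parameter values
# follow the NGC 3690E case study of Section V.
import numpy as np

PC, C, YR = 3.086e18, 2.998e10, 3.156e7    # parsec [cm], light speed [cm/s], year [s]
E = 10e15 * 1.602e-12                      # 10 PeV proton energy [erg]

H_sbr = 150.0 * PC    # scale height of the nuclear disk [cm]
B = 250e-6            # magnetic field strength [G]
n = 2500.0            # nuclear ISM proton density [cm^-3]
v_adv = 500e5         # advection speed [cm/s]
zeta = 0.5            # pp inelasticity

r_L = E / (4.803e-10 * B)                        # Larmor radius of a relativistic proton
F_res = (2.0 / 3.0) * (r_L / PC)**(2.0 / 3.0)    # F(k_res) for Kolmogorov turbulence
D = r_L * C / (3.0 * F_res)                      # Eq. 13

tau_diff = H_sbr**2 / D                          # Eq. 12
tau_adv = H_sbr / v_adv                          # Eq. 15
L = np.log(E / 1.602)                            # ln(E / 1 TeV); 1 TeV = 1.602 erg
sigma_pp = (34.3 + 1.88 * L + 0.25 * L**2) * 1e-27   # inelastic pp cross-section [cm^2]
tau_pp = 1.0 / (n * sigma_pp * C * zeta)         # Eq. 16

tau_esc = 1.0 / (1.0 / tau_diff + 1.0 / tau_adv)     # Eq. 10
f_pp = tau_esc / tau_pp
C_pp = f_pp / (1.0 + f_pp)                       # Eq. 17
print(f"tau_diff={tau_diff/YR:.1e} yr, tau_adv={tau_adv/YR:.1e} yr, "
      f"tau_pp={tau_pp/YR:.1e} yr, C_pp={C_pp:.2f}")
```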
Figure 4 shows the parameter space of \(\mathcal{C}_{\rm pp}\) for variable ISM proton density in the nuclear region (\(n\)) and advection speed (\(v_{\rm adv}\)). For this, a 10 PeV proton is assumed to propagate in a nuclear disk with scale height \(H_{\rm SBR}\) = 150 pc, taking a Kolmogorov-type diffusion model. The dash-dotted line shows for which combinations of \(n\) and \(v_{\rm adv}\) a value of \(\mathcal{C}_{\rm pp}=0.5\) is obtained. The black hatched region indicates \(\mathcal{C}_{\rm pp}\) values for typical ISM proton densities in the nuclear region of U/LIRGs, \(n\gtrsim 1000\) cm\({}^{-3}\) ([129], see also Section V), and advection speeds between 500 km s\({}^{-1}\) and 1500 km s\({}^{-1}\). Note that, although terminal velocities of \(\sim\)1500 km s\({}^{-1}\) are observed in U/LIRGs, it is unlikely that the advection speed \(v_{\rm adv}\) is equally high (Section IV.3.2).
In [7], for example, it is shown that GOALS U/LIRGs in a late or final merger stage are on average more obscured. As galaxies merge, gas and dust are funneled towards the central regions, making them more compact and obscured. As such, ULIRGs, which are nearly always in the final stage of a merger, are expected to be located at the high end of the particle densities indicated in Figure 4. For LIRGs, which are observed in every merger stage, this could strongly depend on how advanced the merger is. In any case, high-energy protons are expected to lose a significant fraction of their initial energy in the nuclear region of U/LIRGs. For comparison, we also investigate typical conditions in non-U/LIRG starburst galaxies. Prototypical examples of such galaxies are the nearby starburst galaxies M82 and NGC 253. This type of starburst galaxy typically has a lower ISM proton density in the nuclear region, i.e. \(n\sim 100\) cm\({}^{-3}\) (e.g. [130; 131; 132; 89]). The white hatched region in Figure 4 shows \(\mathcal{C}_{\rm pp}\) values corresponding to ISM proton number densities between 100 and 500 cm\({}^{-3}\), and the same advection speeds as investigated for the U/LIRGs. Compared to U/LIRGs, non-U/LIRG starburst galaxies are expected to be less efficient calorimeters on average.
The scale height of the nuclear disk (\(H_{\rm SBR}\)) also affects \(\mathcal{C}_{\rm pp}\). Figure 5 shows how \(\mathcal{C}_{\rm pp}\) is affected when varying \(H_{\rm SBR}\) between 50 and 400 pc for a starburst region with a nuclear ISM density of \(n\) = 350 cm\({}^{-3}\), \(n\) = 1000 cm\({}^{-3}\), and \(n=5000\) cm\({}^{-3}\). The range of scale heights is consistent with the values derived for nearby U/LIRGs [46]. The ISM particle density values are chosen to model a wide range of starburst conditions. For each of these starburst configurations, advection speeds of \(v_{\rm adv}=500\) km s\({}^{-1}\) and \(v_{\rm adv}=1500\) km s\({}^{-1}\) are considered. The results show that the calorimeter conditions are robust against changes in \(H_{\rm SBR}\) and \(v_{\rm adv}\) if the particle density in the nuclear region is high. This statement also applies to changes in the diffusion model.
### From cosmic-ray injection to neutrino production at the source
The distribution of high-energy proton momenta in the nuclear region of U/LIRGs (\(\mathcal{F}_{\rm p}\)) is determined by the interplay between the injection rate of high-energy protons by supernovae and subsequent particle transport, as described above. Assuming a spatially homogeneous starburst region in a steady state, the momentum distribution of high-energy protons in the nuclear region is expressed as
\[\mathcal{F}_{\rm p}=Q_{\rm p}\cdot\tau=Q_{\rm p}\cdot\tau_{\rm pp}\cdot \mathcal{C}_{\rm pp}. \tag{18}\]
Figure 4: Parameter space of \(\mathcal{C}_{\rm pp}\) for variable ISM proton density in the nuclear region (\(n\)) and advection speed (\(v_{\rm adv}\)). The white dash-dotted line shows the combinations of \(v_{\rm adv}\) and \(n\) for which \(\mathcal{C}_{\rm pp}=0.5\). The white and black hatched regions indicate expected \(\mathcal{C}_{\rm pp}\) values for nuclear starburst regions in non-U/LIRGs and U/LIRGs, respectively.

High-energy protons can collide inelastically with a proton in the nuclear ISM. Such collisions produce, amongst other particles, charged (\(\pi^{\pm}\)) and neutral pions (\(\pi^{0}\)). The charged pions decay as
\[\begin{cases}\pi^{+}\to\mu^{+}+\nu_{\mu}^{(1)}\to e^{+}+\nu_{e}+\bar{\nu}_{\mu}^{ (2)}+\nu_{\mu}^{(1)}\\ \pi^{-}\to\mu^{-}+\bar{\nu}_{\mu}^{(1)}\to e^{-}+\bar{\nu}_{e}+\nu_{\mu}^{(2)}+ \bar{\nu}_{\mu}^{(1)}\,\end{cases} \tag{19}\]
and the neutral pions decay to gamma rays, \(\pi^{0}\to\gamma\gamma\). To compute the neutrino production rate (\(q_{\nu}\)) from the energy distribution of high-energy protons, \(n_{\rm p}(E)\,{\rm d}E=4\pi p^{2}{\cal F}_{\rm p}(p)\,{\rm d}p\), we follow the approach outlined in [128]. The authors provide analytical fits to neutrino spectra, obtained from meson spectra simulated with the Monte Carlo generators SIBYLL and QGSJET. Doing so, the neutrino production rate \(q_{\nu}\) at the source, including neutrinos and antineutrinos, is expressed as
\[q_{\nu}=cn\int_{0}^{1}F_{\nu}\left(x,\frac{E_{\nu}}{x}\right)\sigma_{\rm pp} \left(\frac{E_{\nu}}{x}\right)n_{\rm p}\left(\frac{E_{\nu}}{x}\right)\frac{{ \rm d}x}{x}\, \tag{20}\]
where \(x=E_{\nu}/E_{\rm p}\) and \(F_{\nu}=F_{\nu_{\mu}}^{(1)}+F_{\nu_{\mu}}^{(2)}+F_{\nu_{e}}\) are the neutrino distribution functions corresponding to the decays given in Eq. 19. The spectra of the muon neutrinos and electron neutrinos obtained from muon decay are described by \(F_{\nu_{\mu}}^{(2)}\) and \(F_{\nu_{e}}\), respectively. The former is described by the same function that describes electrons produced in muon decay, \(F_{e}\). Moreover, \(F_{\nu_{e}}\approx F_{e}\) within 5 %. As such, we use \(F_{\nu}=F_{\nu_{\mu}}^{(1)}+2\cdot F_{e}\). The distribution functions for \(F_{e}\) and for muon neutrinos produced in pion decay, \(F_{\nu_{\mu}}^{(1)}\), correspond to Eqs. (62\(-\)65) and Eqs. (66\(-\)69) in [128], respectively. Note that these analytical fits can only be used for secondaries with energies larger than 100 GeV.
Integrating the neutrino-production rate over the volume of the starburst region yields the neutrino luminosity. Therefore, the all-flavour neutrino flux at Earth from a single GOALS galaxy (\(\Phi_{\nu}\)), containing neutrinos and antineutrinos, is computed as
\[\Phi_{\nu}(E,z)=\frac{V_{\rm SBR}}{4\pi D_{L}^{2}}\cdot q_{\nu}(E(1+z))\, \tag{21}\]
with \(D_{L}\) the luminosity distance to the galaxy and \(z\) its redshift. Note that \(\Phi_{\nu}\propto E^{-\gamma}\) with \(\gamma=\gamma_{\rm SN}-2\).
Inelastic pp-interactions at the source result in a neutrino-flavor ratio given by \((\nu_{e}:\nu_{\mu}:\nu_{\tau})=(1:2:0)\). However, the combination of propagating over extragalactic distances and neutrino oscillations leads to an approximately equal distribution among the three neutrino flavors. As such, the flavour ratio at Earth is expected to be \((\nu_{e}:\nu_{\mu}:\nu_{\tau})\approx(1:1:1)\)[133]. This implies that the single-flavour neutrino flux at Earth (\(\Phi_{\nu_{j}}\)), with \(j\in\{e,\mu,\tau\}\), is obtained by dividing the all-flavour neutrino flux by a factor three. The muon-neutrino flux is of particular interest in the search for the origin of astrophysical neutrinos observed with IceCube, as discussed in Section I.
In conclusion, within the framework considered in this study, the neutrino flux depends on the starburst-specific parameters
\[\Phi_{\nu_{j}}=\Phi_{\nu_{j}}\left({\cal R}_{\rm SN},\gamma_{\rm SN},p_{\rm max },H_{\rm SBR},v_{\rm adv},n,B,D_{L}\right), \tag{22}\]
for a particular diffusion model in the nuclear region and the supernova rate computed as \({\cal R}_{\rm SN}={\cal R}_{\rm SN}\left(L_{\rm IR},\langle\alpha_{\rm AGN} \rangle,{\cal G}\right)\).
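For bookkeeping, the eight per-source parameters of Eq. 22 can be collected in a small container. The sketch below is a hypothetical organization, not code from this work; the defaults are the fixed values later adopted for the GOALS-wide calculation (Table 3), while \(\mathcal{R}_{\rm SN}\), \(\gamma_{\rm SN}\), and \(D_{L}\) remain source-specific.

```python
# A minimal sketch of the per-source parameter set entering Eq. 22.
from dataclasses import dataclass

@dataclass
class StarburstSource:
    R_SN: float            # nuclear supernova rate [yr^-1], from Eq. 6
    gamma_SN: float        # spectral index of the proton injection rate (momentum space)
    D_L: float             # luminosity distance [Mpc]
    p_max: float = 100.0   # exponential cutoff momentum [PeV/c]
    H_SBR: float = 150.0   # scale height of the nuclear disk [pc]
    v_adv: float = 500.0   # advection speed [km s^-1]
    n: float = 1000.0      # nuclear ISM proton density [cm^-3]
    B: float = 250.0       # magnetic field strength [muG]

# e.g. NGC 3690E-A with the observationally driven values of Section V:
ngc3690e = StarburstSource(R_SN=0.80, gamma_SN=2.11 + 2.0, D_L=50.7, n=2500.0)
```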
## V Case study: LIRG NGC 3690
In this section, the starburst-driven neutrino-production framework is applied to the LIRG NGC 3690 (also known as Arp 299 and Mrk 171). This intermediate-stage merger between two gas-rich galaxies, shown in Figure 6, is one of the most powerful merging galaxies in the local Universe at a luminosity distance \(D_{L}\sim 50.7\) Mpc [16]. It is located in the Northern Hemisphere at equatorial coordinates \(\alpha_{\rm J2000}\) = 11h28m32.3s and \(\delta_{\rm J2000}\) = 58d33m43s. The eastern part of the galaxy system (NGC 3690E) has a _Herschel_ luminosity of \(\log_{10}(L_{\rm IR}/L_{\odot})=11.37\) and the western part (NGC 3690W) has a _Herschel_ luminosity of \(\log_{10}(L_{\rm IR}/L_{\odot})=11.09\) [67]. Mid-IR and radio continuum maps of this LIRG reveal distinct regions A, B, and C+C' which dominate at these wavelengths [135]. Region A is the nuclear region of the eastern galaxy (NGC 3690E-A) and region B is the nuclear region of the western part (NGC 3690W-B). The C+C' component is located in the overlapping region between the two galaxies. Multi-wavelength follow-up studies show that the nature of the two nuclear regions is very different. Hard X-ray observations indicate the presence of a Compton-thick AGN in region B [136] and high-resolution radio observations reveal a strong nuclear starburst in region A [137]. We also note that a Tidal Disruption Event (TDE) was observed in region B [138].

Figure 5: The \({\cal C}_{\rm pp}\)-parameter for three different starburst configurations at variable scale height (\(H_{\rm SBR}\)). The ISM proton density is fixed to \(n=350\) cm\({}^{-3}\), \(n=1000\) cm\({}^{-3}\), and \(n=5000\) cm\({}^{-3}\). For each of these configurations, a galactic superwind with \(v_{\rm adv}\) = 500 km s\({}^{-1}\) and \(v_{\rm adv}=1500\) km s\({}^{-1}\) is considered.
Region A is of main interest for this work and is used as a case study for the neutrino production model introduced in Section IV. In the following, we first identify the parameters related to cosmic-ray injection in region A, followed by those related to cosmic-ray propagation. Finally, we use these parameters to estimate the starburst-driven muon-neutrino flux from region A in NGC 3690.
#### v.1 Cosmic-ray injection
NGC 3690E-A is characterized as a highly dust-enshrouded region, such that even near-IR wavelengths suffer from attenuation effects [137]. Therefore, high-resolution radio observations are required to identify the supernova activity in this region. The best direct observational constraints on the supernova activity in the central \(R_{\rm SBR}=150\) pc of NGC 3690E-A were revealed by a \(\sim\)2.5 year monitoring campaign at 5.0 GHz. This campaign revealed two CCSN in the starburst region leading to an estimated lower limit of \({\cal R}_{\rm SN}\gtrsim 0.80^{+1.06}_{-0.52}\) yr\({}^{-1}\) with uncertainties corresponding to 1\(\sigma\) errors [137]. Besides these direct observations, the authors also present a supernova rate estimated from diffuse synchrotron observations that is found to be \({\cal R}_{\rm SN}\simeq 0.45-0.65\) yr\({}^{-1}\). This estimate agrees well with our supernova rate estimates computed via Eq. 6, given in Table 2. This table also provides the corresponding mass-scaling factor \({\cal M}\) (Section IV.1.3) and cosmic-ray luminosity \({\cal L}_{\rm CR}\). The latter is the relevant parameter to compute the neutrino flux. To convert the supernova rates to a cosmic-ray luminosity, the kinetic energy conversion factor is fixed to \(\eta_{\rm SN}=0.10\).
The spectral index of the proton injection rate due to supernova activity \(\gamma_{\rm SN}\) is determined from the spectral index of the gamma-ray spectrum \(\Gamma\). This means, \(\gamma_{\rm SN}=\Gamma_{\rm NGC3690}+2\), with \(\Gamma_{\rm NGC3690}=2.11\)\(\pm\) 0.19 determined from gamma-ray observations [77]. It is noted that the location of the gamma-ray emission in NGC 3690 is unresolved. As such, the gamma rays could also (partially) originate from the AGN in region B or an off-nuclear star-forming region. To demonstrate the effect of changing the spectral index, the neutrino flux is also computed for \(\gamma_{\rm SN}=4\), which corresponds to \(\Gamma=\gamma=2\).
The maximum proton momentum \(p_{\rm max}\) for the supernova activity in NGC 3690E is unconstrained by data. Therefore, we investigate the neutrino flux for a maximum momentum \(p_{\rm max}\) of 10 PeV/\(c\), 20 PeV/\(c\), 30 PeV/\(c\), and 100 PeV/\(c\).
#### v.2 Cosmic-ray propagation

The interest is, however, in the proton number density \(n\), which is a factor two larger (the proton density inferred from the H\({}_{2}\) number density is a lower limit on the proton density in the ISM, as there is also a sub-dominant contribution of heavier elements). For this work, we make the conservative choice of \(n=2500\) cm\({}^{-3}\) for the ISM proton number density.
Observations of NGC 3690E with the International _Low Frequency Array_ (LOFAR) Telescope at 150 MHz show a two-sided, wide filamentary structure emanating from the nucleus [141]. The outflow is detected via radio wavelengths from synchrotron-emitting electrons. Under the assumption that the outflow is driven by a supernova rate of \(\mathcal{R}_{\rm SN}\gtrsim 0.80\) yr\({}^{-1}\), the outflow is estimated to move at 370-890 km s\({}^{-1}\)[141]. Here we assume an advection speed of \(v_{\rm adv}=500\) km s\({}^{-1}\), consistent with observations of other U/LIRGs (Section IV.3.2). Moreover, LOFAR observations at 150 MHz indicate a minimum equipartition magnetic field for the nuclear region of \(B\gtrsim 250\)\(\mu\)G [142]. To compute the neutrino flux, a magnetic field strength of \(B=250\)\(\mu\)G and Kolmogorov-type diffusion model are used.
Figure 7 shows the diffusion timescale, advection timescale, and pp energy-loss timescale as function of proton energy for the nuclear region of NGC 3690E. Moreover, based on the values found for \(n\), \(H_{\rm{SBR}}\), \(v_{\rm adv}\), and \(B\), it follows that \(\mathcal{C}_{\rm pp}=0.95\) (Section IV.3.4). This implies that the nuclear starburst region in NGC 3690E is expected to efficiently convert cosmic-ray energy into high-energy neutrinos.
#### v.3 Neutrino flux predictions
The expected muon-neutrino flux for region A in NGC 3690E, for both a supernova rate of \(\mathcal{R}_{\rm SN}=0.28\) yr\({}^{-1}\) and \(\mathcal{R}_{\rm SN}=1.86\) yr\({}^{-1}\), is shown in Figure 8. These values of \(\mathcal{R}_{\rm SN}\) correspond to the \(1\sigma\) errors on the direct observations. Figure 8 shows that the supernova rate affects the flux predictions linearly, and that small changes in the spectral index \(\gamma_{\rm SN}\) can have a significant effect on the flux predictions. It is noted that the high-energy tail of \(E_{\nu_{\mu}}^{2}\Phi_{\nu_{\mu}}(E_{\nu_{\mu}})\) should be interpreted carefully. Although the exponential cutoff in the proton injection rate is a reasonable assumption, it is not driven by observations. Next-generation neutrino observatories, such as IceCube-Gen2 [143], will help to test the validity of this exponential cutoff. The horizontal red line shows the point-source sensitivity based on 10 years of IceCube data for an \(E^{-2}\) neutrino spectrum at the declination (\(\delta\)) of NGC 3690 [57]. None of the investigated parameter combinations violate this sensitivity. This serves as a consistency check for the model, since NGC 3690 has not shown up as a significant neutrino emitter in previous IceCube analyses.

Figure 7: Diffusion, advection, and pp energy-loss timescales as function of proton energy for the nuclear starburst region of NGC 3690E.

Figure 8: Predictions for the starburst-driven muon-neutrino flux of NGC 3690E using our model. All model parameters, except for the maximum proton momentum \(p_{\rm max}\), are driven by multi-wavelength observations as discussed in the text. Note that \(\gamma=\gamma_{\rm SN}-2\), where \(\Phi_{\nu_{\mu}}\propto E^{-\gamma}\). The 10-year \(E^{-2}\) IceCube point-source sensitivity for a source at the declination of NGC 3690 is also indicated by the red solid line [57].

## VI Diffuse flux predictions

In this section, first the per-source muon-neutrino flux generated by starburst activity (\(\Phi_{\nu_{\mu}}\)) is computed for the \(N=229\) GOALS galaxies targeted in this work (Section II). The calculations are done using our framework introduced in Section IV. Then, based on these predictions, the corresponding diffuse muon-neutrino flux (\(\Phi^{\rm diffuse}_{\nu_{\mu}}\)) from all GOALS galaxies is estimated as
\[\Phi^{\rm diffuse}_{\nu_{\mu}}(E_{\nu_{\mu}})=\frac{1}{4\pi}\sum_{i=1}^{N=229} \Phi_{i,\nu_{\mu}}(E_{\nu_{\mu}}). \tag{24}\]
Finally, the diffuse flux from the total LIRG population integrated over cosmic history is estimated from a volume-limited sub-sample of GOALS.
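Operationally, Eq. 24 is a plain sum over the per-source spectra divided by the full sky solid angle. A minimal sketch, with `per_source_flux` standing in for the model of Section IV evaluated per galaxy (a hypothetical callable, not part of this work's code), reads:

```python
# A minimal sketch of Eq. 24: the diffuse flux as the 4*pi-averaged sum of
# per-source muon-neutrino fluxes.
import numpy as np

def diffuse_flux(energies, sources, per_source_flux):
    """Sum per-source fluxes [TeV^-1 cm^-2 s^-1] over `sources` and return the
    diffuse flux [TeV^-1 cm^-2 s^-1 sr^-1] on the grid `energies`."""
    total = np.zeros_like(energies, dtype=float)
    for source in sources:
        total += per_source_flux(energies, source)
    return total / (4.0 * np.pi)
```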
### Per-source and diffuse neutrino flux estimates for the GOALS sample
The eight parameters in Eq. 22 are required for each of the GOALS galaxies to compute their corresponding neutrino flux. However, the scale height (\(H_{\rm{SBR}}\)), the advection speed (\(v_{\rm adv}\)), the magnetic field strength (\(B\)), and the nuclear ISM proton density (\(n\)) are unknown for the majority of the GOALS galaxies. To fix these parameters, conditions similar to those found in the case study of NGC 3690E (Section V) are assumed. The corresponding values are given in Table 3. In contrast to these fixed parameters, the supernova rate in the nuclear region (\(\mathcal{R}_{\rm SN}\)) is computed from source-specific data. The supernova rate per galaxy is computed with Eq. 6, using the _Herschel_ IR luminosity and the relative AGN contribution to the bolometric luminosity (\(\langle\alpha_{\rm AGN}\rangle\)). Both parameters are available for all 229 galaxies. A canonical Kroupa IMF with \(\Lambda_{\rm IR}=5.43\times 10^{-46}\) (Table 1) is used, and the assumption is made that half of the total IR luminosity of a galaxy is generated by the nuclear region, i.e. \(\mathcal{G}=0.5\).
To normalize the injection rate of high-energy particles to the nuclear supernova activity, one needs the spectral index of the injection spectrum of cosmic rays (\(\gamma_{\rm SN}\)), the maximum momentum reached in supernova acceleration (\(p_{\rm max}\)), and the conversion factor between supernova explosion energy and cosmic-ray acceleration (\(\eta_{\rm SN}\)). For each supernova event \(\eta_{\rm SN}=0.10\) and \(E_{\rm SN}=10^{51}\) erg are assumed. However, the spectral index \(\gamma_{\rm SN}\) is unconstrained for nearly all GOALS galaxies. Therefore, we opt to take the same spectral index in each galaxy and compute the diffuse neutrino flux for three different cases, i.e. for \(\gamma_{\rm SN}=4.00\), \(\gamma_{\rm SN}=4.25\), and \(\gamma_{\rm SN}=4.50\). This corresponds to \(\gamma=\gamma_{\rm SN}-2.00\), where \(\Phi_{\nu}\propto E^{-\gamma}\). For all cases, an exponential cutoff in the proton injection spectrum is chosen at \(p_{\rm max}=100\) PeV/\(c\). Given all these parameters, the neutrino luminosity at the source can be computed for all 229 galaxies. To find the corresponding neutrino flux at Earth, the luminosity distances (\(D_{L}\)) provided by GOALS are used [16].
To investigate how an AGN contribution to the IR luminosity affects the expected neutrino flux, we also compute the diffuse flux for \(\langle\alpha_{\rm AGN}\rangle=0\) in each source and the other parameters as discussed above. This implies that all IR luminosity is generated by starburst activity.
Figure 9 shows the modelled per-source starburst-driven muon-neutrino fluxes at 1 TeV (\(\Phi^{\rm 1TeV}_{i,\nu_{\mu}}\)) as function of the sine of the declination of these sources. The fluxes are computed for \(\gamma=2\). For each galaxy it is indicated whether it is a galaxy with \(10^{10.08}L_{\odot}\leq L_{\rm IR}<10^{11}L_{\odot}\), a LIRG, or an ULIRG. Moreover, the color scale indicates the \(1-\langle\alpha_{\rm AGN}\rangle\) value of the corresponding galaxy, and the size of the marker scales with the \(1-\langle\alpha_{\rm AGN}\rangle\) value for visual aid. The plot also shows the 10-year \(E^{-2}\) IceCube point-source sensitivity at 1 TeV, indicated by the solid black line. The three galaxies labelled in Figure 9 are the top three galaxies with the strongest expected starburst-driven neutrino flux in the GOALS sample.

It should be noted, however, that one of the most nearby GOALS galaxies, NGC 1068, is not shown in this plot. The \(\langle\alpha_{\rm AGN}\rangle\)-value of NGC 1068 is set to unity by GOALS. Therefore, this particular galaxy does not have a starburst-driven flux, as Eq. 6 indicates that its supernova rate would be zero. However, as NGC 1068 is so close to Earth, its \(\langle\alpha_{\rm AGN}\rangle\)-value is most likely smaller than unity. This is a result of the selection effect discussed in Section II. For \(\langle\alpha_{\rm AGN}\rangle\leq 0.54\), NGC 1068 has the strongest expected neutrino flux out of all the investigated GOALS galaxies. This is a result of the proximity of the source and its moderate IR luminosity of \(\log_{10}(L_{\rm IR}/L_{\odot})=11.39\) [67]. In the most optimistic case, for \(\gamma=2\) and \(\langle\alpha_{\rm AGN}\rangle=0\), our prediction for the starburst-driven muon-neutrino flux from NGC 1068 at 1 TeV is \(\Phi^{\rm 1TeV}_{\nu_{\mu}}=2.26\times 10^{-13}\) TeV\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\). This prediction is about two orders of magnitude smaller than the neutrino flux from the direction of NGC 1068 reported by IceCube (see Section I). Note that the latter is compatible with a significantly softer spectral index \(\gamma\approx 3.2\) [58]. Assuming that the neutrino flux is indeed generated by NGC 1068, our model suggests a dominant contribution from AGN-related activity in NGC 1068.

Figure 9 also illustrates that the most luminous IR sources in the GOALS sample, the ULIRGs, are not necessarily the brightest neutrino sources. This is explained by the redshift distribution of Figure 2. Many of the ULIRGs are found in the tail of the redshift distribution, while LIRGs are found over the whole redshift range. Some ULIRGs therefore have a strong distance-squared suppression, which is not compensated for by their larger IR luminosity. This allows nearby LIRGs to have comparable or larger neutrino flux predictions than ULIRGs.

\begin{table}
\begin{tabular}{c c c c c} \(p_{\rm max}\) [PeV/\(c\)] & \(H_{\rm{SBR}}\) [pc] & \(v_{\rm adv}\) [km s\({}^{-1}\)] & \(n\) [cm\({}^{-3}\)] & \(B\) [\(\mu\)G] \\ \hline 100 & 150 & 500 & 1000 & 250 \\ \end{tabular}
\end{table}
Table 3: Fixed parameters used in each of the GOALS galaxies to compute the per-source and diffuse muon-neutrino flux.
Figure 10 shows the diffuse starburst-driven muon-neutrino flux expected from the GOALS sample for three different spectral indices of the proton injection rate, computed with Eq. 24. For each case, the diffuse flux is shown with and without the use of AGN-corrected IR luminosities, indicated by the solid and dashed lines, respectively. The largest neutrino flux per spectral index corresponds to the calculation done without correcting for the AGN contribution to the IR luminosity. This increase in flux, observed for all three cases, is driven by the galaxy NGC 1068. It is also concluded that none of the parameter combinations violate the diffuse neutrino flux observed by IceCube. Nevertheless, not only local U/LIRGs but also their high-redshift counterparts can contribute to the diffuse neutrino flux. This is a result of the positive redshift evolution of the comoving IR luminosity density of U/LIRGs. As such, certain combinations of parameter values could still violate the IceCube flux when integrating the U/LIRG contribution over cosmic history. However, this extrapolation is not trivial for LIRGs, as discussed in the next section.
### Extrapolation over cosmic history
Following [61; 85], we can obtain an estimate of the diffuse starburst-driven neutrino flux expected from the total LIRG population over cosmic history, based on a representative set of local LIRGs. This representative sample is defined by making a redshift cut on the GOALS sample at \(z=0.0167\) (see Appendix A), which results in 62 disentangled LIRGs and 0 ULIRGs. The integrated cosmic-ray generation rate expected from this sample is assumed to be a fraction \(\eta_{\rm tot}\) of the total IR luminosity generated by the galaxies in that sample. In the context of starburst-driven neutrino production, this fraction is obtained by applying Eq. 9 to the local volume under consideration. For typical values \(\eta_{\rm SN}=0.10\), \(E_{\rm SN}=10^{51}\) erg, and \(\Lambda_{\rm IR}=5.43\times 10^{-46}\) in Eq. 9, it is found that \(\eta_{\rm tot}\approx 0.1\) %. From the integrated cosmic-ray generation rate, the differential rate can be obtained by assuming a spectral index \(\gamma\) for the power-law cosmic-ray spectrum at the source (see e.g. [61]). The latter, in combination with the pp-interaction efficiency \(f_{\rm pp}\) (Section IV.3.4), fixed to unity for LIRGs, allows us to compute the local differential neutrino generation rate. Finally, the diffuse neutrino flux expected from the LIRG population up to redshift \(z\) can be obtained by taking into account the redshift evolution factor \(\xi_{z}\) (e.g. [63; 73; 145]). This factor effectively integrates the luminosity function of a source class up to redshift \(z\). For an unbroken \(E^{-\gamma}\) power-law spectrum, \(\xi_{z}\) becomes energy-independent and can be directly computed given a parameterization \(\mathcal{H}(z)\) of the source evolution with redshift (e.g. [63]). Assuming for the LIRG population the same parameterization of the redshift evolution as found for the ULIRG population, it follows that \(\xi_{z}=3.4\) for a spectral index \(\gamma=2\), discussed in more detail in Appendix B. Taking \(\xi_{z}=3.4\) into account, we estimate for the diffuse flux from the total LIRG population that \(E_{\nu_{\mu}}^{2}\Phi_{\nu_{\mu}}^{\rm diffuse}=1.70\times 10^{-8}\) GeV cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\), which is at the level of the diffuse flux observed with IceCube (see Figure 10).

Figure 9: Per-source muon-neutrino flux prediction at 1 TeV as function of the sine of the declination of the 229 GOALS galaxies targeted in this work. All fluxes are computed for a spectral index \(\gamma_{\rm SN}=4\), with \(\gamma=\gamma_{\rm SN}-2\), \(\eta_{\rm tot}=0.1\)%, and the other model parameters as discussed in the text. The color scale indicates the value of \(1-\langle\alpha_{\rm AGN}\rangle\) per galaxy and the marker size scales with \(1-\langle\alpha_{\rm AGN}\rangle\) for visual aid. For each of the galaxies it is indicated whether it is a galaxy with \(10^{10.08}L_{\odot}\leq L_{\rm IR}<10^{11}L_{\odot}\) (star), \(10^{11}L_{\odot}\leq L_{\rm IR}<10^{12}L_{\odot}\) (circle), or \(L_{\rm IR}\geq 10^{12}L_{\odot}\) (cross). The 10-year \(E^{-2}\) IceCube point-source sensitivity as function of the sine of the declination is also indicated [57].

Figure 10: The diffuse starburst-driven muon-neutrino flux expected from 229 disentangled galaxies in GOALS for spectral indices \(\gamma_{\rm SN}=4.00\), \(\gamma_{\rm SN}=4.25\), and \(\gamma_{\rm SN}=4.50\). Note that \(\gamma=\gamma_{\rm SN}-2\). Per spectral index, the diffuse flux is shown with and without correcting the IR luminosity for AGN activity (solid and dashed lines, respectively). For all calculations, \(\eta_{\rm tot}=0.1\)% is used. The black data points are the differential per-flavor IceCube measurements using the high-energy starting event (HESE) sample [53]. The red band is the best-fit unbroken power-law spectrum of astrophysical muon neutrinos observed by IceCube in the Northern Hemisphere [144].
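For an \(E^{-2}\) spectrum, the evolution factor reduces to a one-dimensional redshift integral. The sketch below illustrates that integral using a commonly used convention, with a hypothetical stand-in evolution \(\mathcal{H}(z)=(1+z)^{4}\) truncated at \(z=1\); the parameterization actually used here is given in Appendix B, so the sketch returns a value of the right order rather than reproducing \(\xi_{z}=3.4\).

```python
# A minimal sketch of the energy-independent evolution factor for gamma = 2:
#   xi_z = Integral dz  H_evo(z) / [(1 + z) * E(z)],
# with E(z) the dimensionless Hubble rate of a flat LCDM cosmology.
import numpy as np
from scipy.integrate import quad

OMEGA_M, OMEGA_L = 0.3, 0.7

def E_hubble(z):
    return np.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def H_evo(z):
    # Hypothetical source evolution: (1+z)^4 up to z = 1, none beyond.
    return (1.0 + z)**4 if z <= 1.0 else 0.0

xi_z, _ = quad(lambda z: H_evo(z) / ((1.0 + z) * E_hubble(z)), 0.0, 10.0, points=[1.0])
print(f"xi_z ~ {xi_z:.1f}")   # same order as the xi_z = 3.4 used in the text
```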
However, pp-interactions at the source produce gamma rays simultaneously with neutrinos. Therefore, any diffuse neutrino flux prediction should be consistent with the non-blazar Extragalactic Gamma-ray Background (EGB) observed with _Fermi_-LAT [146]. The blazar contribution, which makes up most of the EGB [146], should be subtracted from the total EGB, as previous analyses have already constrained the blazar contribution to the IceCube flux [147]. Doing so, it has been argued that the IceCube neutrino flux is in tension with the non-blazar EGB bound [62]. That is, the gamma-ray flux expected from the diffuse neutrino flux observed with IceCube overshoots the non-blazar EGB detected by _Fermi_-LAT. Consequently, for a spectral index \(\gamma\approx 2\) and \(\eta_{\rm tot}\approx 0.1\) %, our diffuse neutrino flux prediction is in tension with the non-blazar EGB bound, as gamma rays are not significantly attenuated in a starburst scenario. To alleviate this tension, \(\gamma>2\) and/or \(\eta_{\rm tot}<0.1\) % could be invoked. It follows that the parameter space of our neutrino-production model is constrained by the non-blazar EGB bound.

However, the extrapolation presented in this section assumes LIRGs to be standard-candle neutrino emitters. This is an unlikely assumption given the wide range of physical conditions among GOALS U/LIRGs. Considering each source individually could lead to significantly different neutrino flux predictions, as argued in more detail in Section VI.3. Furthermore, at \(z\sim 0\), an IR luminosity cut of \(L_{\rm IR}>10^{11}L_{\odot}\) favours merger-driven starbursts. At \(z\simeq 1-2\), however, when the star-formation rates in galaxies were much higher, \(L_{\rm IR}>10^{11}L_{\odot}\) targets mostly galaxies that seem to be evolving individually rather than in mergers. As the physical conditions among LIRGs seem to change with redshift, this could indicate a change in the efficiency of neutrino production with redshift. This should be further investigated, as it affects the extrapolation results.

Lastly, we note that we only considered starburst-driven neutrino production in the extrapolation. However, a fraction of the GOALS U/LIRGs are known to host an (obscured) AGN. Such AGN are promising candidate sources of astrophysical neutrinos, as they are typically located in the more central and dusty regions of the galaxy, for which significant gamma-ray attenuation is possible. Therefore, obscured AGN could potentially resolve the tension between the diffuse neutrino flux observed by IceCube and the non-blazar EGB observed by _Fermi_-LAT. Particularly interesting U/LIRGs in this context are those known to host Compact Obscured Nuclei (CONs), which are among the most enshrouded regions in the Universe (Section III). As such, a potential AGN contribution to the neutrino flux should be taken into account in the extrapolation to properly constrain the parameter space of our model.
### Per-source vs generic approach
To estimate the diffuse flux predictions in the previous section, all model parameters were fixed across the targeted galaxies, except for the IR luminosity and the luminosity distance. However, electromagnetic observations reveal that these fixed model parameters are potentially significantly different among GOALS galaxies. In this section, we highlight the importance of performing per-source investigations to estimate the model parameters and thereby properly constrain the neutrino flux of a source.
We consider four GOALS U/LIRGs identified as high-energy gamma-ray sources in [77]. These galaxies and their respective \(\Gamma\)-values are NGC 1068 (\(\Gamma=2.27\pm 0.09\)), NGC 2146 (\(\Gamma=2.27\pm 0.07\)), NGC 3690 (\(\Gamma=2.11\pm 0.19\)), and Arp 220 (\(\Gamma=2.48\pm 0.14\)). Assuming that these gamma rays are generated by nuclear starburst activity, this hints towards different spectral indices for the neutrino spectra, \(\gamma=\gamma_{\rm SN}-2\). This motivates us to vary the spectral index, while keeping the other model parameters constant, and to investigate the effect on the neutrino flux predictions9. Figure 11 shows the muon-neutrino flux prediction at 1 TeV as a function of the sine of the declination of the investigated gamma-ray sources. Per galaxy, the muon-neutrino flux is shown for \(\gamma_{\rm SN}=4\) and for \(\gamma_{\rm SN}=\Gamma+2\). For NGC 1068, \(\langle\alpha_{\rm AGN}\rangle\) is set to zero to compute the neutrino flux. The starburst-driven neutrino flux prediction for NGC 1068 at a given spectral index should therefore be interpreted as an upper limit. Figure 11 shows that the relative strength of the neutrino flux predictions at \(\gamma_{\rm SN}=4\) is significantly different from the case where \(\gamma_{\rm SN}\) is variable. Moreover, the flux predictions per source significantly decrease for a spectral index \(\gamma>2\). As mentioned in the previous section, this could help to resolve the tension between our diffuse neutrino flux prediction and the non-blazar EGB.
Footnote 9: Even if these gamma rays are not representative for the neutrino production in the nuclear region, it is still informative to study the effect of changes in \(\gamma_{\rm SN}\).
Due to the wide variety in morphologies observed for U/LIRGs (Section I), the average target density encountered by cosmic rays could be significantly different among U/LIRGs. NGC 1068, for example, contains an AGN surrounded by an extended starburst ring while Arp 220 is a merging galaxy in a late stage known to host a much more compact and dense central region. Therefore, the average particle density sampled by cosmic rays could be significantly lower in NGC 1068 than in Arp 220. If the nuclear ISM density encountered by a cosmic ray in NGC 1068 is for example \(n=100\) cm\({}^{-3}\), a factor 10 lower than assumed in Section VI.1, the expected
neutrino flux is approximately a factor of two lower and rapidly decreases for even smaller values of \(n\). Furthermore, as the value of \(n\) becomes smaller, the neutrino flux predictions become more sensitive to changes in the other model parameters (see Figure 5). It is therefore crucial to understand how cosmic rays propagate and interact within the nuclear ISM.
The arguments above show the importance of also performing per-source analyses to constrain the neutrino flux from a particular source rather than inferring the latter only from population studies.
## VII Summary
The extreme IR emission from U/LIRGs traces obscured star formation and AGN activity, which both provide favourable conditions for high-energy neutrino production. In this work we performed the first investigation of high-energy neutrino emission from LIRGs in GOALS, which is a multi-wavelength survey targeting the brightest U/LIRGs in the sky. To do so, we constructed a framework for starburst-driven neutrino production which targets disentangled galaxies in U/LIRG systems. The framework uses the AGN-corrected _Herschel_ IR luminosity per galaxy to estimate the cosmic-ray luminosity in that galaxy. Then, by taking into account cosmic-ray propagation in the nuclear region, the neutrino luminosity per U/LIRG can be estimated. The framework requires eight source-specific parameters to compute the expected starburst-driven neutrino flux per galaxy. Each of these parameters was discussed in the context of local U/LIRGs. This study highlighted in a qualitative manner that U/LIRGs are expected to convert high-energy protons into high-energy neutrinos more efficiently than non-U/LIRG starburst galaxies. We then used the framework to:
* Estimate the expected neutrino flux generated by the nuclear starburst region in the LIRG NGC 3690. Source-specific electromagnetic data were used to constrain the model parameters whenever possible. From this case study we concluded that the neutrino flux predictions are most sensitive to changes in the spectral index of the cosmic-ray injection rate and the maximum cosmic-ray energy reached in supernova acceleration. These parameters should therefore be the focus of future modeling and experimental efforts. The model predicts that even in the most optimistic cases, the starburst-driven neutrino-flux predictions for NGC 3690 fall one to two orders of magnitude below the current IceCube sensitivity at the declination of NGC 3690. Therefore, these predictions do not violate the IceCube observations. Interestingly, since the most optimistic predictions are only an order of magnitude below the current IceCube sensitivity, future observations using extended observatories like IceCube-Gen2 should make it possible to probe the predicted flux in these scenarios.
* Estimate the diffuse starburst-driven neutrino flux expected from the GOALS sample for different spectral indices. These predictions were found to be orders of magnitude smaller than the diffuse neutrino flux observed by IceCube. Nevertheless, as U/LIRGs have a positive redshift evolution, we also estimated the neutrino flux expected from the total LIRG population across cosmic history. Assuming GOALS LIRGs are standard-candle emitters, we found that cosmic-ray injection spectral indices \(\gamma>2\) and/or infrared conversion efficiencies \(\eta_{\rm tot}<0.1\) % are required to avoid tension with the non-blazar extragalactic gamma-ray background observed by _Fermi_-LAT. However, it was also argued that the standard candle assumption for LIRGs is likely unrealistic based on the wide range of nuclear properties observed in LIRGs. Therefore, these population-study results should be interpreted carefully.
* Estimate the starburst-driven flux expected from NGC 1068. This prediction was compared to the recently reported evidence for a neutrino flux from the direction of NGC 1068 by IceCube. Our flux prediction is significantly smaller than the flux reported by IceCube. Therefore our model suggests that the neutrino emission from NGC 1068 is likely dominated by an AGN-related process.
Figure 11: Per-source muon-neutrino flux predictions at 1 TeV as a function of the sine of the declination of four GOALS galaxies identified as gamma-ray sources. For each of the galaxies, the flux prediction is shown for \(\gamma_{\rm SN}=4\) (blue) and \(\gamma_{\rm SN}=\Gamma+2\) (red), with \(\Gamma\) obtained from gamma-ray observations. These galaxies and their respective \(\Gamma\)-values are NGC 1068 (\(\Gamma=2.27\pm 0.09\)), NGC 2146 (\(\Gamma=2.27\pm 0.07\)), NGC 3690 (\(\Gamma=2.11\pm 0.19\)), and Arp 220 (\(\Gamma=2.48\pm 0.14\)). For all galaxies \(\eta_{\rm tot}=0.1\)% is used. The symbols have the same meaning as in Figure 9.
## Acknowledgements
This work was supported by the Flemish Foundation for Scientific Research (1149122N, Y. Merckx), the European Union's Horizon 2020 research and innovation program (No 805486, K. D. de Vries), the French National Research Agency (ANR-21-CE31-0025, P. Correa), and the APACHE grant of the French Agence Nationale de la Recherche (ANR-16-CE31-0001, K. Kotera).
We thank E. Peretti for the valuable feedback and insightful discussions on the cosmic-ray physics. We also thank L. Armus, T. Diaz-Santos, H. Inami, S. Linden, J. Mazzarella, Y. Song, and V. U for their feedback on the U/LIRG aspect of this work.
## Appendix A Local LIRG sample
GOALS is a subsample of the IRAS RBGS and is therefore a complete flux-limited sample of galaxies with an IRAS 60-\(\mu\)m flux density of \(S_{60\mu\rm m,IRAS}>5.24\) Jy. However, GOALS is not a complete volume-limited sample, as suggested by Figure 2. To define a representative sample of local LIRGs we follow the procedure outlined in [64]. First, we estimate the distance up to which the least luminous LIRGs (i.e. \(L_{\rm IR}=10^{11}L_{\odot}\)) can be detected with the RBGS sensitivity at \(60\ \mu\)m, i.e. \(S_{60\mu\rm m,IRAS}=5.24\) Jy. To do so, a fit is performed to the observed correlation between \(S_{60\mu\rm m,IRAS}\) and the total IR flux, \(F_{\rm IR}=L_{\rm IR}/4\pi D_{L}^{2}\), for all GOALS LIRGs. The fit is of the form
\[\log_{10}\left(\frac{S_{60\mu\rm m,IRAS}}{\rm Jy}\right)=a\log_{10}\left( \frac{F_{\rm IR}}{\rm W\ m^{-2}}\right)+b\, \tag{10}\]
with best-fit parameters \(a=1.00\pm 0.01\) and \(b=13.01\pm 0.08\). Given these parameters, the distance corresponding to \(L_{\rm IR}=10^{11}L_{\odot}\) and \(S_{60\mu\rm m,IRAS}=5.24\) Jy is found by inverting Eq. 10. Doing so, we find \(D_{L}\approx 80\) Mpc. In this work, we opt for a conservative value of \(D_{L}=75\) Mpc which corresponds to \(z=0.0167\). As such, the 62 LIRGs within this redshift define the local sample of LIRGs used to estimate the diffuse neutrino flux from the total LIRG population.
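As a cross-check, this inversion can be reproduced numerically. The following minimal sketch assumes \(L_{\odot}=3.828\times 10^{26}\) W and \(1\ {\rm Mpc}=3.086\times 10^{22}\) m, values not quoted explicitly in the text:

```python
import numpy as np

# Best-fit parameters of Eq. (10) and the RBGS 60-micron flux limit
a, b = 1.00, 13.01
S60 = 5.24  # Jy

# Invert Eq. (10): the total IR flux reachable at the flux limit
F_IR = 10 ** ((np.log10(S60) - b) / a)  # W m^-2

# Luminosity distance at which L_IR = 1e11 L_sun yields this flux
L_IR = 1e11 * 3.828e26                    # W
D_L = np.sqrt(L_IR / (4 * np.pi * F_IR))  # m
print(D_L / 3.086e22)                     # ~79 Mpc, i.e. D_L ~ 80 Mpc
```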
## Appendix B Redshift evolution factor
We introduced the redshift evolution factor \(\xi_{z}\) to compute the expected neutrino flux from the LIRG population up to redshift \(z\), based on a local set of LIRGs. For an unbroken \(E^{-\gamma}\) power-law spectrum of the neutrino emission, the redshift evolution factor becomes independent of energy such that
\[\xi_{z}=\xi(z,\gamma)=\int_{0}^{z}\frac{{\rm d}z^{\prime}}{\sqrt{\Omega_{m}(1+z^{\prime})^{3}+\Omega_{\Lambda}}}\,{\cal H}(z^{\prime})\,(1+z^{\prime})^{-\gamma}\,, \tag{11}\]
with \(\Omega_{m}=0.32\), \(\Omega_{\Lambda}=0.69\), and \({\cal H}(z)\) the parameterization of the source redshift evolution. For the latter we use \({\cal H}\propto(1+z)^{m}\), with \(m=4\) for \(z\leq 1\) and \(m=0\) for \(1<z<4\) (e.g. [63; 64]). Integrating up to redshift \(z=4\), it follows that \(\xi_{z}=3.4\) for a spectral index \(\gamma=2\).
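For completeness, a minimal numerical sketch of this integral (with \({\cal H}\) normalized to unity at \(z=0\)) reproduces the quoted value:

```python
import numpy as np
from scipy.integrate import quad

Om, OL = 0.32, 0.69

def H_evol(z):
    # (1+z)^4 for z <= 1, flat (m = 0) for 1 < z < 4, continuous at z = 1
    return (1 + z) ** 4 if z <= 1 else 2.0 ** 4

def integrand(zp, gamma):
    return H_evol(zp) * (1 + zp) ** (-gamma) / np.sqrt(Om * (1 + zp) ** 3 + OL)

xi_z, _ = quad(integrand, 0, 4, args=(2,), points=[1])
print(xi_z)  # ~3.4 for gamma = 2
```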
|
2305.16113 | Euler--Chern Correspondence via Topological Superconductivity | The Fermi sea topology is characterized by the Euler characteristics
$\chi_F$. In this paper, we examine how $\chi_F$ of the metallic state is
inhereted by the topological invariant of the superconducting state. We
establish a correspondence between the Euler characteristic and the Chern
number $C$ of $p$-wave topological superconductors without time-reversal
symmetry in two dimensions. By rewriting the pairing potential $\Delta_{\bf
k}=\Delta_1-i\Delta_2$ as a vector field ${\bf u}=(\Delta_1,\Delta_2)$, we
found that $\chi_F=C$ when ${\bf u}$ and fermion velocity ${\bf v}$ can be
smoothly deformed to be parallel or antiparallel on each Fermi surface. We also
discuss a similar correspondence between Euler characteristic and 3D winding
number of time-reversal-invariant $p$-wave topological superconductors in three
dimensions. | Fan Yang, Xingyu Li, Chengshu Li | 2023-05-25T14:47:56Z | http://arxiv.org/abs/2305.16113v2 | # Euler-Chern Correspondence via Topological Superconductivity
###### Abstract
The Fermi sea topology is characterized by the Euler characteristic \(\chi_{F}\). In this Letter, we examine how \(\chi_{F}\) of the metallic state is inherited by the topological invariant of the superconducting state. We establish a correspondence between the Euler characteristic and the Chern number \(C\) of \(p\)-wave topological superconductors without time-reversal symmetry in two dimensions. By rewriting the pairing potential \(\Delta_{\mathbf{k}}=\Delta_{1}-i\Delta_{2}\) as a vector field \(\mathbf{u}=(\Delta_{1},\Delta_{2})\), we find that \(\chi_{F}=C\) when \(\mathbf{u}\) and the fermion velocity \(\mathbf{v}\) can be smoothly deformed to be parallel or antiparallel on each Fermi surface. We also discuss a similar correspondence between the Euler characteristic and the 3D winding number of time-reversal-invariant \(p\)-wave topological superconductors in three dimensions.
_Introduction._ In the past few decades, it has been proven that topology plays an important role in physics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. The topological invariant of a quantum system leads to quantized response functions and/or robust gapless states at the boundary. For example, the Chern number of an integer quantum Hall state determines the quantized Hall conductance and the number of chiral edge states [11; 12]. Such topological invariants describe the twist of wavefunctions in the momentum space. Formally, the Hamiltonian defines a map from the momentum space to some target space, and the topological invariant is generally given by the homotopy group of this map. Most recent discussions on topological states have been concentrated on gapped systems [2; 3; 4; 5; 6; 7; 8], semimetals [9], and nodal superconductors [10].
Different from the topology related to wavefunctions, there exists another type of geometric topology in Fermi liquids. The Fermi sea as a manifold can have complicated structures, and its topology is characterized by the Euler characteristic \(\chi_{F}\)[13]. The Euler characteristic changes when the Fermi level passes through a critical point of the energy dispersion. The change of \(\chi_{F}\) is accompanied by the famous Lifshitz transitions [14; 15]. In principle, one can always map out the entire Fermi surface to obtain \(\chi_{F}\), but only recently was it realized that \(\chi_{F}\) can be measured directly. An important breakthrough is the prediction that the Euler characteristic \(\chi_{F}\) determines a quantized nonlinear conductance in \(d\)-dimensional ballistic metals [16]. This phenomenon can be viewed as a generalization of the quantized Landauer conductance in one dimension (1D) [17]. It also reveals a deep connection between Fermi sea topology and quantized transport properties. This sparked a series of efforts in directly detecting \(\chi_{F}\), including probing multipartite entanglement entropy [18], measuring quantized response in ultracold Fermi gases [19; 20], and utilizing Andreev state transport [21; 22].
_Results._ In this Letter, we discover a relation between the Euler characteristic and the topological invariants of \(p\)-wave topological superconductors in two (2D) and three dimensions (3D). We provide a condition under which the Euler characteristic of the metallic state and the topological invariant of the superconductor are equal. When this condition is satisfied, the Majorana states at the boundary of the superconductor can be used to measure the Euler characteristic.
In 2D the pairing potential must break time-reversal symmetry so that the resultant topological superconductor is characterized by an integer that is the Chern number \(C\). For simplicity, we first focus on the spinless case. When the following condition is satisfied, \(\chi_{F}\) and \(C\) are equal.
Figure 1: Illustration of spinless Fermi seas with (a) \(\chi_{F}=0\) and (b) \(\chi_{F}=2\), respectively. The red arrows on the left panel represent the fermion velocity \(\mathbf{v}\) on the Fermi surface. The blue arrows on the right panel represent the pairing vector field \(\mathbf{u}\) of a \(p-ip\) superconductor. On each Fermi surface, \(\mathbf{v}\) and \(\mathbf{u}\) can be smoothly deformed without vanishing to be parallel or antiparallel, giving \(\chi_{F}=C\).
**Condition:** We write the pairing potential \(\Delta_{\mathbf{k}}=\Delta_{1}-i\Delta_{2}\) as a vector field \(\mathbf{u}=(\Delta_{1},\Delta_{2})\). In the weak pairing limit \(|\Delta|\to 0\), if \(\mathbf{u}\) can be smoothly deformed without vanishing to be parallel or antiparallel to the fermion velocity \(\mathbf{v}=\nabla_{\mathbf{k}}\epsilon_{\mathbf{k}}/\hbar\) on each Fermi surface, we have
\[\chi_{F}=C, \tag{1}\]
where \(\epsilon_{\mathbf{k}}\) is the energy dispersion of the metallic state.
To satisfy the above condition, one would need a \(p\)-wave pairing potential that has the proper chirality on each Fermi surface. In Fig. 1, we present two examples where this condition can be met by a simple \(p-ip\) pairing. In this case, the number of chiral Majorana edge modes in the topological superconductor is given by \(\chi_{F}\). By measuring the quantized thermal Hall conductance given by the Majorana edge modes [23; 24; 25], one can probe \(\chi_{F}\) through \(C\). A simple \(p\pm ip\) pairing cannot always satisfy the condition. However, if an additional inversion symmetry is present, \(p\pm ip\) pairing always leads to \(\chi_{F}\equiv C\) (mod 2). We note a related result in Refs. [26; 27], where it was shown that the Chern number and a different topological invariant, which is the number of Fermi surfaces, have the same parity for inversion symmetric odd-parity superconductors.
A similar relation exists in 3D if the system respects time-reversal symmetry. In this case, we introduce a time-reversal-invariant \(p\)-wave pairing \(\Delta_{\mathbf{k}}=\mathbf{u}\cdot\mathbf{\sigma}i\sigma_{y}\), where \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) are Pauli matrices in the spin space and \(\mathbf{u}\) is odd in \(\mathbf{k}\). The resultant topological superconductor is characterized by the 3D winding number \(N_{w}\in\mathbb{Z}\). Due to the presence of both spin components, \(\chi_{F}\) is always an even integer. When \(\mathbf{u}\) and fermion velocity \(\mathbf{v}\) can be smoothly deformed to become parallel with each other on each Fermi surface, we have
\[\frac{\chi_{F}}{2}=N_{w}. \tag{2}\]
Similar to 2D, a simple time-reversal-invariant \(p\)-wave pairing cannot always satisfy the condition above. But with inversion symmetry, it always leads to \(\chi_{F}/2\equiv N_{w}\) (mod 2), which was given previously in Refs. [26; 27].
The above correspondences are robust against weak interactions, since both Fermi surfaces and topological superconductors remain well-defined. In the following, we mathematically establish the conditions given above.
_Euler-Chern correspondence in 2D._ The connection between the Euler characteristic and the Chern number may seem surprising at first, since the Euler characteristic is defined for gapless Fermi liquids, while the Chern number is defined for a fully gapped system without a Fermi surface. In 2D, the Euler characteristic is given by the number of electron-like Fermi surfaces minus the number of hole-like Fermi surfaces, while open Fermi surfaces do not contribute to \(\chi_{F}\). On the other hand, the Chern number is given by the integration of the Berry curvature over the entire Brillouin zone. However, in the weak pairing limit, the Berry curvature in a superconductor is concentrated near the Fermi surface, and its sign near the Fermi surface depends on whether the Fermi surface is electron-like, hole-like, or open. In this way, the information about the Fermi sea topology is encoded in the Chern number.
To generally establish the relation between \(\chi_{F}\) and \(C\), we express them as the winding numbers on the Fermi surface of two different vector fields \(\mathbf{v}\) and \(\mathbf{u}\), respectively. \(\mathbf{v}\) is given by the fermion velocity, and \(\mathbf{u}\) is given by the pairing potential \(\Delta_{\mathbf{k}}\). Let us first consider the spinless case. At each point on the Fermi surface, the fermion velocity \(\mathbf{v}=\nabla_{\mathbf{k}}\epsilon_{\mathbf{k}}/\hbar\) is always perpendicular to the Fermi surface and pointing away from the Fermi sea. With the help of the Poincaré-Hopf theorem [28], one can write the Euler characteristic \(\chi_{F}\) as the sum of the winding numbers of \(\mathbf{v}\) on all Fermi surfaces
\[\chi_{F}=\sum_{\alpha}w_{\alpha}(\mathbf{v}), \tag{3}\]
where \(w_{\alpha}(\mathbf{v})\) is the winding number of \(\mathbf{v}\) on Fermi surface \(S_{\alpha}\). Each electron-like, hole-like, and open Fermi surface has winding number \(+1\), \(-1\), and \(0\), thus contributing \(+1\), \(-1\), and \(0\) to \(\chi_{F}\).
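Equivalently, \(\chi_{F}\) can be computed directly as the Euler characteristic of the Fermi sea regarded as a region of the Brillouin-zone torus. The following minimal sketch, which assumes for illustration the lattice dispersion that appears later in Fig. 2, counts vertices, edges, and faces of the occupied region on a periodic grid and reproduces the values quoted there:

```python
import numpy as np

def euler_characteristic(occ):
    """chi = V - E + F of the cubical complex of a boolean region occ
    sampled on a periodic (torus) grid."""
    V = occ.sum()
    E = ((occ & np.roll(occ, -1, axis=0)).sum()
         + (occ & np.roll(occ, -1, axis=1)).sum())
    F = (occ & np.roll(occ, -1, axis=0) & np.roll(occ, -1, axis=1)
         & np.roll(np.roll(occ, -1, axis=0), -1, axis=1)).sum()
    return int(V - E + F)

n = 600
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing='ij')
eps = -np.cos(kx) + np.cos(ky) + np.cos(2 * kx)  # dispersion of Fig. 2

print(euler_characteristic(eps < 0.5))   # -1, the Fermi sea of Fig. 2(a)
print(euler_characteristic(eps < -1.5))  #  2, the Fermi sea of Fig. 2(b)
```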
Let us consider a lattice version of a \(p-ip\) superconductor with pairing potential \(\Delta_{\mathbf{k}}=\Delta_{0}(\sin\mathbf{k}\cdot\mathbf{a_{1}}-i\sin\mathbf{k}\cdot\mathbf{a_{2}})\), where \(\mathbf{a_{1,2}}\) are lattice basis vectors [29; 30]. The Bogoliubov-de Gennes (BdG) Hamiltonian can be written as
\[H(\mathbf{k})=(\epsilon_{\mathbf{k}}-\mu)\tau_{z}+\Delta_{0}\sin\mathbf{k} \cdot\mathbf{a_{1}}\tau_{x}+\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{2}}\tau_{ y}, \tag{4}\]
where \(\tau_{x,y,z}\) are Pauli matrices in the Nambu space and \(\mu\) is the chemical potential. The pairing breaks time-reversal symmetry and the resultant topological superconductor belongs to the D class, which is characterized by the Chern number [31; 32; 33]. \(H(\mathbf{k})\) defines a map from the Brillouin zone to a unit sphere given by \(\hat{\mathbf{h}}=\mathbf{h}/|\mathbf{h}|\) with \(\mathbf{h}=(\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{1}},\Delta_{0}\sin \mathbf{k}\cdot\mathbf{a_{2}},\mu-\epsilon_{\mathbf{k}})\). The Chern number \(C\) equals the degree of this map
\[C=\frac{1}{4\pi}\int_{\text{1BZ}}d^{2}k\,\hat{\mathbf{h}}\cdot\left(\frac{ \partial\hat{\mathbf{h}}}{\partial k_{x}}\times\frac{\partial\hat{\mathbf{h}}} {\partial k_{y}}\right), \tag{5}\]
which is the number of times that \(\hat{\mathbf{h}}\) covers the unit sphere. In the weak pairing limit \(\Delta_{0}/\mu\ll 1\), the Berry curvature given by the integrand of Eq. (5) is concentrated near the Fermi surface. Except for a thin shell near each Fermi surface, \(\hat{\mathbf{h}}\) points along \(\hat{z}\) inside and along \(-\hat{z}\) outside the Fermi sea. On each Fermi surface, \(h_{z}=\mu-\epsilon_{\mathbf{k}}=0\) and \(\mathbf{h}\) is reduced to a two-dimensional vector field \(\mathbf{u}=(\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{1}},\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{2}})\). Therefore, in the weak pairing limit, Eq. (5) becomes [34]
\[C=\sum_{\alpha}w_{\alpha}(\mathbf{u}), \tag{6}\]
where
\[w_{\alpha}(\mathbf{u})=\frac{1}{2\pi}\oint_{S_{\alpha}}(\hat{u}_{x}d\hat{u}_{y }-\hat{u}_{y}d\hat{u}_{x}) \tag{7}\]
is the winding number of \(\hat{\mathbf{u}}\equiv\mathbf{u}/|\mathbf{u}|\) on the \(\alpha\)th Fermi surface \(S_{\alpha}\) with its orientation induced by the Fermi sea.
Eqs. (3) and (6) suggest that if \(w_{\alpha}(\mathbf{v})=w_{\alpha}(\mathbf{u})\) on each Fermi surface \(S_{\alpha}\), we will have \(\chi_{F}=C\). This is equivalent to the condition that the pairing vector field \(\mathbf{u}\) can be smoothly deformed without vanishing to be parallel or antiparallel to fermion velocity \(\mathbf{v}\) on each Fermi surface. This is the condition given in the _Results_.
If this condition cannot be satisfied with a simple \(p\pm ip\) pairing, we would require that Cooper pairs be formed by electrons from the same Fermi surface in the \(p\)-wave channel with the proper chirality, which requires special engineering of the pairing potential. In the weak pairing limit, one can write a BdG-like Hamiltonian near each Fermi surface. By choosing the proper chirality of \(p\)-wave pairing potential near each Fermi surface, we can have \(w_{\alpha}(\mathbf{u})=w_{\alpha}(\mathbf{v})\).
Physically, if simple \(p\pm ip\) pairings cannot satisfy the condition, we can still have \(\chi_{F}\equiv C\pmod{2}\) if inversion symmetry is present. To see this, we extend both vector fields to the entire Brillouin zone and analyze their zeros inside the Fermi sea.
The zeros are defined as the points where the vector field vanishes [35]. Each zero can be assigned an index \(\nu_{i}\), which is the winding number of the vector field on a small counterclockwise-oriented circle enclosing the \(i\)th zero. Zeros with indices \(+1\) and \(-1\) can be viewed as vortices and antivortices of the vector field, respectively. Since the winding number of a vector field at the boundary of a manifold equals the sum of the indices in its interior, we have
\[\chi_{F}=\sum_{\alpha}w_{\alpha}(\mathbf{v})=\sum_{i\in\mathrm{FS}}\nu_{i}( \mathbf{v}), \tag{8}\]
\[C=\sum_{\alpha}w_{\alpha}(\mathbf{u})=\sum_{j\in\mathrm{FS}}\nu_{j}(\mathbf{u}), \tag{9}\]
where \(\nu_{i}(\mathbf{v})\) and \(\nu_{j}(\mathbf{u})\) are the indices of \(\mathbf{v}\) and \(\mathbf{u}\) fields, respectively. We use different subscripts \(i\) and \(j\) to remind the reader that the number of zeros for \(\mathbf{v}\) and \(\mathbf{u}\) fields could be different. On the right hand side, the summation is over all zeros inside the Fermi sea (FS) [36] (Fig. 2).
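The index of an isolated zero is easily evaluated numerically by accumulating the phase of the vector field along a small loop around it; a minimal sketch for the \(p-ip\) pairing field \(\mathbf{u}=(\sin k_{x},\sin k_{y})\):

```python
import numpy as np

def index_of_zero(u, k0, r=0.1, n=1000):
    """Winding number of the planar field u around the point k0."""
    t = np.linspace(0.0, 2 * np.pi, n + 1)  # closed loop around k0
    ux, uy = u(k0[0] + r * np.cos(t), k0[1] + r * np.sin(t))
    theta = np.unwrap(np.arctan2(uy, ux))
    return int(np.rint((theta[-1] - theta[0]) / (2 * np.pi)))

u = lambda kx, ky: (np.sin(kx), np.sin(ky))
for K in [(0, 0), (np.pi, 0), (0, np.pi), (np.pi, np.pi)]:
    print(K, index_of_zero(u, K))  # +1, -1, -1, +1: odd at every TRIM
```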
We compare the indices of zeros of \(\mathbf{v}\) and \(\mathbf{u}\) inside the Fermi sea. For inversion-symmetric systems, both \(\mathbf{v}\) and \(\mathbf{u}\) are odd under inversion, i.e., \(\mathbf{v}(\mathbf{k})=-\mathbf{v}(-\mathbf{k}+\mathbf{G})\) and \(\mathbf{u}(\mathbf{k})=-\mathbf{u}(-\mathbf{k}+\mathbf{G})\). All time-reversal invariant momenta (TRIM) \(\Gamma_{n_{1}n_{2}}=\frac{n_{1}}{2}\mathbf{b}_{1}+\frac{n_{2}}{2}\mathbf{b}_{2}\) are inversion centers and therefore also zeros of both fields, where \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\) are reciprocal lattice basis vectors. Along any direction across the inversion center, the vector field must change to its opposite direction. Thus, the index at each TRIM for both vector fields must be odd [37]. For vector fields odd under inversion, other zeros must appear in pairs at \(\mathbf{k}\) and \(-\mathbf{k}+\mathbf{G}\). These zeros must have the same index and both appear inside or outside the Fermi sea. Therefore, we have
\[\sum_{i\in\mathrm{FS}}\nu_{i}(\mathbf{v})\equiv\sum_{j\in\mathrm{FS}}\nu_{j}( \mathbf{u})\pmod{2}, \tag{10}\]
i.e.,
\[\chi_{F}\equiv C\pmod{2}. \tag{11}\]
Note that Eq. (11) holds for any odd-parity pairing potential, not just the chiral \(p\)-wave pairing chosen above.
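The values of \(C\) quoted in Fig. 2 can be checked by evaluating the degree in Eq. (5) on a discretized Brillouin zone, summing the solid angles that \(\hat{\mathbf{h}}\) subtends on each plaquette. A minimal sketch, assuming the dispersion of Fig. 2 and \(\Delta_{0}=0.5\):

```python
import numpy as np

def solid_angle(a, b, c):
    """Signed solid angle of the spherical triangle (a, b, c) of unit
    vectors (Van Oosterom-Strackee formula)."""
    num = np.einsum('...i,...i', a, np.cross(b, c))
    den = (1 + np.einsum('...i,...i', a, b)
           + np.einsum('...i,...i', b, c)
           + np.einsum('...i,...i', c, a))
    return 2 * np.arctan2(num, den)

def chern_number(mu, delta0=0.5, n=300):
    k = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky = np.meshgrid(k, k, indexing='ij')
    eps = -np.cos(kx) + np.cos(ky) + np.cos(2 * kx)
    h = np.stack([delta0 * np.sin(kx), delta0 * np.sin(ky), mu - eps], -1)
    h /= np.linalg.norm(h, axis=-1, keepdims=True)
    h10 = np.roll(h, -1, axis=0)   # periodic plaquette corners
    h01 = np.roll(h, -1, axis=1)
    h11 = np.roll(h10, -1, axis=1)
    omega = solid_angle(h, h10, h11) + solid_angle(h, h11, h01)
    return int(np.rint(omega.sum() / (4 * np.pi)))

print(chern_number(0.5))   # C = -1, matching chi_F = -1 in Fig. 2(a)
print(chern_number(-1.5))  # C =  0, while chi_F = 2 in Fig. 2(b)
```

For \(\mu=-1.5\) this pairing cannot be aligned with \(\mathbf{v}\) on every Fermi surface, and only the parity relation of Eq. (11) survives (\(\chi_{F}=2\), \(C=0\)), exactly as Fig. 2(b) illustrates.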
When both spin components are considered, without spin-orbit coupling, we demand a time-reversal-breaking \(p\)-wave pairing with spin \(U(1)\) rotational symmetry. For example, we can have each spin component paired in the same chiral \(p\)-wave channel, given by the pairing potential \(\Delta_{\mathbf{k}}=\Delta_{0}(\sin\mathbf{k}\cdot\mathbf{a}_{1}-i\sin \mathbf{k}\cdot\mathbf{a}_{2})\sigma_{0}\), where \(\sigma_{0}\) is the identity matrix in the spin space. This state is the 2D analog of the He-3 A phase, with the spin rotational axis along the \(y\)-direction [38; 39; 40; 41]. One can also introduce a weak spin-orbit coupling that does not change \(\chi_{F}\). In this case, the topological superconductor is still classified by the Chern number and its value does not change as long as the superconducting gap remains open. Therefore, our results still apply. When multiple bands are present at the Fermi surface, we would require the condition to hold for each band.
Figure 2: Illustration of a spinless Fermi sea with dispersion \(\epsilon_{\mathbf{k}}=-\cos(k_{x})+\cos(k_{y})+\cos(2k_{x})\). (a) \(\mu=0.5\), \(\chi_{F}=-1\), and \(C=-1\). (b) \(\mu=-1.5\), \(\chi_{F}=2\), and \(C=0\). The left column shows the fermion velocity and the right column the pairing field \(\mathbf{u}\) extended to the entire Brillouin zone. Red and green dots label the zeros with indices \(+1\) and \(-1\), respectively.
_Generalization to 3D._ Next, we generalize our result to 3D, where each Fermi surface with genus \(g\) contributes \(1-g\) to the Euler characteristic \(\chi_{F}\) of the Fermi sea. We require the metallic state to respect time-reversal symmetry of spin-1/2 fermions. By introducing a time-reversal-invariant \(p\)-wave pairing, we can convert the metallic state into a topological superconductor of the DIII class, which is classified by an integer.
The pairing potential has the form \(\Delta_{\mathbf{k}}=\mathbf{u}\cdot\mathbf{\sigma}i\sigma_{y}\), where \(\mathbf{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})\) are Pauli matrices in the spin space, and \(\mathbf{u}=(\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{1}},\Delta_{0}\sin \mathbf{k}\cdot\mathbf{a_{2}},\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{3}})\) is the pairing vector field in 3D. In the continuum limit, this corresponds to the He-3 B phase [39; 40; 41]. Without spin-orbit coupling, the Hamiltonian can be written as
\[\begin{split} H(\mathbf{k})&=(\epsilon_{\mathbf{k}}- \mu)\tau_{z}-\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{1}}\tau_{x}\sigma_{z}\\ &\quad-\Delta_{0}\sin\mathbf{k}\cdot\mathbf{a_{2}}\tau_{y}+\Delta _{0}\sin\mathbf{k}\cdot\mathbf{a_{3}}\tau_{x}\sigma_{x}.\end{split} \tag{12}\]
\(H(\mathbf{k})\) defines a map from the 3D Brillouin zone to a unit 3-sphere given by \(\hat{\mathbf{h}}=\mathbf{h}/|\mathbf{h}|\), with \(\mathbf{h}=(\mathbf{u},\mu-\epsilon_{k})\). This map is classified by the homotopy group \(\pi_{3}(S^{3})\), and its topological invariant is the 3D winding number \(N_{w}\). When spin-orbit coupling is taken into account, the topological invariant is still the 3D winding number, although the homotopy group becomes \(\pi_{3}(U(2))\).
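For reference, and in direct analogy with Eq. (5), the degree of the map \(\hat{\mathbf{h}}:T^{3}\to S^{3}\) admits the standard integral representation

\[N_{w}=\frac{1}{12\pi^{2}}\int_{\rm BZ}d^{3}k\,\epsilon^{ijl}\,\epsilon_{abcd}\,\hat{h}_{a}\,\partial_{k_{i}}\hat{h}_{b}\,\partial_{k_{j}}\hat{h}_{c}\,\partial_{k_{l}}\hat{h}_{d},\]

which counts how many times \(\hat{\mathbf{h}}\) wraps the unit 3-sphere.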
In complete analogy to the 2D case, we can rewrite \(N_{w}\) as the sum of 2D winding numbers of \(\mathbf{u}\) on each Fermi surface in the weak pairing limit. Similarly, the Euler characteristic is given by the sum of the winding numbers of the fermion velocity \(\mathbf{v}\) on each Fermi surface. Due to the presence of both spins, \(\chi_{F}\) is an even integer. When \(\mathbf{u}\) can be smoothly deformed without vanishing to be parallel to \(\mathbf{v}\), we have \(\chi_{F}/2=N_{w}\). If this condition cannot be met, in the presence of inversion symmetry, we have \(\chi_{F}/2\equiv N_{w}\pmod{2}\).
_Experimental implications._ We discuss two experimental implications of the above correspondences.
To verify the correspondence between the Euler characteristic and the Chern number, we utilize the superconducting proximity effect [42; 43]. Let us consider spinless fermions. In general, Cooper pairs would not always form in a time-reversal-breaking \(p\)-wave channel. However, we can induce such pairings by depositing the 2D metallic sample onto a chiral \(p\)-wave superconducting substrate. If the condition is satisfied, then the number and chirality of the Majorana edge modes in the sample are given by \(\chi_{F}\). If the condition is not satisfied, with inversion symmetry the number of chiral Majorana edge modes is given by \(\chi_{F}\) modulo two. In either case, the number and chirality of Majorana modes in the sample can be different from those in the substrate (Fig. 3).
The relation between the Euler characteristic and the topological invariants of superconductors also suggests that Lifshitz transitions in the metallic phase can lead to topological phase transitions in the superconducting phase in both 2D and 3D [44; 45; 46; 29]. For example, by applying pressure to a \(p\)-wave topological superconductor, one should be able to observe topological phase transitions marked by a change in the Majorana edge modes. This topological phase transition is due to the change of the Fermi sea topology of the metallic state under the applied pressure [14]. When \(\mathbf{u}\) and \(\mathbf{v}\) satisfy the conditions given above, the superconductor undergoes a topological phase transition whenever \(\chi_{F}\) changes its value. If the condition is not satisfied, with inversion symmetry topological phase transitions happen when \(\chi_{F}\) changes its parity.
_Discussions._ In this Letter, we have established the correspondence between the topological invariant of superconductors and the Euler characteristic of the normal-state Fermi sea. The key to establishing this correspondence is to express the Euler characteristic and the topological invariant of superconductors as the winding numbers of the fermion velocity \(\mathbf{v}\) and the pairing field \(\mathbf{u}\) on the Fermi surface, respectively. The way to express topological invariants as properties on the Fermi surface is reminiscent of the introduction of Fermi surface topological invariants for time-reversal invariant superconductors [47] and the characterization of Floquet topological phases with band-inversion surface properties [48].
Our work reveals a connection between two seemingly unrelated topological invariants in two physical systems with drastically different properties. One related question that is interesting to study in the future is whether similar correspondences exist between other topological invariants, and what physical mechanisms connect them. Answering this question will help us understand the relations between different topological phases and may eventually lead to a more unified understanding of topology in physics.
We thank Hui Zhai, Yingfei Gu, Pengfei Zhang, and Zhong Wang for helpful discussions. This project is supported by China Postdoctoral Science Foundation (Grant No. 2022M711868). F.Y. and C.L. are supported by Chinese International Postdoctoral Exchange Fellowship Program (Talent-introduction Program) and Shuimu Tsinghua Scholar Program at Tsinghua University.
Figure 3: We induce chiral \(p\)-wave superconductivity in a 2D metallic sample by the proximity effect. The number of chiral Majorana modes in the sample depends on its Euler characteristic. The number and chirality of the Majorana modes in the sample can be different from those of the substrate.
2304.01746 | Is ChatGPT a Highly Fluent Grammatical Error Correction System? A
Comprehensive Evaluation | ChatGPT, a large-scale language model based on the advanced GPT-3.5
architecture, has shown remarkable potential in various Natural Language
Processing (NLP) tasks. However, there is currently a dearth of comprehensive
study exploring its potential in the area of Grammatical Error Correction
(GEC). To showcase its capabilities in GEC, we design zero-shot
chain-of-thought (CoT) and few-shot CoT settings using in-context learning for
ChatGPT. Our evaluation involves assessing ChatGPT's performance on five
official test sets in three different languages, along with three
document-level GEC test sets in English. Our experimental results and human
evaluations demonstrate that ChatGPT has excellent error detection capabilities
and can freely correct errors to make the corrected sentences very fluent,
possibly due to its over-correction tendencies and not adhering to the
principle of minimal edits. Additionally, its performance in non-English and
low-resource settings highlights its potential in multilingual GEC tasks.
However, further analysis of various types of errors at the document-level has
shown that ChatGPT cannot effectively correct agreement, coreference, tense
errors across sentences, and cross-sentence boundary errors. | Tao Fang, Shu Yang, Kaixin Lan, Derek F. Wong, Jinpeng Hu, Lidia S. Chao, Yue Zhang | 2023-04-04T12:33:40Z | http://arxiv.org/abs/2304.01746v1 | # Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation
###### Abstract
ChatGPT, a large-scale language model based on the advanced GPT-3.5 architecture, has shown remarkable potential in various Natural Language Processing (NLP) tasks. However, there is currently a dearth of comprehensive study exploring its potential in the area of Grammatical Error Correction (GEC). To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT. Our evaluation involves assessing ChatGPT's performance on five official test sets in three different languages, along with three document-level GEC test sets in English. Our experimental results and human evaluations demonstrate that ChatGPT has excellent error detection capabilities and can freely correct errors to make the corrected sentences very fluent, possibly due to its over-correction tendencies and not adhering to the principle of minimal edits. Additionally, its performance in non-English and low-resource settings highlights its potential in multilingual GEC tasks. However, further analysis of various types of errors at the document-level has shown that ChatGPT cannot effectively correct agreement, coreference, tense errors across sentences, and cross-sentence boundary errors.
## 1 Introduction
In recent years, Natural Language Processing (NLP) has made significant advancements due to the emergence of large language models (LLMs). Among the numerous LLMs available, Generative Pre-trained Transformer (GPT) models (Radford et al., 2019; Brown et al., 2020) have demonstrated high efficacy in various NLP tasks. Recently, ChatGPT1, an advanced language model developed by OpenAI, has gained significant attention from researchers and practitioners in the field of NLP (Qin et al., 2023; Luo et al., 2023; Liu et al., 2023; Hendy et al., 2023). Building upon InstructGPT (Ouyang et al., 2022), ChatGPT is a groundbreaking innovation in the field of conversational agents. It possesses the remarkable ability to understand complex instructions and generate responses that closely mimic human speech. In addition to its conversational abilities, ChatGPT has demonstrated impressive performance in various other NLP tasks, including machine translation (Jiao et al., 2023; Hendy et al., 2023; Peng et al., 2023), question answering (Bang et al., 2023), and text summarization (Yang et al., 2023).
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
When it comes to the Grammatical Error Correction (GEC) ability of ChatGPT, it appears that many individuals favor its use for text revision and refinement. Nonetheless, there is currently a scarcity of comprehensive research literature regarding ChatGPT's genuine error correction capabilities. To the best of our knowledge, only one study (Wu et al., 2023) conducts a preliminary evaluation of ChatGPT's performance on the English GEC task, using only 300 randomly sampled sentences from the CoNLL14 test set, yielding preliminary but insufficient results that leave ChatGPT's specific capabilities and advantages in the GEC task unclear. To address this research gap, the aim of this study is to comprehensively investigate the performance and potential of ChatGPT in GEC, as well as to compare it with state-of-the-art (SOTA) models. This includes evaluating its performance on both sentence-level and document-level GEC, designing prompts for zero-shot and few-shot scenarios, and assessing its effectiveness in English, non-English languages, and low-resource settings.
To explore the potential of ChatGPT for GEC, we first conduct a preliminary study to examine the effectiveness of our designed prompting methods in both **zero-shot** and **zero-shot chain-of-thought (CoT)** settings (Kojima et al., 2023). Our findings
indicate that incorporating CoT techniques can significantly enhance the performance of GEC on both the CoNLL14 and BEA19 test sets. Moreover, we propose the utilization of few-shot prompts in conjunction with CoT techniques to further enhance ChatGPT's performance through in-context learning (Brown et al., 2020). We evaluate ChatGPT's performance for sentence-level grammatical error correction (GEC) in three languages, covering five official test sets, using **zero-shot CoT**, **1-shot CoT**, **3-shot CoT**, and **5-shot CoT**. In addition, we assess its performance in **zero-shot CoT** and **1-shot CoT** scenarios for document-level GEC.
Furthermore, we conduct both automatic and manual human evaluations, comparing ChatGPT to mainstream SOTA models (GECToR and T5) and a widely-used commercial GEC system, Grammarly. These assessments provide valuable and reliable insights into the strengths and weaknesses of ChatGPT for the GEC task. We also conduct an analysis of error types in document-level GEC, revealing the reasons behind ChatGPT's suboptimal performance in this context.
Based on the results of our experiments and analyses, we summarize the following observations:
* ChatGPT exhibits a significant disparity with the current SOTA systems in terms of Precision and F\({}_{0.5}\), while its Recall performance is remarkably superior. This suggests that ChatGPT has the ability to detect errors in the text, but its elevated degree of modification freedom may result in superfluous changes.
* ChatGPT shows significant potential for GEC through zero-shot CoT and few-shot CoT strategies. Even on the JFLEG test set, it shows only a minimal difference from the SOTA models and even surpasses the human-level score in our evaluations, exhibiting human-like fluency.
* ChatGPT also demonstrates advantages in non-English languages and low-resource environments under the zero-shot CoT strategy, outperforming Transformer models trained from scratch. This highlights the potential of ChatGPT for multilingual GEC tasks.
* Our human evaluations indicate that ChatGPT performs with greater fluency in correcting grammatical errors, albeit with a noticeable tendency towards over-correction. Additionally, as sentence length increases, ChatGPT exhibits a tendency to follow minimal edits and the under-correction rate gradually increases.
* Upon analyzing the error types for document-level GEC, ChatGPT shows relatively poor performance in correcting agreement, coreference, and tense errors across sentences, as well as cross-sentence boundary errors. This could be due to ChatGPT's inherent limitations in processing excessively long sentences.
## 2 Experimental Setup
### Dataset
Our evaluation involves conducting sentence-level grammatical error correction (GEC) evaluations on a total of five official test sets across three languages: English, German, and Chinese. For English, we select the widely-used CoNLL14 (Ng et al., 2014) and BEA19 (Bryant et al., 2019) GEC test sets. We use the official test set of NLPCC18 (Zhao et al., 2018) and the official Falko-MERLIN (Boyd et al., 2014) test set for Chinese and German, respectively. The four aforementioned test sets only contain minimal edits that rectify the grammatical errors in a sentence, without necessarily improving the fluency or naturalness of the sentence. To assess the error correction abilities of GEC systems more accurately, we also evaluate on the JFLEG (Napoles et al., 2017) test set in English, which represents a range of language proficiency levels and utilizes comprehensive fluency edits to enhance the accuracy of the evaluation. Additionally, we conduct evaluations on three document-level test sets for the English language, following the methodology suggested by Yuan and Bryant (2021). These test sets include the FCE document-level test set (Yannakoudakis et al., 2011), the BEA19 document-level development
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline
**Lan.** & **Data** & **\#Sents** & **\#Docs** & **\#Doc.len.** \\ \hline \multirow{5}{*}{EN} & CoNLL14 & 1,312 & 50 & 26 \\ & JFLEG test & 747 & - & - \\ & BEA19 test & 4,477 & - & - \\ & BEA19 dev & - & 350 & 13 \\ & FCE test & - & 194 & 14 \\ \hline DE & Falko-MERLIN & 2,337 & - & - \\ \hline ZH & NLPCC18 test & 2,000 & - & - \\ \hline \end{tabular}
\end{table}
Table 1: The datasets used in the evaluation.
set (Bryant et al., 2019), and the CoNLL14 document-level test set (Ng et al., 2014). Table 1 presents the statistics of the datasets we used.
### Grammatical Error Correction Systems
Currently, among the publicly available GEC systems for sentence-level GEC tasks, the state-of-the-art (SOTA) Seq2Seq GEC model is (m)T5 and its variant models (Rothe et al., 2021), and the SOTA Seq2Edit GEC model is the GECToR model (Omelianchuk et al., 2020). In addition to these SOTA models, we include the Transformer-base (Vaswani et al., 2017) GEC model as our baseline model for comparison with the ChatGPT system. Since TagGEC (Stahlberg and Kumar, 2021) is the SOTA model on the JFLEG test set, we also include it in the comparison. Regarding the comparison of document-level GEC systems, we utilize the MultiEnc-dec model proposed by Yuan and Bryant (2021), which is presently considered the SOTA for the English document-level GEC task.
### ChatGPT system
OpenAI has recently developed several GPT-3.5 series models2, among which ChatGPT (gpt-3.5-turbo) stands out as the most advanced and specifically optimized for chat functionality. We assess ChatGPT's performance on the GEC task using the official API3.
Footnote 2: [https://platform.openai.com/docs/model-index-for-researchers](https://platform.openai.com/docs/model-index-for-researchers)
Footnote 3: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)
Footnote 4: [https://github.com/nusnlp/m2scorer](https://github.com/nusnlp/m2scorer)
Footnote 5: [https://github.com/chrisjbryant/errant](https://github.com/chrisjbryant/errant)
### Evaluation Method
**Sentence-Level Evaluation.** GEC systems are evaluated using automated metrics, which compare their output against gold-standard corrections from reference corpora. The selection of automatic evaluation metrics depends on their correlation with human judgments on different types of test sets. Therefore, following previous work (Katsumata and Komachi, 2020; Omelianchuk et al., 2020; Rothe et al., 2021), we utilize the M2 Scorer4 (Dahlmeier and Ng, 2012) to evaluate the performance of systems on the CoNLL14 English, Falko-MERLIN German, and NLPCC18 Chinese GEC tasks. For assessing the BEA2019 test set, we employ the official ERRANT Scorer5 (Bryant et al., 2019). Additionally, we adopt the GLEU metric6 (Napoles et al., 2016) to evaluate the JFLEG test set.
Footnote 6: [https://github.com/cnrjsbryant/doc-gec](https://github.com/cnrjsbryant/doc-gec)
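For reference, the \(\mathrm{F}_{0.5}\) measure reported by these scorers combines precision \(P\) and recall \(R\) while weighting precision twice as heavily as recall:

\[\mathrm{F}_{0.5}=\frac{(1+0.5^{2})\cdot P\cdot R}{0.5^{2}\cdot P+R}.\]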
**Document-Level Evaluation.** Based on the approach of Yuan and Bryant (2021), we manually process the raw data to produce document-level references, since the available references are only at the sentence level. To evaluate the performance of the ChatGPT system on the document-level GEC task, we adopt their method and utilize the official scorer of the BEA19 shared task (Bryant et al., 2019), the ERRANT Scorer (Bryant et al., 2017), for document-level GEC evaluation. For further details, the reader can refer to their repository7.
Footnote 7: [https://github.com/chrisjbryant/doc-gec](https://github.com/chrisjbryant/doc-gec)
**Human Evaluation.** Automatic evaluation may introduce bias in reflecting the true performance of a system, especially when there is a limited number of available references. Therefore, we conduct two types of human evaluations, namely automatic human evaluation and manual human evaluation, to more accurately assess the performance of the systems. To conduct automatic human evaluation, we follow the method of Bryant and Ng (2015) and Napoles et al. (2017) to compare the performance of a GEC system against human performance. They measure human performance on the CoNLL14 and JFLEG test sets, respectively, using 10 and 4 sets of human annotations. Furthermore, to further analyze the potential of the ChatGPT system, we invite three annotators to manually annotate the outputs of SOTA GEC systems from various aspects on the CoNLL14 test set. The detailed results are provided in Section 4.
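Beyond the command-line scorers, ERRANT also exposes a Python interface for extracting and classifying edits, which is convenient when inspecting system outputs during human evaluation. A minimal sketch following the ERRANT documentation (exact function names may differ across versions):

```python
import errant  # pip install errant; requires a spaCy English model

annotator = errant.load('en')
orig = annotator.parse('This are a sentence with error .')
cor = annotator.parse('This is a sentence with errors .')

# Align the sentence pair and classify each edit (e.g. R:VERB:SVA)
for e in annotator.annotate(orig, cor):
    print(e.o_str, '->', e.c_str, e.type)
```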
## 3 Experiments
### The Effect of Zero-Shot Prompts for ChatGPT
**Zero-shot Prompts Design.** We explore the zero-shot capabilities of ChatGPT in GEC from two aspects: zero-shot and zero-shot CoT settings. Table 2 presents the different designed prompting methods. Specifically, for zero-shot, we directly ask the ChatGPT system to identify and correct any grammatical errors in the sentence while keeping the original sentence structure unchanged as much as possible. However, we observe that with this normal prompt, ChatGPT tends to generate a multitude of explanations and the format of the resulting answers is often disorganized, necessitating
manual intervention. To alleviate this issue, we adopt the zero-shot CoT technique of Kojima et al. (2022) and design our own zero-shot CoT prompt for ChatGPT. Different from the zero-shot prompt, we use the special tag <input> Input Sentence </input> to indicate the input sentence, and tell ChatGPT that it should output the results using the format <output> Your Corrected Version </output>. We instruct ChatGPT to approach the task by comprehending the sentence as a whole before identifying and correcting any errors step by step. Afterward, we request ChatGPT to provide us with the corrected sentences directly, without any explanations. We show some examples of using the zero-shot and zero-shot CoT settings for ChatGPT in Appendix A.1.
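A minimal sketch of such a zero-shot CoT request through the official Python bindings is shown below; the instruction wording is illustrative rather than the exact prompt of Table 2, and the pre-1.0 interface of the openai package is assumed:

```python
import openai  # openai < 1.0 interface; reads OPENAI_API_KEY from the env

def correct(sentence: str) -> str:
    prompt = (
        "First understand the sentence as a whole, then identify and "
        "correct any grammatical errors in it step by step. Output only "
        "the corrected sentence in the format "
        "<output> Your Corrected Version </output> without explanations.\n"
        f"<input> {sentence} </input>"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic decoding for evaluation
    )
    text = resp["choices"][0]["message"]["content"]
    return text.split("<output>")[-1].split("</output>")[0].strip()

print(correct("She go to school every days ."))
```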
its **GLEU** score is very close to the SOTA score (falling only 1.2 points short) and it even surpasses the T5 large GEC system by 0.7 points. As we know, JFLEG is a dataset based on fluency editing, which can be used to evaluate a GEC system's ability to correct grammatical errors in a sentence while maintaining fluency. The promising scores obtained by ChatGPT demonstrate its potential for achieving fluency and naturalness in English sentence correction.
Additionally, comparing the performance of different shot CoT strategies on ChatGPT, the few-shot CoT prompting method outperforms the zero-shot CoT prompting method significantly, except for the 1-shot CoT prompting method on the JFLEG test set. This further demonstrates the effectiveness of the in-context learning method. Interestingly, providing more in-context examples does not necessarily lead to better performance for ChatGPT. Our experiments show that performance tends to decrease when the number of in-context examples exceeds five.
### ChatGPT Performance on non-English and Low-Resource GEC Tasks
We also evaluate non-English GEC tasks on the ChatGPT system by conducting experiments on the German Falko-MERLIN test and Chinese NLPCC18 test sets. It is worth noting that the German training dataset is much smaller than that of English and Chinese, which means that GEC in the German language can be considered a low-resource task. Table 5 presents the evaluation results of zero-shot CoT and few-shot CoT prompt
\begin{table}
\begin{tabular}{l|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{**CoNLL14**} & \multicolumn{3}{c|}{**BEA19 (test)**} & \multicolumn{1}{c}{**JFLEG (test)**} \\ \cline{2-8}
**System** & Pre. & Rec. & F\({}_{0.5}\) & Pre. & Rec. & F\({}_{0.5}\) & GLEU \\ \hline Transformer & 60.1 & 36.6 & 53.3 & 60.9 & 48.3 & 57.9 & 55.4 \\ TagGEC (Stahlberg and Kumar, 2021) & 72.8 & 49.5 & 66.6 & 72.1 & 64.4 & 70.4 & **64.7** \\ GECToR & **75.6** & 44.5 & 66.3 & **76.7** & 57.8 & 71.9 & 58.6 \\ T5 large & 72.2 & 51.4 & 66.8 & 73.4 & 67.0 & 72.0 & 62.8 \\ T5 xxl (Rothe et al., 2021) & - & - & **68.9** & - & - & **75.9** & - \\ \hline ChatGPT (zero-shot CoT) & 50.2 & 59.0 & 51.7 & 32.1 & **70.5*** & 36.1 & 61.4 \\ ChatGPT (1-shot CoT) & 52.0 & 58.1 & 53.1 & 34.6 & 69.7 & 38.4* & 59.7 \\ ChatGPT (3-shot CoT) & 51.3 & **62.4*** & 53.2* & 34.0 & 70.2 & 37.9 & 63.5* \\ ChatGPT (5-shot CoT) & 50.9 & 61.8 & 52.8 & 32.4 & 69.9 & 36.3 & 62.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zero-shot CoT and few-shot CoT evaluation results with ChatGPT on the English CoNLL14, BEA19 test, and JFLEG test sets. The comparison Transformer, GECToR, and T5-large GEC models are the ones we trained on the English CLang8 data, following Rothe et al. (2021). **Bold** values indicate the best scores across different systems. * denotes the best results among different shot paradigms for ChatGPT.
Figure 1: An illustration of all few-shot CoT prompts used for ChatGPT to perform Grammatical Error Correction.
ing methods. The results of German and Chinese GEC exhibit similar trends to those in English GEC, where ChatGPT outperforms the SOTA systems in **Recall** scores but scores significantly lower in \(\textbf{F}_{0.5}\) and **Precision**.
Furthermore, ChatGPT surpasses the performance of the Transformer base model trained from scratch on the evaluation metrics, suggesting that with proper prompting methods, ChatGPT can effectively perform GEC on low-resource and non-English tasks. This also highlights the potential of ChatGPT for application in multilingual GEC tasks. Another interesting finding is that the performance of zero-shot CoT and few-shot CoT strategies in Chinese is opposite to that in English and German. Specifically, the few-shot CoT prompting method performs comparatively worse than the zero-shot CoT prompting method in Chinese. We hypothesize that, on the one hand, ChatGPT is an LLM centered around English, and German and English belong to the same language family. Therefore, ChatGPT performs similarly on GEC tasks in both languages. On the other hand, the lexicon of Chinese GEC is much more complex than that of English. However, this does not mean that ChatGPT's performance on Chinese GEC is limited to this extent. It may be necessary to design more effective few-shot selection methods to guide ChatGPT and improve its performance.
### Document-Level GEC Performance with ChatGPT
Prior research has paid limited attention to the document-level GEC task, with most studies concentrating on sentence-level GEC using PLMs. As far as we know, only two studies have explored improving performance on English document-level GEC (Chollampatt et al., 2019; Yuan and Bryant, 2021), both of which trained CNN/Transformer architectures from scratch. In this section, we evaluate English document-level GEC on the ChatGPT system by conducting experiments on the CoNLL14 document-level test, FCE document-level test, and BEA19 document-level development sets, following Yuan and Bryant (2021). Regarding the design of prompts, we simply replace "sentence" with "document" in Table 2 and Figure 1.
Table 6 presents the evaluation results of the zero-shot CoT prompting method for ChatGPT. Similar to the sentence-level GEC tasks, ChatGPT exhibits good performance in terms of **Recall**, but its scores in \(\textbf{F}_{0.5}\) and **Precision** are significantly lower than those of document-level GEC models trained from scratch on Transformer architectures without leveraging knowledge from large language models. This poor performance raises concerns and warrants further investigation.
Furthermore, the large number of sentences in each document has impeded our exploration of the few-shot CoT prompting method. To address this issue, we only conduct 1-shot CoT experiments using the BEA19 development set, which has the
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**De (Falko-MERLIN)**} & \multicolumn{3}{c}{**Zh (NLPCC18)**} \\ \cline{2-7}
**System** & Pre. & Rec. & \(\mathrm{F}_{0.5}\) & Pre. & Rec. & \(\mathrm{F}_{0.5}\) \\ \hline Transformer & 58.8 & 34.3 & 51.5 & 31.2 & 20.2 & 28.1 \\ GECToR & - & - & - & 37.4 & 26.3 & 34.5 \\ mT5 large & **75.4** & 55.1 & 70.2 & **41.5** & 25.8 & **37.0** \\ mT5 xxl (Rothe et al., 2021) & - & - & 74.8 & - & - & - \\ gT5 xxl (Rothe et al., 2021) & - & - & **76.0** & - & - & - \\ \hline ChatGPT (zero-shot CoT) & 59.9 & 63.9 & 60.7 & 28.7 & 39.4 & 28.7* \\ ChatGPT (1-shot CoT) & 61.6 & 65.6 & 62.4 & 25.7 & **40.8*** & 27.8 \\ ChatGPT (3-shot CoT) & 63.1 & 65.3 & 63.5* & 26.4 & 39.4 & 28.3 \\ ChatGPT (5-shot CoT) & 61.5 & **65.9*** & 62.3 & 25.2 & 39.2 & 27.2 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Zero-shot CoT and few-shot CoT evaluation results with ChatGPT on the German Falko-MERLIN and Chinese NLPCC18 test sets. The comparison Transformer, GECToR, and mT5-large GEC models are the ones we trained on the German CLang8 and Chinese Lang8 datasets. **Bold** values indicate the best scores across different systems. * denotes the best results among different shot paradigms for ChatGPT.
fewest average number of sentences compared to the other two test sets. For the in-context example, we randomly select one sample from the FCE test set. As shown in Table 7, it appears that the 1-shot CoT strategy is not effective for document-level GEC. While there is a noticeable increase in **Recall**, the \(\textbf{F}_{0.5}\) and **Precision** scores still decrease significantly. In light of these results, we speculate that ChatGPT may not be sufficiently capable of processing excessively long inputs, which require a high level of coherence and consistency between sentences. In the subsequent analysis in Section 4.3, we examine ChatGPT's shortcomings from the perspective of correcting different types of errors.
## 4 Human Evaluation and Analysis
### Automatic Human Evaluation
Bryant and Ng (2015) were the first to attempt to measure human performance on the CoNLL14 test set using 10 references. Specifically, they calculate the performance of each annotator by comparing their corrections to those of the other 9 annotators, and the average of these F\({}_{0.5}\) scores is taken as the final score for human-level performance. To perform an automatic human evaluation of ChatGPT, we adopt the approach used by Bryant and Ng (2015) and Napoles et al. (2017) to compare the performance of different GEC systems with that of humans on the CoNLL14 and JFLEG test sets. The reported human-level scores on these test sets are 72.58 F\({}_{0.5}\) and 62.37 GLEU, respectively.
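For reference, the F\({}_{0.5}\) values in this comparison combine precision and recall with \(\beta=0.5\), weighting precision twice as heavily as recall. A minimal helper implementing the standard F\({}_{\beta}\) formula (our own sketch; the input values in the usage line are illustrative):

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    # Standard F-beta score; beta < 1 emphasizes precision over recall.
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.70, 0.40), 4))  # illustrative precision/recall values
```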
Table 8 shows the automatic human evaluation results. We observe some interesting phenomena. Firstly, the F\({}_{0.5}\) score of ChatGPT on the CoNLL14 test set is lower than that of the Transformer base model in Table 4, but the human evaluation shows a significant improvement for ChatGPT. This suggests that existing evaluation methods may underestimate the performance of ChatGPT on sentence-level GEC tasks. Although the best few-shot CoT strategy for ChatGPT achieves a human evaluation score far below those of the two mainstream SOTA models, it is only 0.85 F\({}_{0.5}\) points lower than the human-level evaluation score, indicating that ChatGPT has great potential in the
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**CoNLL14**} & \multicolumn{3}{c|}{**FCE (test)**} & \multicolumn{3}{c}{**BEA19 (dev)**} \\ \cline{2-10}
**System** & Pre. & Rec. & F\({}_{0.5}\) & Pre. & Rec. & F\({}_{0.5}\) & Pre. & Rec. & F\({}_{0.5}\) \\ \hline SingEnc & 59.8 & 27.3 & 48.3 & 61.6 & 45.0 & 57.4 & 57.0 & 43.2 & 53.5 \\ MultiEnc-enc & 63.2 & 28.0 & 50.5 & 65.6 & 42.7 & 59.2 & 62.1 & 41.7 & 56.5 \\ MultiEnc-dec & **64.6** & 28.7 & **51.6** & **65.4** & 44.2 & **59.7** & **62.6** & 40.7 & **56.6** \\ \hline ChatGPT(zero-shot CoT) & 42.3 & **40.8** & 42.0 & 46.3 & **49.9** & 47.0 & 47.7 & **50.8** & 48.3 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Zero-shot CoT evaluation results with ChatGPT on the English CoNLL14 document-level test, FCE document-level test, and BEA19 document-level development sets. The comparison document-level GEC results for SingEnc, MultiEnc-enc, and MultiEnc-dec models are reported by Yuan and Bryant (2021), where MultiEnc-dec is the current SOTA model. **Bold** values indicate the best scores across different systems.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & \multicolumn{3}{c}{**BEA19 (dev)**} \\ \cline{2-4}
**Method** & Pre. & Rec. & F\({}_{0.5}\) \\ \hline ChatGPT (zero-shot CoT) & **47.7** & 50.8 & **48.3\({}^{\dagger}\)** \\ ChatGPT (1-shot CoT) & 42.5 & **55.6** & 44.6 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance of zero-shot CoT and 1-shot CoT prompting methods for ChatGPT on the BEA19 document-level development set. **Bold** values indicate the best scores. \({}^{\dagger}\) marks a statistically significant improvement over the 1-shot CoT prompting method (\(p<0.01\)).
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & **CoNLL14** & **JFLEG** \\ \cline{2-3}
**System** & F\({}_{0.5}\) & GLEU \\ \hline Human & 72.58 & 62.37 \\ \hline Transformer & 66.97 & 55.41 \\ GECToR & 80.49 & 58.61 \\ T5 large & **81.19** & 62.82 \\ \hline ChatGPT (0-shot) & 69.74 & 61.42 \\ ChatGPT (1-shot) & 71.55 & 59.65 \\ ChatGPT (3-shot) & 71.73* & **63.52*** \\ ChatGPT (5-shot) & 70.66 & 62.53 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Zero-shot CoT and few-shot CoT prompting methods for ChatGPT in comparison to automatic human evaluation performance. **Bold** values indicate the best scores across different systems. * denotes the best results among different shot paradigms for ChatGPT.
sentence-level GEC task.
Furthermore, the human evaluation performance of ChatGPT on the JFLEG test set is impressive. The best few-shot CoT prompting method not only exceeds the human-level evaluation score by 1.15 GLEU points but also outperforms the strong baseline T5 large model by 0.7 GLEU points. This observation suggests that the sentences corrected by ChatGPT exhibit a high level of fluency and naturalness.
### Manual Human Evaluation
To explore ChatGPT's true performance on the CoNLL14 test set, we carry out a manual human evaluation. In addition to comparing with two mainstream SOTA GEC systems (GECToR, T5 large), we also consider the widely-used commercial system, Grammarly8, renowned for its ability to detect and correct a range of errors in English texts, such as spelling, punctuation, grammar, and word choice, using features like Correctness, Clarity, Engagement, and Delivery. In this evaluation, we only use its free and open Correctness feature to correct sentences. We select the 50 longest, 50 medium-length, and 50 shortest sentences from the CoNLL14 test set and invite three postgraduate students with international study experience to conduct a manual evaluation. Our reference is the widely-used _official-2014.combined.m2_ version (Ng et al., 2014) of CoNLL14, and we obtain the final evaluation by averaging the scores from the three students. The evaluation focuses on four main categories: Fluency, Minimal Edits, Over-Correction, and Under-Correction. Some examples are shown in Appendix A.2.
Footnote 8: [https://app.grammarly.com](https://app.grammarly.com)
**Fluency** refers to ensuring that the corrected sentence conforms to linguistic conventions, is easily comprehensible, flows naturally, and preserves the intended meaning of the original sentence while correcting the grammatical errors. We define a 1-5 rating scale, where 1 represents the lowest and 5 the highest level of fluency. From the results shown in Figure 2(a), it can be seen that ChatGPT exhibits significantly better fluency in correcting grammatical errors in short, medium, and long sentences compared to the other three mainstream systems. In addition, we observe that among the three mainstream systems, Grammarly corrects grammatical errors in short sentences more fluently, while GECToR performs better on long sentences. For sentences of medium length, the three systems perform similarly.
**Minimal Edits** The CoNLL14 test set includes grammatically incorrect sentences that were created by making minimal edits to grammatically correct sentences. We compare the outputs of ChatGPT and the three other systems to the reference sentences in order to determine whether they use minimal edits. The annotators assign a value of 1 to indicate that a sentence follows minimal edits, and 0 to indicate that it does not. We compute the proportion of minimal edits for sentences of different lengths. The results are shown in Figure 2(b).
Figure 2: The results of manual human evaluations on four criteria: Fluency, Minimal Edits, Over-Correction, and Under-Correction. The statistical analysis is based on the average scores provided by three evaluators.
Compared to the other three systems, ChatGPT is not particularly inclined to use minimal edits, especially in short sentences. However, as sentence length increases, ChatGPT becomes more willing to follow minimal edits. Furthermore, Grammarly exhibits a lower level of conformity to minimal edits in short sentences compared to the two mainstream SOTA systems. For medium and long sentences, GECToR shows a lower level of minimal edits compared to T5 large. Interestingly, comparing with Figure 2(a), we find that the less a system adheres to minimal edits, the better the fluency of its generated sentences.
**Over-Correction** Since there can be multiple ways to correct a sentence, a correction that differs from the references does not necessarily mean the sentence has not been corrected. Over-Correction evaluates a system's ability to produce correct results beyond what is indicated by the reference correction. We again ask the annotators to assign a value of 1 to indicate that a sentence is an over-correction, and 0 to indicate that it is not. We calculate the proportion of Over-Correction for sentences of varying lengths; the results are depicted in Figure 2(c). ChatGPT surpasses the other three GEC systems in its capacity to over-correct sentences of all lengths, indicating that it corrects sentences freely and diversely, consistent with the good fluency demonstrated in Figure 2(a). Additionally, we observe that Grammarly also exhibits over-correction compared to the GECToR and T5 large models.
**Under-Correction** refers to cases where a GEC system fails to correct certain grammatical errors present in the original sentence. We ask the annotators to check for any errors in the sentences generated by the GEC systems that are not identified or corrected. As with Over-Correction, 1 indicates a sentence is an under-correction, and 0 indicates that it is not. Interestingly, in contrast to Over-Correction, ChatGPT exhibits fewer Under-Corrections than the other systems for short, medium, and long sentences, as shown in Figure 2(d). This suggests that ChatGPT has great potential for GEC tasks. Furthermore, the T5 large GEC system is prone to Under-Correction, which highlights that a higher F\({}_{0.5}\) score does not imply a more perfect GEC system. We also observe that as sentence length increases, ChatGPT exhibits a stronger tendency towards Under-Correction, which may explain its poor performance on document-level GEC tasks.
Through manual analysis, we conclude that ChatGPT generates corrected sentences that are highly fluent, which may be attributed to its strong diversity and free-generation ability.
### Fine-grained Error Analysis for Document-Level GEC
To evaluate ChatGPT's ability to correct various types of errors in document-level GEC, we conduct a study based on Yuan and Bryant (2021), using the ERRANT toolkit (Bryant et al., 2017) to analyze the results on the BEA19 document-level development set. The results are presented in Table 9, comparing the performance of ChatGPT and the MultiEnc-dec document-level GEC system on POS-based fine-grained error types. ChatGPT demonstrates strong performance in addressing errors related to spelling, nouns, and possessive nouns, achieving improvements of **6.0**, **7.1**, and **0.8**\(\mathbf{F}_{0.5}\) points, respectively. Errors that demand an understanding of agreement, coreference, or tense across multiple sentences are typically considered document-level errors, and here ChatGPT performs significantly worse than the MultiEnc-dec model, for example on subject-verb agreement, verb tense and form, noun number, and pronouns (VERB:SVA **-5.1**\(\mathbf{F}_{0.5}\), VERB:TENSE **-3.7**\(\mathbf{F}_{0.5}\), VERB:FORM **-15.5**\(\mathbf{F}_{0.5}\), NOUN:NUM **-6.0**\(\mathbf{F}_{0.5}\), PRON **-7.7**\(\mathbf{F}_{0.5}\)). Moreover, ChatGPT is even weaker in handling cross-sentence boundary errors, as evidenced by declines of **14.8**\(\mathbf{F}_{0.5}\) in CONJ and **18.3**\(\mathbf{F}_{0.5}\) in PUNCT. We speculate that this underperformance could be attributed to ChatGPT's limitations in contextual memory, consistency, and coherence across sentences at the document level.
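A per-type breakdown of this kind can be reproduced with ERRANT's command-line tools. The sketch below is our own illustration: the file names are placeholders, and we assume the standard `errant_parallel` and `errant_compare` entry points are installed.

```python
import subprocess

# Align each system's corrected text with the original source, producing
# an M2 file of automatically classified edits.
subprocess.run(["errant_parallel", "-orig", "bea19_dev.orig.txt",
                "-cor", "chatgpt.out.txt", "-out", "chatgpt.m2"], check=True)

# Score the hypothesis edits against the reference edits; "-cat 3" requests
# the fine-grained, POS-based error-type breakdown used in Table 9.
subprocess.run(["errant_compare", "-hyp", "chatgpt.m2",
                "-ref", "bea19_dev.m2", "-cat", "3"], check=True)
```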
## 5 Conclusion
In this study, we undertake a thorough examination of ChatGPT's performance and potential in GEC and compare it with current SOTA models. To the best of our knowledge, we are the first to design zero-shot CoT and few-shot CoT settings for ChatGPT in the GEC task. Our experiments involve evaluating ChatGPT on five official test sets in three different languages, as well as on three English document-level GEC test sets. The experimental results demonstrate that ChatGPT has strong error detection capabilities and can generate sentences with human-like fluency, despite its poor performance in precision and F\({}_{0.5}\) scores. Additionally, we find that ChatGPT demonstrates advantages in multilingual and low-resource settings, showing great potential for multilingual GEC. The results of further human evaluations once again confirm that ChatGPT is a highly fluent GEC system and demonstrate that its performance can be better leveraged through the use of chain-of-thought prompts. However, our analysis of ChatGPT's ability to correct various types of errors in document-level GEC indicates that it performs poorly on most error types, such as agreement, coreference, and tense errors across sentences, as well as cross-sentence boundary errors.
### Limitations
Our study has some limitations as follows:
* Our current work focuses solely on exploring ChatGPT's potential performance in GEC, without investigating other mainstream LLMs, such as other GPT-3.5 series models or the latest GPT-4 model from OpenAI. We plan to include these models in our future work.
* Our study highlights that ChatGPT exhibits higher fluency and tends to make more casual edits in GEC tasks. However, in certain practical applications, such as language education, users may require only minimal editing corrections [13]. Therefore, exploring ChatGPT's potential in this respect through the design of prompting methods is also a worthwhile pursuit.
* For the few-shot settings using in-context learning, we follow [23] in randomly selecting examples from the development set,
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c} \hline \hline & \multicolumn{3}{c|}{**MultiEnc-dec**} & \multicolumn{3}{c|}{**ChatGPT(zero-shot)**} & \multicolumn{3}{c|}{**ChatGPT(1-shot)**} & \multicolumn{1}{c}{**Differ.**} \\ \cline{2-11}
**Error-Type** & Pre. & Rec. & F\({}_{0.5}\) & Pre. & Rec. & F\({}_{0.5}\) & Pre. & Rec. & F\({}_{0.5}\) & F\({}_{0.5}\) \\ \hline ADJ & 44.4 & 15.6 & **32.5** & 34.9 & 20.2 & 30.4 & 32.2 & 32.5 & 32.3 & -0.2 \\ ADJ:FORM & 100.0 & 25.0 & 62.5 & 60.0 & 75.0 & 62.5 & 56.5 & 81.3 & 60.2 & 0.0 \\ ADV & 42.1 & 17.9 & **33.2** & 18.7 & 17.7 & 18.5 & 18.6 & 27.7 & 19.9 & -13.3 \\ CONJ & 46.2 & 14.3 & **31.9** & 13.2 & 22.5 & 14.4 & 15.0 & 37.5 & 17.1 & -14.8 \\ CONTR & 85.0 & 58.6 & **78.0** & 38.7 & 38.7 & 38.7 & 28.6 & 51.6 & 31.4 & -39.3 \\ DET & 63.7 & 51.3 & **60.8** & 57.9 & 59.1 & 58.1 & 54.1 & 66.0 & 56.1 & -2.7 \\ MORPH & 70.6 & 33.3 & **57.7** & 44.6 & 55.6 & 46.5 & 45.2 & 65.6 & 48.2 & -9.5 \\ NOUN & 38.1 & 13.3 & 27.7 & 41.2 & 21.5 & **34.8** & 31.5 & 26.9 & 30.4 & **+7.1** \\ NOUN:INFL & 100.0 & 75.0 & **93.8** & 88.9 & 72.7 & 85.1 & 90.9 & 90.9 & 90.9 & -2.9 \\ NOUN:NUM & 74.2 & 47.6 & **66.7** & 59.1 & 60.2 & 59.3 & 58.1 & 74.5 & 60.7 & -6.0 \\ NOUN:POSS & 63.0 & 51.8 & 60.4 & 68.4 & 65.0 & 67.7 & 69.0 & 66.7 & **68.5** & **+0.8** \\ ORTH & 71.2 & 59.9 & **68.6** & 64.3 & 76.1 & 66.4 & 63.3 & 78.6 & 65.8 & -2.2 \\ OTHER & 38.1 & 22.5 & **33.5** & 26.5 & 29.8 & 27.1 & 18.8 & 33.5 & 20.6 & -6.4 \\ PART & 58.5 & 40.7 & 53.8 & 71.4 & 39.7 & **61.6** & 60.3 & 55.6 & 59.3 & -7.8 \\ PREP & 64.7 & 41.9 & **58.3** & 56.6 & 43.2 & 53.3 & 49.4 & 52.1 & 49.9 & -5.0 \\ PRON & 55.7 & 43.1 & **52.6** & 43.1 & 53.9 & 44.9 & 33.6 & 56.9 & 36.6 & -7.7 \\ PUNCT & 70.1 & 47.3 & **63.9** & 42.2 & 63.2 & 45.2 & 42.8 & 62.2 & 45.6 & -18.3 \\ SPELL & 86.2 & 53.9 & 76.9 & 81.5 & 89.3 & **82.9** & 81.3 & 88.6 & 82.7 & **+6.0** \\ VERB & 44.8 & 21.6 & **36.9** & 39.5 & 21.9 & 34.0 & 31.5 & 31.2 & 31.5 & -2.9 \\ VERB:FORM & 71.1 & 60.9 & **68.8** & 50.5 & 68.6 & 53.3 & 45.5 & 68.6 & 48.8 & -15.5 \\ VERB:INFL & 80.0 & 66.7 & 76.9 & 80.0 & 66.7 & 76.9 & 57.1 & 66.7 & 58.8 & 0.0 \\ VERB:SVA & 72.4 & 74.5 & **72.8** & 64.7 & 82.9 & 67.7 & 61.8 & 84.3 & 65.3 & -5.1 \\ VERB:TENSE & 63.5 & 44.8 & **58.6** & 58.1 & 44.5 & 54.8 & 55.4 & 53.2 & 54.9 & -3.7 \\ WO & 64.7 & 37.5 & **56.5** & 41.0 & 46.6 & 42.0 & 36.2 & 47.7 & 38.0 & -14.5 \\ \hline Total & 62.6 & 40.7 & **56.6** & 47.7 & 50.8 & 48.3 & 42.5 & 55.6 & 44.63 & -8.3 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Fine-grained error-type performance of **ChatGPT** and **MultiEnc-dec** systems on the BEA19 development set. **Differ.** refers to the difference in F\({}_{0.5}\) scores between the two systems. **Bold** values indicate the best F\({}_{0.5}\) scores.
without specifically designing corresponding examples for each sentence. It is worth investigating whether selecting better examples could help improve ChatGPT's performance in GEC.
## Acknowledgments
This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/0070/2022/AMJ, FDCT/060/2022/AFJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST). This work was performed in part at SICC which is supported by SKL-IOTSC, and HPCC supported by ICTO of the University of Macau.
|
2303.04841 | The dynamic nature of trust: Trust in Human-Robot Interaction revisited | The role of robots is expanding from tool to collaborator. Socially assistive
robots (SARs) are an example of collaborative robots that assist humans in the
real world. As robots enter our social sphere, unforeseen risks occur during
human-robot interaction (HRI), as everyday human space is full of
uncertainties. Risk introduces an element of trust, so understanding human
trust in the robot is imperative to initiate and maintain interactions with
robots over time. While many scholars have investigated the issue of
human-robot trust, a significant portion of that discussion is rooted in the
human-automation interaction literature. As robots are no longer mere
instruments, but social agents that co-exist with humans, we need a new lens to
investigate the longitudinal dynamic nature of trust in HRI. In this position
paper, we contend that focusing on the dynamic nature of trust as a new inquiry
will help us better design trustworthy robots. | Jimin Rhim, Sonya S. Kwak, Angelica Lim, Jason Millar | 2023-03-08T19:20:11Z | http://arxiv.org/abs/2303.04841v1 | # The dynamic nature of trust: Trust in Human-Robot Interaction revisited
###### Abstract
The role of robots is expanding from tool to collaborator. Socially assistive robots (SARs) are an example of collaborative robots that assist humans in the real world. As robots enter our social sphere, unforeseen risks occur during human-robot interaction (HRI), as everyday human space is full of uncertainties. Risk introduces an element of trust, so understanding human trust in the robot is imperative to initiate and maintain interactions with robots over time. While many scholars have investigated the issue of human-robot trust, a significant portion of that discussion is rooted in the human-automation interaction literature. As robots are no longer mere instruments, but social agents that co-exist with humans, we need a new lens to investigate the longitudinal dynamic nature of trust in HRI. In this position paper, we contend that focusing on the dynamic nature of trust as a new inquiry will help us better design trustworthy robots.
Socially assistive robots (SARs), human-robot interaction, trust, trustworthy robotics
## 1 Introduction
Conventional robotic arms were placed in manufacturing settings to conduct repetitive, dull, and dangerous tasks for humans. Typically, only trained users had direct access to such robots due to safety concerns. These days, however, more and more robotic arms are being deployed to interact with laypeople. For instance, barista robots make and serve coffee for customers, and assistive robots interact with patients with disabilities [7]. While these robot arms provide convenience and novel user experiences, and tend to be smaller and less physically dangerous, they can still create unexpected hazards or confusion when interacting with untrained human users: the wide range of movements of a robotic arm makes it difficult for humans to fully anticipate its behavior. As this example indicates, we are witnessing an increase in the application of robots in our social spheres, where humans interact with robots in everyday tasks; robots are adopted in the mobility [24], healthcare [22], entertainment [1], and education [4] sectors, and act as social assistants for older adults [27]. The expansion of robot roles from tools to teammates poses novel questions regarding the co-existence of humans and robots, and means that a broader range of lay human users will face various types and levels of risk or hazard during HRI. Risk and uncertainty inevitably involve trust [15], so understanding the longitudinal aspects of human trust in robot partners is imperative to initiate and maintain relationships with robots over time.
Many scholars have highlighted that a comprehensive conceptualization of trust is essential when designing robots that interact socially with humans because trust is integral for a user's acceptance and inclusion of the robot into their social sphere [23]. That is, a user is unlikely to use a robot if they believe that the robot is untrustworthy.
While trust can induce cooperation between humans and robots, forming well-placed trust is extremely difficult, and misaligned trust toward a robot leads to its misuse or disuse. For instance, people tend to misuse a robot when they over-trust it or disuse a robot when they under-trust it [12]. Several studies have shown the caveats of misplacing trust in robots: one study showed that participants mindlessly followed instructions from a robot during potentially dangerous situations even when the robot made risky and unsafe suggestions [25], potentially endangering the users. While the consequences of disusing a robot tend to be less directly harmful than those of misuse, disuse still has negative impacts: users may not take advantage of the potential benefits a robot could provide [32]. These cases show that the goal of successful human-robot trust formation is a well-calibrated trust during HRI.
A broad spectrum of literature has developed with increasing emphasis on the importance of human-robot trust. Topics in human-robot trust research include trust measurement [21], trust repair strategies [3, 9], and trust modelling [29]. However, the empirical and theoretical literature on trust in robots is often rooted in work on automated systems [12, 19]. While studies of trust in automation provide profound insights for understanding trust in robots, trust in interactions with SARs has different implications than trust in interactions with automated machines. A robot is distinguished from an automated machine or computer interface by its embodiment and corporeality, and its multi-modal interaction can provide richer social interactions with humans [13]. Such dissimilarities highlight the importance of grounding human-robot trust research beyond the literature on human-automation trust. As the roles and expectations of robots change from tools to collaborators, we need to revisit the notion of trust in a way that is appropriate for the context of HRI. Next, we outline the gaps in the literature that reveal the absence of an adequate conceptualization of trust in current HRI deliberations. We then identify different layers of the dynamic nature of trust over time (i.e., the longitudinal aspects of trust and trustworthiness) during HRI to open up discussions of future research directions.
## 2 Gaps in Current Human-Robot Trust Research
What does it mean for humans to trust a robot? While an extensive literature has discussed various aspects that impact human-robot trust over the past two decades, there is no clear agreement on the definition of trust in HRI. Further, there is a discrepancy between the current public perception of trust toward robots and what is articulated in research. While many studies treat trust towards robots as something that has already been established, many contemporary reports indicate that the public is reluctant to trust robots, leaving it uncertain whether people trust robots at all. In response, we posit that the ill-posed nature of the current discussion of human-robot trust in research hinders the development of well-placed trust between humans and robots. In this section, we outline several factors that may impede concretizing the notion of trust during HRI and shed light on how these challenges could be overcome.
### A Lack of Consensus in the Definition of Trust in HRI
According to Salem et al., 2015 [26], the concept of trust is still an ongoing exploration in HRI due to the sheer complexity of the concept itself. One challenge that makes the concept of trust elusive is that there are multiple synonyms for trust. Consider the interchangeability of the terms e.g., reliability, faith, confidence, belief, vulnerability, certainty, and credit with trust. Another challenge is the difficulty of coining the term trust in the specific context of HRI. Yet, another challenge is the difficulty of coining the term trust in the specific context of HRI. It is widely accepted that different disciplines define and analyze the concept of trust differently [5]. In parallel, the focal point of the trust discussions in human-machine interaction differs depending on the medium and the context of its deployment. For instance, the literature on trust in human-automation emphasizes the reliability, robustness, predictability, and safety of automated systems as core factors that shape human trust [12]. This outlook portrays the role of the humans as operators who expect machines to function predictably. On the other hand, the discussion on trust for robots that interact socially with humans includes considerations of user characteristics (e.g., social factors, user propensity) [11] on top of the reliability of the robot's functionality. The expansion of robot roles and
user expectations in social contexts implies that human-robot trust discussions should avoid monolithic perspectives and should rather reflect the multifaceted nature of trust in HRI.
### Gap between robot-centric and human-centric HRI
HRI's treatment of trust appears to be divided into two main categories: human-centric and robot-centric perspectives [14, 16]. Human-centered HRI research investigates themes such as the design or usability of robots, often through user studies, whereas robot-centered HRI research investigates algorithms and engineering innovations that improve the overall performance of the robot [14]. In terms of trust studies, human-centric HRI explores the factors that affect human users' perception of trust (e.g., demographics, personality traits, attitudes toward robots, and propensity to trust). One such study investigated the impact of erroneous robot behaviour on users' subjective perception and acceptance of the robot's trustworthiness [26]. An example of a robot-centric HRI trust study is one that developed the Online Probabilistic Trust Inference Model (OPTIMo), a widely adopted computational model that estimates near real-time human trust toward a robot by observing human behaviours [31]. As these examples indicate, despite the common goal of shaping human-robot trust, the research approaches differ depending on the perspective taken. Accordingly, we need a context-dependent understanding of trust that matches the research question at hand.
### Gap between viewing human-robot trust as a variable versus as a process
Trust in HRI is a multifaceted concept with many layers; it is also a dynamic process that fluctuates over time. The temporal trust trajectory includes the following phases: development, dissolution, and restoration [18]. However, many empirical studies of human-robot trust treat trust as a time-independent variable, often measuring the level of trust through surveys and experiments [6]. Studies that treat trust as an independent variable tend to focus on the benefits of trust [17], such as how trust facilitates cooperation with humans or reduces uncertainties during HRI [30]. Studies that view trust as a dependent variable focus on factors that directly impact trust [17], such as users' attitudes towards robots, the operator's performance, and the failure rates of robots [10]. Conventional study methods that treat trust during HRI as a variable may not fully capture its temporal and dynamic character. If we envision longitudinal interactions with robots, trust should be explained as a process rather than a variable.
## 3 The dynamic nature of trust during HRI
The traditional discussion of trust during HRI has yet to fully convey the dynamic nature of trust between the human who trusts and the robot that is trusted. As elaborated in the previous section, human-robot trust does not have a clearly established definition, which implies that a new conception needs to be formed through inferences [28]. This section provides an account of how different layers of the dynamic nature of trust should be considered when establishing a conception of human-robot trust, drawing on discussions from multidisciplinary research fields.
First, the dynamic act of trusting relies upon social interaction. As previously noted, most trust discussions are based on human-automation interaction, where trust formation centers on the robustness of the machine's performance. This view treats trust as instrumental or consequential. However, as robots become collaborators that conduct various social tasks alongside humans, it is likely that people will not only expect functional success from robots but also expect robots to fulfill social expectations (e.g., how empathetic the robot is, how well the robot respects social norms). As such, it is imperative for HRI to treat trust as a social and emotional act that considers the relationship between humans and robots. A human-human interaction study on the social dynamics of trust illustrates this distinction: "people's decision to trust is best predicted by the emotions they attach to the action itself rather than by emotions they attach to possible outcomes" (p. 692) [8]. An HRI study likewise found that how people perceive a robot is more significant for trust formation than the robot's performance alone [26]. To that end, human-robot trust theories should also include the
dynamic social aspects of human nature, including factors such as context- and time-dependent social norms, relationship status, and emotions.
Second, HRI should address the temporal dynamics of trust. Temporal dynamics refer to the fluctuation of human trust towards a robot over time; the factors that impact trust also differ over time. Stage models of trust [18] delineate how different factors build or erode trust at different stages [2, 20]. Calculus-based trust is most prevalent in the initial trust formation stage [20]: the most critical facet at this stage is that the trustor (the human) can be assured that the trustee (the robot) will act according to their expectations; thus, reliability and dependability are integral. The next stage relies heavily on knowledge-based trust: trust is based on accumulated knowledge of the trustee's ability over repeated interactions. The third stage involves identification-based trust, where the trustor expects the trustee's values or interests to align with their own. This means that calculus-based trust is less important at the later stages; rather, the perception of the trustee's trustworthiness becomes more nuanced. As this stage model indicates, it is important to consider that human-robot trust may fluctuate over repeated interactions, and different factors will have different implications depending on the stage of trust formation.
## 4 Conclusion
The current discussion of trust in robotics tends to conceive of trust as a unitary, fully established, and stable value. However, careful exploration reveals that trust is a dynamic concept in a state of flux. This position paper highlights the versatile and dynamic nature of trust that should be considered to enable realistic discussions of trust during HRI. In summary, trust is not a stable state during HRI; it is dynamic in both its temporal and social aspects. One significant hindrance to conceptualizing human-robot trust as dynamic in nature is that most empirical HRI studies of trust are based on one-shot study designs. Consequently, existing studies do not allow for the examination of the growth or decline of trust over time, and they inevitably tend to highlight calculus-based trust. This approach naturally eliminates opportunities to examine the social dynamics of trust during HRI. While it is essential for robots to behave in trustworthy ways in order to initiate interaction, considering only the early stages of trust may not capture the full process of trust development during HRI. As such, we propose that the design of trustworthy robots must consider a variety of factors across the stages of trust formation, and that longitudinal studies that model trust trajectories are necessary for understanding the dynamic nature of trust comprehensively.
###### Acknowledgements.
We would like to thank Dr. Jung-Mi Park for sharing some examples of robots that are adopted in the wild.
|
2305.10562 | Regular Graphs of Degree at most Four that Allow Two Distinct
Eigenvalues | For an $n \times n$ matrix $A$, let $q(A)$ be the number of distinct
eigenvalues of $A$. If $G$ is a connected graph on $n$ vertices, let
$\mathcal{S}(G)$ be the set of all real symmetric $n \times n$ matrices
$A=[a_{ij}]$ such that for $i\neq j$, $a_{ij}=0$ if and only if $\{i,j\}$ is
not an edge of $G$. Let $q(G)={\rm min}\{q(A)\,:\,A \in \mathcal{S}(G)\}$.
Studying $q(G)$ has become a fundamental sub-problem of the inverse eigenvalue
problem for graphs, and characterizing the case for which $q(G)=2$ has been
especially difficult. This paper considers the problem of determining the
regular graphs $G$ that satisfy $q(G)=2$. The resolution is straightforward if
the degree of regularity is $1, 2,$ or $3$. However, the $4$-regular graphs
with $q(G)=2$ are much more difficult to characterize. A connected $4$-regular
graph has $q(G)=2$ if and only if either $G$ belongs to a specific infinite
class of graphs, or else $G$ is one of fifteen $4$-regular graphs whose number
of vertices ranges from $5$ to $16$. This technical result gives rise to
several intriguing questions. | Wayne Barrett, Shaun Fallat, Veronika Furst, Shahla Nasserasr, Brendan Rooney, Michael Tait | 2023-05-17T20:42:31Z | http://arxiv.org/abs/2305.10562v1 | # Regular Graphs of Degree at most Four that Allow Two Distinct Eigenvalues
###### Abstract
For an \(n\times n\) matrix \(A\), let \(q(A)\) be the number of distinct eigenvalues of \(A\). If \(G\) is a connected graph on \(n\) vertices, let \(\mathcal{S}(G)\) be the set of all real symmetric \(n\times n\) matrices \(A=[a_{ij}]\) such that for \(i\neq j\), \(a_{ij}=0\) if and only if \(\{i,j\}\) is not an edge of \(G\). Let \(q(G)=\min\{q(A)\,:\,A\in\mathcal{S}(G)\}\). Studying \(q(G)\) has become a fundamental sub-problem of the inverse eigenvalue problem for graphs, and characterizing the case for which \(q(G)=2\) has been especially difficult. This paper considers the problem of determining the regular graphs \(G\) that satisfy \(q(G)=2\). The resolution is straightforward if the degree of regularity is \(1,2\), or \(3\). However, the \(4\)-regular graphs with \(q(G)=2\) are much more difficult to characterize. A connected \(4\)-regular graph has \(q(G)=2\) if and only if either \(G\) belongs to a specific infinite class of graphs, or else \(G\) is one of fifteen \(4\)-regular graphs whose number of vertices ranges from \(5\) to \(16\). This technical result gives rise to several intriguing questions.
**Keywords** inverse eigenvalue problem for graphs, orthogonality, \(q\)-parameter, regular graphs.
**AMS subject classification** 05C50, 15A29, 15A18.
## 1 Introduction
For any connected graph \(G\) on \(n\) vertices, let \(\mathcal{S}(G)\) denote the set of all real symmetric \(n\times n\) matrices \(A=[a_{ij}]\) where \(a_{ij}=0\) if and only if \(\{i,j\}\) is not an edge in \(G\), and the entries \(a_{ii}\) can take any value. The _inverse eigenvalue problem_ for a graph \(G\) asks to determine all possible spectra of matrices in \(\mathcal{S}(G)\)[5, 14].
This problem and several of its sub-problems have been studied extensively. One of these sub-problems is to consider all possible multiplicity lists of eigenvalues of matrices in \(\mathcal{S}(G)\). If we look at the multiplicity lists of eigenvalues of all matrices in \(\mathcal{S}(G)\) as lists of numbers, the shortest length among all these lists is the minimum number of distinct eigenvalues of matrices in \(\mathcal{S}(G)\). This parameter is denoted by \(q(G)\) and has been studied in [2, 7, 15, 16, 17]. In this paper, we investigate the problem of determining which regular connected graphs \(G\) have a matrix in \(\mathcal{S}(G)\) with exactly two distinct eigenvalues, that is, with \(q(G)=2\).
The connected graphs \(G\) with \(q(G)=n-1\) or \(n\) have been characterized, see [7]. Graphs with \(q(G)=2\) are much harder to describe; for example, there is no forbidden subgraph characterization of
graphs with \(q(G)=2\), as implied by Theorem 5.2 in [2]. It is known that \(q(G)=2\) if and only if there is an orthogonal matrix in \(\mathcal{S}(G)\)[2], and so studying graphs \(G\) on \(n\) vertices with \(q(G)=2\) is equivalent to studying all possible zero patterns of \(n\times n\) symmetric orthogonal matrices.
A graph \(G\) must have a sufficiently large number of edges to satisfy \(q(G)=2\). In [6], we showed that a connected graph \(G\) on \(n\) vertices with \(q(G)=2\) has at least \(2n-4\) edges. We also characterized the graphs for which equality is attained. This result immediately implies that the number of \(r\)-regular graphs with \(r\leq 3\) satisfying \(q(G)=2\) is finite, and in Section 2 we characterize these graphs. When considering \(4\)-regular graphs with \(q(G)=2\), the difficulty increases significantly. Our main theorem (Theorem 2.4) characterizes all connected \(4\)-regular graphs with \(q(G)=2\).
Throughout this paper, we only consider connected, simple, undirected graphs.
### Preliminaries
One of the common ways to give a lower bound on \(q(G)\) is to find a unique shortest path between vertices. This technique, specialized to the case \(q(G)=2\), is explained in the following lemma, which is a corollary of Theorem 3.2 in [2].
**Lemma 1.1**.: _Let \(G\) be a connected graph with \(q(G)=2\). If \(xuy\) is a path of length \(2\), then either \(x\sim y\) or there is another path \(xvy\) of length \(2\) between \(x\) and \(y\)._
In [6], we used this lemma extensively in combination with a breadth-first search of a graph, and we use this strategy here as well. Given a fixed vertex \(v\), we perform a breadth-first search from \(v\). Denote the vertices at distance exactly \(i\) from \(v\) by \(N_{i}(v)\), and call this set the \(i\)th _distance set_ from \(v\). We use \(\epsilon(v)\) to denote the _eccentricity_ of \(v\), which is the maximum distance from \(v\) to any vertex in the graph. The distance sets from \(v\) partition the vertex set of \(G\) as
\[V(G)=\bigcup_{i=0}^{\epsilon(v)}N_{i}(v),\]
which we call the _distance partition_ of \(G\) with respect to \(v\). If \(u\in N_{i}(v)\) and \(w\in N_{i+1}(v)\) and \(u\sim w\), we call \(u\) a _predecessor_ of \(w\) and we call \(w\) a _successor_ of \(u\). A _terminal vertex_ is a vertex with no successors.
Assume that \(G\) is a graph with \(q(G)=2\) and consider the distance partition from a vertex \(v\). If a vertex \(u\) is in \(N_{i}(v)\) for some \(i\geq 2\), then by Lemma 1.1, it must have at least two predecessors, as otherwise there would be a unique shortest path of length \(2\) from \(u\) to a vertex in \(N_{i-2}(v)\). If \(G\) is a \(4\)-regular graph on \(n\) vertices, then there are \(n-5\) vertices in
\[\bigcup_{i=2}^{\epsilon(v)}N_{i}(v)=V(G)\setminus(\{v\}\cup N_{1}(v)),\]
and each of these vertices has at least two predecessors so there are at least \(2(n-5)\) edges incident to these vertices. With the \(4\) edges incident to \(v\), this accounts for \(2n-6\) of the \(2n\) edges of \(G\). We call the remaining six edges _extra edges_, and throughout the paper we consider the possible locations of these six extra edges.
We use standard graph theory terminology and notations. Often we abbreviate an edge \(\{u,v\}\in E(G)\) as \(uv\) for vertices \(u,v\in V(G)\). For two graphs \(G\) and \(H\) with vertex sets \(V(G)\) and \(V(H)\), respectively, the Cartesian product \(G\Box H\) is the graph with vertex set \(V(G)\times V(H)\) and \((g_{1},h_{1})\) adjacent to \((g_{2},h_{2})\) if either \(g_{1}=g_{2}\) and \(h_{1}h_{2}\in E(H)\), or \(h_{1}=h_{2}\) and \(g_{1}g_{2}\in E(G)\). The complete graph on \(n\) vertices, complete bipartite graph on partite sets of sizes \(m\) and \(n\), the cycle on \(n\) vertices, the path on \(n\) vertices, and the hypercube graph (the graph on \(2^{n}\) vertices obtained by an \(n\)-fold Cartesian product of \(K_{2}\) with itself) are denoted by \(K_{n}\), \(K_{m,n}\), \(C_{n}\), \(P_{n}\), \(Q_{n}\), respectively. The circulant graph \(G=C(n,\pm i,\pm j)\) is the graph with vertex set \(V(C(n,\pm i,\pm j))=\mathbb{Z}/n\mathbb{Z}\) that has edges \(\{t,t\pm i\}\) and \(\{t,t\pm j\}\) for all \(t\in V(G)\).
Let \(G\) be a graph with \(v\in V(G)\). A graph \(\operatorname{jdup}(G,v)\) is constructed from \(G\) by _joined duplicating_ a vertex \(v\in V(G)\) if \(V(\operatorname{jdup}(G,v))=V(G)\cup\{u\}\) and \(E(\operatorname{jdup}(G,v))=E(G)\cup\{uw\,:\,w\in\{v\}\cup N_{1}(v)\}\). From Lemma 2.9 in [16], \(q(\operatorname{jdup}(G,v))\leq q(G)\) for any vertex \(v\) in a connected graph \(G\).
**Lemma 1.2**.: _[Lemma 2.3, [1]] Let \(G\) be a connected graph on \(n\) vertices with \(q(G)=2\). If \(S\) is an independent set of vertices, then \(|S|\leq k\) where \(k\) is the least integer such that there is a matrix in \(\mathcal{S}(G)\) with two distinct eigenvalues of multiplicities \(k\) and \(n-k\)._
**Lemma 1.3**.: _Let_
\[M=\left[\begin{array}{cc}C&B\\ B^{T}&D\end{array}\right]\]
_where \(C\) is an \(m\times m\) symmetric matrix, \(B=[b_{1}\dots\ b_{n}]\) has no zero columns, and \(D=\operatorname{diag}(d_{1},\dots,d_{n})\). If \(M\) is orthogonal, then \(n\leq m\), the vectors \(b_{1},\dots,b_{n}\) are pairwise orthogonal, and each \(b_{i}\) is a \((-d_{i})\)-eigenvector for \(C\)._
Proof.: Consider the last \(n\) columns of \(M\). Since \(D\) is diagonal, these columns are pairwise orthogonal if and only if \(n\leq m\) and the vectors \(b_{1},\dots,b_{n}\) are pairwise orthogonal.
Second, expanding \(M^{2}=I\) we have
\[M^{2}=\left[\begin{array}{cc}C^{2}+BB^{T}&CB+BD\\ B^{T}C+DB^{T}&B^{T}B+D^{2}\end{array}\right]=\left[\begin{array}{cc}I_{m}&0 \\ 0&I_{n}\end{array}\right].\]
From the \((1,2)\)-block we see \(CB+BD=0\). Rearranging this equation we have
\[[Cb_{1}\dots Cb_{n}]=-[d_{1}b_{1}\dots d_{n}b_{n}]\]
from which it follows that each \(b_{i}\) is a \((-d_{i})\)-eigenvector for \(C\).
As an illustration of Lemma 1.3, suppose \(G\) is a connected bipartite graph with bipartition \(V=V_{1}\cup V_{2}\). Further assume that \(|V_{1}|=|V_{2}|\). It follows that if \(q(G)=2\), then there exists a matrix \(M\in S(G)\), where
\[M=\left[\begin{array}{cc}C&B\\ B^{T}&D\end{array}\right],\]
where \(M^{2}=I\), and both \(C\) and \(D\) are diagonal matrices. Applying Lemma 1.3 we have that \(B\) must be a matrix with orthogonal rows and columns. In fact, the converse also holds in this case. If such an orthogonal matrix \(B\) exists, then the matrix
\[M=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I&B\\ B^{T}&-I\end{array}\right]\]
is orthogonal and hence \(q(G)=2\).
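For a concrete illustration (an example of our own, not drawn from [2]), take \(G=K_{3,3}\) and

\[B=\frac{1}{3}\left[\begin{array}{rrr}2&-2&1\\ 2&1&-2\\ 1&2&2\end{array}\right],\qquad M=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}I&B\\ B^{T}&-I\end{array}\right].\]

One checks directly that \(B^{T}B=I\) and that \(B\) has no zero entries, so \(M\in\mathcal{S}(K_{3,3})\) and \(M^{2}=I\). The eigenvalues of \(M\) are therefore \(\pm 1\), giving \(q(K_{3,3})=2\), consistent with Corollary 2.2 below.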
## 2 Certain Regular Graphs that Allow Two Distinct Eigenvalues
From Theorem 3.1 in [6], if a connected graph \(G\) has fewer than \(2n-4\) edges, then \(q(G)>2\). This implies that there are only finitely many \(r\)-regular graphs with \(r\leq 3\) that satisfy \(q(G)=2\). We describe them below.
**Lemma 2.1**.: _Let \(r\leq 3\). If \(G\) is a connected \(r\)-regular graph with \(q(G)=2\), then \(G\) has at most \(8/(4-r)\) vertices._
Proof.: In order for \(G\) to have \(q(G)=2\), \(G\) has to have at least \(2n-4\) edges. So we have the inequality
\[(r/2)n\geq 2n-4.\]
Since \(0<r/2<2\), this simplifies to \(n\leq 8/(4-r)\).
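Evaluating this bound for each degree (a routine check) gives

\[r=1:\;n\leq\tfrac{8}{3},\text{ so }n\leq 2;\qquad r=2:\;n\leq 4;\qquad r=3:\;n\leq 8,\]

which is exactly the case analysis carried out in the proof of the corollary below.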
**Corollary 2.2**.: _If \(G\) is a connected \(r\)-regular graph with \(q(G)=2\) for some \(r\leq 3\), then \(G\) is one of:_
1. \(K_{2}\)_;_
2. \(K_{3}\) _or_ \(C_{4}\)_; or,_
3. \(K_{4}\)_,_ \(K_{3,3}\)_,_ \(K_{3}\Box K_{2}\)_, or_ \(Q_{3}\)_._
Proof.: The graph \(K_{2}\) is the only connected \(1\)-regular graph and has \(q(K_{2})=2\). The only connected \(2\)-regular graphs on \(n\leq 4\) vertices are \(K_{3}\) and \(C_{4}\) and both have \(q(G)=2\). By Lemma 2.1, a \(3\)-regular graph with \(q(G)=2\) has \(4,6\) or \(8\) vertices. If \(n=4\), \(G=K_{4}\) and \(q(G)=2\). If \(n=6\), the complement of \(G\) is \(2\)-regular and so must be \(C_{6}\) or \(2K_{3}\). Thus \(G\) is either \(K_{3}\Box K_{2}\) or \(K_{3,3}\), respectively, both of which have \(q(G)=2\) from Corollaries 6.5 and 6.8 in [2]. If \(n=8\), \(G\) has \(12=2(8)-4\) edges. Thus by Theorem 3.1 from [6], \(G\cong Q_{3}\).
We now proceed with the main purpose of this paper, to characterize the \(4\)-regular graphs \(G\) with \(q(G)=2\). We begin by defining an infinite family of graphs called closed candles which are analogs to the single-ended and double-ended candles in [6]. For \(k\geq 3\) the _closed candle_, \(H_{k}\), is constructed from \(2C_{k}\) as follows. Label the vertices of one \(C_{k}\) with the odd integers from \(1\) to \(2k-1\) and the other with the even integers from \(2\) to \(2k\). Insert \(2k\) additional edges between the two \(C_{k}\)'s according to the rule: \(i\) is adjacent to \(j\), \(i\) odd, \(j\) even if \(j-i=3\), \(j-i=-1\), or \(j=2\), \(i=2k-1\), or \(i=1\), \(j=2k\). Thus \(H_{k}\) is a \(4\)-regular graph. The graph \(H_{10}\) is shown in Figure 1.
In the proof of Theorem 2.4, we will consider induced subgraphs that have the same structure as a closed candle. A _candle section_ is a graph with vertices \(u_{1},\ldots,u_{t},v_{1},\ldots,v_{t}\) with edges \(u_{i}u_{i+1}\), \(v_{i}v_{i+1}\), \(u_{i+1}v_{i}\), and \(u_{i}v_{i+1}\) for \(1\leq i\leq t-1\), see Figure 2.
The following lemma gives a construction of orthogonal matrices for the closed candles.
**Lemma 2.3**.: _For all \(k\geq 3\), we have \(q(H_{k})=2\)._
Proof.: To see that the graphs \(H_{k}\) for \(k\geq 3\) achieve two distinct eigenvalues, we construct orthogonal matrices for this family of graphs.
Let
\[R=\left[\begin{array}{rr}-1&1\\ 1&-1\end{array}\right],\ S=\left[\begin{array}{rr}1&1\\ -1&-1\end{array}\right],\ J=\left[\begin{array}{rr}1&1\\ 1&1\end{array}\right],\ \mbox{and}\ O=\left[\begin{array}{rr}0&0\\ 0&0\end{array}\right].\]
Figure 1: Closed candle \(H_{10}\).
Figure 2: Candle section.
We consider two cases for \(k\) and construct the corresponding matrices \(W\). Here all of the blocks are \(2\times 2.\) The matrix in each case is symmetric so the blocks below the diagonal blocks are transposes of the corresponding matrices.
_Case 1:_\(n=2k\) and \(k\geq 4\) is even.
\[W_{ij}=\begin{cases}R,&\text{for }(i,j)=(1,2),(3,4),\ldots,(k-3,k-2),(k-1,k)\\ J,&\text{for }(i,j)=(2,3),(4,5),\ldots,(k-2,k-1)\\ J,&\text{for }(i,j)=(1,k)\\ O,&\text{otherwise}\end{cases}\]
_Case 2:_\(n=2k\) and \(k\geq 3\) is odd.
First, note that when \(k=3\), the graph \(H_{3}\) is the octahedron (the graph obtained from \(K_{6}\) by deleting a perfect matching). By Corollary 6.9 in [8], we have \(q(H_{3})=q(G204)=2\). Now if \(k\geq 5\), we construct the matrices as follows.
\[W_{ij}=\begin{cases}J,&\text{for }(i,j)=(1,2),(3,4),\ldots,(k-4,k-3)\\ R,&\text{for }(i,j)=(2,3),(4,5),\ldots,(k-5,k-4),\ \text{ and }(k-2,k-1);\\ S,&\text{for }(i,j)=(1,k),(k-3,k-2)\\ S^{T},&\text{for }(i,j)=(k-1,k)\\ O,&\text{otherwise}\end{cases}\]
Each row of \(W\) has Euclidean length \(2\). Since each of the \(2\times 4\) matrices \([J\;S]\), \([J\;R]\), \([S^{T}\;R]\), \([S\;S^{T}]\) has orthogonal rows, each pair of rows of \(W\) coming from the same block are orthogonal. For \(i\neq j\), the \((i,j)\)-block of \(W^{2}\) is one of \(JR\), \(JS\), \(SR\), \(S^{2}\) or their transposes. In each of these cases the product is the zero matrix. Hence \(W^{T}W=W^{2}=4I\). We conclude that \(\frac{1}{2}W\) is an orthogonal matrix whose graph is the closed candle on \(n=2k\) vertices.
For example, the matrices for \(k=4\) and \(k=5\) are respectively
\[\left[\begin{array}{cccc}O&R&O&J\\ R&O&J&O\\ O&J&O&R\\ J&O&R&O\end{array}\right]\quad\text{and}\quad\left[\begin{array}{ccccc}O&J&O&O&S\\ J&O&S&O&O\\ O&S^{T}&O&R&O\\ O&O&R&O&S^{T}\\ S^{T}&O&O&S&O\end{array}\right];\]
for \(k=6\) and \(k=7\), the matrices are respectively
\[\left[\begin{array}{cccccc}O&R&O&O&O&J\\ R&O&J&O&O&O\\ O&J&O&R&O&O\\ O&O&R&O&J&O\\ O&O&O&J&O&R\\ J&O&O&O&R&O\end{array}\right]\quad\text{and}\quad\left[\begin{array}{ccccccc}O&J&O&O&O&O&S\\ J&O&R&O&O&O&O\\ O&R&O&J&O&O&O\\ O&O&J&O&S&O&O\\ O&O&O&S^{T}&O&R&O\\ O&O&O&O&R&O&S^{T}\\ S^{T}&O&O&O&O&S&O\end{array}\right].\]
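As a sanity check on this construction (a sketch of our own, assuming numpy is available; \(k=3\) is handled separately via the octahedron), the following code assembles \(W\) from the block rules above for any \(k\geq 4\) and verifies \(W^{2}=4I\), so that \(\frac{1}{2}W\) is a symmetric orthogonal matrix realizing \(q(H_{k})=2\):

```python
import numpy as np

R = np.array([[-1, 1], [1, -1]])
S = np.array([[1, 1], [-1, -1]])
J = np.ones((2, 2), dtype=int)

def closed_candle_W(k):
    # Block matrix W from the proof of Lemma 2.3, for k >= 4.
    blocks = [[np.zeros((2, 2), dtype=int) for _ in range(k)] for _ in range(k)]
    def put(i, j, B):  # 1-indexed block positions; fill symmetrically
        blocks[i - 1][j - 1] = B
        blocks[j - 1][i - 1] = B.T
    if k % 2 == 0:  # Case 1: k even
        for i in range(1, k, 2):
            put(i, i + 1, R)      # (1,2),(3,4),...,(k-1,k)
        for i in range(2, k - 1, 2):
            put(i, i + 1, J)      # (2,3),(4,5),...,(k-2,k-1)
        put(1, k, J)
    else:           # Case 2: k odd, k >= 5
        for i in range(1, k - 3, 2):
            put(i, i + 1, J)      # (1,2),...,(k-4,k-3)
        for i in range(2, k - 4, 2):
            put(i, i + 1, R)      # (2,3),...,(k-5,k-4)
        put(k - 2, k - 1, R)
        put(1, k, S)
        put(k - 3, k - 2, S)
        put(k - 1, k, S.T)
    return np.block(blocks)

for k in range(4, 13):
    W = closed_candle_W(k)
    assert (W @ W == 4 * np.eye(2 * k, dtype=int)).all()  # so W/2 is orthogonal
```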
Note that the closed candle \(H_{k}\) has independence number \(k\) if \(k\) is even, and \(k-1\) if \(k\) is odd. By Lemma 1.2 when \(k\) is even the only achievable multiplicity list for two distinct eigenvalues is \([k,k]\); when \(k\) is odd, the only achievable multiplicity lists for two distinct eigenvalues are \([k,k]\) and \([k-1,k+1]\).
Lemma 2.3 provides an infinite family of 4-regular graphs with \(q(G)=2\). Our main theorem below characterizes all 4-regular graphs \(G\) for which \(q(G)=2\).
**Theorem 2.4**.: _If \(G\) is a connected 4-regular graph with \(q(G)=2\), then \(G\) is either:_
1. \(K_{5}\)_;_
2. _one of the graphs_ \(R_{7,1}\)_,_ \(R_{8,2}\)_,_ \(R_{8,3}\)_,_ \(R_{8,4}\)_,_ \(R_{8,5}\)_,_ \(R_{8,6}\) _from Figure_ 3_;_
3. \(K_{3}\Box C_{4}\)_,_ \(K_{3,3}\Box K_{2}\)_, one of the graphs_ \(R_{10,2}\)_,_ \(R_{10,3}\)_,_ \(R_{10,4}\)_,_ \(R_{12,3}\)_,_ \(R_{14,1}\) _from Figure_ 4_;_
4. \(Q_{4}\)_; or,_
5. _a closed candle_ \(H_{k}\) _for some_ \(k\geq 3\)_._
_Notes:_ The graphs listed in items (1) through (4) of Theorem 2.4 have diameter \(1\) through \(4\) respectively. The graph \(R_{7,1}\cong C(7,\pm 1,\pm 2)\), the graph \(R_{8,5}\cong K_{4}\Box K_{2}\), and the graph \(R_{8,6}\cong C(8,\pm 1,\pm 2)\). The graph \(R_{10,3}\) is the graph obtained from \(Q_{3}\) by joined duplicating a pair of antipodal vertices (in [6] we referred to this graph as \(Q_{3}^{\prime}\)). The graph \(R_{10,4}\cong C(10,\pm 1,\pm 3)\), and the graph \(R_{12,3}\cong C(12,\pm 1,\pm 3)\). The graph \(R_{14,1}\) appears in [19] as \(S_{14}\). Moreover \(R_{14,1}\) is the Cayley graph for the dihedral group \(D_{7}\) (with generators \(\rho\) and \(\varphi\) satisfying \(\rho^{2}=\varphi^{7}=\varepsilon\) and \(\rho\varphi=\varphi^{6}\rho\)) with connection set \(\{\rho,\varphi\rho,\varphi^{2}\rho,\varphi^{4}\rho\}\). It is also the point-block incidence graph of the non-trivial square \(2-(7,4,2)\) design and is distance regular with diameter \(3\), see [9]. The sporadic graphs \(R_{6,1}\), \(R_{8,1}\), and \(R_{10,1}\) in Figure 3 are \(H_{3}\), \(H_{4}\), and \(H_{5}\), respectively, so appear in item (5) of Theorem 2.4. Also note that the closed candles are all circulants. For \(k\geq 3\), it can be verified that \(H_{k}\cong C(2k,\pm 1,\pm(k-1))\).
The proof of Theorem 2.4 is split over Sections 3, 4, and 5.
## 3 Proof of Theorem 2.4 for 4-Regular Graphs with Small Diameter
In this section we prove items (1) and (2) of Theorem 2.4 for graphs with diameter at most \(2\). The only \(4\)-regular graph with diameter \(1\) is \(K_{5}\), and \(q(K_{5})=2\), so we focus on \(4\)-regular graphs with diameter \(2\). We begin by enumerating the \(4\)-regular graphs \(G\) with diameter \(2\) for which \(q(G)=2\) is not ruled out by Lemma 1.1.
**Lemma 3.1**.: _If \(G\) is a connected \(4\)-regular graph with diameter \(2\) such that \(q(G)=2\), then \(6\leq|V(G)|\leq 10\)._
Proof.: Let \(G\) be a connected \(4\)-regular graph with diameter \(2\). Consider an arbitrary vertex \(v\) in \(G\) and the distance partition of \(V(G)\) from \(v\). In order for \(G\) to have \(q(G)=2\), it must be the case that each \(x\in N_{2}(v)\) has at least two neighbors in \(N_{1}(v)\), otherwise there is a unique path of length \(2\) between \(x\) and \(v\). Let \(X\) be the set of edges between \(N_{1}(v)\) and \(N_{2}(v)\). Then
\[2|N_{2}(v)|\leq|X|\leq 3|N_{1}(v)|=12.\]
So \(|N_{2}(v)|\leq 6\) and \(G\) has at most \(1+4+6=11\) vertices. For the lower bound, a \(4\)-regular graph has at least \(5\) vertices, and the only one on \(5\) vertices is \(K_{5}\), which has diameter \(1\); hence \(|V(G)|\geq 6\). This establishes \(6\leq|V(G)|\leq 11\).
We now show the upper bound can be improved to \(10\). Consider a \(4\)-regular graph \(G\) with \(11\) vertices and diameter \(2\). Using the notation above, we see \(|N_{2}(v)|=6\), and \(|X|=12\). So every vertex in \(N_{1}(v)\) has exactly three neighbors in \(N_{2}(v)\), and every vertex in \(N_{2}(v)\) has exactly two neighbors in \(N_{1}(v)\). In particular, this means the subgraph \(H\) of \(G\) induced by \(N_{2}(v)\) is \(2\)-regular. So we have two cases: either \(H\) is a \(6\)-cycle, or \(H\) is the disjoint union of two \(3\)-cycles.
_Case 1:_\(H\cong C_{6}\)
Let the vertices of \(H\) be \(x_{1}\), \(x_{2}\), \(x_{3}\), \(x_{4}\), \(x_{5}\), \(x_{6}\) in cyclic order. Note that \(H\) is bipartite, and in \(H\) there is a unique shortest path of length \(2\) between any two vertices in the same partite set. Since there can be no unique shortest path of length \(2\) in \(G\), the edges of \(X\) must supply an additional path of length \(2\) between every pair of vertices in each partite set.
Let \(N_{1}(v)=\{v_{1},v_{2},v_{3},v_{4}\}\). Since \(v_{i}vv_{j}\) is a path of length \(2\), in order for this path not to be unique there must be some \(x_{k}\) that is adjacent to both \(v_{i}\) and \(v_{j}\). In particular, this means we cannot have some \(v_{i}\) whose neighbors are \(\{x_{1},x_{3},x_{5}\}\) and some \(v_{j}\) whose neighbors are \(\{x_{2},x_{4},x_{6}\}\). If there is no \(v_{i}\) whose neighbors are \(\{x_{1},x_{3},x_{5}\}\), then \(\{x_{1},x_{3},x_{5}\}\) together with the common neighbor of \(\{x_{1},x_{3}\}\), the common
neighbor of \(\{x_{3},x_{5}\}\) and the common neighbor of \(\{x_{1},x_{5}\}\) form a \(6\)-cycle. Without loss of generality, suppose this \(6\)-cycle is \((v_{1},x_{1},v_{2},x_{3},v_{3},x_{5})\). Then, in order for each pair of vertices in \(\{x_{2},x_{4},x_{6}\}\) to have a common neighbor, we must have \(v_{4}\) adjacent to each of \(\{x_{2},x_{4},x_{6}\}\). Thus the edges between \(\{v_{1},v_{2},v_{3}\}\) and \(\{x_{2},x_{4},x_{6}\}\) are a perfect matching.
We know \(v_{1}\) is matched to one of \(x_{2}\), \(x_{4}\), or \(x_{6}\). We also know that \(v_{1}\) is already adjacent to \(x_{1}\) and \(x_{5}\). Note that in \(H\), \(x_{1}\) is at distance \(3\) from \(x_{4}\). So if \(v_{1}\) is matched to \(x_{4}\), then we have a unique shortest path of length \(2\), \(x_{1}v_{1}x_{4}\). Similarly, \(x_{5}\) is at distance \(3\) from \(x_{2}\) in \(H\), so if we match \(v_{1}\) to \(x_{2}\), we get another unique shortest path of length \(2\). Thus \(v_{1}\) must be matched to \(x_{6}\). A similar argument shows that \(v_{2}\) must be matched to \(x_{2}\), and \(v_{3}\) must be matched to \(x_{4}\). This accounts for all of the edges in \(X\). But now we see that \(v_{1}x_{1}x_{2}\) is a unique shortest path of length \(2\) in \(G\).
_Case 2: \(H\cong 2C_{3}\)_
Let the vertices of \(H\) be \(x_{1},x_{2},x_{3}\) and \(y_{1},y_{2},y_{3}\), where all of the \(x_{i}\)'s are adjacent, and all of the \(y_{i}\)'s are adjacent. Let \(N_{1}(v)=\{v_{1},v_{2},v_{3},v_{4}\}\). We know that \(x_{1}\) has \(2\) neighbors in \(N_{1}(v)\). Without loss of generality, suppose \(x_{1}\) is adjacent to \(v_{1}\). Then \(v_{1}x_{1}x_{2}\) and \(v_{1}x_{1}x_{3}\) are paths of length \(2\). In order for them not to be unique shortest paths of length \(2\), we must have at least one of the edges \(v_{1}x_{2}\) and \(v_{1}x_{3}\). Suppose \(v_{1}\) is adjacent to exactly one of \(x_{2}\) and \(x_{3}\). Then \(v_{1}\) is adjacent to exactly one of the \(y_{i}\) vertices, and we have a unique path \(v_{1}y_{i}y_{j}\) between \(v_{1}\) and some \(y_{j}\). Thus \(v_{1}\) must be adjacent to both \(x_{2}\) and \(x_{3}\). If \(v_{2}\) is the other neighbor of \(x_{1}\) in \(N_{1}(v)\), then following the same argument as for \(v_{1}\), we must also have edges \(v_{2}x_{2}\) and \(v_{2}x_{3}\). This accounts for all edges in \(X\) with ends \(v_{1}\) and \(v_{2}\), and all edges in \(X\) with ends \(x_{1}\), \(x_{2}\), or \(x_{3}\). Thus the remaining edges in \(X\) are all possible edges between \(\{v_{3},v_{4}\}\) and \(\{y_{1},y_{2},y_{3}\}\). But now \(v_{1}vv_{3}\) is a unique shortest path of length \(2\).
Since there are only \(84\) connected \(4\)-regular graphs with order \(6\leq n\leq 10\) [20], we can generate the list of \(4\)-regular graphs with diameter \(2\) for which \(q(G)=2\) is not ruled out by Lemma 1.1. We do this by the following steps (a code sketch of the computation appears after the list):
1. using nauty's geng function [18] to generate all connected \(4\)-regular graphs on \(n\) vertices for each \(6\leq n\leq 10\);
2. checking the diameter of the graphs generated in (1), and eliminating all with diameter at least \(3\); then
3. checking the graphs remaining after (2) for any unique shortest paths connecting two vertices at distance \(2\) and eliminating those graphs.
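For illustration, the following Python sketch (ours, not part of the original computation) carries out steps (2) and (3) with networkx, assuming that nauty's geng has already written the connected \(4\)-regular graphs of each order to a graph6 file; the file names are hypothetical.

```python
# Illustrative filtering; assumes files "4reg_n6.g6", ..., "4reg_n10.g6"
# produced beforehand by nauty's geng (e.g. `geng -c -d4 -D4 8 4reg_n8.g6`).
import networkx as nx
import numpy as np

def has_unique_length2_path(G):
    """True if some nonadjacent pair has exactly one common neighbor,
    i.e. a unique shortest path of length 2, which rules out q(G) = 2."""
    A = nx.to_numpy_array(G, dtype=int)
    A2 = A @ A  # (A2)[u, v] counts paths of length 2 between u != v
    n = A.shape[0]
    return any(A[u, v] == 0 and A2[u, v] == 1
               for u in range(n) for v in range(u + 1, n))

candidates = []
for n in range(6, 11):
    graphs = nx.read_graph6(f"4reg_n{n}.g6")                # step (1)
    for G in (graphs if isinstance(graphs, list) else [graphs]):
        if nx.diameter(G) <= 2 and not has_unique_length2_path(G):
            candidates.append(G)                            # steps (2), (3)
print(len(candidates))  # expected: 13
```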
At the end of this computation we are left with a set of thirteen \(4\)-regular graphs of diameter \(2\) for which \(q(G)=2\) is not ruled out. Figure 3 gives these thirteen graphs. Lemma 3.2 completes the proof of Theorem 2.4 for graphs with diameter at most \(2\).
**Lemma 3.2**.: _Table 1 lists all thirteen candidate \(4\)-regular graphs with diameter 2 (shown in Figure 3) and includes their \(q\)-values (or a bound on their \(q\)-value)._
Proof.: We treat each of the graphs in Table 1 separately, in the order they appear in the table.
\begin{table}
\begin{tabular}{c|c||c|c}
Graph & \(q\)-value & Graph & \(q\)-value \\ \hline
\(R_{6,1}\) & \(2\) & \(R_{8,5}\) & \(2\) \\
\(R_{7,1}\) & \(2\) & \(R_{8,6}\) & \(2\) \\
\(R_{7,2}\) & \(3\) & \(R_{9,1}\) & \(>2\) \\
\(R_{8,1}\) & \(2\) & \(R_{9,2}\) & \(>2\) \\
\(R_{8,2}\) & \(2\) & \(R_{9,3}\) & \(3\) \\
\(R_{8,3}\) & \(2\) & \(R_{10,1}\) & \(2\) \\
\(R_{8,4}\) & \(2\) & & \\
\end{tabular}
\end{table}
Table 1: The reduced list of thirteen \(4\)-regular graphs with diameter \(2\).
Figure 3: Thirteen 4-regular graphs with diameter 2.
\(R_{6,1}\): The graph \(R_{6,1}\) is a closed candle, \(R_{6,1}\cong H_{3}\). Thus \(q(R_{6,1})=2\) by Lemma 2.3.
\(R_{7,1}\): The following matrix \(M\) is a matrix in \(\mathcal{S}(R_{7,1}-\{1,7\})\) (the graph \(R_{7,1}\) with the edge \(\{1,7\}\) deleted),
\[M=\frac{1}{6}\left[\begin{array}{rrrrrrr}3&\sqrt{6}&-3&0&0&2\sqrt{3}&0\\ \sqrt{6}&0&\sqrt{6}&2\sqrt{2}&0&0&-4\\ -3&\sqrt{6}&-1&2\sqrt{3}&2\sqrt{2}&0&0\\ 0&2\sqrt{2}&2\sqrt{3}&-3&\sqrt{6}&1&0\\ 0&0&2\sqrt{2}&\sqrt{6}&-2&\sqrt{6}&2\sqrt{3}\\ 2\sqrt{3}&0&0&1&\sqrt{6}&-3&2\sqrt{2}\\ 0&-4&0&0&2\sqrt{3}&2\sqrt{2}&0\end{array}\right].\]
The matrix \(M\) is orthogonal and has the Strong Spectral Property; see pages 10 and 11 in [7]. Since \(M\) is symmetric and orthogonal, its spectrum is \(\{-1,1\}\), and the Strong Spectral Property guarantees a matrix with the same spectrum in \(\mathcal{S}(R_{7,1})\), obtained by adding back the edge \(\{1,7\}\). Thus \(q(R_{7,1})=2\).
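As a purely numerical sanity check (ours, separate from the SSP argument in [7]), orthogonality of \(M\) can be verified directly; the same helper applies to the explicit matrices given below for \(R_{8,2}\), \(R_{8,3}\), \(R_{8,6}\), and the later sporadic graphs.

```python
# Numerical sanity check that M (for R_{7,1} minus the edge {1,7})
# is orthogonal; the helper can be reused for the later matrices.
import numpy as np

def is_orthogonal(M, tol=1e-12):
    """Check M @ M.T == I up to floating-point tolerance."""
    return np.allclose(M @ M.T, np.eye(M.shape[0]), atol=tol)

s = np.sqrt
M = (1 / 6) * np.array([
    [3,      s(6),   -3,     0,      0,      2*s(3), 0],
    [s(6),   0,      s(6),   2*s(2), 0,      0,      -4],
    [-3,     s(6),   -1,     2*s(3), 2*s(2), 0,      0],
    [0,      2*s(2), 2*s(3), -3,     s(6),   1,      0],
    [0,      0,      2*s(2), s(6),   -2,     s(6),   2*s(3)],
    [2*s(3), 0,      0,      1,      s(6),   -3,     2*s(2)],
    [0,      -4,     0,      0,      2*s(3), 2*s(2), 0],
])
print(is_orthogonal(M))  # True
```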
\(R_{7,2}\): Note that \(R_{7,2}\) can be constructed from \(K_{4,3}\) by adding edges \(\{1,2\}\) and \(\{3,4\}\). Suppose \(M\in{\cal S}(R_{7,2})\). We write \(M\) as
\[M=\left[\begin{array}{cc}C&B\\ B^{T}&D\end{array}\right]\]
where \(C\) is a \(4\times 4\) matrix, \(B=[b_{1}\ b_{2}\ b_{3}]\) has no zero entries, and \(D=\mbox{diag}(d_{1},d_{2},d_{3})\). Moreover,
\[C=\left[\begin{array}{cc}C_{1}&0\\ 0&C_{2}\end{array}\right]\]
where each \(C_{i}\in\mathcal{S}(K_{2})\).
Assume \(M\) is an orthogonal matrix. Using Lemma 1.3, the columns of \(B\) are pairwise orthogonal and \(Cb_{i}=-d_{i}b_{i}\) for \(i=1,2,3.\) Partition each \(b_{i}\) into vectors \(x_{i},y_{i}\in\mathbb{R}^{2}\). Now
\[\left[\begin{array}{c}C_{1}x_{i}\\ C_{2}y_{i}\end{array}\right]=\left[\begin{array}{cc}C_{1}&0\\ 0&C_{2}\end{array}\right]\left[\begin{array}{c}x_{i}\\ y_{i}\end{array}\right]=Cb_{i}=\left[\begin{array}{c}-d_{i}x_{i}\\ -d_{i}y_{i}\end{array}\right],\]
so each \(x_{i}\) is a \((-d_{i})\)-eigenvector for \(C_{1}\), and each \(y_{i}\) is a \((-d_{i})\)-eigenvector for \(C_{2}\).
The matrices \(C_{1}\) and \(C_{2}\) are each \(2\times 2\) non-scalar symmetric matrices, so each has two distinct eigenvalues. Thus \(d_{1},d_{2},d_{3}\) cannot all be distinct (otherwise \(C_{1}\) would have three distinct eigenvalues). Without loss of generality, suppose \(d_{2}=d_{3}\). Since the dimension of the \((-d_{2})\)-eigenspace of \(C_{i}\) is 1, the \((-d_{2})\)-eigenvectors of \(C_{i}\) are scalar multiples of each other. That is, there exist \(\alpha,\beta\neq 0\) so that \(x_{3}=\alpha x_{2}\) and \(y_{3}=\beta y_{2}\).
Now we consider the \((2,2)\)-block of \(M^{2}\). We have
\[I_{3}-D^{2} = B^{T}B = \left[\begin{array}{cc}x_{1}^{T}&y_{1}^{T}\\ x_{2}^{T}&y_{2}^{T}\\ \alpha x_{2}^{T}&\beta y_{2}^{T}\end{array}\right]\left[\begin{array}{ccc}x_{1}&x_{2}&\alpha x_{2}\\ y_{1}&y_{2}&\beta y_{2}\end{array}\right] = \left[\begin{array}{ccc}x_{1}^{T}x_{1}+y_{1}^{T}y_{1}&0&0\\ 0&x_{2}^{T}x_{2}+y_{2}^{T}y_{2}&0\\ 0&0&\alpha^{2}x_{2}^{T}x_{2}+\beta^{2}y_{2}^{T}y_{2}\end{array}\right].\]
From
\[x_{2}^{T}x_{2}+y_{2}^{T}y_{2}=1-d_{2}^{2}=\alpha^{2}x_{2}^{T}x_{2}+\beta^{2}y_ {2}^{T}y_{2}\]
we conclude
\[(\alpha^{2}-1)x_{2}^{T}x_{2}+(\beta^{2}-1)y_{2}^{T}y_{2}=0. \tag{3.0.1}\]
Since the second and third columns of \(B\) are orthogonal we also have
\[\alpha x_{2}^{T}x_{2}+\beta y_{2}^{T}y_{2}=0,\]
thus
\[x_{2}^{T}x_{2}=-\frac{\beta}{\alpha}y_{2}^{T}y_{2}. \tag{3.0.2}\]
Substituting (3.0.2) into (3.0.1) we obtain
\[(\alpha^{2}-1)\left(-\frac{\beta}{\alpha}y_{2}^{T}y_{2}\right)+( \beta^{2}-1)y_{2}^{T}y_{2}=0 \Rightarrow -\alpha^{2}\beta+\beta+\alpha\beta^{2}-\alpha=0\] \[\Rightarrow (\beta-\alpha)(\alpha\beta+1)=0\] \[\Rightarrow \beta=\alpha\quad\mbox{or}\quad\alpha\beta=-1.\]
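As a sanity check (ours), the algebra above can be reproduced symbolically; the variable `y` below is a stand-in for the scalar \(y_{2}^{T}y_{2}\).

```python
# Symbolic double-check of the factorization above; y stands in for
# the scalar y_2^T y_2.
import sympy as sp

a, b, y = sp.symbols('alpha beta y', nonzero=True)
expr = (a**2 - 1) * (-b / a * y) + (b**2 - 1) * y  # (3.0.1) after substituting (3.0.2)
print(sp.factor(sp.cancel(expr * a / y)))  # equals (beta - alpha)*(alpha*beta + 1)
```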
Since the columns of \(B\) are orthogonal, \(\alpha\neq\beta\). We show that \(\alpha\beta=-1\) leads to a contradiction, which in turn implies that no orthogonal \(M\in\mathcal{S}(R_{7,2})\) exists and hence \(q(R_{7,2})>2\). Suppose \(\alpha\beta=-1\); then
\[I_{4}-\left[\begin{array}{cc}C_{1}^{2}&0\\ 0&C_{2}^{2}\end{array}\right] = I_{4}-C^{2}\ =\ BB^{T}\ =\ \left[\begin{array}{ccc}x_{1}&x_{2}& \alpha x_{2}\\ y_{1}&y_{2}&\beta y_{2}\end{array}\right]\left[\begin{array}{ccc}x_{1}^{T}& y_{1}^{T}\\ x_{2}^{T}&y_{2}^{T}\\ \alpha x_{2}^{T}&\beta y_{2}^{T}\end{array}\right]\] \[= \left[\begin{array}{ccc}x_{1}x_{1}^{T}+(1+\alpha^{2})x_{2}x_{2 }^{T}&x_{1}y_{1}^{T}+(1+\alpha\beta)x_{2}y_{2}^{T}\\ y_{1}x_{1}^{T}+(1+\alpha\beta)y_{2}x_{2}^{T}&y_{1}y_{1}^{T}+(1+\beta^{2})y_{2}y _{2}^{T}\end{array}\right]\] \[= \left[\begin{array}{ccc}x_{1}x_{1}^{T}+(1+\alpha^{2})x_{2}x_{2 }^{T}&x_{1}y_{1}^{T}\\ y_{1}x_{1}^{T}&y_{1}y_{1}^{T}+(1+\beta^{2})y_{2}y_{2}^{T}\end{array}\right].\]
So \(x_{1}y_{1}^{T}=0\), since \(C\) is block diagonal and hence the off-diagonal blocks of \(I_{4}-C^{2}\) are zero. This implies that either \(x_{1}=0\) or \(y_{1}=0\), which is a contradiction since the entries of \(B\) are nonzero.
To show \(q(R_{7,2})=3\), we see that \(R_{7,2}\) results from joined duplication of a vertex of \(G189\) in [3] or [8] (i.e., the graph obtained from \(R_{7,2}\) by contracting edge \(\{3,4\}\) is isomorphic to \(G189\)). From Table 3 in [8] we find that \(q(G189)=3\), so \(q(R_{7,2})\leq 3\). Combined with \(q(R_{7,2})>2\), this gives \(q(R_{7,2})=3\).
\(R_{8,1}\): The graph \(R_{8,1}\) is a closed candle, \(R_{8,1}\cong H_{4}\). Thus \(q(R_{8,1})=2\) by Lemma 2.3.
\(R_{8,2}\): The following matrix \(M_{8,2}\) is a matrix in \(\mathcal{S}(R_{8,2})\),
\[M_{8,2}=\frac{1}{\sqrt{5}}\left[\begin{array}{cccccccc}1&1&1&0&0&1&0&-1\\ 1&1&-1&0&-1&0&0&1\\ 1&-1&0&\beta&0&0&\alpha&0\\ 0&0&\beta&0&1&1&0&\alpha\\ 0&-1&0&1&-1&1&-1&0\\ 1&0&0&1&1&-1&-1&0\\ 0&0&\alpha&0&-1&-1&0&\beta\\ -1&1&0&\alpha&0&0&\beta&0\end{array}\right],\]
where \(\alpha=(\sqrt{5}+1)/2\) and \(\beta=(\sqrt{5}-1)/2\). Since \(M_{8,2}\) is orthogonal, \(q(R_{8,2})=2\).
\(R_{8,3}\): The following matrix \(M_{8,3}\) is a matrix in \(\mathcal{S}(R_{8,3})\),
\[M_{8,3}=\frac{1}{\sqrt{12}}\left[\begin{array}{cccccccc}2&\sqrt{2}&0&-\sqrt{2}&\sqrt{2}&0&0&\sqrt{2}\\ \sqrt{2}&0&\sqrt{2}&0&-2&-2&0&0\\ 0&\sqrt{2}&-2&\sqrt{2}&0&-\sqrt{2}&-\sqrt{2}&0\\ -\sqrt{2}&0&\sqrt{2}&0&0&0&-2&2\\ \sqrt{2}&-2&0&0&1&0&-2&-1\\ 0&-2&-\sqrt{2}&0&0&-1&1&2\\ 0&0&-\sqrt{2}&-2&-2&1&-1&0\\ \sqrt{2}&0&0&2&-1&2&0&1\end{array}\right].\]
The matrix \(M_{8,3}\) is orthogonal, so \(q(R_{8,3})=2\).
\(R_{8,4}\): Note that in our labelling of \(R_{8,4}\), \(N_{1}(7)=\{1,3,6,8\}\) and \(N_{1}(8)=\{1,3,6,7\}\). Let \(G\) be the graph obtained from \(R_{8,4}\) by contracting the edge \(\{7,8\}\) to the vertex (78) (and replacing every pair of multiple edges by a single edge). The following matrix \(M\) is a matrix in \(\mathcal{S}(G)\),
\[M=\frac{1}{3}\left[\begin{array}{rrrrrrr}0&-\sqrt{3}&0&0&\sqrt{3}&0&\sqrt{3}\\ -\sqrt{3}&1&-\sqrt{3}&1&1&0&0\\ 0&-\sqrt{3}&0&\sqrt{3}&0&0&-\sqrt{3}\\ 0&1&\sqrt{3}&1&1&\sqrt{3}&0\\ \sqrt{3}&1&0&1&1&-\sqrt{3}&0\\ 0&0&0&\sqrt{3}&-\sqrt{3}&0&\sqrt{3}\\ \sqrt{3}&0&-\sqrt{3}&0&0&\sqrt{3}&0\end{array}\right]\]
(here the vertices are ordered as \(\{1,2,3,4,5,6,(78)\}\)). The matrix \(M\) is orthogonal, so \(q(G)=2\). Since \(R_{8,4}\) is obtained from \(G\) by joined duplication of (78), Corollary 3.3 in [3] implies \(q(R_{8,4})\leq q(G)=2\), so \(q(R_{8,4})=2\).
\(R_{8,5}\): Note that \(R_{8,5}\cong K_{4}\Box K_{2}\). Thus by Corollary 6.8 in [2], we have \(q(R_{8,5})=2\).
\(R_{8,6}\): The following matrix \(M_{8,6}\) is a matrix in \(\mathcal{S}(R_{8,6})\),
\[M_{8,6}=\frac{1}{\sqrt{10}}\left[\begin{array}{rrrrrrrr}-\sqrt{2}&\sqrt{2}&- 1&0&0&0&-1&-2\\ \sqrt{2}&\sqrt{2}&-2&1&0&0&0&1\\ -1&-2&-\sqrt{2}&\sqrt{2}&1&0&0&0\\ 0&1&\sqrt{2}&\sqrt{2}&2&-1&0&0\\ 0&0&1&2&-\sqrt{2}&\sqrt{2}&-1&0\\ 0&0&0&-1&\sqrt{2}&\sqrt{2}&-2&1\\ -1&0&0&0&-1&-2&-\sqrt{2}&\sqrt{2}\\ -2&1&0&0&0&1&\sqrt{2}&\sqrt{2}\end{array}\right].\]
The matrix \(M_{8,6}\) is orthogonal, so \(q(R_{8,6})=2\).
\(R_{9,1}\): Consider a matrix \(M\in\mathcal{S}(R_{9,1})\) where we use variables for each edge and vertex. That is,
\[[M]_{ij}=\begin{cases}x_{ij}&\text{if $ij\in E(R_{9,1})$,}\\ x_{ii}&\text{if $i=j$, and}\\ 0&\text{if $ij\notin E(R_{9,1})$.}\end{cases}\]
Suppose \(M\) is an orthogonal matrix. Note that the edges of \(R_{9,1}\) can be partitioned into the 9-cycle \((1,2,3,4,5,6,7,8,9,1)\), and the 3-cycles \((1,4,7,1)\), \((2,5,8,2)\), and \((3,6,9,3)\). Since the edge \(\{1,2\}\) does not lie in any triangle in \(R_{9,1}\), we have
\[0=[M^{2}]_{12}=x_{11}x_{12}+x_{12}x_{22}.\]
Since \(x_{12}\neq 0\), we conclude that \(x_{22}=-x_{11}\). Repeating this argument for each edge of the 9-cycle, and using that the cycle has odd length, we obtain \(x_{11}=-x_{11}\); hence \(x_{ii}=0\) for all \(1\leq i\leq 9\).
Now consider the 3-cycle \((1,4,7,1)\). Taking account of the walks of length 2 between 1 and 7, we have
\[0=[M^{2}]_{17}=x_{11}x_{17}+x_{17}x_{77}+x_{14}x_{47}=x_{14}x_{47}.\]
But since \(x_{14},x_{47}\neq 0\), this is impossible. Thus there is no orthogonal matrix \(M\in\mathcal{S}(R_{9,1})\), and we conclude \(q(R_{9,1})>2\).
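The two combinatorial facts used above, that the edge \(\{1,2\}\) lies in no triangle while vertices \(1\) and \(7\) have vertex \(4\) as their only common neighbor, are easy to confirm by machine; the following networkx snippet (ours) builds \(R_{9,1}\) from the edge partition described above.

```python
# Machine check (ours) of the combinatorial facts used for R_{9,1}.
import networkx as nx

G = nx.cycle_graph(range(1, 10))            # the 9-cycle (1,2,...,9,1)
G.add_edges_from([(1, 4), (4, 7), (7, 1),   # the 3-cycle (1,4,7,1)
                  (2, 5), (5, 8), (8, 2),   # the 3-cycle (2,5,8,2)
                  (3, 6), (6, 9), (9, 3)])  # the 3-cycle (3,6,9,3)
print(sorted(nx.common_neighbors(G, 1, 2)))  # []  -> {1,2} lies in no triangle
print(sorted(nx.common_neighbors(G, 1, 7)))  # [4] -> so x_14 * x_47 must vanish
```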
\(R_{9,2}\): Consider a matrix \(M\in\mathcal{S}(R_{9,2})\) where we use variables for each edge and vertex. That is,
\[[M]_{ij}=\begin{cases}x_{ij}&\text{if $ij\in E(R_{9,2})$,}\\ x_{ii}&\text{if $i=j$, and}\\ 0&\text{if $ij\notin E(R_{9,2})$.}\end{cases}\]
Suppose \(M\) is an orthogonal matrix. Note that \(\{6,7\}\in E(R_{9,2})\), but \(\{6,7\}\) is not included in any \(3\)-cycle in \(R_{9,2}\). Thus
\[0=[M^{2}]_{67}=x_{66}x_{67}+x_{67}x_{77}.\]
Since the variable \(x_{67}\neq 0\), this implies that \(x_{66}=-x_{77}\). Similarly, we see that edges \(\{2,6\}\) and \(\{3,7\}\) are not included in any \(3\)-cycles in \(R_{9,2}\). Considering \([M^{2}]_{26}\) and \([M^{2}]_{37}\) we derive \(x_{22}=-x_{66}\) and \(x_{33}=-x_{77}\). Combining these three equations, we have \(x_{22}=-x_{33}\). Now consider the paths of length \(2\) between vertices \(2\) and \(3\). We have
\[0=[M^{2}]_{23}=x_{22}x_{23}+x_{23}x_{33}+x_{12}x_{13}=x_{12}x_{13}.\]
But since \(x_{12},x_{13}\neq 0\), this is impossible. Thus there is no orthogonal matrix \(M\in\mathcal{S}(R_{9,2})\), and we conclude \(q(R_{9,2})>2\).
\(R_{9,3}\): Note that \(\{1,2,3\}\), \(\{4,5,9\}\), and \(\{6,7,8\}\) all induce \(K_{3}\) subgraphs in \(R_{9,3}\). Moreover, \(\{1,8,9\}\), \(\{2,5,6\}\), and \(\{3,4,7\}\) also induce \(K_{3}\) subgraphs in \(R_{9,3}\), and we see that \(R_{9,3}\cong K_{3}\Box K_{3}\). We prove in Lemma 3.3 that \(q(K_{m}\Box K_{n})=3\) for all \(m,n\geq 3\). This establishes \(q(R_{9,3})=3\).
\(R_{10,1}\): The graph \(R_{10,1}\) is a closed candle, \(R_{10,1}\cong H_{5}\). Thus \(q(R_{10,1})=2\) by Lemma 2.3.
In order to complete the preceding proof, we establish the following lemma, which shows that the bound in Proposition 3.1 of [8] is sharp for complete graphs.
**Lemma 3.3**.: _For \(m,n\geq 3\), we have \(q(K_{m}\Box K_{n})=3\)._
Proof.: Since \(q(K_{s})=2\) for any \(s\geq 2\), it follows from a basic application of Kronecker products that \(q(K_{m}\Box K_{n})\leq 3\) (this inequality can also be deduced from Proposition 3.1 in [8]). It remains to verify that in fact \(q(K_{m}\Box K_{n})\geq 3\).
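The upper bound can be made explicit. The following sketch (our own construction, under the stated choices) produces a matrix in \(\mathcal{S}(K_{m}\Box K_{n})\) with spectrum contained in \(\{-2,0,2\}\), hence with at most three distinct eigenvalues, from symmetric orthogonal matrices in \(\mathcal{S}(K_{m})\) and \(\mathcal{S}(K_{n})\).

```python
# Sketch (ours): A and B are symmetric orthogonal matrices with all
# off-diagonal entries nonzero, so A lies in S(K_m) and B in S(K_n),
# each with spectrum {-1, 1}. The Kronecker sum A x I + I x B then has
# the zero pattern of the Cartesian product of K_m and K_n, and its
# eigenvalues are sums lambda + mu, i.e. they lie in {-2, 0, 2}.
import numpy as np

def dense_reflection(k):
    """Householder reflection I - 2 v v^T with a zero-free unit vector v:
    symmetric, orthogonal, and all off-diagonal entries (-2 v_i v_j) nonzero."""
    v = np.arange(1, k + 1, dtype=float)
    v /= np.linalg.norm(v)
    return np.eye(k) - 2 * np.outer(v, v)

m, n = 3, 4
A, B = dense_reflection(m), dense_reflection(n)
C = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)
print(sorted(set(np.round(np.linalg.eigvalsh(C), 9))))  # [-2.0, 0.0, 2.0]
```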
If \(q(K_{m}\Box K_{n})=2\), then there exists a matrix \(C\in\mathcal{S}(K_{m}\Box K_{n})\) that satisfies \(C^{2}=I\) and is given by
\[C=\left[\begin{array}{ccccc}A_{11}&D_{12}&D_{13}&\ldots&D_{1m}\\ D_{12}&A_{22}&D_{23}&\ldots&D_{2m}\\ &&\ddots&&\\ D_{1m}&D_{2m}&D_{3m}&\ldots&A_{mm}\end{array}\right],\]
where \(D_{ij}\) is an \(n\times n\) diagonal matrix with nonzero diagonal entries for each \(1\leq i<j\leq m\) and \(A_{ii}\in\mathcal{S}(K_{n})\) for \(1\leq i\leq m\).
Let \([C^{2}]_{ij}\) denote the \((i,j)\) block of the matrix \(C^{2}\) partitioned conformally with \(C\) above. Then
\[[C^{2}]_{12} =A_{11}D_{12}+D_{12}A_{22}+\sum_{j=3}^{m}D_{1j}D_{2j}=0,\] \[[C^{2}]_{13} =A_{11}D_{13}+D_{13}A_{33}+\sum_{j\neq 1,3}D_{1j}D_{3j}=0,\] \[[C^{2}]_{23} =A_{22}D_{23}+D_{23}A_{33}+\sum_{j\neq 2,3}D_{2j}D_{3j}=0.\]
Since \(D_{ij}\) is invertible, we have
\[A_{22}=-D_{12}^{-1}A_{11}D_{12}-D_{12}^{-1}\left(\sum_{j=3}^{m}D_{1j}D_{2j} \right), \tag{3.0.3}\]
\[A_{33}=-D_{13}^{-1}A_{11}D_{13}-D_{13}^{-1}\left(\sum_{j\neq 1,3}D_{1j}D_{3j} \right), \tag{3.0.4}\]
\[A_{22}=-D_{23}A_{33}D_{23}^{-1}-\left(\sum_{j\neq 2,3}D_{2j}D_{3j}\right)D_{23} ^{-1}. \tag{3.0.5}\]
From (3.0.4) and (3.0.5) we have
\[A_{22}=D_{23}D_{13}^{-1}A_{11}D_{13}D_{23}^{-1}+D_{23}D_{13}^{-1}\left(\sum_{j \neq 1,3}D_{1j}D_{3j}\right)D_{23}^{-1}-\left(\sum_{j\neq 2,3}D_{2j}D_{3j} \right)D_{23}^{-1}. \tag{3.0.6}\]
Note that \([D_{12}]_{ij}=c_{i(n+j)}\), \([D_{13}]_{ij}=c_{i(2n+j)}\), and \([D_{23}]_{ij}=c_{(n+i)(2n+j)}\), and, by assumption, each such entry is nonzero when \(i=j\). By direct calculation we have
\[[D_{23}D_{13}^{-1}A_{11}D_{13}D_{23}^{-1}]_{ij}=\frac{c_{ij}c_{(n+i)(2n+i)}c_{ j(2n+j)}}{c_{i(2n+i)}c_{(n+j)(2n+j)}}\]
and
\[[D_{12}^{-1}A_{11}D_{12}]_{ij}=\frac{c_{ij}c_{j(n+j)}}{c_{i(n+i)}}.\]
Using (3.0.3) and (3.0.6), and letting \((i,j)=(1,2),(1,3)\), and \((2,3)\) (the diagonal sums in (3.0.3) and (3.0.6) contribute nothing to these off-diagonal entries), we have
\[\frac{c_{12}c_{(n+1)(2n+1)}c_{2(2n+2)}}{c_{1(2n+1)}c_{(n+2)(2n+2) }}=-\frac{c_{12}c_{2(n+2)}}{c_{1(n+1)}} \tag{3.0.7}\] \[\frac{c_{13}c_{(n+1)(2n+1)}c_{3(2n+3)}}{c_{1(2n+1)}c_{(n+3)(2n+3) }}=-\frac{c_{13}c_{3(n+3)}}{c_{1(n+1)}}\] (3.0.8) \[\frac{c_{23}c_{(n+2)(2n+2)}c_{3(2n+3)}}{c_{2(2n+2)}c_{(n+3)(2n+3) }}=-\frac{c_{23}c_{3(n+3)}}{c_{2(n+2)}}. \tag{3.0.9}\]
Manipulating equations (3.0.7) and (3.0.8) produces the equation
\[\frac{c_{2(2n+2)}c_{(n+3)(2n+3)}}{c_{(n+2)(2n+2)}c_{3(2n+3)}} = \frac{c_{2(n+2)}}{c_{3(n+3)}}\]
or
\[c_{3(n+3)}c_{2(2n+2)}c_{(n+3)(2n+3)} = c_{(n+2)(2n+2)}c_{3(2n+3)}c_{2(n+2)}.\]
However from (3.0.9) we have
\[c_{(n+2)(2n+2)}c_{3(2n+3)}c_{2(n+2)}=-c_{2(2n+2)}c_{(n+3)(2n+3)}c_{3(n+3)},\]
a contradiction. Hence no such \(C\) exists, and \(q(K_{m}\Box K_{n})\geq 3\).
This completes the verification of the \(q\)-values listed in Table 1, and with it the proof of Theorem 2.4 for graphs with diameter at most \(2\).
## 4 Sporadic 4-Regular Graphs and Theorem 2.4
In this section we establish whether or not \(q(G)=2\) for a collection of sporadic graphs. These graphs are all connected with diameter at least 3 and arise within the proofs in Section 5.
**Lemma 4.1**.: _Table 2 lists candidate \(4\)-regular graphs with diameter at least 3 (also shown in Figure 4) and includes their \(q\)-values (or a bound on their \(q\)-value)._
Proof.: \(R_{10,2}\): To see that \(q(R_{10,2})=2\), note that the following matrix \(M_{10,2}\in\mathcal{S}(R_{10,2})\) is orthogonal,
\[M_{10,2}=\frac{1}{4}\left[\begin{array}{rrrrrrrrrr}-\sqrt{2}&-2&0&0&\sqrt{2}&0&0&2&0&-2\\ -2&\sqrt{2}&2&0&0&\sqrt{2}&0&0&-2&0\\ 0&2&-\sqrt{2}&-2&0&0&\sqrt{2}&0&0&-2\\ 0&0&-2&\sqrt{2}&2&0&0&-\sqrt{2}&-2&0\\ \sqrt{2}&0&0&2&-\sqrt{2}&2&0&0&0&-2\\ 0&\sqrt{2}&0&0&2&\sqrt{2}&-2&0&2&0\\ 0&0&\sqrt{2}&0&0&-2&-\sqrt{2}&-2&0&-2\\ 2&0&0&-\sqrt{2}&0&0&-2&\sqrt{2}&-2&0\\ 0&-2&0&-2&0&2&0&-2&0&0\\ -2&0&-2&0&-2&0&-2&0&0&0\end{array}\right].\]
\(R_{10,3}\): Note that contracting edges \(\{1,2\}\) and \(\{6,7\}\) in \(R_{10,3}\) gives a graph isomorphic to \(Q_{3}\). Since \(R_{10,3}\) can be obtained from \(Q_{3}\) by joined duplication of a pair of antipodal vertices, Corollary 3.3 from [3] implies that \(q(R_{10,3})\leq q(Q_{3})=2\). Thus \(q(R_{10,3})=2\).
\(R_{10,4}\): Theorem 5.3 in [4] shows that \(K_{n,n}\) with a perfect matching deleted has \(q\)-value 2 for all \(n\neq 1,3\). Since \(R_{10,4}\) is isomorphic to \(K_{5,5}\) with a perfect matching deleted, \(q(R_{10,4})=2\).
\(R_{12,1}\): Following the example after Lemma 1.3, set \(V_{1}=\{1,3,5,7,10,12\}\) and \(V_{2}=\{2,4,6,8,9,11\}\). Assume that there exists an \(M\in\mathcal{S}(R_{12,1})\) with \(M^{2}=I\) of the form
\[M=\left[\begin{array}{cc}C&B\\ B^{T}&D\end{array}\right],\]
where \(C\) and \(D\) are diagonal matrices. Then by Lemma 1.3 we know that \(B\) must be an orthogonal matrix. Consider the last two rows of \(B\) (whose nonzero patterns are the same since vertices 10 and 12 have the same neighbors) and suppose they are equal to \(u=[a,b,c,d,0,0]\) and \(v=[x,y,z,w,0,0]\). Since both \(u\) and \(v\) are orthogonal to row 4 of \(B\), we may deduce that \([z,w]=\alpha[c,d]\) for some nonzero scalar \(\alpha\). Similarly, since both \(u\) and \(v\) are orthogonal to row 2 and to row 3, we conclude that \([y,z]=\beta[b,c]\) and \([x,y]=\gamma[a,b]\), where \(\beta\) and \(\gamma\) are nonzero. It now follows that \(\alpha=\beta=\gamma\), and hence row 5 is a multiple of row 6, which contradicts the assumption that \(B\) is orthogonal. Hence \(q(R_{12,1})>2\).
\begin{table}
\begin{tabular}{c|c||c|c}
Graph & \(q\)-value & Graph & \(q\)-value \\ \hline
\(R_{10,2}\) & \(2\) & \(R_{12,2}\) & \(>2\) \\
\(R_{10,3}\) & \(2\) & \(R_{12,3}\) & \(2\) \\
\(R_{10,4}\) & \(2\) & \(R_{14,1}\) & \(2\) \\
\(R_{12,1}\) & \(>2\) & & \\
\end{tabular}
\end{table}
Table 2: Sporadic \(4\)-regular graphs with diameter at least \(3\).
Figure 4: Sporadic 4-regular graphs with diameter at least 3.
\(R_{12,2}\): Using a setup similar to that for \(R_{12,1}\), set \(V_{1}=\{2,3,4,5,11,12\}\) and \(V_{2}=\{1,6,7,8,9,10\}\). Assume that there exists an \(M\in\mathcal{S}(R_{12,2})\) with \(M^{2}=I\) of the form
\[M=\left[\begin{array}{cc}C&B\\ B^{T}&D\end{array}\right],\]
where \(C\) and \(D\) are diagonal matrices. Then by Lemma 1.3 we know that \(B\) must be an orthogonal matrix. Consider rows 2 and 3 of \(B\) (whose nonzero patterns are the same since vertices 3 and 4 have the same neighbors). Following a similar argument as used for the graph \(R_{12,1}\) (both rows 2 and 3 must be orthogonal to rows 1, 5, and 6), we can deduce that rows 2 and 3 must be multiples of one another, which is a contradiction. Hence \(q(R_{12,2})>2\).
\(R_{12,3}\): To see that \(q(R_{12,3})=2\), we present an orthogonal matrix \(M_{12,3}\in\mathcal{S}(R_{12,3})\) in block form, as above. We take \(V_{1}=\{1,2,3,4,5,6\}\) and \(V_{2}=\{7,8,9,10,11,12\}.\) Let
\[B_{12,3}=\left[\begin{array}{cccccc}-\zeta&-\beta&\zeta&-\sqrt{5}/5&0&0\\ \varepsilon&-\gamma&\beta&0&\delta&0\\ \sqrt{5}/5&0&0&-\zeta&-\beta&-\zeta\\ \sqrt{5}/5&0&\sqrt{5}/5&0&-\varepsilon&\zeta\\ 0&\delta&0&-\varepsilon&\gamma&\beta\\ 0&-\varepsilon&-\zeta&-\sqrt{5}/5&0&\sqrt{5}/5\end{array}\right],\]
where
\[\alpha=(\sqrt{5}+1)/2,\quad\beta=\sqrt{2\sqrt{5}/5-4/5},\quad\gamma=\sqrt{-7 \sqrt{5}/10+17/10},\quad\delta=\sqrt{5}/10+1/2,\]
\[\varepsilon=\alpha\beta,\ \ \mbox{and}\ \ \zeta=\alpha\gamma.\]
Thus
\[M_{12,3}=\left[\begin{array}{cc}0&B_{12,3}\\ B_{12,3}^{T}&0\end{array}\right]\]
has the desired properties.
\(R_{14,1}\): To see that \(q(R_{14,1})=2\), we provide an orthogonal matrix \(M_{14,1}\in\mathcal{S}(R_{14,1})\) in block form. We take \(V_{1}=\{1,2,3,4,5,6,7\}\) and \(V_{2}=\{8,9,10,11,12,13,14\}\), and let
\[B_{14,1}=\frac{1}{2}\left[\begin{array}{ccccccc}1&0&1&0&0&-1&1\\ 1&1&0&1&0&0&-1\\ -1&1&1&0&1&0&0\\ 0&-1&1&1&0&1&0\\ 0&0&-1&1&1&0&1\\ 1&0&0&-1&1&1&0\\ 0&1&0&0&-1&1&1\end{array}\right].\]
Then
\[M_{14,1}=\left[\begin{array}{cc}0&B_{14,1}\\ B_{14,1}^{T}&0\end{array}\right]\]
has the desired properties. We note that \(M_{14,1}\) was presented in [19].
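Because \(M_{14,1}^{2}=\operatorname{diag}(B_{14,1}B_{14,1}^{T},\,B_{14,1}^{T}B_{14,1})\), orthogonality of \(M_{14,1}\) reduces to orthogonality of \(B_{14,1}\), which is circulant; a short check (ours) exploits this.

```python
# Check (ours) that B_{14,1} is orthogonal, which by the block form
# M^2 = diag(B B^T, B^T B) makes M_{14,1} orthogonal as well.
import numpy as np

row = np.array([1, 0, 1, 0, 0, -1, 1]) / 2
B = np.array([np.roll(row, k) for k in range(7)])  # circulant B_{14,1}
M = np.block([[np.zeros((7, 7)), B], [B.T, np.zeros((7, 7))]])
print(np.allclose(B @ B.T, np.eye(7)), np.allclose(M @ M, np.eye(14)))  # True True
```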
## 5 Proof of Theorem 2.4 for 4-Regular Graphs with Large Diameter
In this section we complete the proof of items (3), (4) and (5) of Theorem 2.4 for graphs with diameter at least 3. Recall from Section 1.1 that, in the distance partition from any vertex \(v\), every \(u\in N_{i}(v)\) has at least two neighbors in \(N_{i-1}(v)\) for all \(i\geq 2\). These predecessor edges, together with the \(\deg(v)\) edges incident with \(v\), account for \(2n-2-\deg(v)=2n-6\) of the \(2n\) edges of \(G\). We consider the possible locations of the six extra edges not accounted for by these predecessors. Throughout the proofs in this section, once all of the edges incident with a vertex have been accounted for, we say that the vertex is _full_.
**Lemma 5.1**.: _Let \(G\) be a connected \(4\)-regular graph with diameter at least \(3\) and let \(v\) be a vertex for which \(\epsilon(v)\geq 3\). Suppose that every vertex in \(N_{2}(v)\) and \(N_{3}(v)\) has exactly two predecessors and that \(G[N_{1}(v)]\cup G[N_{2}(v)]\) contains at most three edges. Then \(q(G)>2\)._
Proof.: Assume \(q(G)=2\).
_Case 1:_\(N_{1}(v)\) is an independent set.
Each pair of vertices of \(N_{1}(v)\) share \(v\) as a common neighbor, so each pair must share a common neighbor in \(N_{2}(v)\) as well. There are \(\binom{4}{2}=6\) such pairings, each resulting in a distinct vertex in \(N_{2}(v)\) with two predecessors in \(N_{1}(v)\) and accounting for all the edges between \(N_{1}(v)\) and \(N_{2}(v)\), also implying that \(|N_{2}(v)|=6\).
Suppose each vertex in \(N_{2}(v)\) has a neighbor in \(N_{2}(v)\). Then there are three edges in \(G[N_{2}(v)]\) and \(G[N_{2}(v)]=3K_{2}\). Let \(xy\) be an edge in \(N_{2}(v)\) and let \(a,b\) be the predecessors of \(x\). Then \(a,b\) are not both predecessors of \(y\). Suppose that \(b\) and \(y\) are not adjacent. Then \(bxy\) is a unique shortest path of length \(2\) between \(b\) and \(y\).
Otherwise some vertex, say \(u\in N_{2}(v)\), has no neighbor in \(N_{2}(v)\). Vertex \(u\) shares a common neighbor in \(N_{1}(v)\) with four other vertices in \(N_{2}(v)\), call them \(x_{1},x_{2},x_{3},x_{4}\). To avoid unique shortest paths of length \(2\) from \(u\) to \(x_{i}\) for \(i=1,2,3,4\), \(u\) and \(x_{i}\) must have a common successor. These common successors are distinct since each has just \(2\) predecessors. But then \(\deg(u)\geq 6\), a contradiction.
_Case 2:_\(G[N_{1}(v)]\) contains exactly one edge.
This case follows the same logic as Case 1, with the exception that there are five pairings of nonadjacent vertices in \(N_{1}(v)\) that share \(v\) as a common neighbor, so each of these pairs must have a common neighbor in \(N_{2}(v)\). As before, this accounts for all the edges between \(N_{1}(v)\) and \(N_{2}(v)\), so there are exactly five vertices in \(N_{2}(v)\). Since \(G[N_{2}(v)]\) contains at most two edges, some vertex \(u\) in \(N_{2}(v)\) has no neighbors in \(N_{2}(v)\). Vertex \(u\) shares a neighbor in \(N_{1}(v)\) with at least three other vertices in \(N_{2}(v)\). Arguing as in Case 1, we obtain \(\deg(u)\geq 5\), a contradiction.
_Case 3:_\(G[N_{1}(v)]\) contains exactly two edges.
Suppose the two edges do not share a vertex. Then the four pairs of nonadjacent vertices in \(N_{1}(v)\) that have \(v\) as a common neighbor must each have a common neighbor in \(N_{2}(v)\), accounting for all the edges between \(N_{1}(v)\) and \(N_{2}(v)\) and resulting in exactly four vertices in \(N_{2}(v)\). Since there is at most one edge in \(N_{2}(v)\), there is a vertex \(u\) in \(N_{2}(v)\) that has no neighbor in \(N_{2}(v)\). Let \(a\) be a neighbor of \(u\) in \(N_{1}(v)\) with \(ab\) one of the two edges in \(G[N_{1}(v)]\). Then \(uab\) is a unique shortest path of length \(2\) in \(G\).
Otherwise the two edges in \(G[N_{1}(v)]\) share a vertex, say \(c\). Then \(c\) has exactly one neighbor \(w\) in \(N_{2}(v)\). Let the non-neighbor of \(c\) in \(N_{1}(v)\) be \(d\). Since \(c\) and \(d\) have \(v\) as a common neighbor, the other predecessor of \(w\) in \(N_{1}(v)\) must be \(d\). To prevent a unique shortest path of length \(2\) between \(w\) and the neighbors of \(c\) in \(N_{1}(v)\), there must be an edge connecting \(w\) to a vertex \(y\in N_{2}(v)\) whose two predecessors are the two neighbors of \(c\) in \(N_{1}(v)\). But this creates a unique shortest path \(dwy\) of length \(2\). Note that in this case we did not use the hypothesis that each vertex in \(N_{3}(v)\) has two predecessors. This will be used later in the proof of Lemma 5.3.
_Case 4:_\(G[N_{1}(v)]\) contains exactly three edges.
If \(G[N_{1}(v)]\cong P_{4}\), then the two endpoints of the \(P_{4}\) must have a common neighbor \(w\) in \(N_{2}(v)\). Then \(w\) has a unique shortest path of length \(2\) to both non-end vertices of \(P_{4}\).
If \(G[N_{1}(v)]\cong C_{3}\cup K_{1}\), the same argument as in Case 3 applies, with \(c\) being any vertex in the \(C_{3}\). Note again, that in these first two possibilities that the hypothesis that each vertex in \(N_{3}(v)\) has two predecessors was not used. This will be used at the end of the proof of Lemma 5.3.
Suppose \(G[N_{1}(v)]\cong K_{1,3}\). Then each pair of nonadjacent vertices in \(N_{1}(v)\) has a common neighbor in \(N_{1}(v)\), in addition to the common neighbor \(v\). But in order for each vertex in \(N_{1}(v)\) to have degree \(4\), there must be three vertices in \(N_{2}(v)\), each adjacent to a different pair of leaves of the \(K_{1,3}\). Each pair of vertices in \(N_{2}(v)\) has one common neighbor in \(N_{1}(v)\), and since \(N_{2}(v)\) is an independent set, this implies that each pair of vertices in \(N_{2}(v)\) must have a common neighbor in \(N_{3}(v)\). These neighbors are distinct because each vertex in \(N_{3}(v)\) has two predecessors. Note that all three vertices in \(N_{2}(v)\) are full. Let \(a\) be a leaf vertex of the \(K_{1,3}\); then there exist two paths of length \(2\), \(asx\) and \(aty\) for some \(s,t\in N_{2}(v)\) and \(x,y\in N_{3}(v)\), that are the unique shortest paths of length \(2\) between \(a\) and \(x\) and between \(a\) and \(y\), a contradiction.
**Corollary 5.2**.: _Let \(G\) be a connected \(4\)-regular graph with diameter at least \(3\). If \(v\) is a vertex with \(\epsilon(v)\geq 3\) and for which there is no vertex in the distance partition from \(v\) with 3 or 4 predecessors, then \(q(G)>2\)._
Proof.: Let \(N_{d}(v)\) be the furthest distance set of \(v\). Since every vertex in \(N_{d}(v)\) has exactly two predecessors, we know that \(G[N_{d}(v)]\) is \(2\)-regular. So at least three of the six extra edges must be in \(N_{d}(v)\), and the maximum number of edges that could appear within the subgraphs \(G[N_{i}(v)]\) for \(1\leq i\leq d-1\) is three. So the hypotheses of Lemma 5.1 are satisfied and \(q(G)>2\).
**Lemma 5.3**.: _Let \(G\) be a connected \(4\)-regular graph with diameter at least \(3\) and let \(v\) be a vertex for which \(\epsilon(v)\geq 3\). If in the distance partition from \(v\) no vertex has four predecessors, then either \(G\cong R_{10,3}\), \(G\cong K_{3}\Box C_{4}\), or \(q(G)>2\)._
Proof.: Suppose \(q(G)=2\). We assume \(G\) has a vertex with three predecessors as otherwise the result follows from Corollary 5.2. We begin by establishing three claims.
_Claim #1:_ Any vertex with three predecessors cannot have a successor.
_Proof of Claim #1:_ Suppose some vertex \(w\) in \(N_{i}(v)\) has three predecessors in \(N_{i-1}(v)\) and one successor \(z\) in \(N_{i+1}(v)\). Note that \(i\) must be at least \(2\). Since \(z\) has at most three predecessors in \(N_{i}(v)\), there must be a neighbor \(x\) of \(z\) in \(N_{i+1}(v)\) or \(N_{i+2}(v)\). But then there is a unique shortest path of length \(2\) between \(w\) and \(x\), as \(w\) is full.
Observe that the proof of Claim #1 also implies that if \(G\) is a graph and \(v\in V(G)\) for which some vertex in the distance partition from \(v\) has four predecessors, any vertex \(w\in N_{i}(v)\) with three predecessors cannot have a successor \(z\) unless \(z\) is a vertex with four predecessors. We will use this in Case 4 of the proof of Lemma 5.4.
_Claim #2:_ Vertices with three predecessors occur in pairs, with each pair the endpoints of a path in some distance set \(N_{i}(v)\).
_Proof of Claim #2:_ Suppose \(w_{1}\in N_{i}(v)\) has three predecessors. Since it can have no successors but has degree \(4\), it must have a neighbor \(w_{2}\in N_{i}(v)\). So \(w_{1}\) is one end of a path \(w_{1}w_{2}\ldots w_{k}\) in \(N_{i}(v)\), where each of \(w_{2},\ldots,w_{k-1}\) has two predecessors and two neighbors on the path, and hence is full, and \(w_{k-1}\) is the only neighbor of \(w_{k}\) in \(N_{i}(v)\). If \(w_{k}\) has a successor \(y\), then \(yw_{k}w_{k-1}\) is a unique shortest path of length \(2\) between \(y\) and \(w_{k-1}\). So \(w_{k}\) must have three predecessors.
_Claim #3:_ A pair of vertices with three predecessors can occur only in \(N_{i}(v)\) for \(i\geq 3\).
_Proof of Claim #3:_ Let \(w\) be a vertex with three predecessors. Suppose \(w\in N_{2}(v)\). Note that \(w\) cannot have a successor. Let \(z\) be another vertex in \(N_{2}(v)\) with three predecessors, and no successor. Then there are at least two vertices in \(N_{1}(v)\) that are common predecessors of \(w\) and \(z\). Since \(\epsilon(v)\geq 3\), there must be a vertex \(x\in N_{2}(v)\) that has a successor \(y\in N_{3}(v)\). So \(x\) cannot have three predecessors and therefore must have two. If a predecessor of \(x\) is a common predecessor of \(w\) and \(z\), then there is a unique shortest path of length \(2\) between \(y\) and this predecessor of \(x\). So \(w\) and \(z\) must have exactly two shared predecessors, and the two predecessors of \(x\) each has exactly one of \(\{w,z\}\) as a successor. This implies that, via \(N_{1}(v)\), there is a path of length \(2\) between \(x\) and \(w\) (and between \(x\) and \(z\)). Since these paths cannot be unique shortest paths, and \(x\) cannot be adjacent to \(w\) (as it would then be on the \(wz\) path in \(N_{2}(v)\), and hence have degree at least \(5\)), \(x\) must have a common neighbor \(t\neq z\) in \(N_{2}(v)\) with \(w\). But
then \(yxt\) is a unique shortest path of length \(2\) between \(y\) and \(t\).
Let \(S\) represent the set of vertices with three predecessors. We have shown that no vertex in \(S\) can have a successor and all vertices in \(S\) must occur in pairs in the distance sets \(N_{i}(v)\) for \(i\geq 3\). A pair of such vertices requires a minimum of three of the six additional edges. It follows that \(|S|=2\) or \(|S|=4\), leaving at most three or zero edges that could appear in the subgraphs \(G[N_{i}(v)]\) for \(1\leq i\leq\epsilon(v)-1\), respectively. Let \(j\) represent the smallest value of \(i\) for which \(N_{i}(v)\) contains an element of \(S\). If \(j>3\), then Lemma 5.1 implies that \(q(G)>2\), so we only have to consider \(j=3\).
_Case 1:_\(N_{1}(v)\) is an independent set.
By the argument in Case 1 of Lemma 5.1, \(N_{2}(v)\) consists of six vertices, such that each vertex shares one common neighbor in \(N_{1}(v)\) with each of four other vertices in \(N_{2}(v)\). Consider \(u\in N_{2}(v)\), and denote by \(w\) the unique vertex in \(N_{2}(v)\) with which \(u\) does not share a neighbor in \(N_{1}(v)\).
_Case 1(a):_\(N_{2}(v)\) is also an independent set.
Let \(y_{1},y_{2}\) be the successors of \(u\) in \(N_{3}(v)\) and let \(x_{1},x_{2},x_{3},x_{4}\) be the vertices in \(N_{2}(v)\) that share a unique common predecessor with \(u\). Since there cannot be a unique shortest path of length \(2\) between \(u\) and \(x_{i}\) for any \(i\), each \(x_{i}\) must be adjacent to \(y_{1}\) or \(y_{2}\). Since neither \(y_{1}\) nor \(y_{2}\) has four predecessors, \(y_{1}\) must be adjacent to two of the \(x_{i}\) and \(y_{2}\) to the other two. Then \(y_{1},y_{2}\in S\). Neither \(y_{1}\) nor \(y_{2}\) is adjacent to \(w\), so repeating the same argument, there are successors \(z_{1},z_{2}\in S\) of \(w\) such that \(z_{1}\) is adjacent to two of the \(x_{i}\) and \(z_{2}\) to the other two. This accounts for all edges between \(N_{2}(v)\) and \(N_{3}(v)\), so \(N_{3}(v)=\{y_{1},y_{2},z_{1},z_{2}\}=S\). Note that each \(x_{i}\) is adjacent to exactly one of \(y_{1},y_{2}\) and one of \(z_{1},z_{2}\). Since none of \(y_{1},y_{2},z_{1},z_{2}\) has a successor by Claim #1, there are exactly two edges in \(G[N_{3}(v)]\) that do not share an endpoint. If \(y_{1}y_{2}\) is not an edge, then \(y_{1}uy_{2}\) is the unique shortest path of length \(2\). So \(y_{1}y_{2}\) is an edge, and \(z_{1}z_{2}\) is an edge. Without loss of generality, suppose that \(x_{1}\) is adjacent to \(y_{1}\). Then \(x_{1}\) and \(y_{2}\) are not adjacent and \(x_{1}y_{1}y_{2}\) is a unique shortest path of length \(2\).
_Case 1(b):_\(N_{2}(v)\) is not independent, so \(|S|=2\).
Suppose there is an edge \(ut\) in \(G[N_{2}(v)]\), not sharing an endpoint with any other edge in \(G[N_{2}(v)]\). Then there is a shortest path of length \(2\) through \(N_{1}(v)\) from \(u\) to at least three other vertices in \(N_{2}(v)\). So for each of these three vertices there must be a second shortest path of length \(2\), necessarily through \(N_{3}(v)\). Since \(\deg(u)=4\), \(u\) can only have one successor, so there is a single vertex \(x\in N_{3}(v)\) on all three paths. But then \(x\) has four predecessors, contradicting the hypothesis of the lemma.
Now suppose \(ut\) and \(rt\) are edges in \(G[N_{2}(v)]\). Recall that \(w\) is the vertex in \(N_{2}(v)\) that shares no predecessors with \(u\). Let \(y\) and \(z\) be the remaining vertices in \(N_{2}(v)\). To avoid a unique shortest path of length \(2\) through \(N_{1}(v)\) from \(u\) to \(y\) or \(z\), there must be a vertex \(x\) in \(N_{3}(v)\) whose predecessors are \(u\), \(y\), and \(z\). Then \(tux\) is a unique shortest path of length \(2\).
Therefore Case 1 cannot occur, and \(N_{1}(v)\) is never an independent set.
_Case 2:_\(G[N_{1}(v)]\) contains exactly one edge, so \(|S|=2\).
We follow the proof of Case 2 of Lemma 5.1 but make more careful note of the location of the edges in \(N_{2}(v)\). Let \(a\) and \(b\) represent the adjacent vertices in \(N_{1}(v)\). Let \(u\) be the vertex in \(N_{2}(v)\) whose two predecessors are \(N_{1}(v)\setminus\{a,b\}\). Each of the four vertices in \(N_{2}(v)\setminus\{u\}\) has exactly one of \(a\) or \(b\) as a predecessor. Note that there are four shortest paths of length \(2\) between these vertices and the other vertex in \(\{a,b\}\). Since these paths cannot be unique, there must be two independent edges in \(N_{2}(v)\) that connect vertices with no common predecessor in \(\{a,b\}\). Note that if two adjacent vertices in \(N_{2}(v)\) do not share a predecessor in \(N_{1}(v)\setminus\{a,b\}\), then there are unique shortest paths of length \(2\) between the vertices of \(N_{1}(v)\setminus\{a,b\}\) and the neighbors of their successors in \(N_{2}(v)\). So the independent edges between the vertices of \(N_{2}(v)\setminus\{u\}\) connect vertices so that each pair has one common predecessor in \(N_{1}(v)\setminus\{a,b\}\) and has one non-common predecessor in \(\{a,b\}\).
Each endpoint of the independent edges in \(N_{2}(v)\) has exactly one successor, and \(u\) must have two successors. It follows that there are six edges between the vertices in \(N_{2}(v)\) and the vertices in \(N_{3}(v)\).
Since the two vertices in \(S\cap N_{3}(v)\) each have three predecessors, there can be no other vertices in \(N_{3}(v)\) and \(S=N_{3}(v)\). From Claim #1, these two vertices must be adjacent to each other. Each of the four vertices in \(N_{2}(v)\setminus\{u\}\) are adjacent to one of the vertices in \(S\). The two predecessors other than \(u\) of each vertex in \(S\) must be a pair of nonadjacent vertices in \(N_{2}(v)\setminus\{u\}\) to avoid unique paths of length \(2\) between vertices in \(N_{2}(v)\setminus\{u\}\) and their non-neighbor in \(N_{3}(v)\). Additionally, the two successors of \(a\) (respectively, \(b\)) must have a common successor in \(S\), otherwise there is a unique shortest path of length \(2\) between \(a\) (respectively, \(b\)) and a vertex in \(S\). This implies that the predecessors of a vertex in \(S\) are \(u\) and two nonadjacent vertices in \(N_{2}(v)\) that share a predecessor in \(\{a,b\}\). Thus \(G\cong K_{3}\Box C_{4}\).
_Case 3:_\(G[N_{1}(v)]\) contains exactly two edges, so \(|S|=2\).
Then \(q(G)>2\) by Lemma 5.1, Case 3 and the observation at the end of its proof.
_Case 4:_\(G[N_{1}(v)]\) contains exactly three edges, so \(|S|=2\).
If \(G[N_{1}(v)]\cong P_{4}\) or \(G[N_{1}(v)]\cong C_{3}\cup K_{1}\), then by Case 4 of Lemma 5.1 and the observations within its proof, there is a pair of nonadjacent vertices with a unique path of length \(2\) between them. So we assume \(G[N_{1}(v)]\cong K_{1,3}\). As in that proof, \(N_{2}(v)\) consists of three vertices, each being the common successor of two leaves of the \(K_{1,3}\). Since \(j=3\), there are two adjacent vertices of \(S\) in \(N_{3}(v)\), each with three predecessors. Since \(N_{2}(v)\) is an independent set, each of its three vertices has two successors, so there are six edges between \(N_{2}(v)\) and \(N_{3}(v)\). It can now be verified that \(G\cong R_{10,3}\). For instance, let \(v\) correspond to vertex \(1\) in the drawing of \(R_{10,3}\) in Figure 4.
In all of the above cases, we either reach a contradiction to \(q(G)=2\), or \(G\) is isomorphic to either \(R_{10,3}\) or \(K_{3}\Box C_{4}\). Lemma 4.1 establishes \(q(R_{10,3})=2\). Since \(K_{3}\Box C_{4}\cong K_{3}\Box K_{2}\Box K_{2}\), Corollary 6.8 in [2] implies \(q(K_{3}\Box C_{4})=2\). This completes the proof.
**Lemma 5.4**.: _If \(G\) is a connected \(4\)-regular graph with \(q(G)=2\) and diameter at least \(3\), then \(G\) is either_
1. \(K_{3}\Box C_{4}\)_,_ \(K_{3,3}\Box K_{2}\)_, one of the graphs_ \(R_{10,2}\)_,_ \(R_{10,3}\)_,_ \(R_{10,4}\)_,_ \(R_{12,3}\)_, or_ \(R_{14,1}\) _given in Figure_ 4_;_
2. \(Q_{4}\)_; or,_
3. _a closed candle_ \(H_{k}\) _for some_ \(k\geq 6\)_._
Proof.: Recall that a full vertex is a vertex whose incident edges have all been accounted for.
Let \(v\) be a vertex of \(G\) with \(\epsilon(v)\geq 3\). If no vertex in the distance partition of \(v\) has four predecessors, then \(G\cong R_{10,3}\) or \(G\cong K_{3}\Box C_{4}\) by Lemma 5.3. For the remainder of this proof, we assume that the distance partition of \(v\) contains at least one vertex with four predecessors. Let \(d=\epsilon(v)\) be the index of the farthest distance set. Each vertex with four predecessors uses two of the six extra edges, so \(N_{d}(v)\) may contain no more than three vertices with four predecessors.
_Case 1:_ Exactly three vertices in \(N_{d}(v)\), say \(z_{1}\), \(z_{2}\), and \(z_{3}\), have four predecessors.
All six of the extra edges are used by \(z_{1}\), \(z_{2}\), and \(z_{3}\). By Case 1 of Lemma 5.1, \(N_{2}(v)\) contains exactly six vertices, each being the common successor of a different pair of vertices in \(N_{1}(v)\), and \(d=3\). Since \(N_{2}(v)\) has exactly six vertices with two successors each, \(N_{3}(v)=\{z_{1},z_{2},z_{3}\}\) and each vertex in \(N_{2}(v)\) has two neighbors in \(\{z_{1},z_{2},z_{3}\}\).
Let \(N_{2}(v)=\{y_{1},y_{2},y_{3},y_{4},y_{5},y_{6}\}\), and recall from Case 1 of Lemma 5.1 that each vertex in \(N_{2}(v)\) shares a common predecessor in \(N_{1}(v)\) with exactly four other vertices in \(N_{2}(v)\). This partitions \(N_{2}(v)\) into three pairs, say \(\{y_{1},y_{2}\}\), \(\{y_{3},y_{4}\}\), and \(\{y_{5},y_{6}\}\), where each pair does not share a predecessor in \(N_{1}(v)\). Since each \(y_{i}\) has two successors among \(\{z_{1},z_{2},z_{3}\}\), each of these pairs must share at least one successor in \(N_{3}(v)\). Suppose \(y_{1}\) and \(y_{2}\) share \(z_{1}\) as a common successor. To avoid \(y_{1}z_{1}y_{2}\) being a unique path of length \(2\), \(y_{1}\) and \(y_{2}\) must also share their second successor, say \(z_{2}\). Similarly, \(\{y_{3},y_{4}\}\) and \(\{y_{5},y_{6}\}\) each have two shared successors among \(\{z_{1},z_{2},z_{3}\}\). Without loss of generality, we may assume that the successors of \(\{y_{3},y_{4}\}\) are \(\{z_{1},z_{3}\}\) and the successors of \(\{y_{5},y_{6}\}\) are \(\{z_{2},z_{3}\}\). Thus \(G\cong R_{14,1}\), and Lemma 4.1 completes the proof.
_Case 2:_ Exactly two vertices in \(N_{d}(v)\), say \(z_{1}\) and \(z_{2}\), have four predecessors.
In this case there remain two extra edges. We treat the possibilities for these edges in the following four subcases.
_Case 2(a):_\(z_{1}\) and \(z_{2}\) share all their predecessors in \(N_{d-1}(v)\).
Denote the common predecessors of \(z_{1}\) and \(z_{2}\) by \(\{y_{1},y_{2},y_{3},y_{4}\}\). Suppose \(d>3\). Let \(\{x_{1},x_{2}\}\subseteq N_{d-2}(v)\) denote the predecessors of \(y_{1}\). If \(x_{1}\) has three predecessors, then \(y_{1}\) is its unique successor, and \(x_{1}y_{1}z_{1}\) is a unique shortest path of length 2 between \(x_{1}\) and \(z_{1}\). So we may let \(\{w_{1},w_{2}\}\subseteq N_{d-3}(v)\) denote the predecessors of \(x_{1}\). Since \(y_{1}\) is full and \(w_{1}x_{1}y_{1}\) is a shortest path of length 2, \(w_{1}\) must be adjacent to \(x_{2}\). Similarly, \(w_{2}\) is adjacent to \(x_{2}\). We repeat this argument, with \(x_{1}\) replacing \(y_{1}\), and then with \(w_{1}\) replacing \(x_{1}\), forming successive induced candle sections from \(\{x_{1},x_{2}\}\) until we reach, say, \(\{a_{1},a_{2}\}\subseteq N_{1}(v)\). Since both \(x_{1}\) and \(x_{2}\) are just one edge short of being full, there must exist a vertex, say \(y_{4}\), whose predecessors are not \(x_{1}\) or \(x_{2}\). Denote the predecessors of \(y_{4}\) by \(\{x_{3},x_{4}\}\). By the same argument, starting with \(y_{4}\) we obtain successive induced candle sections from \(\{x_{3},x_{4}\}\) (with predecessors \(\{w_{3},w_{4}\}\subseteq N_{d-3}(v)\), and so on) until \(\{a_{3},a_{4}\}\subseteq N_{1}(v)\). Recall that \(\{x_{1},x_{2}\}\cap\{x_{3},x_{4}\}=\emptyset\). A vertex in \(\{w_{1},w_{2}\}\cap\{w_{3},w_{4}\}\) would have to have four successors, so \(\{w_{1},w_{2}\}\cap\{w_{3},w_{4}\}=\emptyset\). Continuing, the two candle sections must be disjoint, including \(\{a_{1},a_{2}\}\cap\{a_{3},a_{4}\}=\emptyset\). It follows that \(N_{1}(v)=\{a_{1},a_{2},a_{3},a_{4}\}\).
Suppose the two predecessors of \(y_{2}\) are \(x_{i}\in\{x_{1},x_{2}\}\) and \(x_{j}\in\{x_{3},x_{4}\}\). Then \(x_{i},x_{j}\), and \(y_{2}\) are all full, and \(x_{i}y_{2}x_{j}\) is a unique shortest path of length 2 between \(x_{i}\) and \(x_{j}\). So we assume without loss of generality that the predecessors of \(y_{2}\) are \(\{x_{1},x_{2}\}\), and of \(y_{3}\) are \(\{x_{3},x_{4}\}\).
Each vertex in \(N_{1}(v)\) is one edge short of being full, and any pair of these vertices shares \(v\) as a common neighbor. Suppose \(a_{1}\) is adjacent to one of \(\{a_{3},a_{4}\}\), say \(a_{3}\). Then \(a_{1}\) and \(a_{3}\) are both full, and \(a_{1}va_{4}\) is a unique shortest path of length 2 between \(a_{1}\) and \(a_{4}\). So \(a_{1}\), \(a_{3}\), and \(a_{4}\) must share a common successor \(u\in N_{2}(v)\). Similarly, \(a_{2}\), \(a_{3}\), and \(a_{4}\) must share a common successor, which must be \(u\) since \(a_{3}\) and \(a_{4}\) are full. Therefore, \(u\) has four predecessors, using up the remaining two extra edges, and all vertices listed so far are full. It follows that \(G\) cannot contain any additional vertices, and \(G\cong H_{k}\) for some even \(k\geq 6\) (as the number of vertices accounted for in the proof is divisible by 4). Recall that \(q(H_{k})=2\) by Lemma 2.3.
Suppose \(d=3\) and again write \(N_{1}(v)=\{a_{1},a_{2},a_{3},a_{4}\}\). The exact number of edges that connect \(\{y_{1},y_{2},y_{3},y_{4}\}\) to vertices in \(N_{1}(v)\) is eight. Furthermore, since \(z_{1}\) and \(z_{2}\) are full, each vertex in \(N_{1}(v)\) must have either zero or at least two successors in \(\{y_{1},y_{2},y_{3},y_{4}\}\), to prevent a unique shortest path of length 2 between \(N_{1}(v)\) and \(N_{3}(v)\). If, say \(a_{1}\), has no successor in \(\{y_{1},y_{2},y_{3},y_{4}\}\), then it must be adjacent to each of \(\{a_{2},a_{3},a_{4}\}\). However, to account for the edges between \(N_{1}(v)\) and \(N_{2}(v)\), some \(a_{i}\) must have three successors. This is impossible as \(\deg(a_{i})=4\) for all \(i\). So each \(a_{i}\) has two successors in \(\{y_{1},y_{2},y_{3},y_{4}\}\), accounting for eight edges. These eight edges and \(\{y_{1},y_{2},y_{3},y_{4}\}\cup\{a_{1},a_{2},a_{3},a_{4}\}\) form \(2C_{4}\) or \(C_{8}\). In the first case, it follows as in the previous paragraph that \(G\cong H_{6}\).
Suppose the eight edges connecting the vertices \(\{y_{1},y_{2},y_{3},y_{4}\}\) to \(\{a_{1},a_{2},a_{3},a_{4}\}\) form a \(C_{8}\). Then there are two pairs of vertices in \(N_{1}(v)\) that share \(v\) as a predecessor but share no successor in \(\{y_{1},y_{2},y_{3},y_{4}\}\). Without loss of generality, assume these pairs are \(\{a_{1},a_{3}\}\) and \(\{a_{2},a_{4}\}\). Suppose the two remaining extra edges are contained in \(N_{1}(v)\). Then the two non-isomorphic ways to place these edges are \(\{a_{1}a_{2},a_{3}a_{4}\}\) or \(\{a_{1}a_{3},a_{2}a_{4}\}\). In the first case, \(a_{1}va_{3}\) is a unique shortest path of length 2. If \(a_{1}a_{3}\) and \(a_{2}a_{4}\) are edges, and if the common successor of \(a_{1}\) and \(a_{4}\) is, say \(y_{1}\), then \(y_{1}a_{1}a_{3}\) is a unique shortest path of length 2 between \(y_{1}\) and \(a_{3}\). Therefore \(N_{1}(v)\) cannot contain two edges and there is at least one (and at most two) additional vertex (vertices) in \(N_{2}(v)\).
In either case, a vertex \(y_{5}\notin\{y_{1},y_{2},y_{3},y_{4}\}\) cannot have a successor, since there would be a unique shortest path of length 2 between this successor and the predecessors of \(y_{5}\) in \(N_{1}(v)\). So \(y_{5}\) must be a terminal vertex. Because we have only two extra edges remaining, \(y_{5}\) must have at most one neighbor in \(N_{2}(v)\) and hence must have at least three predecessors. If there were a second terminal vertex \(y_{6}\in N_{2}(v)\), then there would be at least six edges from \(\{y_{5},y_{6}\}\) to \(N_{1}(v)\), contradicting that all four vertices in \(N_{1}(v)\) are only one edge short of being full. The only possibility then is for \(N_{2}(v)\) to contain the one additional vertex, \(y_{5}\), with all four vertices in \(N_{1}(v)\) as its predecessors, using up the two remaining extra edges. Then all vertices are full, and in this case \(G\cong R_{12,1}\), which satisfies \(q(R_{12,1})>2\), by Lemma 4.1, and leads to a contradiction.
_Case 2(b):_\(z_{1}\) and \(z_{2}\) share exactly three of their four predecessors in \(N_{d-1}(v)\).
Denote the predecessors of \(z_{1}\) by \(\{y_{1},y_{2},y_{3},y_{4}\}\) and of \(z_{2}\) by \(\{y_{2},y_{3},y_{4},y_{5}\}\). Then \(y_{1}\) shares one common successor with \(y_{2}\), \(y_{3}\), and \(y_{4}\). Given that these three vertices have two predecessors each, they are full, as is their other successor \(z_{2}\). So \(y_{1}\) cannot be adjacent to any of them, nor can it have a second common successor with any of them. Therefore, \(y_{1}\) must have a common predecessor with each of \(y_{2}\), \(y_{3}\), and \(y_{4}\). In particular, let \(x_{1}\) be a common predecessor of \(y_{1}\) and \(y_{2}\). Then the path \(x_{1}y_{2}z_{2}\) implies \(x_{1}\) must be adjacent to \(y_{3}\), \(y_{4}\), or \(y_{5}\), which implies \(x_{1}\in N_{1}(v)\) and \(d=3\). Denote the three remaining vertices in \(N_{1}(v)\) by \(\{x_{2},x_{3},x_{4}\}\). Moreover, the fullness of \(z_{1}\), \(y_{2}\), \(y_{3}\), and \(y_{4}\) implies that any neighbor of \(y_{1}\) other than \(z_{1}\) must be its predecessor (to avoid a unique shortest path of length \(2\) between \(z_{1}\) and this neighbor of \(y_{1}\)), and hence \(y_{1}\) has three predecessors in \(N_{1}(v)\). Similarly, \(y_{5}\) has three predecessors in \(N_{1}(v)\), using up the two remaining extra edges.
We make two observations. First, since there are exactly twelve edges connecting \(\{y_{1},y_{2},y_{3},y_{4},y_{5}\}\) to vertices in \(N_{1}(v)\), any successor of \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\) must be in \(\{y_{1},y_{2},y_{3},y_{4},y_{5}\}\), and
\[V(G)=\{z_{1},z_{2},y_{1},y_{2},y_{3},y_{4},y_{5},x_{1},x_{2},x_{3},x_{4},v\}.\]
Second, \(y_{1}\) and \(y_{5}\) share at least two predecessors.
Suppose \(y_{1}\) and \(y_{5}\) share all three predecessors, say \(\{x_{1},x_{2},x_{3}\}\). Then the three successors of \(x_{4}\) must be \(y_{2}\), \(y_{3}\), and \(y_{4}\). Finally, the remaining predecessor of \(y_{3}\) and \(y_{4}\) must be, without loss of generality, \(x_{2}\) and \(x_{3}\), respectively. Hence all vertices are full, and \(G\cong K_{3,3}\Box K_{2}\). Corollaries 6.5 and 6.8 of [2] show that \(q(K_{3,3}\Box K_{2})=2\).
Now suppose \(y_{1}\) and \(y_{5}\) share two predecessors, say \(\{x_{1},x_{2}\}\). Without loss of generality, let \(x_{3}\) and \(x_{4}\) be the third predecessors of \(y_{1}\) and \(y_{5}\), respectively. Either \(x_{1}\) and \(x_{2}\) share their third successor as well or they do not. Suppose \(x_{1}\) and \(x_{2}\) share all three successors, \(\{y_{1},y_{2},y_{5}\}\). Then \(y_{1}\), \(y_{2}\), and \(y_{5}\) are all full, and the four remaining edges between \(N_{2}(v)\) and \(N_{1}(v)\) are determined. In this case, \(G\cong R_{12,2}\), contradicting \(q(G)=2\) by Lemma 4.1. Therefore, we must have that \(x_{1}\) and \(x_{2}\) do not share a third common successor. Since the third successor of \(x_{1}\) is \(y_{2}\), let the third successor of \(x_{2}\) be \(y_{3}\). Again, the four remaining edges between \(N_{2}(v)\) and \(N_{1}(v)\) are determined, up to one choice, which yields graphs that are isomorphic. In this case, it can be verified that \(G\cong R_{12,3}\cong C(12,\pm 1,\pm 3)\), and Lemma 4.1 implies \(q(R_{12,3})=2\).
_Case 2(c):_\(z_{1}\) and \(z_{2}\) share exactly two of their four predecessors in \(N_{d-1}(v)\).
Suppose the predecessors of \(z_{1}\) are \(\{y_{1},y_{2},y_{3},y_{4}\}\) and the predecessors of \(z_{2}\) are \(\{y_{3},y_{4},y_{5},y_{6}\}\). With two predecessors, \(y_{3}\) is full (as is \(y_{4}\)). To resolve the unique shortest paths of length \(2\) through \(z_{1}\) or \(z_{2}\), \(y_{3}\) must share a predecessor with each of \(y_{1},y_{2},y_{5}\), and \(y_{6}\). It follows that the predecessors of \(y_{3}\) have three successors each, and therefore \(d=3\).
Since there are at least twelve edges from \(\{y_{1},y_{2},y_{3},y_{4},y_{5},y_{6}\}\) to \(N_{1}(v)\), and at most twelve edges from \(N_{1}(v)\) to \(N_{2}(v)\), we must have: \(N_{2}(v)=\{y_{1},y_{2},y_{3},y_{4},y_{5},y_{6}\}\), each vertex in \(N_{2}(v)\) has exactly two predecessors in \(N_{1}(v)\), and each vertex in \(N_{1}(v)\) has exactly three successors in \(N_{2}(v)\). Note that \(y_{3}\) and \(y_{4}\) are full while \(y_{1}\), \(y_{2}\), \(y_{5}\), and \(y_{6}\) are each one edge short of full. We label \(N_{1}(v)=\{x_{1},x_{2},x_{3},x_{4}\}\).
To prevent unique shortest paths of length \(2\) between \(x_{i}\) and \(z_{j}\), each \(x_{i}\) must be adjacent to at least one vertex in \(\{y_{3},y_{4}\}\). It follows that each \(x_{i}\) must have exactly one neighbor in \(\{y_{3},y_{4}\}\). Assume one successor of \(x_{1}\) is \(y_{3}\). To avoid unique paths of length \(2\) between \(x_{1}\) and both \(z_{1}\) and \(z_{2}\), \(x_{1}\) must have at least one more successor in \(\{y_{1},y_{2}\}\) and at least one more successor in \(\{y_{5},y_{6}\}\). Assume, without loss of generality, that the successors of \(x_{1}\) are \(\{y_{1},y_{3},y_{5}\}\). Suppose \(y_{1}\) has a successor, \(z_{3}\), in addition to \(z_{1}\). Then \(z_{3}\) must be adjacent to \(y_{5}\) (to prevent \(x_{1}y_{1}z_{3}\) being a unique shortest path of length \(2\) between \(x_{1}\) and \(z_{3}\)) and \(z_{3}\) must be adjacent to \(y_{6}\) (to prevent \(z_{3}y_{5}z_{2}\) being a unique shortest path of length \(2\) between \(z_{3}\) and \(z_{2}\)). This leaves all vertices in \(N_{2}(v)\) except \(y_{2}\) full, but any successor of \(y_{2}\) distinct from \(z_{1}\), \(z_{2}\), and \(z_{3}\) would create a unique shortest path of length \(2\) between this successor and a predecessor of \(y_{2}\). So the successor of \(y_{2}\) must be \(z_{3}\), contradicting the assumption of Case 2.
To become full, \(y_{1}\) must have exactly one neighbor in \(\{y_{2},y_{5},y_{6}\}\). If \(y_{1}\) is adjacent to \(y_{2}\), then \(x_{1}y_{1}y_{2}\) is a unique shortest path of length \(2\) between the full vertices \(x_{1}\) and \(y_{2}\). But if \(y_{1}\) is adjacent to \(y_{5}\) or \(y_{6}\), then \(z_{1}y_{1}y_{5}\) or \(z_{1}y_{1}y_{6}\) is a unique shortest path of length \(2\) between \(z_{1}\) and \(y_{5}\) or \(y_{6}\), respectively, which
would contradict \(q(G)=2\).
_Case 2(d):_\(z_{1}\) and \(z_{2}\) share exactly one, or none, of their four predecessors in \(N_{d-1}(v)\).
Here \(N_{d-1}(v)\) must contain seven or eight vertices, at least six of which, after accounting for two predecessors, are exactly one edge short of being full. If any such \(y\in N_{d-1}(v)\) has another successor \(z_{3}\in N_{d}(v)\), then \(z_{3}\) must have a neighbor \(z_{4}\in N_{d}(v)\setminus\{z_{1},z_{2}\}\). Since \(y\), \(z_{1}\), and \(z_{2}\) are full, \(yz_{3}z_{4}\) is a unique shortest path of length 2 between \(y\) and \(z_{4}\). In order for the vertices in \(N_{d-1}(v)\) to become full, at least six vertices require incidence with one of the two remaining extra edges, which is impossible.
_Case 3:_ Exactly one vertex, say \(z\), in \(N_{d}(v)\) has four predecessors.
Four extra edges remain. Denote the predecessors of \(z\) by \(\{y_{1},y_{2},y_{3},y_{4}\}\subseteq N_{d-1}(v)\).
_Case 3(a):_\(y_{1}\), \(y_{2}\), \(y_{3}\), and \(y_{4}\) each have three predecessors.
All remaining extra edges are used, and \(N_{d}(v)=\{z\}\) and \(N_{d-1}(v)=\{y_{1},y_{2},y_{3},y_{4}\}\). Suppose \(d=3\). Each vertex in \(N_{1}(v)\) has three successors in \(N_{2}(v)\) (to account for the twelve edges connecting \(N_{2}(v)\) to \(N_{1}(v)\)), and each \(y_{i}\) has one vertex in \(N_{1}(v)\) as its non-neighbor. If \(y_{i}\neq y_{j}\) have the same non-neighbor, then that vertex cannot have three successors. So the four vertices in \(N_{2}(v)\) are in bijection with the four vertices in \(N_{1}(v)\), via non-neighbors, determining the graph as \(G\cong R_{10,4}\). Lemma 4.1 shows \(q(R_{10,4})=2\).
Suppose \(d>3\). Each pair of vertices in \(\{y_{1},y_{2},y_{3},y_{4}\}\) shares \(z\) as a common successor, and each \(y_{i}\) is full with three predecessors, so each pair must share a common predecessor in \(N_{d-2}(v)\). The vertices in \(N_{d-2}(v)\) must have two predecessors each and therefore can have at most two successors in \(N_{d-1}(v)\). This forces \(N_{d-2}(v)\) to contain six vertices \(\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\), each being the common predecessor of a different pair of vertices in \(\{y_{1},y_{2},y_{3},y_{4}\}\), exhausting all edges between \(\{y_{1},y_{2},y_{3},y_{4}\}\) and \(\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\).
Consider \(y_{1}\) and, without loss of generality, let \(\{x_{1},x_{2},x_{3}\}\) denote its three predecessors. Then \(x_{4}\), \(x_{5}\), and \(x_{6}\) each must have their two successors in \(\{y_{2},y_{3},y_{4}\}\). We claim that \(x_{4}\), \(x_{5}\), and \(x_{6}\) must have a common predecessor in \(N_{d-3}(v)\). Note that if \(x_{4}\), \(x_{5}\), and \(x_{6}\) have a common predecessor, this predecessor must lie in \(N_{1}(v)\), and \(d=4\). Indeed, if any two of \(\{x_{4},x_{5},x_{6}\}\) share a predecessor, then all three must share a predecessor, in order to prevent a unique shortest path of length 2 between this predecessor and one of \(\{y_{2},y_{3},y_{4}\}\). However, if \(x_{4}\), \(x_{5}\), and \(x_{6}\) have three distinct predecessors in \(N_{d-3}(v)\) that are not predecessors of either of the other two, then each of these three predecessors must have two successors in \(\{x_{1},x_{2},x_{3}\}\), thereby causing \(x_{1}\), \(x_{2}\), and \(x_{3}\) to be full; the fourth vertex of \(N_{d-3}(v)=N_{1}(v)\) must then have \(x_{4}\), \(x_{5}\), and \(x_{6}\) as its three successors, and is again a common predecessor of \(x_{4}\), \(x_{5}\), and \(x_{6}\). This proves the claim.
Let \(w_{1}\) be a shared predecessor of \(\{x_{4},x_{5},x_{6}\}\). Then there are two paths of length 2 between \(w_{1}\) and each of \(\{y_{2},y_{3},y_{4}\}\) and no paths of length 2 between \(w_{1}\) and \(y_{1}\). Repeat this procedure for \(y_{2}\), \(y_{3}\), and \(y_{4}\), each time finding a common predecessor \(w_{i}\) of
\[\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\setminus\{\mbox{predecessors of $y_{i}$}\}\]
for \(i=2,3,4\). The choice of edges between \(\{y_{1},y_{2},y_{3},y_{4}\}\) and \(\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\) therefore determines the edges between \(\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\) and \(\{w_{1},w_{2},w_{3},w_{4}\}\). However, since all pairs of vertices in \(\{y_{1},y_{2},y_{3},y_{4}\}\) share a predecessor among the independent set \(\{x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\}\) of vertices, all choices yield isomorphic graphs. All vertices are now full, so we have listed all elements of each distance set, and \(G\cong Q_{4}\). By [2, Corollary 6.9], \(q(Q_{4})=2\).
_Case 3(b):_ There exists a vertex in \(N_{d-1}(v)\), say \(y_{1}\), that does not have three predecessors.
The edges incident with \(z\) account for two of the extra edges. If \(y_{1}\) has another successor \(z_{2}\in N_{d}(v)\), then \(z_{2}\) must have a neighbor \(z_{3}\in N_{d}(v)\setminus\{z,z_{2}\}\); since \(y_{1}\) (with two predecessors) and \(z\) are both full, \(y_{1}z_{2}z_{3}\) is a unique shortest path of length 2 between \(y_{1}\) and \(z_{3}\). So \(y_{1}\) must have a neighbor \(u\in N_{d-1}(v)\), using up an extra edge.
Suppose \(u\notin\{y_{2},y_{3},y_{4}\}\). Then \(u\) must be adjacent to another vertex in \(\{y_{1},y_{2},y_{3},y_{4}\}\), say \(y_{2}\), to avoid a unique shortest path of length 2 between itself and \(z\). This leaves two remaining extra edges. Let \(x_{1}\) and \(x_{2}\) denote the two predecessors of \(u\). Suppose \(x_{1}\) has two predecessors. Then it cannot be adjacent
to both \(y_{1}\) and \(y_{2}\). So there must exist \(x_{3}\in N_{d-2}(v)\) (possibly \(x_{3}=x_{2}\)) such that \(x_{1}\) is adjacent to \(x_{3}\), and \(x_{3}\) is adjacent to \(y_{1}\) and \(y_{2}\). But then \(x_{3}\) can have only one predecessor, so we must have \(d=3\).
Let \(N_{1}(v)=\{x_{1},x_{2},x_{3},x_{4}\}\). If \(x_{1}\) and \(x_{2}\) are not incident with an edge of \(G[N_{1}(v)]\), then \(x_{1}\) and \(x_{2}\) both must be adjacent to \(\{u,y_{1},y_{2}\}\) (to avoid uniqueness of the shortest paths between each \(x_{1}\), \(x_{2}\) and each \(y_{1}\), \(y_{2}\) through \(u\)); but then \(y_{1}\) and \(y_{2}\) are full, and \(y_{1}zy_{3}\) is a unique shortest path of length \(2\) between \(y_{1}\) and \(y_{3}\). Hence \(G[N_{1}(v)]\) must contain an edge, using a third extra edge. It follows that there are exactly ten edges connecting \(N_{1}(v)\) to \(N_{2}(v)\). Then no vertex in \(N_{2}(v)\) can have three predecessors, which implies \(G[N_{2}(v)]\) must contain another edge, the last of the extra edges. Moreover, this edge must connect \(y_{3}\) and \(y_{4}\). Note this implies \(N_{1}(v)\) cannot be an independent set of vertices.
Suppose \(x_{2}\) is incident with the edge in \(G[N_{1}(v)]\), and \(x_{1}\) is not. Then \(x_{1}\) must be adjacent to \(\{u,y_{1},y_{2}\}\). And since \(x_{1}\) shares \(v\) as a common predecessor with \(x_{3}\) and \(x_{4}\), and since \(u\) is full while \(y_{1}\) and \(y_{2}\) are one edge short of being full, we must have, without loss of generality, \(y_{1}\) adjacent to \(x_{3}\) and \(y_{2}\) adjacent to \(x_{4}\). Since \(x_{1}\) and \(y_{1}\) are full, the uniqueness of the path \(x_{2}uy_{1}\) can only be avoided if \(x_{2}\) is adjacent to \(x_{3}\). Similarly, the uniqueness of the path \(x_{2}uy_{2}\) can only be avoided if \(x_{2}\) is adjacent to \(x_{4}\). But \(G[N_{1}(v)]\) can only have one edge.
So the edge in \(G[N_{1}(v)]\) must be \(x_{1}x_{2}\). Then \(x_{1}\) has only one more successor in \(N_{2}(v)\) which must simultaneously resolve \(v\) being the unique common neighbor of the pairs \(\{x_{1},x_{3}\}\) and \(\{x_{1},x_{4}\}\). This successor must then have three predecessors, which is impossible. Therefore \(u\in\{y_{2},y_{3},y_{4}\}\).
Without loss of generality, suppose \(u=y_{2}\). As at the start of this case, \(\{y_{1},y_{2},y_{3},y_{4}\}\) can have no more successors in \(N_{d}(v)\). And \(y_{1}\) and \(y_{2}\) cannot be incident to another edge in \(G[N_{d-1}(v)]\), in order to allow for each to have two predecessors. To prevent \(y_{1}\) and \(y_{3}\) having \(z\) as a unique common neighbor, they must therefore share a common predecessor, say \(x_{1}\in N_{d-2}(v)\). If \(x_{1}\) has two predecessors, then it is full, and \(x_{1}y_{1}y_{2}\) is a unique shortest path of length \(2\) between \(x_{1}\) and \(y_{2}\). So \(x_{1}\) must have only one predecessor, which implies \(d=3\).
Suppose that \(G[N_{2}(v)]\) contains no edges other than \(y_{1}y_{2}\); then \(y_{3}\) and \(y_{4}\) must have three predecessors each. Denote the other vertices in \(N_{1}(v)\) by \(x_{2}\), \(x_{3}\), and \(x_{4}\). Now, \(y_{3}\) and \(y_{4}\) must share at least two of their predecessors in \(N_{1}(v)\), but \(x_{1}\) cannot be this shared predecessor, since that would leave \(x_{1}y_{1}y_{2}\) as the unique shortest path of length \(2\) between \(x_{1}\) and \(y_{2}\). So assume the two shared predecessors of \(y_{3}\) and \(y_{4}\) are \(\{x_{3},x_{4}\}\). Neither \(x_{3}\) nor \(x_{4}\) can be adjacent to \(y_{1}\) or \(y_{2}\), as these adjacencies would result in a unique shortest path of length \(2\) between one of \(\{x_{3},x_{4}\}\) and one of \(\{y_{1},y_{2}\}\). With exactly one remaining extra edge, this edge must be \(x_{3}x_{4}\) to ensure that \(x_{3}\) and \(x_{4}\) each have four neighbors. It now follows that \(x_{1}\) is adjacent to \(y_{2}\), and hence the three remaining edges are \(x_{2}y_{1}\), \(x_{2}y_{2}\), and \(x_{2}y_{4}\). After connecting each \(x_{i}\) to \(v\), all vertices are full, and \(G\cong R_{10,3}\) (to see this let \(v\) be vertex \(3\) in the drawing of \(R_{10,3}\) in Figure 4). Applying Lemma 4.1 establishes \(q(R_{10,3})=2\).
Finally, suppose \(G[N_{2}(v)]\) contains another edge, which can only be \(y_{3}y_{4}\), as otherwise \(y_{1}\) or \(y_{2}\) would not have enough predecessors. So each \(y_{i}\) has exactly two predecessors. Then there are eight edges between \(N_{2}(v)\) and \(N_{1}(v)\), and each vertex in \(N_{1}(v)\) must have exactly two successors. Each pair \(\{y_{1},y_{3}\}\), \(\{y_{1},y_{4}\}\), \(\{y_{2},y_{3}\}\), and \(\{y_{2},y_{4}\}\) must share a single predecessor in \(N_{1}(v)\), causing each \(y_{i}\) to be full. Denote these unique common predecessors by \(x_{1}\), \(x_{2}\), \(x_{3}\), and \(x_{4}\), respectively. To simultaneously avoid the unique shortest paths of length \(2\) from \(x_{1}\) to \(y_{2}\) and \(y_{4}\), \(x_{1}\) must be adjacent to \(x_{4}\). Similarly, \(x_{2}\) must be adjacent to \(x_{3}\). This uses up the remaining extra edges. After connecting each \(x_{i}\) to \(v\), all vertices are full, and \(G\cong R_{10,2}\). Using Lemma 4.1 establishes \(q(R_{10,2})=2\).
_Case 4:_ \(N_{d}(v)\) contains no vertices with four predecessors. Since the distance partition from \(v\) contains at least one vertex with four predecessors, let \(b\) denote such a vertex. This accounts for two of the six extra edges. By the proofs of Lemmas 5.1 and 5.3, \(N_{d}(v)\) must contain a pair of vertices with three predecessors each or a cycle of vertices with two predecessors each. Furthermore, each edge in \(N_{d}(v)\) is an extra edge and any vertex in \(N_{d}(v)\) with three predecessors accounts for an extra edge, and so \(G[N_{d}(v)]\) must be isomorphic to one of \(K_{2}\), \(P_{3}\), \(C_{3}\), or \(C_{4}\). Each of these options requires either three or four of the remaining extra edges. Therefore, there is at most one extra edge unaccounted for. This implies \(b\) is the unique vertex with four predecessors.
We first claim that \(b\in N_{2}(v)\). Indeed, suppose \(b\in N_{i}(v)\) for some \(2<i<d\), and let \(\{a_{1},a_{2},a_{3},a_{4}\}\subseteq N_{i-1}(v)\) be the four predecessors of \(b\). Then each \(a_{i}\) has at least two predecessors. With at most one of
the extra edges left, we may assume, without loss of generality, that \(a_{1}\) has a successor \(b_{1}\in N_{i}(v)\). Now, \(b_{1}\) cannot be a terminal vertex, as that would require more extra edges than are currently unaccounted for, so let \(c_{1}\in N_{i+1}(v)\) be a successor of \(b_{1}\). Then \(a_{1}b_{1}c_{1}\) is a unique shortest path of length \(2\). So \(b\in N_{2}(v)\).
We next claim that there are no vertices with three predecessors in \(N_{i}(v)\) for any \(i<d\). As observed at the end of Claim #1 in the proof of Lemma 5.3, any successor of a vertex with three predecessors must be a vertex with four predecessors. Since \(b\in N_{2}(v)\), any vertex with three predecessors in \(N_{i}(v)\) for \(i\geq 2\) cannot have a successor. It follows that such a vertex requires an edge in \(G[N_{i}(v)]\), which would require the use of at least three extra edges. A similar argument shows that any vertex in \(N_{i}(v)\) for \(2\leq i<d-1\) cannot have a unique successor in \(N_{i+1}(v)\). If there were a vertex \(x\) with two predecessors and a unique successor \(y\), then \(x\) would have to be adjacent to another vertex in \(N_{i}(v)\), and \(y\) would have to be adjacent to a vertex in either \(N_{i+2}(v)\) or \(N_{i+1}(v)\), yielding either a unique shortest path or too many extra edges. So, with the exception of \(b\), every vertex in \(N_{i}(v)\) for \(2\leq i<d\) has exactly two predecessors, and every vertex in \(N_{i}(v)\) for \(2\leq i<d-1\) has exactly two successors.
We have \(\{a_{1},a_{2},a_{3},a_{4}\}=N_{1}(v)\), and \(b\) is a common successor of these four vertices. With only one remaining extra edge, each \(a_{i}\) must have another successor in \(N_{2}(v)\). Let \(b_{1}\neq b\) be a successor of \(a_{1}\) in \(N_{2}(v)\), let \(a_{2}\) be the second predecessor of \(b_{1}\), and let \(c_{1}\in N_{3}(v)\) be a successor of \(b_{1}\). Note that this implies \(a_{1}\) and \(a_{2}\) are not adjacent. To avoid unique paths of length \(2\) between \(c_{1}\) and each of \(a_{1}\) and \(a_{2}\), the second predecessor of \(c_{1}\), say \(b_{2}\), must have \(a_{1}\) and \(a_{2}\) as its predecessors. Continuing in this way through \(N_{i}(v)\) for all \(3\leq i\leq d-2\), we see successive induced candle sections starting with \(\{a_{1},a_{2}\}\), and ending say at \(\{y_{1},y_{2}\}\subseteq N_{d-1}(v)\). Let \(z_{1}\) be the common successor of \(\{y_{1},y_{2}\}\) in \(N_{d}(v)\). This argument is similar to the argument we used in Case 2(a). Starting again from \(a_{3}\), we see successive induced candle sections starting with \(\{a_{3},a_{4}\}\) and ending at say \(\{y_{3},y_{4}\}\subseteq N_{d-1}(v)\) whose common successor is \(z_{2}\in N_{d}(v)\).
Note that all vertices in \(N_{i}(v)\) for \(i<d-1\) are full, which implies \(N_{d-1}(v)=\{y_{1},y_{2},y_{3},y_{4}\}\). Each \(N_{i}(v)\) is independent for \(i<d-1\), and we claim that \(N_{d-1}(v)\) is independent as well. To see this, note that an edge \(y_{i}y_{j}\) with \(i\in\{1,2\}\) and \(j\in\{3,4\}\) would create a unique shortest path of length \(2\) from \(N_{d-1}(v)\) to \(N_{d-2}(v)\). If \(y_{1}y_{2}\) is an edge, then \(y_{1}\) and \(y_{2}\) are full and hence \(y_{1}z_{1}z\) is a unique shortest path for any neighbor \(z\) of \(z_{1}\) in \(N_{d}(v)\). The same observation shows \(y_{3}y_{4}\) cannot be an edge. Hence the last remaining extra edge must be in \(G[N_{d}(v)]\). This implies that \(G[N_{d}(v)]\) is either isomorphic to \(P_{3}\) or \(C_{4}\). Note that if some vertex in \(N_{d}(v)\) has three predecessors, then without loss of generality, it is adjacent to \(y_{1}\), \(y_{2}\), and \(y_{3}\). This results in a unique shortest path of length \(2\) through \(y_{3}\). Thus \(G[N_{d}(v)]\cong C_{4}\).
We now claim that any successor of \(y_{1}\) or \(y_{2}\) must be a successor of both. Indeed, if \(z\) is a successor of \(y_{1}\), then the paths \(zy_{1}x_{1}\) and \(zy_{1}x_{2}\) force \(z\) to be adjacent to \(y_{2}\) as well. Similarly, any successor of \(y_{3}\) or \(y_{4}\) must be a successor of both. Hence we have four vertices in \(N_{d}(v)\) forming a cycle \(C_{4}\) where each vertex is adjacent to either \(\{y_{1},y_{2}\}\) or \(\{y_{3},y_{4}\}\). This implies that the vertices around the cycle must alternate between being adjacent to \(\{y_{1},y_{2}\}\) and \(\{y_{3},y_{4}\}\) and so \(G\cong H_{k}\) for some odd \(k\geq 7\) (as the size of \(V(G)\setminus\{b,v\}\) is divisible by \(4\)). Applying Lemma 2.3 establishes \(q(H_{k})=2\).
This completes the proof of Theorem 2.4.
## 6 Further Observations and Related Problems
Corollary 2.2 and Theorem 2.4 give a complete characterization of the \(r\)-regular graphs that admit two distinct eigenvalues for \(r\leq 4\). For \(r=4\), the closed candles \(H_{k}\) for \(k\geq 3\) give the only infinite family of \(4\)-regular graphs \(G\) with \(q(G)=2\). From these graphs we can construct an infinite family of \(r\)-regular graphs \(G\) with \(q(G)=2\) for all \(r\geq 5\).
From Corollary 6.8 in [2], if \(G\) is any graph with \(q(G)=2\), then \(q(G\Box K_{2})=2\). Define \(H_{k}^{d}=H_{k}\Box Q_{d}\) for any \(d\geq 0\) and \(k\geq 3\). Now from Lemma 2.3 and [2, Cor. 6.9] we immediately have the following result.
**Proposition 6.1**.: _For all \(k\geq 3\) and \(d\geq 0\), \(H_{k}^{d}\) is a \((4+d)\)-regular graph with \(q(H_{k}^{d})=2\)._
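The doubling behind Proposition 6.1 can be illustrated concretely. The sketch below is our own numerical illustration in the spirit of [2, Cor. 6.8] (not necessarily the construction used there): if \(M\) is symmetric with \(M^{2}=I\) (any matrix with two distinct eigenvalues can be shifted and rescaled into this form without changing its off-diagonal zero pattern), then the block matrix \(\begin{pmatrix}M&I\\ I&-M\end{pmatrix}\) is symmetric, has the off-diagonal zero pattern of \(G\Box K_{2}\), and squares to \(2I\). We start from \(K_{2}\) rather than a closed candle only because \(A(K_{2})\) already satisfies \(M^{2}=I\); iterating passes through the patterns of \(C_{4}\), \(Q_{3}\), and \(Q_{4}\).

```python
import numpy as np

def double(M):
    """From symmetric M with M @ M = I (off-diagonal pattern of a graph G),
    build [[M, I], [I, -M]] / sqrt(2): it is symmetric, has the pattern of
    G box K_2, and squares to I, so its eigenvalues are exactly +/-1."""
    n = M.shape[0]
    I = np.eye(n)
    return np.block([[M, I], [I, -M]]) / np.sqrt(2)

M = np.array([[0.0, 1.0], [1.0, 0.0]])   # adjacency matrix of K_2; M @ M = I
for _ in range(3):                        # patterns K_2 -> C_4 -> Q_3 -> Q_4
    M = double(M)
    assert np.allclose(M @ M, np.eye(M.shape[0]))
    print(M.shape[0], "vertices, distinct eigenvalues:",
          np.unique(np.round(np.linalg.eigvalsh(M), 8)))
```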
When \(r\geq 5\), the number of edges in an \(r\)-regular graph grows faster than the lower bound \(|E(G)|\geq 2n-4\): an \(r\)-regular graph on \(n\) vertices has \(rn/2\) edges, which exceeds \(2n-4\) by \((r-4)n/2+4\). So, characterizing \(r\)-regular graphs with two distinct eigenvalues for \(r\geq 5\) could be difficult using the methods in this paper. This motivates questions about the existence and structure of such graphs. That is, are there other infinite families of \(r\)-regular graphs \(G\) with \(q(G)=2\), and if so, can they be constructed with a method other than the Cartesian product used in Proposition 6.1?
**Problem 6.2**.: _Determine whether or not an infinite family of \(5\)-regular graphs \(G\) with \(q(G)=2\) disjoint from the family \(\{H_{k}^{1}\,:\,k\geq 3\}\) exists._
This paper builds on the work in [6], where we established a lower bound on the number of edges a graph on \(n\) vertices must have in order to have \(q(G)=2\), and characterized the graphs that meet the bound with equality. The graphs that meet the bound \(|E(G)|\geq 2n-4\) are \(Q_{3}\) and an infinite family of graphs called _double-ended candles_. The lower bound is improved to \(|E(G)|\geq 2n-3\) if \(G\) has an odd number of vertices, and the graphs that meet this improved bound are exactly the infinite family of graphs called _single-ended candles_. These graphs, along with the closed candles \(H_{k}\) and some of the sporadic graphs in Theorem 2.4, appear in a set of papers focusing on a different problem.
McKee and Smyth [19], Taylor [21], and Greaves [12, 13] all consider the following problem: for which graphs \(G\) is there a matrix _compatible_ with \(G\) whose eigenvalues are contained in the interval \([-2,2]\)? For this problem, "compatible" means that the matrix \(M\) has exactly the same zero pattern as \(A(G)\) off of the diagonal entries, and the entries of \(M\) are restricted to lie in the ring of integers for some quadratic field (McKee and Smyth consider matrices with integer entries; Taylor and Greaves consider matrices whose entries lie in the ring of integers of an imaginary quadratic extension of \(\mathbb{Q}\)). It follows from interlacing that if \(G\) has a compatible matrix whose eigenvalues are contained in \([-2,2]\), then so do all of its induced subgraphs. This motivates the characterization of the maximal graphs \(G\) for which there is a matrix compatible with \(G\) whose eigenvalues are contained in \([-2,2]\). Each of the papers [19], [12], [13], and [21] gives a characterization for the respective rings under consideration. The graphs that appear in these characterizations are a collection of small sporadic graphs, and a few infinite families: the single-ended candles, the double-ended candles, the closed candles, and the graphs obtained from a candle section by adding an edge between each pair of nearest degree \(2\) vertices.
We are currently unable to explain the overlap between the graphs in our characterizations of graphs with \(2n-4\), \(2n-3\), or \(2n\) edges that admit two distinct eigenvalues, and the graphs in the characterizations in [19], [12], [13], and [21]. It seems reasonable to expect that the maximal graphs whose compatible matrices have all of their eigenvalues contained in \([-2,2]\) should have all of their eigenvalues contained in \(\{-2,2\}\) and hence have \(q\)-value \(2\). But we do not have a proof of this claim. A complete explanation of this coincidence may help the investigation of \(r\)-regular graphs \(G\) with \(q(G)=2\) for \(r\geq 5\).
**Problem 6.3**.: _For a given integer \(l\geq 3\), determine whether the maximal graphs that admit an integer (or similarly constrained) matrix whose eigenvalues are contained in \([-l,l]\) have \(q(G)=2\)._
A regular graph \(G\) that is neither complete nor empty is _strongly regular_ if there exist constants \(a\) and \(c\) so that any two adjacent vertices in \(G\) have \(a\) common neighbors, and any two non-adjacent vertices have \(c\) common neighbors. The definition implies that the diameter of a connected strongly regular graph is at most \(2\). If \(G\) is a connected regular graph, then \(A(G)\) has three distinct eigenvalues if and only if \(G\) is a strongly regular graph (see, e.g., Lemma 10.2.1 in [11]). So \(q(G)\in\{2,3\}\) for a strongly regular graph \(G\). In [10], Furst and Grotts show that \(L(K_{n})\), the line graph of \(K_{n}\), has \(q(L(K_{n}))=2\) for all \(n\geq 3\) (note that \(L(K_{n})\) is a strongly regular graph for all \(n\geq 3\)).
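For a quick computational illustration of these facts, the following sketch (our own, assuming NetworkX and NumPy are available) verifies the strong regularity of \(L(K_{6})\) and the three distinct adjacency eigenvalues; it does not construct the two-eigenvalue matrix from [10].

```python
from itertools import combinations

import networkx as nx
import numpy as np

n = 6
G = nx.line_graph(nx.complete_graph(n))   # L(K_6), the triangular graph

a_vals, c_vals = set(), set()
for u, v in combinations(G.nodes, 2):
    common = len(set(G[u]) & set(G[v]))
    (a_vals if G.has_edge(u, v) else c_vals).add(common)

print("a =", a_vals, " c =", c_vals)      # expect a = {n-2} and c = {4}
eigs = np.unique(np.round(np.linalg.eigvalsh(nx.to_numpy_array(G)), 8))
print("distinct adjacency eigenvalues:", eigs)   # exactly three values
```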
A connected graph \(G\) is _distance regular_ if there are integers \(b_{i}\) and \(c_{i}\) for all \(i\geq 0\) so that for any two vertices \(u\) and \(v\) at distance \(i\) in \(G\),
\[|N_{1}(v)\cap N_{i+1}(u)|=b_{i}\quad\text{and}\quad|N_{1}(v)\cap N_{i-1}(u)|=c_{i}.\]
(See [9] for an extensive treatment of distance-regular graphs.) Distance-regular graphs with diameter \(2\) are strongly regular. Since Corollary 2.2 and Theorem 2.4 give a complete characterization of the \(r\)-regular graphs that admit two distinct eigenvalues for \(r\leq 4\), they also determine the distance-regular graphs of valency at most \(4\) that admit two distinct eigenvalues. Excluding the complete graphs and
complete multipartite graphs in the list, these are \(Q_{3}\), \(R_{10,4}\), \(R_{14,1}\), and \(Q_{4}\). Note that these graphs all have diameter at least \(3\), and thus are not strongly regular. The closed candles \(H_{3}\) and \(H_{4}\) are complete multipartite graphs, and no \(H_{k}\) with \(k\geq 5\) is distance regular.
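The intersection numbers \(b_{i}\) and \(c_{i}\) are easy to verify by machine; the short sketch below (assuming NetworkX) confirms that \(Q_{4}\), one of the graphs listed above, is distance regular.

```python
import networkx as nx

G = nx.hypercube_graph(4)                 # Q_4: 4-regular, diameter 4
dist = dict(nx.all_pairs_shortest_path_length(G))

b, c = {}, {}
for u in G:
    for v in G:
        i = dist[u][v]
        b.setdefault(i, set()).add(sum(dist[u][w] == i + 1 for w in G[v]))
        c.setdefault(i, set()).add(sum(dist[u][w] == i - 1 for w in G[v]))

# G is distance regular iff every set below is a singleton
print("b_i:", dict(sorted(b.items())))    # for Q_4: b_i = 4 - i
print("c_i:", dict(sorted(c.items())))    # for Q_4: c_i = i
```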
**Problem 6.4**.: _Determine the distance-regular graphs \(G\) with \(q(G)=2\)._
We finish with two observations about the \(4\)-regular graphs \(G\) with \(q(G)=2\), and problems they raise. With the exception of \(K_{5}\) and \(R_{7,1}\), all of the \(4\)-regular graphs that appear in Theorem 2.4 have even order. In particular, if \(|V(G)|\geq 8\), then \(G\) must have even order.
**Problem 6.5**.: _Determine whether for every integer \(k\), there is some value \(n_{k}\) so that every \(k\)-regular graph \(G\) with \(|V(G)|\geq n_{k}\) and \(q(G)=2\) has even order._
The graphs \(H_{k}\) are all circulant graphs, as are some of the sporadic graphs listed in Theorem 2.4. More broadly, many of the graphs in our characterization are vertex transitive. It would be interesting to investigate the highly symmetric graphs that admit a matrix with two distinct eigenvalues.
**Problem 6.6**.: _Determine the circulant graphs \(G\) that have \(q(G)=2\). What can be said about the automorphism groups of regular graphs with \(q(G)=2\)?_
## Acknowledgements
Shaun M. Fallat was supported in part by an NSERC Discovery Research Grant, Application No.: RGPIN-2019-03934. Veronika Furst was supported in part by a Fort Lewis College Faculty Development Grant. Shahla Nasserasr was supported in part by a Rochester Institute of Technology COS Dean's Research Initiation Grant. Brendan Rooney was supported in part by an RIT COS Dean's Research Initiation Grant. Michael Tait was supported in part by NSF grant DMS-2011553 and a Villanova University Summer Grant.
This project began as part of the "Inverse Eigenvalue Problems for Graphs and Zero Forcing" Research Community sponsored by the American Institute of Mathematics (AIM). We thank AIM for their support, and we thank the organizers and participants for contributing to this stimulating research experience.
We thank Tracy Hall for the matrix associated with the graph \(R_{7,1}\) and Bryan Shader for the matrices used in the proof of Lemma 2.3.
|
2304.07337 | Learning to Learn Group Alignment: A Self-Tuning Credo Framework with
Multiagent Teams | Mixed incentives among a population with multiagent teams has been shown to
have advantages over a fully cooperative system; however, discovering the best
mixture of incentives or team structure is a difficult and dynamic problem. We
propose a framework where individual learning agents self-regulate their
configuration of incentives through various parts of their reward function.
This work extends previous work by giving agents the ability to dynamically
update their group alignment during learning and by allowing teammates to have
different group alignment. Our model builds on ideas from hierarchical
reinforcement learning and meta-learning to learn the configuration of a reward
function that supports the development of a behavioral policy. We provide
preliminary results in a commonly studied multiagent environment and find that
agents can achieve better global outcomes by self-tuning their respective group
alignment parameters. | David Radke, Kyle Tilbury | 2023-04-14T18:16:19Z | http://arxiv.org/abs/2304.07337v1 | # Learning to Learn Group Alignment: A Self-Tuning Credo Framework with Multiagent Teams
###### Abstract.
Mixed incentives among a population with multiagent teams has been shown to have advantages over a fully cooperative system; however, discovering the best mixture of incentives or team structure is a difficult and dynamic problem. We propose a framework where individual learning agents self-regulate their configuration of incentives through various parts of their reward function. This work extends previous work by giving agents the ability to dynamically update their group alignment during learning and by allowing teammates to have different group alignment. Our model builds on ideas from hierarchical reinforcement learning and meta-learning to learn the configuration of a reward function that supports the development of a behavioral policy. We provide preliminary results in a commonly studied multiagent environment and find that agents can achieve better global outcomes by self-tuning their respective group alignment parameters.
## 1. Introduction
Cooperation and teamwork are central to the success of many human endeavours. Recently, there has been increasing support for the study of cooperation and teams being central to the development of artificial intelligence (AI) and multiagent systems (MAS) (Becker et al., 2017; Becker et al., 2017). Similarly to humans, intelligent agents cooperating and working in teams can enhance their capabilities beyond those of a single agent. However, recent work has shown that agents defined to be fully cooperative can be sub-optimal; agents that are not fully aligned with their teammates can achieve more globally favorable results (Becker et al., 2017; Becker et al., 2017).
This paper extends a recently proposed model, _credo_ (Ralke et al., 2018). Credo regulates how an individual learning agent optimizes for multiple objectives in the presence of teams. Specifically, credo represents how much the agent optimizes for the goals of different groups they belong to: their individual goals, the goals of any teams they belong to, and the goals of the entire system. In previous experiments within multiagent reinforcement learning (MARL) environments, the credo model showed that the best global outcomes for a population of agents were achieved when agents in a larger group were somewhat selfish or when agents were mostly aligned with a smaller sub-team, robust to some amount of selfishness. While credo was predetermined and fixed in these past experiments, the results motivate the key research question this paper aims to address: can giving agents the ability to dynamically tune their credo allow them to learn favorable group alignments automatically?
In this paper, we conceptualize and provide a preliminary approach that enables agents to self-tune their credo. We provide theoretical foundations as to the motivation behind self-tuning credo in the context of different team structures and group alignment. Further, we detail our framework that endows agents with the ability to tune their own credo. The framework borrows implementation concepts from hierarchical reinforcement learning (HRL) and meta-learning. Conceptually, each agent has a credo-tuning policy and a behavioral policy to maintain the decentralized nature of individual learning agents. The values of an agent's credo ultimately shape their reward function, and thus, the optimization landscape of their behavioral policy. The dual-layer structure is similar to high and low-level policies in HRL, while the credo-tuning policy learning to shape the optimization landscape for the behavioral policy mirrors a meta-learning problem.
We present preliminary results in a widely studied MARL environment, the Cleanup Gridworld Game (Zhou et al., 2017), and outline future plans for evaluation. We show that, when starting from a known sub-optimal group alignment (i.e., sub-optimal credo), agents that tune their respective credos with our framework move to a better group alignment and learn a more globally favorable joint policy. While favorable team structures and group alignments have been explored in our preliminary testing environment, we describe our plans to test our framework in an environment where these are not known. The goal of this work is to enable agents to optimize their behaviors towards the various groups they belong to in any environment - enabling agents to learn better joint policies while eliminating the need for researchers and practitioners to engineer specific team structures and credo parameters. With this paper we make the following contributions:
* We provide theoretical motivation behind dynamically tuning credo (Section 4.1).
* We conceptualize an agent framework to allow agents to self-regulate their own individual credo (Section 4.2).
* We present preliminary results demonstrating the efficacy of our framework (Section 5.4) and outline future work (Section 6).
## 2. Preliminaries
We model our base environment as a stochastic game \(\mathcal{G}=(\mathcal{N},S,\{A\}_{i\in\mathcal{N}},\{R\}_{i\in\mathcal{N}},P, \gamma,\Sigma)\). \(\mathcal{N}\) is our set of all agents that learn online from experience (with size \(N\in\mathbb{N}\)) and \(S\) is the state space, observable by all agents, where \(s_{i}\) is a single state observed by agent \(i\). \(A=A_{1}\times\ldots\times A_{N}\) is the joint action space for all agents where \(A_{i}\) is the action space of agent \(i\). \(R=R_{1}\times\ldots\times R_{N}\) is the joint reward space for all agents where \(R_{i}\) is the reward function of agent \(i\) defined as \(R_{i}:S\times A\times S\mapsto\mathbb{R}\), a real-numbered reward for
taking an action in an initial state and resulting in the next state. \(P:S\times A\mapsto\Delta(S)\) represents the transition function which maps a state and joint action into a next state with some probability and \(\gamma\) represents the discount factor so that \(0\leq\gamma<1\). \(\Sigma\) represents the policy space of all agents, and the policy of agent \(i\) is represented as \(\pi_{i}:S\mapsto A_{i}\) which specifies an action that the agent should take in an observed state.1
Footnote 1: We can also allow for randomized policies.
We use "common interest" to refer to when agents share their reward and a _team_ is a set of individual agents that can have some degree of common interest for team-level goals. Given a population, multiple teams with different preferences and interests that are not in zero-sum competition may co-exist. The collection of all teams is referred to as a team _structure_. We denote the set of all teams as \(\mathcal{T}\), the teams agent \(i\) belongs to as \(\mathcal{T}_{i}\), and a specific team as \(T_{i}\in\mathcal{T}_{i}\).
The credo model presented in (Hardt et al., 2017) relaxes the assumption that teammates are fully aligned through common interest to allow settings where agents may only partially optimize for a team's goal. For example, an agent may optimize their policy for the performance of one or multiple teams, while also being somewhat oriented towards its own personal goals. This is done by decomposing an agent's reward function to be a combination of their individual environmental reward \(IR_{i}=R_{i}\), the rewards \(i\) receives from each team they belong to \(TR_{i}^{T_{i}}\forall T_{i}\in\mathcal{T}_{i}\), and the reward \(i\) receives from the system of \(N\) agents \(SR_{i}\). \(TR_{i}^{T_{i}}\) and \(SR_{i}\) can be implemented with any function to aggregate and distribute rewards.
Each agent has a credo vector of parameters where the sum of all parameters is 1, represented \(\mathbf{cr}_{i}=\langle\psi_{i},\phi_{i}^{T_{1}},\dots,\phi_{i}^{T_{|\mathcal{T}_{i}|}},\omega_{i}\rangle\), where \(\psi_{i}\) is the credo parameter for \(i\)'s individual reward \(IR_{i}\), \(\phi_{i}^{T_{i}}\) is the credo parameter for the reward \(TR_{i}^{T_{i}}\) from team \(T_{i}\in\mathcal{T}_{i}\), and \(\omega_{i}\) is the credo parameter for the reward \(i\) receives from the system \(SR_{i}\). The parameter notation is organized by increasing order of group size, so that \(\mathbf{cr}_{i}=\langle\text{self},\dots,\text{teams},\dots,\text{system}\rangle\), where \(|\text{self}|<|\text{teams}|\leq|\text{system}|\). An agent's credo-based reward \(R_{i}^{\mathbf{cr}}\) is a weighted combination of that agent's credo parameters and reward from each group. As expanded in Section 4.2, in this work we implement functions for \(TR_{i}^{T_{i}}\) and \(SR_{i}\) specially designed for the self-tuning scenario that maintain consistency with the original implementation in (Hardt et al., 2017).
## 3. Related Work
Humans have developed with an inherent bias towards teamwork. However, humans are only able to reliably maintain social relationships with a maximum number of individuals, causing them to form smaller groups (Brock et al., 2017). Analyzing between-team behaviors is often done in organizational psychology (OP), focusing on the concept of social identification or people's perceptions of their goals (Hardt et al., 2017; Goyal et al., 2017). Team members may need to balance tendencies for their own personal goals with the goals of their team or the entire system (Brock et al., 2017; Goyal et al., 2017). Humans are continuously learning; thus, this balance of how humans optimize for goals is likely to be dynamic instead of static.
In AI, the concept of multiple non-conflicting teams within a larger system has been primarily explored for task completion (Brock et al., 2017; Goyal et al., 2017), and more recently been used in social dilemma scenarios (Hardt et al., 2017). Furthermore, agents optimizing their behavior while balancing between personal and group-level goals has been of growing interest to the AI community (Brock et al., 2017; Goyal et al., 2017). One such example of this is ad hoc teamwork which relies on the ability to assess the goals of an individual or group to best optimize a cooperative utility function (Goyal et al., 2017; Goyal et al., 2017).
A previously proposed model, credo, considers how agents optimize for various goals in the context of multiple non-conflicting teams within a larger system (Hardt et al., 2017). Credo defines how agents optimize for various groups they belong to, namely themselves, any teams they belong to, and the entire system. While credo showed how groups with mixed motives or some degree of selfishness can significantly outperform fully cooperative populations, all agents' credo parameters were initialized the same and kept constant throughout experiments. Other work has studied concepts of dynamic reward sharing and the emergence of coordination; however, that work relied on random perturbations of reward sharing parameters and did not consider the existence of defined team structures (Brock et al., 2017).
In this paper, we propose an agent framework where agents are able to self-tune their individual credo parameters for groups they belong to. Our model builds on hierarchical reinforcement learning (HRL) concepts to define multiple policies within a single individual learning agent. Given a population of credo-tuning agents, we hope to develop continuously evolving policies that overcome sub-optimal team definitions, recover favorable joint policies, and preserve cooperation across multiple learning entities.
## 4. Self-tuning Credo
In this section we provide motivation for allowing agents to self-tune their credo parameters (Section 4.1) and detail our framework that allows agents to do this (Section 4.2).
### Motivation
The main motivation behind allowing agents to self-tune their credo parameters is to recover a more favorable joint policy despite a sub-optimal group environment. For example, Figure 1, adapted from the original credo paper (Hardt et al., 2017), shows a 33% increase in mean population reward in the Cleanup gridworld game when a population of six agents was 80% cooperative and 20% selfish, compared to a fully cooperative population (Scenario 1). Both scenarios highlighted in Figure 1 achieve the highest mean population reward because agents tend to learn better joint policies under certain combinations of team structure and credo definitions.
Defining a team structure that best supports how individual agents learn is a difficult problem. Recent work has used a fully cooperative population (i.e., shared reward function) to compare results with; however, the credo model has shown how a fully aligned population may be sub-optimal. Providing agents with the ability to self-tune their credo parameters allows agents to regulate their internal reward function through group alignment. For example, credo-tuning agents defined in one large cooperative population may discover the benefits of being slightly selfish on their own and converge to Scenario 1 in Figure 1. The pressures to tune credo and recover different reward signals are highly correlated with the size of the reward-sharing group. In this section, we detail how features of group size impact reward signals (Section 4.1.1) and how
tuning credo can be leveraged to recover stronger reward signals (Section 4.1.2).
#### 4.1.1. Reward Signals and Group Size
Consider a scenario with a fully aligned cooperative population of \(N\) agents with only behavioral policies (i.e., only one group, the entire system). In this setting, all agents share rewards at every timestep. Thus, if agent \(i\) collects a reward of \(r\) at time \(t\), all agents receive a reward of \(\frac{r}{N}\) (assuming no other agent collects a non-zero reward from the environment). The size of the reward-sharing group has two impacts on the reward function and agents' abilities to perform effective credit assignment.
**Probability of non-zero reward approaches one:** Starting with three common assumptions in reinforcement learning (RL), assume agents 1) are initialized with random policies, 2) fully explore the state space in the limit, and 3) each have equal probability \(P(r)_{i}\) of collecting a non-zero reward from the environment (i.e., \(P(r)_{i}=P(r)_{j}\) for any agents \(i\) and \(j\); we write \(P(r)\) for this common value). The probability that no agent collects a non-zero reward is \((1-P(r))^{N}\), so the probability of **any** agent collecting a non-zero reward is \(1-(1-P(r))^{N}\). The derivative of this expression with respect to \(N\), \(-(1-P(r))^{N}\ln(1-P(r))\), is positive; thus, agents in a reward-sharing group are monotonically more likely to receive a non-zero feedback signal at any timestep as the size of that group increases. This probability approaches 1 as \(N\rightarrow\infty\).
**Variance of non-zero reward approaches zero:** Receiving non-zero reward for their actions causes agents to assign credit to these actions. More positive reward for certain state-action pairs will result in them executing these state-action pairs more often in the future, and vice versa for negative reward. However, while the **probability** of receiving a non-zero reward approaches 1 as \(N\) increases, the derivative \(f^{\prime}\left(\frac{r}{N}\right)=-\frac{r}{N^{2}}\) implies the value of this non-zero reward monotonically approaches 0 as \(N\) increases. With a large group, the reward that each agent receives at every timestep will be a function of the expected number of agents that obtain rewards at any timestep. As \(N\rightarrow\infty\), the variance of this reward approaches zero. Thus, agents would be unable to perform effective credit assignment if the size of their reward-sharing group is too large.
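Both effects are easy to reproduce numerically. The following Monte Carlo sketch is purely illustrative (the per-agent success probability \(p=0.05\) and the trial count are arbitrary choices, not values from our experiments): as the group size \(N\) grows, the probability of a non-zero shared reward rises towards 1 while its variance shrinks towards 0.

```python
import numpy as np

rng = np.random.default_rng(0)
p, trials = 0.05, 200_000   # p: chance an individual agent earns +1 this step

for N in [1, 2, 6, 24, 96]:
    r = rng.binomial(1, p, size=(trials, N))   # individual environment rewards
    shared = r.sum(axis=1) / N                 # fully shared reward signal
    print(f"N={N:3d}  P(shared>0)={(shared > 0).mean():.3f}"
          f"  closed form={1 - (1 - p)**N:.3f}"
          f"  Var(shared)={shared.var():.5f}")
```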
#### 4.1.2. Recovering a Stronger Reward Signal
The previous subsection describes a scenario where agents lose the ability to perform effective credit assignment if the size of a reward sharing group is too large (assuming agents fully share rewards). The credo model removes the assumption that agents fully share rewards to analyze situations where agents can learn from multiple types of groups they belong to. Thus, regulating credo could allow agents to recover meaningful feedback signals from their actions in environments where credit assignment becomes challenging (i.e., if the reward-sharing group is too large).
Consider again the results from the original credo paper shown in Figure 1. Agents defined in a fully aligned population (one team of six agents) fail to converge to the most efficient joint policy; however, agents are able to recover a better joint policy when agents are 20% selfish (Scenario 1). Agents that can self-regulate their credo parameters may recover better joint policies despite a sub-optimal environment that can impose credit assignment challenges, such as poorly defined team structures or group alignment.
### Self-Tuning Credo Framework
This section details how we extend the credo-based reward function design and our proposed self-tuning credo framework.
#### 4.2.1. Extending Credo
Recall from Section 2 that agent \(i\)'s credo is defined as a vector of parameters that sum to 1, represented \(\mathbf{cr}_{i}=\langle\psi_{i},\phi_{i}^{T_{1}},\ldots,\phi_{i}^{T_{|\mathcal{T}_{i}|}},\omega_{i}\rangle\), where \(\psi_{i}\) is the credo parameter for \(i\)'s individual reward \(IR_{i}\), \(\phi_{i}^{T_{i}}\) is the credo parameter for the reward \(TR_{i}^{T_{i}}\) from team \(T_{i}\in\mathcal{T}_{i}\), and \(\omega_{i}\) is the credo parameter for the reward \(i\) receives from the system \(SR_{i}\). In this paper, we define agent \(i\)'s credo-based reward function \(R_{i}^{\mathbf{cr}}\) to be calculated as:
\[R_{i}^{\mathbf{c}\mathbf{r}}=\psi_{i}IR_{i}+\sum_{T_{i}\in\mathcal{T}_{i}} \frac{\phi_{i}^{T_{i}}}{\sum_{j\in T_{i}}\phi_{j}^{T_{i}}}TR_{i}^{T_{i}}+\frac {\omega_{i}}{\sum_{j\in N}\omega_{j}}SR_{i}. \tag{1}\]
Different from the original implementation, Equation 1 allocates team and system rewards based on the _ratio_ of an agent's credo parameter for that group to the sum of the credo parameters of all agents in that group. This is a necessary modification for the scenario in which agents may have different credo parameters for the same group. To maintain consistency with past work, we modify \(TR_{i}^{T_{i}}\) and \(SR_{i}\) to be the weighted sum of agents' rewards and their credo parameters for that specific group:
\[TR_{i}^{T_{i}}=\sum_{j\in T_{i}}\phi_{j}^{T_{i}}R_{j}(S,A_{j},S),\]
\[SR_{i}=\sum_{j\in N}\omega_{j}R_{j}(S,A_{j},S).\]
This ensures all rewards that are collected from the environment are re-allocated to the various groups and scaled according to all credo parameters. These modifications are equivalent to the previous credo setting when all agents have the same credo, but expand the reward function dynamics to the case when teammates may not have the same credo for a team.

Figure 1. Mean population reward for every credo parameter in the Cleanup environment from (Garfani et al., 2017). These experiments have \(|\mathcal{T}|=3\) teams of two agents each. Two scenarios achieve the highest reward: when credo has slight self-focus paired with high system-focus (green star) and when team-focus is high (yellow stars).
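To make Equation 1 concrete, the following is a minimal sketch (our own, assuming NumPy and the single-team membership used in our experiments; the four-agent example values are invented for illustration). Because each credo vector sums to 1, the returned rewards always sum to the total environment reward.

```python
import numpy as np

def credo_rewards(env_r, credo, team_of, teams):
    """Credo-based rewards of Eq. (1), with the weighted-sum TR and SR
    pools defined above. credo[i] = (psi_i, phi_i, omega_i); team_of[i]
    is the index of agent i's single team; teams[t] lists team members."""
    psi = np.array([c[0] for c in credo])
    phi = np.array([c[1] for c in credo])
    omega = np.array([c[2] for c in credo])
    env_r = np.asarray(env_r, dtype=float)
    SR = float(omega @ env_r)                        # system reward pool
    TR = {t: sum(phi[j] * env_r[j] for j in m) for t, m in teams.items()}
    out = []
    for i in range(len(env_r)):
        t = team_of[i]
        team_share = (0.0 if phi[i] == 0
                      else phi[i] / sum(phi[j] for j in teams[t]) * TR[t])
        sys_share = 0.0 if omega[i] == 0 else omega[i] / omega.sum() * SR
        out.append(psi[i] * env_r[i] + team_share + sys_share)
    return out

# toy check: 4 agents on 2 teams with heterogeneous credos
teams = {0: [0, 1], 1: [2, 3]}
team_of = [0, 0, 1, 1]
credo = [(0.2, 0.0, 0.8), (0.0, 0.6, 0.4), (1.0, 0.0, 0.0), (0.0, 0.8, 0.2)]
print(credo_rewards([1.0, 0.0, 0.0, 2.0], credo, team_of, teams))
```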
#### 4.2.2. Agent Architecture
An overview of our proposed agent framework is given in Figure 2. The architecture of the agent is a multi-level policy inspired by HRL, where each layer influences the learning problem of the other. The "low-level" policy, \(\pi_{i}\), is a typical behavioral policy that takes actions \(a_{i}\) conditioned on an observed state \(s_{i}\) within an environment. At each timestep, rewards are shared with other agents according to the agent's credo parameters \(\mathbf{cr}_{i}\). The "high-level" policy, \(\pi_{i}^{\mathbf{cr}}\), modifies the agent's credo parameters at a slower time scale. Conditioned on the previous credo parameters, \(\mathbf{cr}_{i}\), and the corresponding low-level policy's reward over \(E\geq 1\) episodes, \(\overline{\mathbf{r}_{i}^{E}}\), the high-level agent produces updated credo parameters, \(\mathbf{cr}_{i}^{\prime}\). The top-layer policy operates at a slower time scale than the low-level behavioral policy to allow the low-level policy to gain experience with a particular credo and stabilize learning.
Both policies learn from experience using RL. They both aim to individually maximize their sum of discounted future rewards and neither policy directly observes the other (i.e., they are both individual learning policies). However, each policy directly influences the optimization landscape of the other. The behavior of the low-level policy determines the reward feedback for the high-level policy; if the behavioral agent fails to gain reward, the high-level credo-tuning policy fails to get positive feedback. Concurrently, the credo output of the high-level policy shapes the reward function of the low-level policy for the next set of \(E\) episodes.
As mentioned in Section 4.1.1, tuning the amount of shared reward within groups regulates 1) the probability of an agent receiving a non-zero reward from a group with more teammates, and 2) the variance of their reward signal. Thus, the high-level credo policy shapes the influence of these two aspects with respect to all groups referenced in the credo vector to guide the learning process of the low-level behavioral policy (self, any teams, and system).
## 5. Evaluation
This section outlines our implementation details, experimental environment and setup, and presents preliminary experimental results.
### Implementation
**Low-level Behavioral Policy:** We implement the behavioral policies of our agents with Proximal Policy Optimization (PPO) (Han et al., 2017). The PPO implementation in (Han et al., 2017) used an older version of the RLlib library (version 0.8.5) which made interconnecting the credo-tuning framework infeasible. Thus, we adapted the same architecture as the agents in (Han et al., 2017) to the current version of RLlib (version 2.1.0) to incorporate the credo-tuning agent architecture shown in Figure 2.2
Footnote 2: [https://docs.ray.io/en/latest/rllib/index.html](https://docs.ray.io/en/latest/rllib/index.html)
**High-level Credo Policy:** As a preliminary construction, and to reduce sample complexity, we implement the high-level credo policy as a \(Q\)-Learning agent with \(\epsilon\)-greedy exploration (\(\epsilon=20\%\)) (Kirkpatrick et al., 2017). Consistent with the original credo paper, we define agents to belong to only one team, yielding credo vectors with three parameters (i.e., \(\mathbf{cr}_{i}=\langle\psi_{i},\phi_{i},\omega_{i}\rangle\)). We limit possible agent credos to increments of 0.2, creating a state space of 21 possible states (shown in Figure 1 from (Han et al., 2017)). With three credo parameters, the agent can choose from any of seven actions. The action space consists of either increasing/decreasing any combination of credo parameters (six actions) or doing nothing (one action). For example, if \(\mathbf{cr}_{i}=\langle 0.2,0.0,0.8\rangle\), the agent can take an action to decrease self-focus and increase system focus (by increments of 0.2) to result in \(\mathbf{cr}_{i}^{\prime}=\langle 0.0,0.0,1.0\rangle\). If the agent chooses an action that would push any credo parameter above 1.0 or below 0.0, no action is taken and \(\mathbf{cr}_{i}^{\prime}=\mathbf{cr}_{i}\). The behavioral policies are updated with \(\mathbf{cr}_{i}^{\prime}\) for the next \(E\) episodes.
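A minimal sketch of this tabular credo policy is given below (our own illustration of the state and action encoding just described; the learning rate and discount values are arbitrary placeholders, not tuned settings from our experiments).

```python
import random

STEP = 0.2
# 21 credo states: (psi, phi, omega) on a 0.2 grid summing to 1
STATES = [(round(i * STEP, 1), round(j * STEP, 1), round(1 - (i + j) * STEP, 1))
          for i in range(6) for j in range(6 - i)]
# 7 actions: move STEP of mass from parameter s to parameter t, or do nothing
ACTIONS = [(s, t) for s in range(3) for t in range(3) if s != t] + [None]

def apply_action(credo, action):
    if action is None:
        return credo
    s, t = action
    new = list(credo)
    new[s], new[t] = round(new[s] - STEP, 1), round(new[t] + STEP, 1)
    return credo if min(new) < 0 or max(new) > 1 else tuple(new)  # invalid: no-op

class CredoPolicy:
    """Tabular Q-learning over the 21 credo states with eps-greedy actions."""
    def __init__(self, eps=0.2, alpha=0.1, gamma=0.9):
        self.Q = {(s, a): 0.0 for s in STATES for a in range(len(ACTIONS))}
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, credo):
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.Q[(credo, a)])

    def learn(self, credo, a, reward, next_credo):
        best = max(self.Q[(next_credo, b)] for b in range(len(ACTIONS)))
        td = reward + self.gamma * best - self.Q[(credo, a)]
        self.Q[(credo, a)] += self.alpha * td

policy = CredoPolicy()
cr = (0.0, 0.0, 1.0)                       # fully system-focused start
a = policy.act(cr)
print(len(STATES), "states; next credo:", apply_action(cr, ACTIONS[a]))
```

In the full framework, `act` is called once per block of \(E\) episodes, the chosen move is applied with `apply_action`, and the PPO behavioral policy then trains under the reshaped reward.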
### Environment
We perform our preliminary evaluation in the Cleanup Gridworld Game (Zheng et al., 2017). Cleanup is a temporally and spatially extended Markov game representing a sequential social dilemma. We keep the underlying environment unchanged from previous setups (Brock et al., 2018) except for the team and system reward functions. Agent observability is limited to an egocentric \(15\times 15\) pixel window and consuming an apple yields +1 reward. Apple regrowth rate is dependent on the cleanliness of an adjacent river. To be successful in cleanup, agents must learn to balance actions of consuming apples and cleaning the river (which returns no positive reward). Agent rewards are determined by their credo \(\mathbf{cr}_{i}\) which is updated at regular intervals. Consistent with the original credo paper, we set the size of each team to be two agents \(|T_{i}|=2\), creating \(|\mathcal{T}|=3\) disjoint teams from the population of \(N=6\) agents (Han et al., 2017). Agents are implemented with PPO behavioral policies, \(Q\)-learning credo policies, and experiments last for \(3.2\times 10^{8}\) environment steps and credo parameters are updated every 96,000 environment steps.
Figure 2. Overview of the proposed credo-tuning agent framework. Each agent has two policies that operate at different time scales: a low-level behavioral policy that acts within an environment and a high-level credo-tuning policy that operates every \(E\geq 1\) episodes. The credo-tuning policy shapes the optimization landscape for the behavioral policy while the learned behavior impacts the reward function for the credo-tuning policy.
### Experiment
We design an experiment to evaluate if credo-tuning agents can overcome a sub-optimal initialization to recover a joint policy that achieves higher mean population rewards (Scenario 1 or 2 in Figure 1). We initialize agents to be fully system-focused (i.e., \(\mathbf{c}\mathbf{r}_{i}=\langle 0.0,0.0,1.0\rangle\)). The low-level behavioral policy trains, and the high-level credo policy updates \(\mathbf{c}\mathbf{r}_{i}\), every 96 episodes (rollouts of six workers with 16 environment copies each). This is equivalent to initializing agents with credo parameters in the bottom left corner of Figure 1; however, agents' credo policies are now able to adjust the agent's credo parameters.
This setting directly evaluates our discussion in Section 4.1.1. One of the key findings in previous work is that some amount of mixed incentives can achieve more favorable global outcomes than a fully cooperative population (Cleanup, 2016; Dwork et al., 2017). In Cleanup, agents that are slightly self-focused or fully team-focused (Scenario 1 and 2 respectively in Figure 1) learn a better global joint policy through division of labor. In these settings, agents divide labor and learn to specialize into roles of four apple picking agents and two river cleaning agents. When agents are fully system-focused, they specialize into the sub-optimal joint policy of three apple pickers and three river cleaners.
We hypothesize that a fully system-focused group learns this sub-optimal joint policy due to a more difficult credit assignment problem. Agents in Scenario 1 of Figure 1 learn the best joint policy by recovering slightly stronger reward signals by being 20% self-focused. The design of this experiment evaluates the ability of credo-tuning agents to recover stronger reward signals and converge to a better joint policy. Intuitively, this can be thought of as agents learning to configure their credo parameters such that they converge to high-reward areas of Figure 1.
### Preliminary Results
This section shows preliminary results of the experiment detailed in Section 5.3. In the credo tuning experiment, all agents are initialized in the Cleanup environment with \(\mathbf{c}\mathbf{r}_{i}=\langle 0.0,0.0,1.0\rangle\). Each agent's behavioral policy updates every 96,000 environmental timesteps (96 episodes), at which point the high-level credo policy modifies the agent's credo parameters. The behavioral policy never observes the credo parameters but instead experiences changes to their reward function over the next batch of episodes. We compare credo tuning to two configurations where credo remains static. In the static team-focus experiment, agents maintain \(\mathbf{c}\mathbf{r}_{i}=\langle 0.0,1.0,0.0\rangle\) for the entire experiment and fully share rewards with their teammates. In the static system-focus experiment, agents maintain \(\mathbf{c}\mathbf{r}_{i}=\langle 0.0,0.0,1.0\rangle\) for the entire experiment and share rewards with all agents (i.e., a fully cooperative system). In all experiments, there are six agents that are divided into three disjoint teams of two agents each (i.e., \(N=6\), \(|\mathcal{T}|=3\), and \(|T_{i}|=2\)). We execute four trials of each experiment configuration.
We observe the same patterns with the static experiments as in previous work (Dwork et al., 2017; Dwork et al., 2017): full team-focus performs significantly better than full system-focus. However, we found that updating the PPO agents from RLlib 0.8.5 to RLlib 2.1.0 modified their learning curves so that agents learn more gradually (despite no changes to the algorithm configurations). Thus, while our direct learning curves are not comparable to past work, the overall result remains consistent and we extend the duration of the experiments from \(1.6\times 10^{8}\) to \(3.2\times 10^{8}\) environment steps.
#### 5.4.1. Reward
Figure 3 shows the mean population reward and 95% confidence intervals obtained by the population of agents in the three different credo scenarios: static system-focus, static team-focus, and credo-tuning agents that were initialized to be system-focused. The \(y\)-axis shows mean population reward and the \(x\)-axis shows timesteps of the experiment. Consistent with past work, we find that static agents that are fully team-focused (blue) perform significantly better than static system-focused agents (red). This is due to team-focused agents converging to a more efficient division of labor joint policy with two river cleaning agents and four apple picking agents, whereas system-focused agents converge to three agents each cleaning the river or picking apples.
Recall from Figure 1 that full team-focus credo is one setting that achieves the highest reward in this configuration; thus, we treat full team-focus as an upper-bound result in this domain. The goal of credo-tuning agents is not to overtake the team-focus credo, but to converge to credo parameters that achieve higher reward than their initialized settings (i.e., fully system-focused credo; red line). The green line in Figure 3 shows the mean population reward and 95% confidence intervals for the credo-tuning agents initialized with full system-focus credos. Through the first 800,000 timesteps of the experiment, the credo-tuning agents (green) learn along the same trajectory as the system-focused agents (red). However, giving agents the ability to modify their credo parameters leads to the population achieving roughly 21% more mean population reward than the system-focus credo by the end of the experiment (320 for credo-tuning agents compared to 264 for static system-focus agents). This shows the ability of credo-tuning agents to achieve more mean population reward despite a known sub-optimal team and credo parameter initialization.

Figure 3. Reward curves in the Cleanup environment for each experiment in our evaluation. Results are the mean across 4 trials for each experiment reported with 95% confidence intervals. The static team-focus environment has been shown to achieve the highest mean population reward in Cleanup with different credo (Figure 1 Scenario 2). This shows that credo-tuning agents that are initialized with system-focus credo can increase their mean population reward towards the level of team-focused agents.

Figure 4. Inverse Gini index curve in the Cleanup environment for each experiment in our evaluation. Results are the mean across 4 trials for each experiment reported with 95% confidence intervals. Static system-focus credo is defined to have full equality and is always 1. This shows that credo-tuning agents converge to slightly higher equality than the static team-focused experiment.
#### 5.4.2. Reward Equality
Since certain roles in the environment do not produce reward and teammates are able to define different credos, it is important to consider population equality to examine if tuning credo leads to significant inequality among the population. We model population reward equality as the inverse Gini index, similar to past work (Ginelli et al., 2016; Ginelli et al., 2017):
\[Equality=1-\frac{\sum_{i=0}^{N}\sum_{j=0}^{N}|R_{i}^{\mathbf{cr}}-R_{j}^{\mathbf{cr}}|}{2N^{2}\overline{R^{\mathbf{cr}}}}, \tag{2}\]
where values closer to 1 represent more equality. Figure 4 shows our equality results, where the \(y\)-axis shows the mean inverse Gini index with 95% confidence intervals and the \(x\)-axis is the number of timesteps. Since the static system-focus scenario defines agents to fully share rewards, the inverse Gini index is always equal to 1. After some initial learning, we find that the credo-tuning agents converge to a setting where the population has higher mean equality than the static team-focused setting. While this is likely impacted by the credo initialization and is worthy of further exploration, we find that credo-tuning agents discover a setting that achieves high reward while maintaining high equality across the population.
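Equation 2 translates directly into a few lines of code; the minimal sketch below assumes NumPy, uses made-up reward vectors purely for illustration, and assumes a non-zero mean reward.

```python
import numpy as np

def inverse_gini(rewards):
    """Inverse Gini index of Eq. (2): values near 1 mean high equality."""
    r = np.asarray(rewards, dtype=float)
    n = len(r)
    pairwise = np.abs(r[:, None] - r[None, :]).sum()
    return 1.0 - pairwise / (2 * n**2 * r.mean())

print(inverse_gini([50, 50, 50, 50, 50, 50]))  # fully shared rewards: 1.0
print(inverse_gini([80, 70, 60, 40, 30, 20]))  # unequal division of labor
```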
#### 5.4.3. Division of Labor
We now analyze the credo-tuning experiment specifically. Figure 5 shows the amount of apples consumed (top) and cleaning beam actions (bottom) by each credo-tuning agent in one trial where the agents are initialized to be fully system-focused (green line in Figures 3 and 4). Despite being initialized as system-focused, each of these agents belongs to one of three teams (\(T_{0}\), \(T_{1}\), or \(T_{2}\)) towards which it can shift its credo. Agents are labeled so that \(a_{0}(T_{0})\) represents agent 0 on team 0, and teammates in the plots are colored with different shades of the same color.
Figure 5. Amount of apples consumed (top) and cleaning beam actions (bottom) by each agent for one trial of the credo-tuning experiment with agents initialized with system-focused credo (green line in Figures 3 and 4). Agents are labeled so that \(a_{0}(T_{0})\) is agent 0 on team 0. Teammates are colored with different shades of the same color. Whereas system-focused agents converge to a joint policy of three apple pickers and three cleaning agents, credo-tuning agents recover the better joint policy of four apple pickers and two cleaning agents autonomously (same as fully team-focused agents) and generate more reward (Figure 3).
Figure 6. Credos of all six agents over time in the same credo-tuning trial as Figure 5. Each plot shows the credo parameters for a different agent shown in Figure 5. Each \(y\)-axis represents credo parameter space and each \(x\)-axis represents timesteps. We observe heterogeneous credo parameters emerge across the population; however, \(a_{4}\) becomes more self- and team-focused as it switches roles to become an apple picking agent.
Similar to the known result of static system-focused agents shown in past work (Kumar et al., 2017), the agents in the credo-tuning experiment initially specialize into roles of three apple-picking and three river-cleaning agents. However, the advantage of agents being able to tune their credo causes the \(a_{4}(T_{2})\) agent to learn to pick apples in the second half of the experiment. This recovers the global joint policy of four apple-picking and two river-cleaning agents (the joint policy of the static team-focused agents) despite agents being initialized with full system-focused credo. This causes an increase in mean population reward from the static system-focused scenario towards the full team-focused scenario. While the mean population reward level of team-focused agents is not quite reached, these agents recover the same global joint policy; thus, while we are unable to make certain claims, perhaps longer training time would see convergence to the reward level of the team-focused population (blue in Figure 3) given this joint policy.
This work introduced a multi-tiered learning architecture that allows credo to evolve online while maintaining the decentralized aspects of individual learning agents.
The goal of this work is to allow decentralized agents to self-regulate their credo to overcome sub-optimal initializations of credo or team structures and recover favorable policies. While previous work has shown how team structure has a significant impact on the policies that agents learn, discovering the structure that guides agents towards globally favorable results may be a hard domain dependent problem. Our preliminary results have shown how our multi-tiered learning architecture can allow agents to achieve more globally favorable results despite being initialized in a known sub-optimal configuration. The broader implications of this work allow agents to autonomously recover the learning benefits of teams and group alignment in any environment. This mitigates the burden of researchers or practitioners having to engineer team structures or credo in settings where favorable configurations may be unknown.
# Intersection patterns and incidence theorems
###### Abstract.
Let \(A\) and \(B\) be sets in a finite vector space. In this paper, we study the magnitude of the set \(A\cap f(B)\), where \(f\) runs through a set of transformations. More precisely, we will focus on the cases where the set of transformations is given by orthogonal matrices or orthogonal projections. One of the most important contributions of this paper is to show that if \(A,B\subset\mathbb{F}_{q}^{d}\) satisfy some natural conditions then, for almost every \(g\in O(d)\), there are at least \(\gg q^{d}\) elements \(z\in\mathbb{F}_{q}^{d}\) such that
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}}.\]
This implies that \(|A-gB|\gg q^{d}\) for almost every \(g\in O(d)\). In the flavor of expanding functions, with \(|A|\leq|B|\), we also show that the image \(A-gB\) grows exponentially. In two dimensions, the result simply says that if \(|A|=q^{x}\) and \(|B|=q^{y}\), as long as \(0<x\leq y<2\), then for almost every \(g\in O(2)\), we can always find \(\epsilon=\epsilon(x,y)>0\) such that \(|A-gB|\gg|B|^{1+\epsilon}\). To prove these results, we need to develop new and robust incidence bounds between points and rigid motions by using a number of techniques including algebraic methods and discrete Fourier analysis. Our results are essentially sharp in odd dimensions.
Key words and phrases: Intersection, Group action, Rigid motion, Incidences, Distances.

2020 Mathematics Subject Classification: 52C10, 42B05, 11T23.
###### Contents
* 1 Introduction
* 1.1 Intersection patterns I
* 1.2 Intersection patterns II
* 1.3 Incidences between points and rigid motions
* 1.4 Growth estimates under orthogonal matrices
* 1.5 Intersection pattern III
* 2 Preliminaries-key lemmas
* 2.1 Results over arbitrary finite fields
* 2.2 Results over prime fields
* 2.3 An extension for general sets
* 3 Warm-up incidence theorems
* 4 Incidence theorems: proofs
* 5 Intersection pattern I: proofs
* 6 Intersection pattern II: proofs
* 7 Growth estimates under orthogonal matrices: proofs
* 8 Intersection pattern III: proofs
* 9 Acknowledgements
## 1. Introduction
Let \(A\) and \(B\) be compact sets in \(\mathbb{R}^{d}\). One of the fundamental problems in Geometric Measure Theory is to study the relations between the Hausdorff dimensions of \(A\), \(B\), and \(A\cap f(B)\), where \(f\) runs through a set of transformations.
This study has a long history in the literature. A classical theorem due to Mattila [18, Theorem 13.11] or [19, Theorem 7.4] states that for Borel sets \(A\) and \(B\) in \(\mathbb{R}^{d}\) of Hausdorff dimension \(s_{A}\) and \(s_{B}\) with
\[s_{A}+s_{B}>d\text{ and }s_{B}>\frac{d+1}{2}\]
and assume in addition that the Hausdorff measures satisfy \(\mathcal{H}^{s_{A}}(A)>0\) and \(\mathcal{H}^{s_{B}}(B)>0\), then, for almost every \(g\in O(d)\), one has
\[\mathcal{L}^{d}\left(\left\{z\in\mathbb{R}^{d}\colon\dim_{H}(A\cap(z-gB))\geq s _{A}+s_{B}-d\right\}\right)>0.\]
This result means that for almost every \(g\in O(d)\), the set of \(z\)s such that \(\dim_{H}(A\cap(z-gB))\geq s_{A}+s_{B}-d\) has positive Lebesgue measure. This has been extended for other sets of transformations, for instance, the group generated by the orthogonal group and the homotheties [12, 17], the set of translations restricted on Cantor sets [1, Chapter 1], and the set of orthogonal projections [21]. A number of variants and applications can be found in a series of papers [1, 4, 5, 7, 22] and references therein.
Let \(\mathbb{F}_{q}\) be a finite field of order \(q\), where \(q\) is a prime power. In this paper, we introduce the finite field analog of this type of question and study its primary properties with an
emphasis on the group of orthogonal matrices and the set of orthogonal projections. More precisely, we consider the following three main questions in this paper.
**Question 1.1**.: _Given \(A,B\subset\mathbb{F}_{q}^{d}\) and \(g\in O(d)\), under what conditions on \(A\), \(B\), and \(g\) can we have_
\[|A\cap(g(B)+z)|\geq\frac{|A||B|}{q^{d}} \tag{1.1}\]
_or a stronger form_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}} \tag{1.2}\]
_for almost every \(z\in\mathbb{F}_{q}^{d}\)?_
**Question 1.2**.: _Given \(P\subset\mathbb{F}_{q}^{2d}\) and \(g\in O(d)\), under what conditions on \(P\) and \(g\) can we have_
\[|S_{g}(P)|:=|\{x-gy\colon(x,y)\in P\}|\gg q^{d}?\]
**Question 1.3**.: _Let \(A,B\subset\mathbb{F}_{q}^{d}\) and \(m\) be a positive integer._
1. _If_ \(|A|,|B|>q^{m}\)_, then under what conditions on_ \(A\) _and_ \(B\) _can we have_ \[|\pi_{W}(A)\cap\pi_{W}(B)|\gg q^{m}\] _for almost every_ \(W\in G(d,m)\)_?_
2. _If_ \(|B|<q^{m}<|A|\)_, then under what conditions on_ \(A\) _and_ \(B\) _can we have_ \[|\pi_{W}(A)\cap\pi_{W}(B)|\gg|B|\] _for almost every_ \(W\in G(d,m)\)_?_
_Here \(\pi_{W}(X)\) denotes the orthogonal projection of \(X\) onto \(W\)._
**Main ideas (sketch):** This paper presents more than twenty new theorems on these three questions; for the reader's convenience, we briefly explain the main steps at the beginning. Our study of the three above questions relies mainly on incidence theorems. While an incidence bound between points and affine subspaces due to Pham, Phuong, and Vinh [25] is sufficient to prove sharp results on Question 1.3, we need to develop incidence theorems between points and rigid motions for the first two questions. In \(\mathbb{R}^{2}\), such an incidence structure has been studied intensively in the breakthrough solution of the Erdos distinct distances problem [6, 8]. In this paper, we present the reverse direction, namely, from the distance problem to incidence theorems. This strategy allows
us to take advantage of recent developments on the distance topic, as a consequence, we are able to establish a complete picture in any dimensions over finite fields. Our paper provides two types of incidence results: over arbitrary fields \(\mathbb{F}_{q}\) and over prime fields \(\mathbb{F}_{p}\). The method we use is the discrete Fourier analysis in which estimates on the following sum
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2} \ \ \text{for any}\ A,B\subset\mathbb{F}_{q}^{d},\]
play the crucial role. While we use results from the Restriction theory due to Chapman, Erdogan, Hart, Iosevich, and Koh [2], and Iosevich, Koh, Lee, Pham, and Shen [10] to bound this sum effectively over arbitrary finite fields, the proofs of better estimates over prime fields are based on the recent \(L^{2}\) distance estimate due to Murphy, Petridis, Pham, Rudnev, and Stevens [24] which was proved by using algebraic methods and Rudnev's point-plane incidence bound [26]. This presents surprising applications of the Erdos-Falconer distance problem. We would like to emphasize here that the approach we use to bound the above sum over prime fields is also one of the novelties of this paper, and that sum will have more applications in other topics.
### Intersection patterns I
Let us recall the first question.
**Question 1.1**.: _Given \(A,B\subset\mathbb{F}_{q}^{d}\) and \(g\in O(d)\), under what conditions on \(A\), \(B\), and \(g\) can we have_
\[|A\cap(g(B)+z)|\geq\frac{|A||B|}{q^{d}} \tag{1.1}\]
_or a stronger form_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}} \tag{1.2}\]
_for almost every \(z\in\mathbb{F}_{q}^{d}\)?_
We note that for any \(g\in O(d)\), we have
\[\sum_{z\in\mathbb{F}_{q}^{d}}|A\cap(g(B)+z)|=|A||B|.\]
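Indeed, this identity is a double count: each pair \((a,b)\in A\times B\) lies in \(A\cap(g(B)+z)\) for exactly one translate, namely \(z=a-g(b)\), so

\[\sum_{z\in\mathbb{F}_{q}^{d}}|A\cap(g(B)+z)|=\#\{(a,b,z)\in A\times B\times\mathbb{F}_{q}^{d}\colon a=g(b)+z\}=|A||B|.\]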
Thus, there always exists \(z\in\mathbb{F}_{q}^{d}\) such that \(|A\cap(g(B)+z)|\geq\frac{|A||B|}{q^{d}}\).
Let us start with some examples.
**Example 1:** Let \(A\) be a subspace of dimension \(k\) with \(1\leq k<d\), \(A=B\), and \(\operatorname{Stab}(A)\) be the set of matrices \(g\in O(d)\) such that \(gA=A\). It is well-known that
\(|\operatorname{Stab}(A)|\gg q^{\binom{d-k}{2}}\). For any \(g\in\operatorname{Stab}(A)\), we have
\[|A\cap(gB+z)|=\begin{cases}0&\text{if }z\not\in A\\ |A|&\text{if }z\in A\end{cases}.\]
**Example 2:** In \(\mathbb{F}_{q}^{d}\) with \(d\) odd, given \(0<c<1\), for each \(g\in O(d)\), there exist \(A,B\subset\mathbb{F}_{q}^{d}\) with \(|A|=|B|=cq^{\frac{d+1}{2}}\) and \(|A-gB|\leq 2cq^{d}\). To see this, let \(A=\mathbb{F}_{q}^{\frac{d-1}{2}}\times\{0\}^{\frac{d-1}{2}}\times X\) where \(X\) is an arithmetic progression of size \(cq\) and \(B=g^{-1}\left(\mathbb{F}_{q}^{\frac{d-1}{2}}\times\{0\}^{\frac{d-1}{2}}\times X\right)\). It is clear that the number of \(z\) such that \(A\cap(gB+z)\neq\emptyset\) is at most \(|A-gB|\leq q^{d-1}|X-X|\leq 2cq^{d}\).
These two examples suggest that if we want
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}},\]
for almost every \(z\in\mathbb{F}_{q}^{d}\), then \(A\) and \(B\) cannot be small and \(g\) cannot be chosen arbitrarily in \(O(d)\). Our first theorem describes this phenomenon in detail.
**Theorem 1.4**.: _Let \(A\) and \(B\) be sets in \(\mathbb{F}_{q}^{d}\). Then there exists \(E\subset O(d)\) with_
\[|E|\ll\frac{|O(d-1)|q^{2d}}{|A||B|},\]
_such that for any \(g\in O(d)\setminus E\), there are at least \(\gg q^{d}\) elements \(z\) satisfying_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}}.\]
This theorem is valid in the range \(|A||B|\gg q^{d+1}\), since
\[\frac{|O(d-1)|q^{2d}}{|A||B|}\ll|O(d)|\text{ when }|A||B|\gg q^{d+1}.\]
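To verify this range, recall that \(|O(n)|\sim q^{n(n-1)/2}\), so that \(|O(d)|/|O(d-1)|\sim q^{d-1}\); hence, up to constants,

\[\frac{|O(d-1)|q^{2d}}{|A||B|}\ll|O(d)|\iff q^{2d}\ll q^{d-1}|A||B|\iff|A||B|\gg q^{d+1}.\]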
The condition \(|A||B|\gg q^{d+1}\) is sharp in odd dimensions for comparable sets \(A\) and \(B\). A construction will be provided in Section 5. When one set is of small size, we can hope for a better estimate, and the next theorem presents such a result.
**Theorem 1.5**.: _Let \(A\) and \(B\) be sets in \(\mathbb{F}_{q}^{d}\). Assume in addition either (\(d\geq 3\) odd) or (\(d\equiv 2\mod 4,q\equiv 3\mod 4\)). Then there exists \(E\subset O(d)\) such that for any \(g\in O(d)\setminus E\), there are at least \(\gg q^{d}\) elements \(z\) satisfying_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}}.\]
_In particular,_
1. _If_ \(|A|<q^{\frac{d-1}{2}}\)_, then one has_ \(|E|\ll\frac{q^{(d^{2}+d)/2}}{|A||B|}\)_._
2. _If_ \(q^{\frac{d-1}{2}}\leq|A|\leq q^{\frac{d+1}{2}}\)_, then one has_ \(|E|\ll\frac{q^{(d^{2}+1)/2}}{|B|}\)_._
In Theorem 1.5, the roles of \(A\) and \(B\) are symmetric. In a reduced form, the theorem says that if \(|A|\leq|B|\), \(|B|\gg q^{\frac{d+1}{2}}\), and \(|A||B|\gg q^{d}\), then, for almost every \(g\in O(d)\), there are \(\gg q^{d}\) elements \(z\) satisfying
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{d}}.\]
Our proof involves a number of results from Restriction theory in which the conditions on \(d\) and \(q\) are necessary. We do not know if it is still true for the case \(d\equiv 2\mod 4\) and \(q\equiv 1\mod 4\), so it is left as an open question.
The sharpness construction of Theorem 1.4 can be modified to show that these two statements are also optimal in odd dimensions. In even dimensions, there is no evidence to believe that the above theorems are sharp. In the next theorem, we present an improvement in two dimensions.
**Theorem 1.6**.: _Assume that \(q\equiv 3\mod 4\). Let \(A\) and \(B\) be sets in \(\mathbb{F}_{q}^{2}\) with \(|A|\leq|B|\). Then there exists \(E\subset O(2)\) with_
\[|E|\ll\frac{q^{3}}{|A|^{1/2}|B|}\]
_such that for any \(g\in O(2)\setminus E\), there are at least \(\gg q^{2}\) elements \(z\) satisfying_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{2}}.\]
_Remark 1.7_.: Note that this theorem is stronger than Theorem 1.5 in the range \(|A|\geq q\). When \(|A|\geq|B|\) we can switch between the roles of \(A\) and \(B\) to obtain a similar result, namely, there exists \(E\subset O(2)\) with
\[|E|\ll\frac{q^{3}}{|A||B|^{1/2}}\]
such that for any \(g\in O(2)\setminus E\), there are at least \(\gg q^{2}\) elements \(z\) satisfying
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{q^{2}}.\]
If our sets lie on the plane over a prime field \(\mathbb{F}_{p}\), then further improvements can be made.
**Theorem 1.8** (Medium A).: _Assume that \(p\equiv 3\mod 4\). Let \(A\) and \(B\) be sets in \(\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|\). Then there exists \(E\subset O(2)\) such that for any \(g\in O(2)\setminus E\), there are at least \(\gg p^{2}\) elements \(z\) satisfying_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{p^{2}}.\]
_In particular,_
(1) _If \(p\leq|A|\leq p^{5/4}\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(|E|\ll\frac{p^{59/24}}{|A|^{2/3}|B|^{1/2}}\)._
(2) _If \(p^{5/4}\leq|A|\leq p^{4/3}\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(|E|\ll\frac{p^{9/4}}{|A|^{1/2}|B|^{1/2}}\)._
**Theorem 1.9** (Large B).: _Assume that \(p\equiv 3\mod 4\). Let \(A\) and \(B\) be sets in \(\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|\). Then there exists \(E\subset O(2)\) such that for any \(g\in O(2)\setminus E\), there are at least \(\gg p^{2}\) elements z satisfying_
\[|A\cap(g(B)+z)|\sim\frac{|A||B|}{p^{2}}.\]
_In particular,_
(1) _If \(p\leq|A|\leq p^{5/4}\) and \(|B|>p^{4/3}\), then \(|E|\ll\frac{p^{17/6}}{|A|^{2/3}|B|^{3/4}}\)._
(2) _If \(p^{5/4}\leq|A|\leq p^{4/3}\) and \(|B|>p^{4/3}\), then \(|E|\ll\frac{p^{21/8}}{|A|^{1/2}|B|^{3/4}}\)._
_Remark 1.10_.: We do not believe the results in even dimensions are optimal, but proving improvements is beyond the methods of this paper.
_Remark 1.11_.: In the above two theorems, the sets \(A\) and \(B\) cannot be both small since, otherwise, it might lead to a contradiction from the inequalities that \(p^{2}\ll|A-gB|\ll|B|^{2}\). To compare to the theorems over arbitrary finite fields, we include the following table.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(|A|\leq p^{\frac{1}{2}}\) & \(p^{\frac{1}{2}}<|A|\leq p^{\frac{3}{4}}\) & \(p^{\frac{3}{4}}<|A|\leq p\) & \(p<|A|\leq p^{\frac{5}{4}}\) & \(p^{\frac{5}{4}}<|A|\leq p^{\frac{4}{3}}\) \\ \hline \(|B|\leq p^{\frac{3}{4}}\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) \\ \hline \(p^{\frac{3}{4}}<|B|\leq p\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) \\ \hline \(p<|B|\leq p^{\frac{5}{4}}\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(\diagup\) \\ \hline \(p^{\frac{5}{4}}<|B|\leq p^{\frac{4}{3}}\) & \(\diagup\) & \(\diagup\) & unknown & \(\diagup\) & \(\diagup\) \\ \hline \(p^{\frac{4}{3}}<|B|\) & \(\diagup\) & \(\diagup\) & \(\diagup\) & \(|B|^{3}<p^{2}|A|^{2}\) & \(|B|<p^{\frac{3}{2}}\) \\ \hline \end{tabular}
\end{table}
Table 1. In this table, by “\(\vee\)” we mean better result, by “\(\varnothing\)” we mean weaker result, by “\(f(|A|,|B|)\)” we mean better result under the condition \(f(|A|,|B|)\), by “unknown” we mean there is no non-trivial result in this range yet, and by “\(\diagup\)” we mean invalid range corresponding to \(|B|\leq|A|\) or \(|A||B|\ll p^{2}\).
### Intersection patterns II
For \(g\in O(d)\), define the map \(S_{g}\colon\mathbb{F}_{q}^{2d}\to\mathbb{F}_{q}^{d}\) by
\[S_{g}(x,y)=x-gy.\]
The results in the previous subsection say that if \(P=A\times B\) with \(A,B\subset\mathbb{F}_{q}^{d}\) and \(|A||B|\) is larger than a certain threshold, then for almost every \(g\in O(d)\), one has \(|S_{g}(P)|\gg q^{d}\).
In this subsection, we present a result for general sets \(P\subset\mathbb{F}_{q}^{2d}\). With the same approach, we have the following theorem.
**Theorem 1.12**.: _Given \(P\subset\mathbb{F}_{q}^{2d}\), there exists \(E\subset O(d)\) with \(|E|\ll q^{2d}|O(d-1)|/|P|\) such that, for all \(g\in O(d)\setminus E\), we have \(|S_{g}(P)|\gg q^{d}\)._
_Remark 1.13_.: We always have \(|S_{g}(P)|q^{d}\geq|P|\), since for each \(z\in S_{g}(P)\) and for each \(x\in\mathbb{F}_{q}^{d}\), there is at most one \(y\in\mathbb{F}_{q}^{d}\) such that \((x,y)\in P\). Thus, \(|S_{g}(P)|\geq|P|q^{-d}\) for all \(g\in O(d)\).
### Incidences between points and rigid motions
We now move to incidence theorems.
Let \(P\) be a set of points in \(\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\) and \(R\) be a set of rigid motions in \(\mathbb{F}_{q}^{d}\), i.e. maps of the form \(gx+z\) with \(g\in O(d)\) and \(z\in\mathbb{F}_{q}^{d}\). We define the incidence \(I(P,R)\) as follows:
\[I(P,R)=\#\{(x,y,g,z)\in P\times R:x=gy+z\}.\]
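For intuition, incidences of this type can be computed by brute force over a very small prime field. The sketch below is our own illustration (not used anywhere in the proofs): it enumerates \(O(2)\) over \(\mathbb{F}_{p}\) directly from the condition \(g^{T}g=I\) and counts \(I(P,R)\) from the definition.

```python
import itertools

p = 3  # a small prime with p = 3 mod 4

def orthogonal_group(p):
    """All 2x2 matrices g = [[a, b], [c, d]] over F_p with g^T g = I."""
    return [(a, b, c, d)
            for a, b, c, d in itertools.product(range(p), repeat=4)
            if (a * a + c * c) % p == 1
            and (b * b + d * d) % p == 1
            and (a * b + c * d) % p == 0]

def apply(g, v):
    a, b, c, d = g
    return ((a * v[0] + b * v[1]) % p, (c * v[0] + d * v[1]) % p)

def incidences(P, R):
    """I(P, R) = #{(x, y, g, z) in P x R : x = g y + z}."""
    return sum(1 for (x, y) in P for (g, z) in R
               if tuple((c1 + c2) % p for c1, c2 in zip(apply(g, y), z)) == x)

F2 = list(itertools.product(range(p), repeat=2))
A, B = [(0, 0), (1, 2)], [(1, 1), (2, 0), (0, 1)]
P = [(x, y) for x in A for y in B]
R = [(g, z) for g in orthogonal_group(p) for z in F2]
# With R the full set of rigid motions, each (x, y, g) fixes z = x - gy,
# so I(P, R) = |P| * |O(2)| exactly.
print(incidences(P, R), len(P) * len(orthogonal_group(p)))
```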
We first provide a universal incidence bound.
**Theorem 1.14**.: _Let \(P\subset\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}.\) Then we have_
\[\left|I(P,R)-\frac{|P||R|}{q^{d}}\right|\ll q^{(d^{2}-d+2)/4}\sqrt{|P||R|}.\]
In this theorem and the next ones, the quantities \(|P||R|/q^{d}\) and \(q^{(d^{2}-d+2)/4}\sqrt{|P||R|}\) are referred to as the main and error terms, respectively.
Under some additional conditions on \(d\) and \(q\), if one set is significantly smaller than the other, then we can prove stronger incidence bounds.
**Theorem 1.15**.: _Let \(P=A\times B\) for \(A,B\subset\mathbb{F}_{q}^{d}.\) Assume in addition that either (\(d\geq 3\) odd) or (\(d\equiv 2\mod 4\) and \(q\equiv 3\mod 4\))._
(1) _If \(|A|<q^{\frac{d-1}{2}}\), then_
\[\left|I(P,R)-\frac{|P||R|}{q^{d}}\right|\ll q^{(d^{2}-d)/4}\sqrt{|P||R|}.\]
(2) _If \(q^{\frac{d-1}{2}}\leq|A|\leq q^{\frac{d+1}{2}}\), then_
\[\left|I(P,R)-\frac{|P||R|}{q^{d}}\right|\ll q^{(d^{2}-2d+1)/4}\sqrt{|P||R||A|}.\]
In terms of applications (Theorem 1.4 and Theorem 1.5), one can expect that these two incidence theorems are sharp in odd dimensions. However, this is not true for even dimensions, and the next theorems present improvements in two dimensions.
**Theorem 1.16**.: _Assume that \(q\equiv 3\mod 4.\) Let \(P=A\times B\) for \(A,B\subset\mathbb{F}_{q}^{2}.\) Then we have_
\[\left|I(P,R)-\frac{|P||R|}{q^{2}}\right|\ll q^{1/2}|P|^{1/2}|R|^{1/2}\min\big{(} |A|^{1/4},|B|^{1/4}\big{)}.\]
A direct computation shows that this incidence theorem is better than the previous ones in the range \(|A|\geq q\).
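Indeed, for \(d=2\) the error term in Theorem 1.15(2) reads \(q^{1/4}|A|^{1/2}\sqrt{|P||R|}\), so, up to constants,

\[q^{1/2}|A|^{1/4}\sqrt{|P||R|}\leq q^{1/4}|A|^{1/2}\sqrt{|P||R|}\iff q^{1/4}\leq|A|^{1/4}\iff|A|\geq q.\]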
In the plane over prime fields, we have the following three major improvements corresponding to three cases: \(|A|\leq p\) and \(|B|\leq p^{4/3}\), \(|A|\geq p\) and \(|B|\leq p^{4/3}\), and \(|B|>p^{4/3}\), respectively.
**Theorem 1.17** (\(|\mathbf{A}|\leq\mathbf{p}\) and \(|\mathbf{B}|\leq\mathbf{p}^{4/3}\)).: _Assume that \(p\equiv 3\mod 4.\) Let \(P=A\times B\) for \(A,B\subset\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|.\) The following hold._
(1) _If \(p^{3/4}\leq|A|\leq p\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then_
\[\left|I(P,R)-\frac{|P||R|}{p^{2}}\right|\ll p^{1/16}|P|^{3/4}|R|^{1/2}|A|^{1/12}.\]
(2) _If \(|A|\leq p\) and \(p\leq|B|\leq p^{5/4}\), then_
\[\left|I(P,R)-\frac{|P||R|}{p^{2}}\right|\ll p^{3}|P|^{1/2}|R|^{1/2}\left(\frac {1}{p^{5}}+\frac{|P|^{1/3}|A|^{1/3}}{p^{17/3}}\right)^{1/2}.\]
(3) _If \(|A|\leq p\) and \(|B|\leq p\), then_
\[\left|I(P,R)-\frac{|P||R|}{p^{2}}\right|\ll p^{3}|P|^{1/2}|R|^{1/2}\left(\frac {1}{p^{5}}+\frac{|P|^{2/3}}{p^{6}}\right)^{1/2}.\]
**Theorem 1.18** (\(|\mathbf{A}|\geq\mathbf{p}\) and \(|\mathbf{B}|\leq\mathbf{p}^{4/3}\)).: _Assume that \(p\equiv 3\mod 4.\) Let \(P=A\times B\) for \(A,B\subset\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|.\) The following hold._
(1) _If \(p\leq|A|\leq p^{5/4}\) and \(p\leq|B|\leq p^{5/4}\), then_
\[\left|I(P,R)-\frac{|P||R|}{p^{2}}\right|\ll p^{1/3}|P|^{2/3}|R|^{1/2}.\]
_Remark 1.20_.: The incidence theorems in two dimensions offer both the upper and lower bounds which depend simultaneously on the exponents of \(p\), \(|P|\), and \(|R|\). Hence, it is very difficult to come up with a conjecture that is sharp for most ranges.
### Growth estimates under orthogonal matrices
For \(A,B\subset\mathbb{F}_{q}^{d}\), we have seen that there exists a set \(E\subset O(d)\) such that for all \(g\in O(d)\setminus E\), one has \(|A-gB|\gg q^{d}\). In this subsection, we present this type of result in the language of expanding functions, namely, assuming \(|A|\leq|B|\), we want to obtain a weaker conclusion of the form \(|A-gB|\gg|B|^{1+\epsilon}\) for a given \(\epsilon>0\). Note that in this setting, we can obtain non-trivial results for small sets \(A\) and \(B\). With the same proof, we have the following theorems.
**Theorem 1.21**.: _Let \(\epsilon>0\). Given \(A,B\subset\mathbb{F}_{q}^{d}\) with \(|A|\leq|B|\) and \(|B|^{1+\epsilon}<q^{d}/2\), there exists \(E\subset O(d)\) with_
\[|E|\ll\frac{|O(d-1)|q^{d}|B|^{\epsilon}}{|A|}\]
_such that for all \(g\in O(d)\setminus E\), we have_
\[|A-gB|\gg|B|^{1+\epsilon}.\]
**Theorem 1.22**.: _Let \(\epsilon>0\). Assume either (\(d\geq 3\) odd) or (\(d\equiv 2\mod 4,q\equiv 3\mod 4\)). Given \(A,B\subset\mathbb{F}_{q}^{d}\) with \(|A|\leq|B|\) and \(|B|^{1+\epsilon}<q^{d}/2\), there exists \(E\subset O(d)\) such that for all \(g\in O(d)\setminus E\), we have_
\[|A-gB|\gg|B|^{1+\epsilon}.\]
_In particular,_
(1) _If_ \(|A|<q^{\frac{d-1}{2}}\)_, then one has_ \(|E|\ll\frac{q^{\frac{d^{2}-d}{2}}|B|^{\epsilon}}{|A|}\)_._
(2) _If_ \(q^{\frac{d-1}{2}}\leq|A|\leq q^{\frac{d+1}{2}}\)_, then one has_ \(|E|\ll q^{\frac{d^{2}-2d+1}{2}}|B|^{\epsilon}\)_._
The two above theorems say that when the sizes of \(A\) and \(B\) belong to certain ranges, then the image \(A-gB\) grows exponentially for almost every \(g\in O(d)\). However, in two dimensions, the statement is much more beautiful: if \(|A|=q^{x}\) and \(|B|=q^{y}\), as long as \(0<x\leq y<2\), then for almost every \(g\in O(2)\), we can always find \(\epsilon=\epsilon(x,y)>0\) such that \(|A-gB|\gg|B|^{1+\epsilon}\).
**Theorem 1.23**.: _Let \(\epsilon>0\). Assume that \(q\equiv 3\mod 4\). Given \(A,B\subset\mathbb{F}_{q}^{2}\) with \(|A|\leq|B|\) and \(|B|^{1+\epsilon}<q^{2}/2\), there exists \(E\subset O(2)\) with_
\[|E|\ll\frac{q|B|^{\epsilon}}{|A|^{1/2}}\]
_such that for all \(g\in O(2)\setminus E\), we have_
\[|A-gB|\gg|B|^{1+\epsilon}.\]
In this paper, we do not compute the exponent \(\epsilon(x,y)\) explicitly in terms of \(x\) and \(y\), but it can be improved when we replace \(\mathbb{F}_{q}\) by \(\mathbb{F}_{p}\). The ranges of improvements are the same as those indicated in Table 2.
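One admissible choice can nevertheless be read off directly from Theorem 1.23, hedging the constants hidden in \(\ll\): with \(|A|=q^{x}\) and \(|B|=q^{y}\), the exceptional set is negligible in \(O(2)\), i.e. \(q|B|^{\epsilon}/|A|^{1/2}=o(|O(2)|)=o(q)\), precisely when

\[q^{y\epsilon}\ll q^{x/2},\quad\text{that is,}\quad\epsilon<\frac{x}{2y}.\]

Thus any \(0<\epsilon<\min\{x/(2y),\,(2-y)/y\}\) works, the second constraint coming from the hypothesis \(|B|^{1+\epsilon}<q^{2}/2\).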
**Theorem 1.24** (Small \(A\)).: _Let \(\epsilon>0\). Assume that \(p\equiv 3\mod 4\). Given \(A,B\subset\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|\) and \(|B|^{1+\epsilon}<p^{2}/2\), there exists \(E\subset O(2)\) such that for all \(g\in O(2)\setminus E\), we have_
\[|A-gB|\gg|B|^{1+\epsilon}.\]
_In particular,_
(1) _If \(p^{3/4}\leq|A|\leq p\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(|E|\ll\frac{p^{1/8}|B|^{1/2+\epsilon}}{|A|^{1/3}}\)._
(2) _If \(|A|\leq p\) and \(p\leq|B|\leq p^{5/4}\), then \(|E|\ll\frac{p|B|^{\epsilon}+p^{1/3}|B|^{\epsilon}|P|^{1/3}|A|^{1/3}}{|A|}\)._
(3) _If \(|A|\leq p\) and \(|B|\leq p\), then \(|E|\ll\frac{|B|^{\epsilon}|P|^{2/3}}{|A|}\)._
**Theorem 1.25** (Medium \(A\)).: _Let \(\epsilon>0\). Assume that \(p\equiv 3\mod 4\). Given \(A,B\subset\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|\) and \(|B|^{1+\epsilon}<p^{2}/2\), there exists \(E\subset O(2)\) such that for all \(g\in O(2)\setminus E\), we have_
\[|A-gB|\gg|B|^{1+\epsilon}.\]
_In particular,_
(1) _If \(p\leq|A|\leq p^{5/4}\) and \(p\leq|B|\leq p^{5/4}\), then \(|E|\ll\frac{p^{2/3}|B|^{1/3+\epsilon}}{|A|^{2/3}}\)._
(2) _If \(p\leq|A|\leq p^{5/4}\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(|E|\ll\frac{p^{11/24}|B|^{1/2+\epsilon}}{|A|^{2/3}}\)._
(3) _If \(p^{5/4}\leq|A|\leq p^{4/3}\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(|E|\ll\frac{p^{1/4}|B|^{1/2+\epsilon}}{|A|^{1/2}}\)._
**Theorem 1.26** (Large \(B\)).: _Let \(\epsilon>0\). Assume that \(p\equiv 3\mod 4\). Given \(A,B\subset\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|\) and \(|B|^{1+\epsilon}<p^{2}/2\), there exists \(E\subset O(2)\) such that for all \(g\in O(2)\setminus E\), we have_
\[|A-gB|\gg|B|^{1+\epsilon}.\]
_In particular,_
(1) _If \(p\leq|A|\leq p^{5/4}\) and \(|B|>p^{4/3}\), then \(|E|\ll\frac{p^{5/6}|B|^{1/4+\epsilon}}{|A|^{2/3}}\)._
(2) _If \(p^{5/4}\leq|A|\leq p^{4/3}\) and \(|B|>p^{4/3}\), then \(|E|\ll\frac{p^{5/8}|B|^{1/4+\epsilon}}{|A|^{1/2}}\)._
### Intersection pattern III
For a finite set \(E\subset\mathbb{R}^{d}\) and a subspace \(V\) of \(\mathbb{R}^{d}\), the orthogonal projection of \(E\) onto \(V\) is defined by
\[\pi_{V}(E):=\{x\in V:(x+V^{\perp})\cap E\neq\emptyset\}, \tag{1.3}\]
where \(V^{\perp}\) denotes the orthogonal complement of \(V\).
In \(\mathbb{F}_{q}^{d}\) or vector spaces over arbitrary finite fields, due to the fact that there exist null-vectors, i.e. vectors \(v\) with \(v\cdot v=0\), the orthogonal projection of \(E\) onto \(V\) is defined by
\[\pi_{V}(E):=\{x+V^{\perp}\colon(x+V^{\perp})\cap E\neq\emptyset,\ x\in\mathbb{F}_{q}^{d}\}.\]
The elements of \(\pi_{V}(E)\) are \((d-m)\)-dimensional affine planes of \(\mathbb{F}_{q}^{d}\) when \(\dim V=m\). We also note that, as in the Euclidean setting, we have the property that
\[\dim(V)+\dim(V^{\perp})=d,\]
for all subspaces \(V\subset\mathbb{F}_{q}^{d}\).
Chen [3] proved the following result.
**Theorem 1.27**.: _[_3_, Theorem 1.2.]_ _Let \(E\subset\mathbb{F}_{q}^{d}\)._
(1) _For any \(N<\frac{|E|}{2}\),_
\[|\{W\in G(d,m)\colon|\pi_{W}(E)|\leq N\}|\leq 4q^{(d-m)m-m}N.\]
(2) _For any \(\delta\in(0,1)\),_
\[|\{W\in G(d,m)\colon|\pi_{W}(E)|\leq\delta q^{m}\}|\leq 2\left(\frac{\delta}{1 -\delta}\right)q^{m(d-m)+m}|E|^{-1}.\]
**Corollary 1.28**.: _[_3_, Corollary 1.3.]_ _Let \(E\subset\mathbb{F}_{q}^{d}\) with \(|E|=q^{s}\)._
(1) _If \(s\leq m\) and \(t\in(0,s]\), then_
\[|\{W\in G(d,m)\colon|\pi_{W}(E)|\leq q^{t}/10\}|\leq\frac{1}{2}q^{m(d-m-1)+t}.\]
(2) _If \(s>m\), then_
\[|\{W\in G(d,m)\colon|\pi_{W}(E)|\leq q^{m}/10\}|\leq\frac{1}{2}q^{m(d-m+1)-s}.\]
(3) _If \(s>2m\), then_
\[\{W\in G(d,m)\colon|\pi_{W}(E)|\neq q^{m}\}\leq 4q^{(d-m)(m+1)-s}.\]
As mentioned earlier, we study the following question.
**Question 1.29**.: _Let \(A,B\subset\mathbb{F}_{q}^{d}\) and \(m\) be a positive integer._
(1) _If \(|A|,|B|>q^{m}\), then under what conditions on \(A\) and \(B\) can we have_
\[|\pi_{W}(A)\cap\pi_{W}(B)|\gg q^{m}\]
_for almost every_ \(W\in G(d,m)\)_?_
2. _If_ \(|B|<q^{m}<|A|\)_, then under what conditions on_ \(A\) _and_ \(B\) _can we have_ \[|\pi_{W}(A)\cap\pi_{W}(B)|\gg|B|\] _for almost every_ \(W\in G(d,m)\)_?_
The next theorem provides a partial optimal solution to this question.
**Theorem 1.30**.: _Let \(A,B\subset\mathbb{F}_{q}^{d}\) and \(m\) be a positive integer._
1. _If_ \(|A|,|B|>q^{m}\)_, then there are at least_ \(\gg q^{m(d-m)}\) _subspaces_ \(W\in G(d,m)\) _such that_ \[|\pi_{W}(A)\cap\pi_{W}(B)|\gg q^{m}.\]
2. _If_ \(|A|,|B|>q^{2m}\)_, then there are at least_ \(\gg q^{m(d-m)}\) _subspaces_ \(W\in G(d,m)\) _such that_ \[|\pi_{W}(A)\cap\pi_{W}(B)|=q^{m}.\]
3. _Let_ \(m\geq d/2\)_. If_ \(|A|>100q^{m}\)_,_ \(|B|<q^{m}/2\)_, and_ \(|A||B|>160q^{2m}\)_, then there are at least_ \(\gg q^{m(d-m)}\) _subspaces_ \(W\in G(d,m)\) _such that_ \[|\pi_{W}(A)\cap\pi_{W}(B)|\gg|B|.\]
We now discuss the sharpness of this theorem. In the first statement, it is clear that the condition \(|A|,|B|>q^{m}\) cannot be replaced by \(|A|,|B|>q^{m-\epsilon}\) for any \(\epsilon>0\). The second statement is sharp when \(d=2\) and \(m=1\). To see this, let \(K\) be a Kakeya set in \(\mathbb{F}_{q}^{2}\) of size \(\frac{q^{2}-1}{2}+q\), i.e. a set containing a full line in every direction; such an example can be found in [23]. Set \(A=B=\mathbb{F}_{q}^{2}\setminus K\). It is clear that \(|\pi_{W}(A)|,|\pi_{W}(B)|\neq q\) for all directions \(W\). The third statement is also sharp in \(\mathbb{F}_{q}^{2}\) in the following sense: for any \(\epsilon>0\) and any positive constant \(c\), there exist \(A,B\subset\mathbb{F}_{q}^{2}\) with \(|A||B|=q^{2-\epsilon}\) such that there are at most \(cq^{m(d-m)}\) subspaces \(W\in G(d,m)\) with \(|\pi_{W}(A)\cap\pi_{W}(B)|\gg|B|\). This is a long construction, so we omit it here and present it in detail in Section 8. When \(m>1\), we do not have any examples for its sharpness.
To keep this paper at a reasonable length, we do not make a full comparison between the results in this paper and those in the fractal setting. There is one crucial point we have to mention here: while Theorem 1.4, Theorem 1.5, and Theorem 1.12 are directly in line with Mattila's results in [18, Theorem 13.11] and [20], we are not aware of any results in the continuous setting that are similar to those in \(\mathbb{F}_{q}^{2}\) or \(\mathbb{F}_{p}^{2}\). This suggests that there might be room for improvements in \(\mathbb{R}^{2}\).
## 2. Preliminaries-key lemmas
Let \(f\colon\mathbb{F}_{q}^{n}\to\mathbb{C}\) be a complex valued function. The Fourier transform \(\widehat{f}\) of \(f\) is defined by
\[\widehat{f}(m)\colon=q^{-n}\sum_{x\in\mathbb{F}_{q}^{n}}\chi(-m\cdot x)f(x),\]
here, we denote by \(\chi\) a non-trivial additive character of \(\mathbb{F}_{q}\). Note that \(\chi\) satisfies the following orthogonality property
\[\sum_{\alpha\in\mathbb{F}_{q}^{n}}\chi(\beta\cdot\alpha)=\left\{\begin{array} []{ll}0&\text{if}\quad\beta\neq(0,\ldots,0),\\ q^{n}&\text{if}\quad\beta=(0,\ldots,0).\end{array}\right.\]
We also have the Fourier inversion formula as follows
\[f(x)=\sum_{m\in\mathbb{F}_{q}^{n}}\chi(m\cdot x)\widehat{f}(m).\]
With these notations in hand, the Plancherel theorem states that
\[\sum_{m\in\mathbb{F}_{q}^{n}}|\widehat{f}(m)|^{2}=q^{-n}\sum_{x\in\mathbb{F}_ {q}^{n}}|f(x)|^{2}.\]
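These identities are straightforward to sanity-check numerically for a prime field, taking \(\chi(t)=e^{2\pi it/p}\). The short sketch below is our own check; for a prime power \(q=p^{\ell}\) one would compose with the trace map, which it does not implement.

```python
import itertools, cmath

p, n = 5, 2  # prime field F_p, dimension n

def chi(t):
    """Canonical additive character of F_p."""
    return cmath.exp(2j * cmath.pi * (t % p) / p)

points = list(itertools.product(range(p), repeat=n))
f = {x: complex((3 * x[0] + x[1]) % 7) for x in points}  # arbitrary test function

def fhat(m):
    return sum(chi(-sum(mi * xi for mi, xi in zip(m, x))) * f[x]
               for x in points) / p**n

# Fourier inversion: f(x) = sum_m chi(m . x) fhat(m).
x0 = points[3]
inv = sum(chi(sum(mi * xi for mi, xi in zip(m, x0))) * fhat(m) for m in points)
print(abs(inv - f[x0]) < 1e-9)

# Plancherel: sum_m |fhat(m)|^2 = q^{-n} sum_x |f(x)|^2.
lhs = sum(abs(fhat(m)) ** 2 for m in points)
rhs = sum(abs(f[x]) ** 2 for x in points) / p**n
print(abs(lhs - rhs) < 1e-9)
```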
In this paper, we denote the quadratic character of \(\mathbb{F}_{q}\) by \(\eta\); precisely, for \(s\neq 0\), \(\eta(s)=1\) if \(s\) is a square and \(-1\) otherwise. The convention that \(\eta(0)=0\) will also be used in this paper.
This section is devoted to proving upper bounds of the following sum
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2} \ \text{ for any }A,B\subset\mathbb{F}_{q}^{d},\]
which is the key step in our proofs of incidence theorems.
### Results over arbitrary finite fields
We first start with a direct application of the Plancherel theorem.
**Theorem 2.1**.: _Let \(A\),\(B\) be sets in \(\mathbb{F}_{q}^{d}\). Then we have_
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2} \leq\frac{|A||B|}{q^{2d}}.\]
To improve this result, we need to recall a number of lemmas in the literature.
For any \(j\neq 0\), let \(S_{j}\) be the sphere centered at the origin of radius \(j\) defined as follows:
\[S_{j}\colon=\{x\in\mathbb{F}_{q}^{d}\colon x_{1}^{2}+\cdots+x_{d}^{2}=j\}.\]
The next lemma provides the precise form of the Fourier decay of \(S_{j}\) for any \(j\in\mathbb{F}_{q}\). A proof can be found in [9] or [13].
**Lemma 2.2**.: _For any \(j\in\mathbb{F}_{q}\), we have_
\[\widehat{S_{j}}(m)=q^{-1}\delta_{0}(m)+q^{-d-1}\eta^{d}(-1)G_{1}^{d}(\eta,\chi) \sum_{r\in\mathbb{F}_{q}^{*}}\eta^{d}(r)\chi\Big{(}jr+\frac{\|m\|}{4r}\Big{)},\]
_where \(\eta\) is the quadratic character, and \(\delta_{0}(m)=1\) if \(m=(0,\ldots,0)\) and \(\delta_{0}(m)=0\) otherwise. Moreover, for \(m,m^{\prime}\in\mathbb{F}_{q}^{d}\), we have_
\[\sum_{j\in\mathbb{F}_{q}}\widehat{S_{j}}(m)\overline{\widehat{S_{j}}(m^{\prime})}=\frac{\delta_{0}(m)\delta_{0}(m^{\prime})}{q}+\frac{1}{q^{d+1}}\sum_{s\in\mathbb{F}_{q}^{*}}\chi(s(||m||-||m^{\prime}||)).\]
For \(A\subset\mathbb{F}_{q}^{d}\), define
\[M^{*}(A)=\max_{j\neq 0}\sum_{m\in S_{j}}|\widehat{A}(m)|^{2},\text{ and }M(A)=\max_{j\in\mathbb{F}_{q}}\sum_{m\in S_{j}}|\widehat{A}(m)|^{2}.\]
We recall the following result from [13], which is known as the finite field analog of the spherical average in the classical Falconer distance problem [16, Chapter 3].
**Theorem 2.3**.: _Let \(A\subset\mathbb{F}_{q}^{d}\). We have_
(1) _If \(d=2\), then \(M^{*}(A)\ll q^{-3}|A|^{3/2}\)._
(2) _If \(d\geq 4\) even, then \(M^{*}(A)\ll\min\left\{\frac{|A|}{q^{d}},\ \frac{|A|}{q^{d+1}}+\frac{|A|^{2}}{q^{\frac{3d+1}{2}}}\right\}\)._

(3) _If \(d\geq 3\) odd, then \(M(A)\ll\min\left\{\frac{|A|}{q^{d}},\ \frac{|A|}{q^{d+1}}+\frac{|A|^{2}}{q^{\frac{3d+1}{2}}}\right\}\)._
In some specific dimensions, a stronger estimate was proved in [10] for the sphere of zero radius.
**Theorem 2.4**.: _Let \(A\subset\mathbb{F}_{q}^{d}\). Assume \(d\equiv 2\mod 4\) and \(q\equiv 3\mod 4\), then we have_
\[\sum_{m\in S_{0}}|\widehat{A}(m)|^{2}\ll\frac{|A|}{q^{d+1}}+\frac{|A|^{2}}{q^{\frac{3d+2}{2}}}.\]
We are now ready to improve Theorem 2.1.
**Theorem 2.5**.: _Let \(A,B\) be sets in \(\mathbb{F}_{q}^{d}\). Assume that either (\(d\geq 3\) odd) or (\(d\equiv 2\mod 4\) and \(q\equiv 3\mod 4\)), then the following hold._
(1) _If \(|A|\leq q^{\frac{d-1}{2}}\), then_
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2} \leq\frac{|A||B|}{q^{2d+1}}.\]
(2) _If \(q^{\frac{d-1}{2}}\leq|A|\leq q^{\frac{d+1}{2}}\), then_
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}\leq \frac{|A|^{2}|B|}{q^{\frac{5d+1}{2}}}.\]
Proof.: The proof follows directly from Theorem 2.3 and Theorem 2.4. More precisely,
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2} \leq\max_{t\in\mathbb{F}_{q}}\sum_{m\in\mathbb{S}_{t}}|\widehat{A}(m)|^{2}\cdot \sum_{m^{\prime}\in\mathbb{F}_{q}^{d}}|\widehat{B}(m^{\prime})|^{2}\leq\max_{ t\in\mathbb{F}_{q}}\sum_{m\in\mathbb{S}_{t}}|\widehat{A}(m)|^{2}\cdot\frac{|B|}{q^{d}}.\]
This completes the proof.
In two dimensions, we can obtain a better estimate as follows.
**Theorem 2.6**.: _Let \(A,B\) be sets in \(\mathbb{F}_{q}^{2}\). Assume in addition that \(q\equiv 3\mod 4\), then we have_
\[\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2} \ll\frac{|A||B|}{q^{5}}\cdot\min\{|A|^{1/2},|B|^{1/2}\}.\]
Proof.: We note that when \(q\equiv 3\mod 4\), the circle of radius zero contains only one point which is \((0,0)\), so
\[|\widehat{A}(0,0)|^{2}|\widehat{B}(0,0)|^{2}=\frac{|A|^{2}|B|^{2}}{q^{8}}.\]
Applying Theorem 2.3, as above, one has
\[\sum_{||m||=||m^{\prime}||\neq 0}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2 }\leq M^{*}(A)\sum_{m}|\widehat{B}(m)|^{2}\ll\frac{|A|^{3/2}|B|}{q^{5}},\]
and
\[\sum_{||m||=||m^{\prime}||\neq 0}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2 }\leq M^{*}(B)\sum_{m}|\widehat{A}(m)|^{2}\leq\frac{|A||B|^{3/2}}{q^{5}}.\]
Thus, the theorem follows.
### Results over prime fields
To improve Theorem 2.6 over prime fields, we need to introduce the following notation.
For \(P\subset\mathbb{F}_{q}^{2d}\), define
\[N(P):=\#\{(x,y,u,v)\in P\times P:||x-u||=||y-v||\}.\]
We note that \(N(A\times B)\) counts the number of pairs in \(A\times A\) and \(B\times B\) of the same distance. This quantity is not the same as the \(L^{2}\) distance estimate of the set \(A\times B\).
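On small examples, \(N(P)\) can be computed directly from the definition. The brute-force sketch below is our own illustration, with \(||v||=v_{1}^{2}+\cdots+v_{d}^{2}\) reduced mod \(p\) as elsewhere in the paper; it is convenient for checking the bounds of this section numerically.

```python
import itertools

p = 5

def norm(v):
    return sum(c * c for c in v) % p

def N(P):
    """N(P) = #{(x, y, u, v) in P x P : ||x - u|| = ||y - v||}."""
    return sum(1 for (x, y) in P for (u, v) in P
               if norm([(a - b) % p for a, b in zip(x, u)])
               == norm([(a - b) % p for a, b in zip(y, v)]))

A = [(0, 0), (1, 2), (3, 3)]
B = [(1, 1), (2, 0)]
P = [(x, y) for x in A for y in B]
# Compare against the prediction |P|^2 / p + O(p^d |P|) of Theorem 2.17.
print(N(P), len(P) ** 2 / p)
```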
**Theorem 2.7**.: _For \(P=A\times B\) with \(A,B\subset\mathbb{F}_{q}^{d}\), we have_

\[N(P)=\frac{|A|^{2}|B|^{2}}{q}-q^{3d-1}\sum_{||m||\neq||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}+q^{3d}\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}.\]

Proof.: Grouping the quadruples counted by \(N(P)\) according to the common value \(j\) of \(||x-u||=||y-v||\) and expanding the indicator function of each sphere \(S_{j}\) in Fourier, one obtains

\[N(P)=q^{4d}\sum_{m,m^{\prime}}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}\sum_{j\in\mathbb{F}_{q}}\widehat{S_{j}}(m)\overline{\widehat{S_{j}}(m^{\prime})},\]

and the inner sum is evaluated by the second identity in Lemma 2.2.
Therefore,
\[N(P) =\frac{|A|^{2}|B|^{2}}{q}+q^{3d-1}\sum_{m,m^{\prime}}|\widehat{A}(m)| ^{2}|\widehat{B}(m^{\prime})|^{2}\sum_{s\neq 0}\chi(s(||m||-||m^{\prime}||))\] \[=\frac{|A|^{2}|B|^{2}}{q}-q^{3d-1}\sum_{||m||\neq||m^{\prime}||}| \widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}+q^{3d}\sum_{||m||=||m^{\prime }||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}.\]
This completes the proof.
**Theorem 2.8**.: _For \(A\subset\mathbb{F}_{p}^{2}\), \(p\equiv 3\mod 4\), with \(|A|\ll p^{4/3}\), and \(P=A\times A\), we have_
\[N(P)=\#\{(x,y,z,w)\in A^{4}:||x-y||=||z-w||\}\leq\frac{|A|^{4}}{p}+C\min\left\{ p^{2/3}|A|^{8/3}+p^{1/4}|A|^{3},|A|^{10/3}\right\},\]
_for some large constant \(C>0\)._
**Corollary 2.9**.: _For \(A\subset\mathbb{F}_{p}^{2}\), \(p\equiv 3\mod 4\), with \(|A|\ll p^{4/3}\), and \(P=A\times A\), there exists a large constant \(C\) such that the following hold._
(1) _If \(|A|\leq p\), then \(N(P)\leq C|A|^{10/3}\)._
(2) _If \(p\leq|A|\leq p^{5/4}\), then \(N(P)\leq p^{-1}|A|^{4}+Cp^{2/3}|A|^{8/3}\)._
(3) _If \(p^{5/4}\leq|A|\leq p^{3/2}\), then \(N(P)\leq p^{-1}|A|^{4}+Cp^{1/4}|A|^{3}\)._
_Remark 2.10_.: We want to add a remark here that [24, Theorem 4] presents a bound on the number of isosceles triangles in \(A\). This implies the bound for \(N(P)\) as stated in Theorem 2.8 since \(N(P)\) is at most the number of isosceles triangles times the size of \(A\) by the Cauchy-Schwarz inequality.
**Theorem 2.11**.: _Let \(A,B\subset\mathbb{F}_{p}^{2}\) with \(|A|\leq|B|\) and \(p\equiv 3\mod 4\). Define_
\[I_{A,B}:=p^{6}\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{B}(m^{ \prime})|^{2}.\]
_Then the following hold._
(1) _If \(|A|\leq p\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(I_{A,B}\ll p^{1/8}(p|A|^{2}+|A|^{10/3})^{1/2}|B|^{3/2}\)._

(2) _If \(|A|\leq p\) and \(p\leq|B|\leq p^{5/4}\), then \(I_{A,B}\ll(p|A|^{2}+|A|^{10/3})^{1/2}\cdot p^{1/3}|B|^{4/3}\)._

(3) _If \(|A|\leq p\) and \(|B|\leq p\), then \(I_{A,B}\leq(p|A|^{2}+|A|^{10/3})^{1/2}\cdot(p|B|^{2}+|B|^{10/3})^{1/2}\)._

(4) _If \(p\leq|A|\leq p^{5/4}\) and \(p\leq|B|\leq p^{5/4}\), then \(I_{A,B}\ll p^{2/3}|A|^{4/3}|B|^{4/3}\)._

(5) _If \(p\leq|A|\leq p^{5/4}\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(I_{A,B}\ll p^{11/24}|A|^{4/3}|B|^{3/2}\)._

(6) _If \(p^{5/4}\leq|A|\leq p^{4/3}\) and \(p^{5/4}\leq|B|\leq p^{4/3}\), then \(I_{A,B}\ll p^{1/4}|A|^{3/2}|B|^{3/2}\)._
(7) _If \(|A|\leq p\) and \(|B|>p^{4/3}\), then \(I_{A,B}\ll p^{1/2}(p|A|^{2}+|A|^{10/3})^{1/2}|B|^{5/4}\)._

(8) _If \(p\leq|A|\leq p^{5/4}\) and \(|B|>p^{4/3}\), then \(I_{A,B}\ll p^{5/6}|A|^{4/3}|B|^{5/4}\)._

(9) _If \(p^{5/4}\leq|A|\leq p^{4/3}\) and \(|B|>p^{4/3}\), then \(I_{A,B}\ll p^{5/8}|A|^{3/2}|B|^{5/4}\)._
To see how good this theorem is, we need to compare it with the results obtained from Theorem 2.5 and Theorem 2.6. More precisely, the two theorems give
\[I_{A,B}\ll\begin{cases}q|A|^{3/2}|B|&\text{ if }|A|\geq q\\ q^{1/2}|A|^{2}|B|&\text{ if }q^{\frac{1}{2}}\leq|A|<q\\ q|A||B|&\text{ if }|A|<q^{\frac{1}{2}}\end{cases}.\]
The following table gives the information we need.
Proof.: We have
\[I_{A,B} =p^{6}\sum_{t\in\mathbb{F}_{p}}\sum_{m,m^{\prime}\in S_{t}}|\widehat {A}(m)|^{2}|\widehat{B}(m^{\prime})|^{2}=p^{6}\sum_{t\in\mathbb{F}_{p}}\left( \sum_{m\in S_{t}}|\widehat{A}(m)|^{2}\right)\cdot\left(\sum_{m^{\prime}\in S _{t}}|\widehat{B}(m^{\prime})|^{2}\right)\] \[\leq p^{6}\left(\sum_{t\in\mathbb{F}_{p}}\sum_{m,m^{\prime}\in S _{t}}|\widehat{A}(m)|^{2}|\widehat{A}(m^{\prime})|^{2}\right)^{1/2}\cdot\left( \sum_{t\in\mathbb{F}_{p}}\sum_{m,m^{\prime}\in S_{t}}|\widehat{B}(m)|^{2}| \widehat{B}(m^{\prime})|^{2}\right)^{1/2}\] \[=I_{A,A}^{1/2}\cdot I_{B,B}^{1/2}.\]
From Theorem 2.7, one has
\[I_{A,A}=N(A\times A)+p^{5}\sum_{||m||\neq||m^{\prime}||}|\widehat{A}(m)|^{2}| \widehat{A}(m^{\prime})|^{2}-\frac{|A|^{4}}{p},\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(|A|\leq p^{\frac{1}{2}}\) & \(p^{\frac{1}{2}}<|A|\leq p^{\frac{3}{4}}\) & \(p^{\frac{3}{4}}<|A|\leq p\) & \(p<|A|\leq p^{\frac{5}{4}}\) & \(p^{\frac{5}{4}}<|A|\leq p^{\frac{4}{3}}\) \\ \hline \(|B|\leq p^{\frac{3}{4}}\) & \(=\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline \(p^{\frac{3}{4}}<|B|\leq p\) & \(\varnothing\) & \(|A|^{3}>|B|^{2}\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline \(p<|B|\leq p^{\frac{5}{4}}\) & \(\varnothing\) & \(|A|>(p|B|)^{\frac{1}{3}}\) & \(\surd\) & \(\surd\) & \(\surd\) \\ \hline \(p^{\frac{5}{4}}<|B|\leq p^{\frac{4}{3}}\) & \(\varnothing\) & \(\varnothing\) & \(|B|<p^{\frac{3}{4}}|A|^{\frac{2}{3}}\) & \(\surd\) & \(\surd\) \\ \hline \(p^{\frac{4}{3}}<|B|\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(|B|^{3}<p^{2}|A|^{2}\) & \(|B|<p^{\frac{3}{2}}\) \\ \hline \end{tabular}
\end{table}
Table 3. In this table, by " \(=\) ” we mean the same result, by “\(\surd\)” we mean better result, by “\(\varnothing\)” we mean weaker result, by “\(f(|A|,|B|)\)” we mean better result under the condition \(f(|A|,|B|)\), and by “\(\diagup\)” we mean invalid range corresponding to \(|B|\leq|A|\).
and
\[I_{B,B}=N(B\times B)+p^{5}\sum_{||m||\neq||m^{\prime}||}|\widehat{B}(m)|^{2}| \widehat{B}(m^{\prime})|^{2}-\frac{|B|^{4}}{p}.\]
By the Plancherel theorem, we know that
\[p^{5}\sum_{||m||\neq||m^{\prime}||}|\widehat{A}(m)|^{2}|\widehat{A}(m^{\prime} )|^{2}\leq p|A|^{2},\ p^{5}\sum_{||m||\neq||m^{\prime}||}|\widehat{B}(m)|^{2}| \widehat{B}(m^{\prime})|^{2}\leq p|B|^{2}.\]
Hence,
1. If \(|A|\leq p\), then Corollary 2.9 gives \(I_{A,A}\ll p|A|^{2}+|A|^{10/3}\).
2. If \(p\leq|A|\leq p^{5/4}\), then Corollary 2.9 gives \(I_{A,A}\ll p|A|^{2}+p^{2/3}|A|^{8/3}\ll p^{2/3}|A|^{8/3}\).
3. If \(p^{5/4}\leq|A|\leq p^{4/3}\), then Corollary 2.9 gives \(I_{A,A}\ll p|A|^{2}+p^{1/4}|A|^{3}\ll p^{1/4}|A|^{3}\).
The sum \(I_{B,B}\) is estimated in the same way, namely,
1. If \(|B|\leq p\), then Corollary 2.9 gives \(I_{B,B}\ll p|B|^{2}+|B|^{10/3}\).
2. If \(p\leq|B|\leq p^{5/4}\), then Corollary 2.9 gives \(I_{B,B}\ll p|B|^{2}+p^{2/3}|B|^{8/3}\ll p^{2/3}|B|^{8/3}\).
3. If \(p^{5/4}\leq|B|\leq p^{4/3}\), then Corollary 2.9 gives \(I_{B,B}\ll p|B|^{2}+p^{1/4}|B|^{3}\ll p^{1/4}|B|^{3}\).
When \(|B|>p^{4/3}\), we use Theorem 2.6 to get that \(I_{B,B}\ll p|B|^{5/2}\).
Combining these estimates gives us the desired result.
### An extension for general sets
In this subsection, we want to bound the sum
\[\sum_{\begin{subarray}{c}(m,m^{\prime})\neq(0,0),\\ ||m||=||m^{\prime}||\end{subarray}}|\widehat{P}(m,m^{\prime})|^{2}, \tag{2.1}\]
where \(P\) is a general set in \(\mathbb{F}_{q}^{2d}\).
**Theorem 2.12**.: _Let \(P\subset\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\). We have_
\[q^{3d-1}(q-1)\sum_{\begin{subarray}{c}(m,m^{\prime})\neq(0,0),\\ ||m||=||m^{\prime}||\end{subarray}}|\widehat{P}(m,m^{\prime})|^{2}\ll q^{d}|P|.\]
To prove this result, as in the prime field case, we use a double counting argument to bound \(N(P)\). To prove a connection between \(N(P)\) and the sum (2.1), a number of results on exponential sums are needed.
For each \(a\in\mathbb{F}_{q}\setminus\{0\}\), the Gauss sum \(\mathcal{G}_{a}\) is defined by
\[\mathcal{G}_{a}=\sum_{t\in\mathbb{F}_{q}\setminus\{0\}}\eta(t)\chi(at).\]
The next lemma presents the explicit form of the Gauss sum, which can be found in [15, Theorem 5.15].
**Lemma 2.13**.: _Let \(\mathbb{F}_{q}\) be a finite field of order \(q=p^{\ell}\), where \(p\) is an odd prime and \(\ell\in\mathbb{N}\). We have_
\[\mathcal{G}_{1}=\left\{\begin{array}{ll}(-1)^{\ell-1}q^{\frac{1}{2}}&\text{ if }\quad p\equiv 1\mod 4\\ (-1)^{\ell-1}i^{\ell}q^{\frac{1}{2}}&\text{ if }\quad p\equiv 3\mod 4. \end{array}\right.\]
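For \(\ell=1\), i.e. \(q=p\) prime, the lemma predicts \(\mathcal{G}_{1}=\sqrt{p}\) when \(p\equiv 1\mod 4\) and \(\mathcal{G}_{1}=i\sqrt{p}\) when \(p\equiv 3\mod 4\). This is quick to confirm numerically, computing \(\eta\) via Euler's criterion; the check below is our own and plays no role in the proofs.

```python
import cmath

def gauss_sum(p):
    """G_1 = sum_{t != 0} eta(t) chi(t) over F_p with chi(t) = e^{2 pi i t / p}."""
    def eta(t):  # quadratic character via Euler's criterion
        return 1 if pow(t, (p - 1) // 2, p) == 1 else -1
    return sum(eta(t) * cmath.exp(2j * cmath.pi * t / p) for t in range(1, p))

for p in (5, 13, 7, 11):
    predicted = cmath.sqrt(p) if p % 4 == 1 else 1j * cmath.sqrt(p)
    print(p, abs(gauss_sum(p) - predicted) < 1e-9)
```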
We also need the following simple lemma; its proof can be found in [14].
**Lemma 2.14**.: _For \(\beta\in\mathbb{F}_{q}^{k}\) and \(s\in\mathbb{F}_{q}\setminus\{0\}\), we have_
\[\sum_{\alpha\in\mathbb{F}_{q}^{k}}\chi(s\alpha\cdot\alpha+\beta\cdot\alpha)= \eta^{k}(s)\mathcal{G}_{1}^{k}\chi\left(\frac{||\beta||}{-4s}\right).\]
Let \(V\subset\mathbb{F}_{q}^{2d}\) be the variety defined by
\[x_{1}^{2}+\cdots+x_{d}^{2}-y_{1}^{2}-\cdots-y_{d}^{2}=0.\]
The Fourier transform of \(V\) can be computed explicitly in the following lemma.
**Lemma 2.15**.: _Let \((m,m^{\prime})\in\mathbb{F}_{q}^{2d}\)._
(1) _If \((m,m^{\prime})=(0,0)\), then_
\[\widehat{V}(m,m^{\prime})=\frac{1}{q}+\frac{q^{d}(q-1)}{q^{2d+1}}.\]
(2) _If \((m,m^{\prime})\neq(0,0)\) and \(||m||=||m^{\prime}||\), then_
\[\widehat{V}(m,m^{\prime})=\frac{q^{d}(q-1)}{q^{2d+1}}.\]
(3) _If \((m,m^{\prime})\neq(0,0)\) and \(||m||\neq||m^{\prime}||\), then_
\[\widehat{V}(m,m^{\prime})=\frac{-1}{q^{d+1}}.\]
Proof.: By Lemma 2.14, we have
\[\widehat{V}(m,m^{\prime})=\frac{1}{q^{2d}}\sum_{x,y\in\mathbb{F}_{q}^{d}}V(x,y)\chi(-x\cdot m-y\cdot m^{\prime})\] \[=\frac{1}{q^{2d+1}}\sum_{x,y\in\mathbb{F}_{q}^{d}}\sum_{s\in \mathbb{F}_{q}}\chi(s(x_{1}^{2}+\cdots+x_{d}^{2}-y_{1}^{2}-\cdots-y_{d}^{2})) \chi(-x_{1}m_{1}-\cdots-x_{d}m_{d})\chi(-y_{1}m_{1}^{\prime}-\cdots-y_{d}m_{d} ^{\prime})\] \[=\frac{1}{q^{2d+1}}\sum_{x,y}\chi(-x\cdot m)\chi(-y\cdot m^{ \prime})+\frac{1}{q^{2d+1}}\mathcal{G}_{1}^{2d}\eta^{d}(-1)\sum_{s\neq 0}\chi \left(\frac{1}{4s}(||m||-||m^{\prime}||)\right).\]
By Lemma 2.13, we have \(\mathcal{G}_{1}^{2d}\eta^{d}(-1)=q^{d}\). Thus, the lemma follows from the orthogonality of the character \(\chi\).
In the following, we compute \(N(P)\) explicitly which is helpful to estimate the sum (2.1).
**Lemma 2.16**.: _For \(P\subset\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\), we have_
\[N(P)=\left(\frac{1}{q}+\frac{q-1}{q^{d+1}}\right)|P|^{2}+q^{3d-1}(q-1)\sum_{ \begin{subarray}{c}(m,m^{\prime})\neq(0,0),\\ ||m||=||m^{\prime}||\end{subarray}}|\widehat{P}(m,m^{\prime})|^{2}-q^{3d-1} \sum_{||m||\neq||m^{\prime}||}|\widehat{P}(m,m^{\prime})|^{2}.\]
Proof.: We have
\[N(P)=\sum_{x,u,y,v}P(x,u)P(y,v)V(x-y,u-v)\] \[=\sum_{x,u,y,v}P(x,u)P(y,v)\sum_{m,m^{\prime}}\widehat{V}(m,m^{ \prime})\chi((x-y)m+(u-v)m^{\prime})\] \[=q^{4d}\sum_{m,m^{\prime}}\widehat{V}(m,m^{\prime})|\widehat{P}(m,m^{\prime})|^{2}.\]
By Lemma 2.15, we obtain
\[N(P)=\left(\frac{1}{q}+\frac{q-1}{q^{d+1}}\right)|P|^{2}+q^{3d-1}(q-1)\sum_{ \begin{subarray}{c}(m,m^{\prime})\neq(0,0),\\ ||m||=||m^{\prime}||\end{subarray}}|\widehat{P}(m,m^{\prime})|^{2}-q^{3d-1} \sum_{||m||\neq||m^{\prime}||}|\widehat{P}(m,m^{\prime})|^{2},\]
and the lemma follows.
We now bound \(N(P)\) by a different argument.
**Theorem 2.17**.: _For \(P\subset\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\). We have_
\[\left|N(P)-\frac{|P|^{2}}{q}\right|\ll q^{d}|P|.\]
_Remark 2.18_.: This theorem is sharp in odd dimensions. More precisely, it cannot be improved to the form
\[N(P)\ll\frac{|P|^{2}}{q}+q^{d-\epsilon}|P|,\]
for any \(\epsilon>0\). Otherwise, it would imply that any set \(A\subset\mathbb{F}_{q}^{d}\) with \(|A|\gg q^{\frac{d+1}{2}-\frac{\epsilon}{2}}\) determines at least \(\gg q\) distances, which is not possible due to examples in [9, 11].
Proof.: To prove this theorem, we start with the observation that
\[||x-u||=||y-v||\]
can be written as
\[-2x\cdot u+2y\cdot v=||y||+||v||-||x||-||u||.\]
We now write \(N(P)\) as follows
\[N(P) =\frac{1}{q}\sum_{s\in\mathbb{F}_{q}}\sum_{\begin{subarray}{c}(x,y) \in P,\\ (u,v)\in P\end{subarray}}\chi(s(-2x\cdot u+2y\cdot v-||y||-||v||+||x||+||u||))\] \[=\frac{|P|^{2}}{q}+\frac{1}{q}\sum_{s\neq 0}\sum_{\begin{subarray} {c}(x,y)\in P,\\ (u,v)\in P\end{subarray}}\chi(s(-2x\cdot u+2y\cdot v-||y||-||v||+||x||+||u||))\] \[=\frac{|P|^{2}}{q}+\frac{1}{q}\sum_{s\neq 0}\sum_{\begin{subarray} {c}(x,y,||x||-||y||)\in P^{\prime}\\ (u,v)\in P\end{subarray}}\chi(s((x,y)\cdot(-2u,2v)-||y||-||v||+||x||+||u||))\] \[=\frac{|P|^{2}}{q}+\operatorname{Error},\]
here \(P^{\prime}:=\{(x,y,||x||-||y||)\colon(x,y)\in P\}\subset\mathbb{F}_{q}^{2d+1}\). We now estimate the term \(\operatorname{Error}\).
\[\operatorname{Error}^{2}\leq\frac{|P|}{q^{2}}\sum_{(x,y,t)\in \mathbb{F}_{q}^{2d+1}}\sum_{s,s^{\prime}\neq 0}\sum_{\begin{subarray}{c}(u,v) \in P,\\ (u^{\prime},v^{\prime})\in P\end{subarray}}\chi(s(-2x\cdot u+2y\cdot v-||y||- ||v||+||x||+||u||))\] \[\cdot\chi\left(s^{\prime}(2x\cdot u^{\prime}-2y\cdot v^{\prime}+|| y||+||v^{\prime}||-||x||-||u^{\prime}||)\right)\] \[=\frac{|P|}{q^{2}}\sum_{(x,y,t)\in\mathbb{F}_{q}^{2d+1}}\sum_{s,s ^{\prime}\neq 0}\sum_{\begin{subarray}{c}(u,v)\in P,\\ (u^{\prime},v^{\prime})\in P\end{subarray}}\chi(x\cdot(-2su+2s^{\prime}u^{ \prime}))\cdot\chi(y\cdot(2sv-2s^{\prime}v^{\prime}))\cdot\chi(t(s-s^{\prime}))\] \[\cdot\chi(s(||u||-||v||)-s^{\prime}(||u^{\prime}||-||v^{\prime}||))\] \[\leq|P|^{2}q^{2d}.\]
In other words, we obtain
\[N(P)\leq\frac{|P|^{2}}{q}+q^{d}|P|.\]
This completes the proof.
With Lemma 2.16 and Theorem 2.17 in hand, we are ready to prove Theorem 2.12.
Proof of Theorem 2.12.: Indeed, one has
\[q^{3d-1}(q-1)\sum_{\begin{subarray}{c}(m,m^{\prime})\neq(0,0),\\ ||m||=||m^{\prime}||\end{subarray}}|\widehat{P}(m,m^{\prime})|^{2}=N(P)-\frac {|P|^{2}}{q}-\frac{q-1}{q^{d+1}}|P|^{2}+q^{3d-1}\sum_{||m||\neq||m^{\prime}||} |\widehat{P}(m,m^{\prime})|^{2}.\]
By Plancherel theorem, we have
\[\sum_{||m||\neq||m^{\prime}||}|\widehat{P}(m,m^{\prime})|^{2}\leq\frac{|P|}{q^{2d}}.\]
So, the theorem follows directly from Theorem 2.17.
## 3. Warm-up incidence theorems
In this section, we present direct incidence bounds which can be proved by using the Cauchy-Schwarz inequality and the results on \(N(P)\) from the previous section.
**Theorem 3.1**.: _Let \(P\) be a set of points in \(\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\) and \(R\) be a set of rigid motions in \(\mathbb{F}_{q}^{d}\). Then we have_
\[I(P,R)\ll|R|^{1/2}|O(d-1)|^{1/2}\left(\frac{|P|^{2}}{q}+Cq^{d}|P|\right)^{1/2} +|R|.\]
Proof.: For each \(r\in R\), denote \(I(P,r)\) by \(i(r)\). Then, it is clear that
\[I(P,R)=\sum_{r\in R}i(r)\leq|R|^{1/2}\left(\sum_{r\in R}i(r)^{2}\right)^{1/2}. \tag{3.1}\]
We observe that, for each \(r\in R\), \(i(r)^{2}\) counts the number of pairs \((a_{1},b_{1}),(a_{2},b_{2})\in P\) incident to \(r\). This implies \(||a_{1}-a_{2}||=||b_{1}-b_{2}||\). Thus, the sum \(\sum_{r\in R}i(r)^{2}\) can be bounded by
\[|O(d-1)|N(P)+I(P,R),\]
where we used the fact that the stabilizer of a non-zero element is at most \(|O(d-1)|\), and the term \(I(P,R)\) comes from pairs \((a_{1},b_{1}),(a_{2},b_{2})\in P\) with \(a_{1}=a_{2}\) and \(b_{1}=b_{2}\). Therefore,
\[\sum_{r\in R}i(r)^{2}\ll|O(d-1)|N(P)+I(P,R).\]
Using Theorem 2.17, the theorem follows.
If we use the trivial bound \(N(P)\leq|P|^{2}\), then the next theorem is obtained.
**Theorem 3.2**.: _Let \(P\) be a set of points in \(\mathbb{F}_{q}^{d}\times\mathbb{F}_{q}^{d}\) and \(R\) be a set of rigid motions in \(\mathbb{F}_{q}^{d}\). Then we have_
\[I(P,R)\ll|P||R|^{1/2}|O(d-1)|^{1/2}+|R|.\]
Compared to Theorem 1.14 and Theorem 1.15, these two incidence theorems only give weaker upper bounds and tell us nothing about the lower bounds.
In two dimensions over prime fields, if \(P=A\times B\) with \(|A|,|B|\leq p\), then Corollary 2.9 says that \(N(P)\ll|A|^{5/3}|B|^{5/3}\). As above, the next theorem is a direct consequence.
**Theorem 3.3**.: _Let \(P=A\times B\) with \(A,B\subset\mathbb{F}_{p}^{2}\) and \(p\equiv 3\mod 4\). Assume that \(|A|,|B|\leq p\), then we have_
\[I(P,R)\ll|P|^{5/6}|R|^{1/2}+|R|.\]
_In particular, if \(|P|=|R|=N\) then_
\[I(P,R)\ll N^{4/3}.\]
## 4. Incidence theorems: proofs
Let us present a framework that will work for most cases.
We have
\[I(P,R)= \sum_{\begin{subarray}{c}(x,y)\in P,\\ (g,z)\in R\end{subarray}}1_{x=gy+z}=\frac{1}{q^{d}}\sum_{m\in\mathbb{F}_{q}^{ d}}\sum_{\begin{subarray}{c}(x,y)\in P,\\ (g,z)\in R\end{subarray}}\chi(m\cdot(x-gy-z))\] \[= \frac{|P||R|}{q^{d}}+\frac{1}{q^{d}}\sum_{m\in\mathbb{F}_{q}^{d} \setminus\{0\}}\sum_{\begin{subarray}{c}(x,y)\in P,\\ (g,z)\in R\end{subarray}}\chi(m\cdot(x-gy-z))\] \[= \frac{|P||R|}{q^{d}}+q^{d}\sum_{m\neq 0}\sum_{(g,z)\in R}\widehat{P }(-m,gm)\chi(-mz)=:I+II,\]
where \(\widehat{P}(u,v)=q^{-2d}\sum_{(x,y)\in P}\chi(-xu-yv)\). We next bound the second term. By the Cauchy-Schwarz inequality, we have
\[II \leq q^{d}|R|^{1/2}\left(\sum_{(g,z)\in O(d)\times\mathbb{F}_{q}^ {d}}\sum_{m_{1},m_{2}\neq 0}\widehat{P}(-m_{1},gm_{1})\overline{\widehat{P}(-m_{ 2},gm_{2})}\chi(z(-m_{1}+m_{2}))\right)^{1/2}\] \[= q^{d}|R|^{1/2}\left(q^{d}\sum_{g\in O(d)}\sum_{m\neq 0}|\widehat{ P}(m,-gm)|^{2}\right)^{1/2}\] \[= q^{\frac{3d}{2}}|R|^{1/2}\left(\sum_{g\in O(d)}\sum_{m\neq 0}| \widehat{P}(m,-gm)|^{2}\right)^{1/2}.\]
We now consider two cases.
**Case 1:** If \(P\) is a general set in \(\mathbb{F}_{q}^{2d}\), then we have
\[\sum_{g\in O(d)}\sum_{m\neq 0}|\widehat{P}(m,-gm)|^{2}\leq|O(d-1)|\sum_{ \begin{subarray}{c}(m,m^{\prime})\neq(0,0),\\ ||m||=||m^{\prime}||\end{subarray}}|\widehat{P}(m,m^{\prime})|^{2},\]
where we used the fact that the stabilizer of a non-zero element in \(\mathbb{F}_{q}^{d}\) is at most \(|O(d-1)|\). From here, we apply Theorem 2.12 to obtain Theorem 1.14.
**Case 2:** If \(P\) is of the structure \(A\times B\), where \(A,B\subset\mathbb{F}_{q}^{d}\), then
\[\widehat{P}(m,-gm)=\widehat{A}(m)\widehat{B}(-gm).\]
Thus,
\[\sum_{g\in O(d)}\sum_{m\neq 0}|\widehat{P}(m,-gm)|^{2}=\sum_{g \in O(d)}\sum_{m\neq 0}|\widehat{A}(m)|^{2}|\widehat{B}(-gm)|^{2}\] \[\leq|O(d-1)|\sum_{m\neq 0}|\widehat{A}(m)|^{2}\sum_{\begin{subarray} {c}m\neq 0,\\ ||m^{\prime}||=||m||\end{subarray}}|\widehat{B}(m^{\prime})|^{2}\] \[\leq|O(d-1)|\sum_{||m||=||m^{\prime}||}|\widehat{A}(m)|^{2}| \widehat{B}(m^{\prime})|^{2},\]
where we again used the fact that the stabilizer of a non-zero element in \(\mathbb{F}_{q}^{d}\) is at most \(|O(d-1)|\).
From here, we apply Theorem 2.5, Theorem 2.6, and Theorem 2.11 to obtain Theorem 1.15, Theorem 1.16, Theorem 1.17, Theorem 1.18, and Theorem 1.19, except Theorem 1.17 (2) and (3).
To prove these two statements, we need to bound the sum \(\sum_{\begin{subarray}{c}g\in O(2),\\ m\neq 0\end{subarray}}|\widehat{P}(m,-gm)|^{2}\) in a different way. More precisely,
\[\sum_{\begin{subarray}{c}g\in O(2),\\ m\neq 0\end{subarray}}|\widehat{P}(m,-gm)|^{2}=\frac{1}{p^{8}}\sum_{\begin{subarray}{c}g\in O(2),\\ m\neq 0\end{subarray}}\sum_{x_{1},y_{1},x_{2},y_{2}}P(x_{1},y_{1})P(x_{2},y_{2})\chi(m\cdot(x_{1}-gy_{1}-x_{2}+gy_{2}))\] \[=\frac{1}{p^{8}}\left(p^{2}\sum_{g\in O(2)}\sum_{x_{1},x_{2},y_{1},y_{2}}P(x_{1},y_{1})P(x_{2},y_{2})1_{x_{1}-x_{2}=g(y_{1}-y_{2})}-|O(2)||P|^{2}\right).\]
Moreover,
\[\sum_{g\in O(2)}\sum_{x_{1},x_{2},y_{1},y_{2}}\,P(x_{1},y_{1})P(x_{2},y_{2})1 _{x_{1}-x_{2}=g(y_{1}-y_{2})}\leq|O(2)||P|+|N(P)|.\]
The first approach is equivalent to bounding \(N(P)\) by using Theorem 2.7 and Theorem 2.11.
If \(|A|\leq p\) and \(p\leq|B|\leq p^{5/4}\), then Theorem 2.11 tells us that
\[I_{A,B}\ll(p|A|^{2}+|A|^{10/3})^{1/2}\cdot p^{1/3}|B|^{4/3}.\]
As a consequence, one has
\[N(P)\leq\frac{|A|^{2}|B|^{2}}{p}+(p|A|^{2}+|A|^{10/3})^{1/2}\cdot p^{1/3}|B|^{ 4/3}.\]
However, when \(|A|\leq p\) and \(p\leq|B|\leq p^{5/4}\), by the Cauchy-Schwarz inequality, a better upper bound can be obtained. Indeed, using Corollary 2.9, we have
\[N(P)\leq N(A\times A)^{1/2}N(B\times B)^{1/2}\ll p^{1/3}|A|^{5/3}|B|^{4/3}.\]
Together with the above estimates, we obtain
\[\sum_{\begin{subarray}{c}g\in O(2),\\ m\neq 0\end{subarray}}|\widehat{P}(m,-gm)|^{2}\leq\frac{|P|}{p^{5}}+\frac{|P|^{4/ 3}|A|^{1/3}}{p^{17/3}}.\]
This gives
\[\left|I(P,R)-\frac{|P||R|}{p^{2}}\right|\ll p^{3}|P|^{1/2}|R|^{1/2}\left(\frac {1}{p^{5}}+\frac{|P|^{1/3}|A|^{1/3}}{p^{17/3}}\right)^{1/2}.\]
Similarly, if \(|A|\leq p\) and \(|B|\leq p\), we have
\[N(P)\ll|A|^{5/3}|B|^{5/3},\]
and
\[\sum_{\begin{subarray}{c}g\in O(2),\\ m\neq 0\end{subarray}}|\widehat{P}(m,-gm)|^{2}\ll\frac{|P|}{p^{5}}+\frac{|P|^{5/3} }{p^{6}}.\]
Hence,
\[\left|I(P,R)-\frac{|P||R|}{p^{2}}\right|\ll p^{3}|P|^{1/2}|R|^{1/2}\left(\frac {1}{p^{5}}+\frac{|P|^{2/3}}{p^{6}}\right)^{1/2}.\]
### Sharpness of Theorem 1.14 and Theorem 1.15 in odd dimensions:
We first show that Theorem 1.14 is sharp up to a constant factor.
Let \(X\) be an arithmetic progression in \(\mathbb{F}_{q}\), and let \(v_{1},\ldots,v_{\frac{d-1}{2}}\) be \((d-1)/2\) vectors in \(\mathbb{F}_{q}^{d-1}\times\{0\}\) such that \(v_{i}\cdot v_{j}=0\) for all \(1\leq i\leq j\leq(d-1)/2\). The existence of such vectors can be found in Lemma 5.1 in [9] when (\(d=4k+1\)) or (\(d=4k+3\) with \(q\equiv 3\mod 4\)). Define
\[A=B=\mathbb{F}_{q}\cdot v_{1}+\cdots+\mathbb{F}_{q}\cdot v_{\frac{d-1}{2}}+X \cdot e_{d},\]
here \(e_{d}=(0,\ldots,0,1)\). Set \(P=A\times B\). The number of quadruples \((x,y,u,v)\in A\times A\times B\times B\) such that \(||x-y||=||u-v||\) is at least a constant times \(|X|^{2}q^{2d-1}\), say, \(|X|^{2}q^{2d-1}/2\). For each \((g,z)\in O(d)\times\mathbb{F}_{q}^{d}\), let \(i(g,z)=\#\{(u,v)\in A\times B:gu+z=v\}\). Define \(\mathcal{Q}=\sum_{(g,z)}i(g,z)^{2}\). So, \(\mathcal{Q}\geq|X|^{2}q^{2d-1}|O(d-1)|/2\).
We call \(g\) \(\mathbf{type-k}\), \(0\leq k\leq(d-1)/2\), if the rank of the system
\[\{v_{1},\ldots,v_{(d-1)/2},e_{d},gv_{1},\ldots,gv_{(d-1)/2}\}\]
is \(d-k\).
For any pair \((g,z)\), where \(g\) is \(\mathbf{type-0}\), the number of \((u,v)\in A\times B\) such that \(gu+z=v\) is at most \(|X|\).
For \(0<k\leq(d-1)/2\), if \(g\) is \(\mathbf{type-k}\), then we may assume that
\[gv_{1},\ldots,gv_{k}\in\operatorname{Span}(gv_{k+1},\ldots,gv_{\frac{d-1}{2}}, v_{1},\ldots,v_{\frac{d-1}{2}},e_{d}).\]
Let \(N(k)\) be the contribution to \(\mathcal{Q}\) of pairs \((g,z)\) such that \(g\) is \(\mathbf{type-k}\). Then \(N(k)\) is at most the number of \(\mathbf{type-k}\) elements \(g\) times \(|A|^{2}\). For each \(k\), to count the number of \(\mathbf{type-k}\) elements \(g\), we observe that \(||v_{i}||=0\), so \(||gv_{i}||=0\). The number of elements of norm zero in \(\operatorname{Span}(gv_{k+1},\ldots,gv_{\frac{d-1}{2}},v_{1},\ldots,v_{\frac{d-1}{2}},e_{d})\) is at most \(q^{d-k}\). So, the total number of \(\mathbf{type-k}\) elements \(g\) such that
\[gv_{1}\in\operatorname{Span}(gv_{k+1},\ldots,gv_{\frac{d-1}{2}},v_{1},\ldots, v_{\frac{d-1}{2}},e_{d})\]
is at most \(q^{d-k}|O(d-1)|\), which is, of course, larger than the number of \(g\) satisfying
\[gv_{1},\ldots,gv_{k}\in\operatorname{Span}(gv_{k+1},\ldots,gv_{\frac{d-1}{2}},v_{1},\ldots,v_{\frac{d-1}{2}},e_{d}).\]
Summing over all \(k\geq 1\) and the corresponding \(\mathbf{type-k}\) elements \(g\), the contribution to \(\mathcal{Q}\) from each \(k\) is at most \(|X|^{2}q^{d-1}q^{d-k}|O(d-1)|\leq|X|^{2}q^{d}|O(d)|q^{-k}\). So, the pairs \((g,z)\), where \(g\) is \(\mathbf{type-k}\) and \(k\geq 1\), contribute \(\ll|X|^{2}q^{d-1}|O(d)|\) in total, which is much smaller than \(\mathcal{Q}/2\). Thus, we can say that the contribution to \(\mathcal{Q}\) mainly comes from \(\mathbf{type-0}\) elements \(g\).
Let \(R\) be the set of pairs \((g,z)\in O(d)\times\mathbb{F}_{q}^{d}\) such that \(i(g,z)\geq 2\) and \(g\) is \(\mathbf{type-0}\).
Whenever \(|X|=cq\), \(0<c<1\), by a direct computation, Theorem 1.14 shows that
\[I(P,R)\leq Cq^{\frac{d^{2}-d+2}{4}}\sqrt{|P||R|}=Cq^{\frac{d^{2}+d}{4}}|X| \sqrt{|R|}\leq C|X||O(d)|q^{d}\,,\]
for some positive constant \(C\). This gives \(\mathcal{Q}\leq C|X|^{2}|O(d)|q^{d}\). This matches the lower bound of \(|X|^{2}q^{d}|O(d)|/2\) up to a constant factor.
We note that this example can also be used to show the sharpness of Theorem 1.15(2) in the same way.
For the sharpness of Theorem 1.15(1), let \(X\subset\mathbb{F}_{q}\) with \(|X|=cq\), \(0<c<1\). Set
\[A=X\cdot v_{1}+\cdots+X\cdot v_{\frac{d-1}{2}},\ B=X\cdot v_{1}+\cdots+X\cdot v _{\frac{d-1}{2}}+X\cdot e_{d},\]
where \(e_{d}=(0,\ldots,0,1)\). Since any vector in \(A-gB\) is of the form
\[-g(x_{1}v_{1}+\cdots+x_{\frac{d-1}{2}}v_{\frac{d-1}{2}})+y_{1}v_{1}+\cdots+y _{\frac{d-1}{2}}v_{\frac{d-1}{2}}-x_{\frac{d+1}{2}}ge_{d},\]
where \(x_{i},y_{i}\in X\), we have \(|A-gB|\leq|X|^{d}\leq c^{d}q^{d}\) for all \(g\in O(d)\). Let \(R\) be the set of \((g,z)\) such that \(z\in A-gB\). Then, we have \(|R|\leq c^{d}q^{d}|O(d)|\). Theorem 1.15(1) gives
\[I(P,R)\leq Cq^{\frac{d^{2}-d}{4}}\sqrt{|P||R|}\leq Cc^{d}q^{\frac{d^{2}+d}{2}},\]
for some positive constant \(C\).
On the other hand, by the definitions of \(A\), \(B\), and \(R\), we have
\[I(P,R)=|O(d)||P|=c^{d}q^{d}|O(d)|=c^{d}q^{\frac{d^{2}+d}{2}}.\]
This matches the incidence bound up to a constant factor.
## 5. Intersection pattern I: proofs
In this section, we prove Theorem 1.4, Theorem 1.5, and Theorem 1.6.
Proof of Theorem 1.4. Set
\[E_{1} =\left\{g\in O(d)\colon\#\{z\in\mathbb{F}_{q}^{d}\colon|A\cap(g( B)+z)|\leq\frac{|A||B|}{2q^{d}}\}\geq cq^{d}\right\}\] \[E_{2} =\left\{g\in O(d)\colon\#\{z\in\mathbb{F}_{q}^{d}\colon|A\cap(g( B)+z)|\geq\frac{3|A||B|}{2q^{d}}\}\geq cq^{d}\right\}\]
for some \(c\in(0,1)\).
We first show that \(|E_{1}|\ll\frac{|O(d-1)|q^{2d}}{c|A||B|}\). Indeed, let \(R_{1}\) be the set of pairs \((g,z)\) with \(g\in E_{1}\) and \(|A\cap(g(B)+z)|\leq|A||B|/2q^{d}\). It is clear that \(I(A\times B,R_{1})\leq\frac{|A||B||R_{1}|}{2q^{d}}\). On the other hand, Theorem 1.14 also tells us that
\[I(A\times B,R_{1})\geq\frac{|A||B||R_{1}|}{q^{d}}-C|O(d-1)|^{1/2}q^{d/2}\sqrt{ |A||B||R_{1}|},\]
for some positive constant \(C\). Thus, we have
\[|R_{1}|\leq\frac{4C^{2}|O(d-1)|q^{3d}}{|A||B|}.\]
Combining this with \(|R_{1}|\geq c|E_{1}|q^{d}\) implies the desired conclusion.
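For completeness, we record the elementary step behind the last bound (a routine manipulation, not spelled out in the original). Comparing the upper and lower bounds for \(I(A\times B,R_{1})\) gives
\[\frac{|A||B||R_{1}|}{2q^{d}}\leq C|O(d-1)|^{1/2}q^{d/2}\sqrt{|A||B||R_{1}|},\]
so \(\sqrt{|R_{1}|}\leq 2C|O(d-1)|^{1/2}q^{3d/2}/\sqrt{|A||B|}\), and squaring yields \(|R_{1}|\leq 4C^{2}|O(d-1)|q^{3d}/(|A||B|)\). The same computation applies verbatim to \(R_{2}\) below.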
Similarly, for \(E_{2}\), let \(R_{2}\) be the set of pairs \((g,z)\) with \(g\in E_{2}\) and \(|A\cap(g(B)+z)|\geq 3|A||B|/2q^{d}\). Then, \(I(A\times B,R_{2})\geq\frac{3|A||B||R_{2}|}{2q^{d}}\). By Theorem 1.14 again, we obtain
\[I(A\times B,R_{2})\leq\frac{|A||B||R_{2}|}{q^{d}}+C|O(d-1)|^{1/2}q^{d/2}\sqrt{ |A||B||R_{2}|},\]
and thus
\[|R_{2}|\leq\frac{4C^{2}|O(d-1)|q^{3d}}{|A||B|}.\]
The fact \(|R_{2}|\geq c|E_{2}|q^{d}\) implies our desired result \(|E_{2}|\ll\frac{|O(d-1)|q^{2d}}{c|A||B|}\).
Next, note that for any \(g\in O(d)\setminus(E_{1}\cup E_{2})\), by setting \(c=1/4\), we have
\[\#\{z\in\mathbb{F}_{q}^{d}:|A\cap(g(B)+z)|\geq\frac{|A||B|}{2q^{d}} \}\geq\frac{3}{4}q^{d}\] \[\#\{z\in\mathbb{F}_{q}^{d}:|A\cap(g(B)+z)|\leq\frac{3|A||B|}{2q^{d }}\}\geq\frac{3}{4}q^{d},\]
implying that there are at least \(q^{d}/2\) elements \(z\) satisfying
\[\frac{|A||B|}{2q^{d}}\leq|A\cap(g(B)+z)|\leq\frac{3|A||B|}{2q^{d}}.\]
Proof of Theorem 1.5. We consider the case when \(|A|<q^{\frac{d-1}{2}}\), since the other cases can be treated in the same way. We use the same notations as in the proof of Theorem 1.4. By Theorem 1.15, there exists \(C>0\) such that
\[I(A\times B,R_{1})\geq\frac{|A||B||R_{1}|}{q^{d}}-Cq^{(d^{2}-d)/4}\sqrt{|A||B|| R_{1}|},\]
and
\[I(A\times B,R_{2})\leq\frac{|A||B||R_{2}|}{q^{d}}+Cq^{(d^{2}-d)/4}\sqrt{|A||B|| R_{2}|}.\]
This, together with \(|R_{1}|\geq c|E_{1}|q^{d}\) and \(|R_{2}|\geq c|E_{2}|q^{d}\), implies that
\[|E_{1}|,|E_{2}|\ll\frac{q^{(d^{2}+d)/2}}{|A||B|}.\]
Arguing as in the proof of Theorem 1.4, the theorem follows.
Proof of Theorem 1.6. We use the same notations as in the proof of Theorem 1.4 for \(d=2\). By Theorem 1.16, there exists \(C>0\) such that
\[I(A\times B,R_{1})\geq\frac{|A||B||R_{1}|}{q^{2}}-Cq^{1/2}\sqrt{|A||B||R_{1}|} |A|^{1/4},\]
and
\[I(A\times B,R_{2})\leq\frac{|A||B||R_{2}|}{q^{2}}+Cq^{1/2}\sqrt{|A||B||R_{2}|} |A|^{1/4}.\]
This, together with \(|R_{1}|\geq c|E_{1}|q^{2}\) and \(|R_{2}|\geq c|E_{2}|q^{2}\), gives that
\[|E_{1}|,|E_{2}|\ll\frac{q^{3}}{|A|^{1/2}|B|}.\]
Arguing as in the proof of Theorem 1.4, the theorem follows.
Using Theorem 1.18 and Theorem 1.19 respectively, the proofs of Theorem 1.8 and Theorem 1.9 can be obtained in the same way.
### Sharpness of Theorem 1.4 and Theorem 1.5:
The constructions we present here are similar to those in the previous section. For the reader's convenience, we reproduce the details here.
It follows from the proof of Theorem 1.4 that for any \(0<c<1\), there exists \(C=C(c)\) such that if \(|A||B|\geq Cq^{d+1}\), then there are at least \((1-c)|O(d)|q^{d}\) pairs \((g,z)\in O(d)\times\mathbb{F}_{q}^{d}\) such that
\[\frac{|A||B|}{2q^{d}}\leq|A\cap(g(B)+z)|\leq\frac{3|A||B|}{2q^{d}}. \tag{5.1}\]
We now show that this result is sharp in the sense that for \(0<c<1\) small enough, say, \(8c<\frac{1-c}{9}\), there exist \(A,B\subset\mathbb{F}_{q}^{d}\) with \(|A|=|B|=|X|q^{\frac{d-1}{2}}\), \(cq<|X|<\frac{1-c}{9}q\), such that the number of pairs \((g,z)\in O(d)\times\mathbb{F}_{q}^{d}\) satisfying (5.1) is at most \((1-c)|O(d)|q^{d}\).
To be precise, let \(X\) be an arithmetic progression in \(\mathbb{F}_{q}\), and let \(v_{1},\ldots,v_{\frac{d-1}{2}}\) be \((d-1)/2\) vectors in \(\mathbb{F}_{q}^{d-1}\times\{0\}\) such that \(v_{i}\cdot v_{j}=0\) for all \(1\leq i\leq j\leq(d-1)/2\). The existence of such vectors can be found in Lemma 5.1 in [9] when \(d=4k+1\), or when \(d=4k+3\) with \(q\equiv 3\mod 4\). Define
\[A=B=\mathbb{F}_{q}\cdot v_{1}+\cdots+\mathbb{F}_{q}\cdot v_{\frac{d-1}{2}}+X \cdot e_{d},\]
here \(e_{d}=(0,\ldots,0,1)\).
We first note that the distance between two points in \(A\) or \(B\) is of the form \((x-x^{\prime})^{2}\). By a direct computation and the fact that \(X\) is an arithmetic progression, the number of quadruples \((x,y,u,v)\in A\times A\times A\times A\) such that \(||x-y||=||u-v||\), is at least a constant times \(|X|^{3}q^{2d-2}\), say, \(|X|^{3}q^{2d-2}/2\). For each \((g,z)\in O(d)\times\mathbb{F}_{q}^{d}\), let \(i(g,z)=\#\{(u,v)\in A\times B:gu+z=v\}\). Define \(\mathcal{Q}=\sum_{(g,z)}i(g,z)^{2}\). So, \(\mathcal{Q}\geq|X|^{3}q^{2d-2}|O(d-1)|/2\).
We note that \(|A|=|B|=|X|q^{\frac{d-1}{2}}\). If there were at least \((1-c)|O(d)|q^{d}\) pairs \((g,z)\in O(d)\times\mathbb{F}_{q}^{d}\) satisfying (5.1), then we could bound \(\mathcal{Q}\) in a different way, leading to a bound that is much smaller than \(|X|^{3}q^{2d-2}|O(d-1)|/2\), which gives a contradiction.
Choose \((1-c)|O(d)|q^{d}\) pairs \((g,z)\) satisfying (5.1). The contribution of these pairs to \(\mathcal{Q}\) is at most \(\frac{9(1-c)}{4}|O(d)|q^{d}\frac{|X|^{4}}{q^{2}}\).
The number of remaining pairs \((g,z)\) is at most \(c|O(d)|q^{d}\). We now compute the contribution of these pairs to \(\mathcal{Q}\). As before, we call \(g\)**type\(-\)k**, \(0\leq k\leq(d-1)/2\), if the rank of the system \(\{v_{1},\ldots,v_{(d-1)/2},e_{d},gv_{1},\ldots,gv_{(d-1)/2}\}\) is \(d-k\).
For any pair \((g,z)\), where \(g\) is **type\(-\)0**, the number of \((u,v)\in A\times B\) such that \(gu+z=v\) is at most \(|X|\). Thus, the contribution of pairs with **type\(-\)0**\(g\)s is at most \(c|O(d)|q^{d}|X|^{2}\).
As before, the contribution to \(\mathcal{Q}\) of \(\mathbf{type-k}\)\(g\)s, with \(k\geq 1\), is at most \(|X|^{2}q^{d-1}q^{d-k}|O(d-1)|\leq|X|^{2}q^{d}|O(d)|q^{-k}\). In other words, we have
\[\mathcal{Q} \leq\frac{9(1-c)}{4}|O(d)|q^{d}\frac{|X|^{4}}{q^{2}}+c|O(d)|q^{d} |X|^{2}+\sum_{k=1}^{\frac{d-1}{2}}|X|^{2}|O(d)|q^{d}\frac{1}{q^{k}}\] \[=\frac{9(1-c)}{4}|O(d)|q^{d}\frac{|X|^{4}}{q^{2}}+2c|O(d)|q^{d}|X|^ {2},\]
when \(q\) is large enough. By choosing \(c\) small enough, we see that \(\mathcal{Q}<|X|^{3}q^{2d-2}|O(d-1)|/2\), a contradiction.
The second statement of Theorem 1.5 is valid in the range \(|B|\gg q^{\frac{d+1}{2}}\). This is also sharp in odd dimensions by the above construction, since we can choose \(|A|=|B|<q^{\frac{d+1}{2}}\) and the conclusion fails.
The first statement of Theorem 1.5 is valid in the range \(|A||B|\gg q^{d}\). To see its sharpness, we construct two sets \(A\) and \(B\) with \(|A|=c^{\frac{d-1}{2}}q^{\frac{d-1}{2}}\), \(|B|=c^{\frac{d+1}{2}}q^{\frac{d+1}{2}}\), such that the number of \(z\) in \(\mathbb{F}_{q}^{d}\) with \(A\cap(gB+z)\neq\emptyset\) is at most \(|A-gB|\leq c^{d}q^{d}\) for any \(g\in O(d)\). Let \(v_{1},\ldots,v_{\frac{d-1}{2}}\) be linearly independent vectors in \(\mathbb{F}_{q}^{d-1}\times\{0\}\). Let \(X\subset\mathbb{F}_{q}\) with \(|X|=cq\). Set
\[A=X\cdot v_{1}+\cdots+X\cdot v_{\frac{d-1}{2}},\;B=X\cdot v_{1}+\cdots+X\cdot v _{\frac{d-1}{2}}+X\cdot e_{d},\]
where \(e_{d}=(0,\ldots,0,1)\). Since any vector in \(A-gB\) is of the form
\[-g(x_{1}v_{1}+\cdots+x_{\frac{d-1}{2}}v_{\frac{d-1}{2}})+y_{1}v_{1}+\cdots+y_{ \frac{d-1}{2}}v_{\frac{d-1}{2}}-x_{\frac{d+1}{2}}ge_{d},\]
where \(x_{i},y_{i}\in X\), we have \(|A-gB|\leq|X|^{d}\leq c^{d}q^{d}\), for all \(g\in O(d)\).
## 6. Intersection pattern II: proofs
In this section, we prove Theorem 1.12.
Proof of Theorem 1.12. Let \(E\) be the set of \(g\) in \(O(d)\) such that \(|S_{g}(P)|<q^{d}/2\), and let \(R=\{(g,z)\colon g\in E,\,z\in S_{g}(P)\}\). By Theorem 1.14, we first observe that
\[I(P,R)\leq\frac{|P||R|}{q^{d}}+|O(d-1)|^{1/2}q^{d/2}|P|^{1/2}|R|^{1/2}.\]
Note that \(|R|<|E|q^{d}/2\) and \(I(P,R)=|P||E|\). This implies
\[|P||E|\leq q^{d}|O(d-1)|^{1/2}|P|^{1/2}|E|^{1/2}.\]
So,
\[|E|\ll\frac{q^{2d}|O(d-1)|}{|P|},\]
as desired.
## 7. Growth estimates under orthogonal matrices: proofs
In this section, we prove Theorem 1.21, Theorem 1.22, Theorem 1.23, Theorem 1.24, Theorem 1.25, and Theorem 1.26.
Proof of Theorem 1.21. Set \(P=A\times B\). Let \(\lambda=|B|^{1+\epsilon}\). Define
\[E=\{g\in O(d)\colon|A-gB|\leq\lambda\}.\]
Set \(R=\{(g,z)\colon z\in A-gB,g\in E\}\). We observe that \(I(P,R)=|P||E|\). Applying Theorem 1.14, one has a constant \(C>0\) such that
\[|P||E|=I(P,R)\leq\frac{|P||R|}{q^{d}}+C\sqrt{|O(d-1)|}\,q^{d/2}|P|^{1/2}|R|^{1/2}.\]
Using the fact that \(|R|\leq\lambda|E|\) with \(\lambda<q^{d}/2\), we have
\[\frac{|P||E|}{2}\leq C\sqrt{|O(d-1)|}q^{d/2}|P|^{1/2}|R|^{1/2}.\]
This implies
\[|E|\ll\frac{|O(d-1)|\lambda q^{d}}{|A||B|}.\]
This completes the proof.
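Explicitly, the last two displays combine as follows (a routine verification under the assumption \(\lambda<q^{d}/2\), not in the original text): squaring \(\frac{|P||E|}{2}\leq C\sqrt{|O(d-1)|}\,q^{d/2}|P|^{1/2}\lambda^{1/2}|E|^{1/2}\) gives
\[|E|\leq\frac{4C^{2}|O(d-1)|\lambda q^{d}}{|P|}=\frac{4C^{2}|O(d-1)|\lambda q^{d}}{|A||B|}.\]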
Proof of Theorem 1.22. We use the same notations as in the proof of Theorem 1.21, and assume that \(|A|<q^{\frac{d-1}{2}}\), since the other case can be proved similarly. By Theorem 1.15, one has a constant \(C>0\) such that
\[|P||E|=I(P,R)\leq\frac{|P||R|}{q^{d}}+Cq^{(d^{2}-d)/4}|P|^{1/2}|R|^{1/2}.\]
Using the fact that \(|R|\leq\lambda|E|\) with \(\lambda<q^{d}/2\), we have
\[\frac{\sqrt{|P||E|}}{2}\leq C\sqrt{\lambda}q^{(d^{2}-d)/4}.\]
This implies
\[|E|\ll\frac{\lambda q^{(d^{2}-d)/2}}{|A||B|},\]
as desired.
Proof of Theorem 1.23. We use the same notations as in the proof of Theorem 1.21 for \(d=2\). Applying Theorem 1.16, there exists a constant \(C>0\) such that
\[|P||E|=I(P,R)\leq\frac{|P||R|}{q^{2}}+Cq^{1/2}|P|^{1/2}|R|^{1/2}|A|^{1/4}.\]
Using the fact that \(|R|\leq\lambda|E|\) with \(\lambda<q^{d}/2\), we have
\[\frac{\sqrt{|P||E|}}{2}\leq Cq^{1/2}\lambda^{1/2}|A|^{1/4}.\]
This implies
\[|E|\ll\frac{\lambda q}{|A|^{1/2}|B|},\]
as desired.
Theorem 1.24 (1) and (2), Theorem 1.25, and Theorem 1.26 are proved by the same approach using Theorem 1.17, Theorem 1.18, and Theorem 1.19, respectively.
For Theorem 1.24 (3), the same proof implies that if \(|A|\leq p\) and \(|B|\leq p\), then \(|E|\ll\frac{p|B|^{\epsilon}+|B|^{\epsilon}|P|^{2/3}}{|A|}\). However, if we use Theorem 3.3, then we are able to get rid of the term \(p|B|^{\epsilon}\).
## 8. Intersection pattern III: proofs
In this section, we prove Theorem 1.30. Let us start by introducing the necessary theorem.
**Theorem 8.1**.: _[_25_, Theorem 1.6]_ _Let \(\mathcal{K}\) be the set of \(k\)-planes and let \(\mathcal{H}\) be the set of \(h\)-planes in \(\mathbb{F}_{q}^{d}\) with \(h\geq 2k+1\). Then the number of incidences between \(\mathcal{K}\) and \(\mathcal{H}\) satisfies_
\[\left|I(\mathcal{K},\mathcal{H})-\frac{|\mathcal{K}||\mathcal{H}|}{q^{(d-h)(k+ 1)}}\right|\leq\sqrt{c_{k}(1+o(1))}q^{\frac{(d-h)h+k(2h-d-k+1)}{2}}\sqrt{| \mathcal{K}||\mathcal{H}|},\]
_where \(c_{k}=(2k+1)\binom{k}{\lfloor k/2\rfloor}\)._
**Theorem 1.30**.: _Let \(A,B\subset\mathbb{F}_{q}^{d}\) and \(m\) be a positive integer._
(1) _If \(|A|,|B|>q^{m}\), then there are at least \(\gg q^{m(d-m)}\) subspaces \(W\in G(d,m)\) such that_
\[|\pi_{W}(A)\cap\pi_{W}(B)|\gg q^{m}.\]
(2) _If \(|A|,|B|>q^{2m}\), then there are at least \(\gg q^{m(d-m)}\) subspaces \(W\in G(d,m)\) such that_
\[|\pi_{W}(A)\cap\pi_{W}(B)|=q^{m}.\]
(3) _Let \(m\geq d/2\). If \(|A|>100q^{m}\), \(|B|<q^{m}/2\), and \(|A||B|>160q^{2m}\), then there are at least \(\gg q^{m(d-m)}\) subspaces \(W\in G(d,m)\) such that_
\[|\pi_{W}(A)\cap\pi_{W}(B)|\gg|B|.\]
Proof of Theorem 1.30.: We proceed as follows.
1. By applying Theorem 1.27(2), we have \[\#\{W\in G(d,m)\colon|\pi_{W}(E)|\leq\delta q^{m}\}\leq 2\frac{\delta}{1-\delta}q^{m(d-m+1)-s},\] for any \(E\subset\mathbb{F}_{q}^{d}\) with \(|E|=q^{s}\) and \(m<s<d\). Set \(\delta=\frac{3}{4}\). Then, the number of \(m\)-dimensional subspaces \(W\in G(d,m)\) such that \(|\pi_{W}(A)|\leq\frac{3}{4}q^{m}\) is at most \(6q^{(d-m+1)m-s}\), where \(|A|=q^{s}\) for some \(s>m\). The same happens for the set \(B\). This implies that there are at most \(12q^{(d-m+1)m-s}\) \(m\)-dimensional subspaces \(W\in G(d,m)\) such that \(|\pi_{W}(A)|\leq\frac{3}{4}q^{m}\) or \(|\pi_{W}(B)|\leq\frac{3}{4}q^{m}\), where \(q^{s}=\min\{|A|,|B|\}\). That is, there are at least \((1+o(1))q^{m(d-m)}(1-12q^{m-s})\) \(m\)-dimensional subspaces \(W\in G(d,m)\) such that \(|\pi_{W}(A)|>\frac{3}{4}q^{m}\) and \(|\pi_{W}(B)|>\frac{3}{4}q^{m}\). We note that \[|\pi_{W}(A)\cap\pi_{W}(B)|=|\pi_{W}(A)|+|\pi_{W}(B)|-|\pi_{W}(A)\cup\pi_{W}(B)|>\frac{3}{4}q^{m}+\frac{3}{4}q^{m}-|\pi_{W}(A)\cup\pi_{W}(B)|\geq\frac{1}{2}q^{m}.\] The second inequality follows from \(|\pi_{W}(A)\cup\pi_{W}(B)|\leq q^{m}\) since the dimension of \(W\) is \(m\). Therefore, there are at least \((1+o(1))q^{m(d-m)}(1-12q^{m-s})\) \(m\)-dimensional subspaces \(W\in G(d,m)\) such that \[|\pi_{W}(A)\cap\pi_{W}(B)|\gg q^{m}.\] This completes the proof of Theorem 1.30(1).
2. To prove the second part, we need to count the number of \(W\in G(d,m)\) such that \[|\pi_{W}(A)\cap\pi_{W}(B)|\neq q^{m},\] if \(|A|,|B|>q^{2m}\). To do this, we first count the number of \(W\in G(d,m)\) such that \(|\pi_{W}(A)|\neq q^{m}\). Let \(X:=\{W\in G(d,m)\colon|\pi_{W}(A)|\neq q^{m}\}\) and \(Y:=\{W\in G(d,m)\colon|\pi_{W}(B)|\neq q^{m}\}\). By Corollary 1.28(3), we have that \[|X|=\#\{W\in G(d,m)\colon|\pi_{W}(A)|\neq q^{m}\}\leq 4q^{m(d-m+2)-s},\] for \(|A|=q^{s}\) with \(s>2m\). Similarly, \(|Y|\leq 4q^{m(d-m+2)-t}\) for \(|B|=q^{t}\) with \(t>2m\). Since \(\#\{W\in G(d,m)\colon|\pi_{W}(A)\cap\pi_{W}(B)|\neq q^{m}\}\leq|X\cup Y|\), it suffices to show that \(|X\cup Y|\) is smaller than the total number of \(m\)-subspaces. Since \(s,t>2m\), we have \[|X|\leq 4q^{m(d-m+2)-s}=o(q^{m(d-m)})\quad\text{and}\quad|Y|\leq 4q^{m(d-m+2)-t}=o(q^{m(d-m)}),\] which gives our desired result.
3. Lastly, we prove the third part. By Corollary 1.28(1) and Corollary 1.28(2), we have \[\#\{W\colon|\pi_{W}(A)|\leq q^{m}/10\}\leq\frac{1}{2|A|}q^{m(d-m+1)}\quad\text{ and}\quad\#\{W\colon|\pi_{W}(B)|\leq|B|/10\}\leq\frac{1}{2}|B|q^{m(d-m-1)}.\] Since \(|A||B|>2q^{2m}\), we also have that \[\frac{1}{2|A|}q^{m(d-m+1)}\leq\frac{1}{4}|B|q^{m(d-m-1)}.\] Thus, this, together with \(|B|<q^{m}/2\), implies that the number of \(m\)-dimensional subspaces \(W\) such that \[|\pi_{W}(A)|>q^{m}/10,\quad\text{and}\quad|\pi_{W}(B)|>|B|/10\] is at least \((1+o(1))q^{m(d-m)}-|B|q^{m(d-m-1)}/4-|B|q^{m(d-m-1)}/2>(1+o(1))q^{m(d-m)}/2\). From now on, we omit the term \(1+o(1)\) and consider that \(q\) is sufficiently large. We denote the set of these \(m\)-dimensional subspaces \(W\) by \(D\). Then we have \(|D|>q^{m(d-m)}/2\). Let \(D^{\prime}\subset D\) be the set of \(W\in D\) such that the number of \((d-m)\)-dimensional affine planes in \(\pi_{W}(B)\) such that each contains at least \(|A|/(100q^{m})\) points from \(A\) is at least \(|B|/100\). If \(|D^{\prime}|\geq|D|/2\), then we are done. Otherwise, by abuse of notation, we can assume that for any \(W\in D\), there are at least \(|B|/10-|B|/100=9|B|/100>|B|/20\) (\(d-m\))-dimensional affine planes in \(\pi_{W}(B)\) such that each contains at most \(|A|/(100q^{m})\) points from \(A\). For each \(W\in D\), let \(V_{W}\) be the subset of \(\pi_{W}(B)\) such that each contains at most \(|A|/(100q^{m})\) points from \(A\), and \(V=\cup_{W\in D}V_{W}\). It is clear that \(|V|\geq|D||V_{W}|>q^{m(d-m)}|B|/40\). Using the incidence bound in Theorem 8.1, note that \[\left|I(V,A)-\frac{|V||A|}{q^{m}}\right|\leq q^{\frac{m(d-m)}{2}}\sqrt{|V||A|}.\] Since \(|V||A|\geq 4q^{(d-m)m+2m}\), then one has \[I(V,A)\geq\frac{|V||A|}{2q^{m}}=q^{m(d-m-1)}\frac{|A||B|}{80}.\] This means that there are at least \(\gg q^{m(d-m)}\) subspaces \(W\in D\) such that \[I(V_{W},A)\geq\frac{|A||B|}{40q^{m}},\] since \(|D|\) is at most \(q^{m(d-m)}\) which is the total number of all \(m\)-subspaces.
For each such \(W\), let \(M\) be the number of \((d-m)\)-dimensional affine planes in \(V_{W}\) such that each contains at least one point from \(A\). Note that
\[\frac{M|A|}{100q^{m}}\geq I(V_{W},A).\]
This implies that \(M\geq\frac{5|B|}{2}\). Since \(|\pi_{W}(A)\cap\pi_{W}(B)|\geq M\), we have
\[|\pi_{W}(A)\cap\pi_{W}(B)|\gg|B|.\]
This completes the proof.
\(\square\)
As mentioned in the introduction, the following lemma addresses the sharpness of Theorem 1.30(3). This construction is similar to Example 4.1 in [21] in the continuous setting.
**Lemma 8.2**.: _Assume \(q=p^{2}\). For any \(0<c<1\), there exist sets \(A,B\subset\mathbb{F}_{q}^{2}\) with \(|A||B|=cq^{2}\) and \(L\subset G(2,1)\) with \(|L|\geq(1-c)q\) such that_
\[\pi_{W}(A)\cap\pi_{W}(B)=\emptyset\ \ \text{for any $W\in L$}.\]
_Proof_. Let \(A_{1}\) be the union of \(cp\) disjoint cosets of \(\mathbb{F}_{p}\). Then \(|A_{1}|=cq\). Let \(B^{\prime}=A_{2}=\mathbb{F}_{p}\).
Let \(F\colon\mathbb{F}_{q}^{2}\setminus(\mathbb{F}_{q}\times\{0\})\to\mathbb{F}_{q }^{2}\) defined by
\[F(x,y):=\left(\frac{x}{y},\frac{1}{y}\right).\]
Set
\[A=F^{-1}(A_{1}\times A_{2})\text{ and }B=(-B^{\prime})\times\{0\}.\]
It is clear that \(|A||B|=cq^{2}\). We now construct a large set \(L\subset G(2,1)\) such that
\[\pi_{W}(A)\cap\pi_{W}(B)=\emptyset\ \ \text{for any $W\in L$}.\]
We denote the line of the form
\[\mathbb{F}_{q}\cdot\begin{pmatrix}-b\\ 1\end{pmatrix}+\begin{pmatrix}c\\ 0\end{pmatrix}\]
by \(\ell(-b,c)\).
Set \(C=A_{1}+B^{\prime}A_{2}\). We first observe that if \((x,y)\in\ell(-b,c)\) with \(b\in B^{\prime}\) and \(c\in C\), then \(x+yb=c\). This gives
\[A_{1}\times A_{2}\subset\bigcap_{b\in B^{\prime}}\bigcup_{c\in C}\ell(-b,c).\]
For any \((e_{1},e_{2})\) on the unit circle with \(e_{2}\neq 0\), we have
\[F\left(\mathbb{F}_{q}^{*}\cdot\begin{pmatrix}e_{1}\\ e_{2}\end{pmatrix}+\begin{pmatrix}b\\ 0\end{pmatrix}\right)=\ell(b,e_{1}/e_{2}).\]
Set \(E=\{(e_{1},e_{2})\colon e_{1}^{2}+e_{2}^{2}=1,\ e_{1}/e_{2}\in C\}.\) Then
\[A=F^{-1}(A_{1}\times A_{2})\subset\bigcap_{b\in B^{\prime}}\bigcup_{(e_{1},e_{ 2})\in E}\left(\mathbb{F}_{q}^{*}\cdot\begin{pmatrix}e_{1}\\ e_{2}\end{pmatrix}+\begin{pmatrix}b\\ 0\end{pmatrix}\right).\]
Since \(|C|=cq\), we have \(|E|\leq cq\). Define
\[L:=G(2,1)\setminus\{W^{\perp}\colon W\in E\}.\]
Here we identify each point \((e_{1},e_{2})\) on the unit circle with the line containing it and the origin. So \(|L|\geq q-|E|\geq(1-c)q\). By a direct computation, one can check that
\[\pi_{W}(A)\cap\pi_{W}(B)=\emptyset\ \text{ for any }W\in L.\]
This completes the proof.
## 9. Acknowledgements
T. Pham would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for the hospitality and for the excellent working conditions. S. Yoo was supported by the KIAS Individual Grant (CG082701) at Korea Institute for Advanced Study.
|
2306.06650 | Scanning NV magnetometry of focused-electron-beam-deposited cobalt
nanomagnets | Focused-electron-beam-induced deposition is a promising technique for
patterning nanomagnets for spin qubit control in a single step. We fabricate
cobalt nanomagnets in such a process, obtaining cobalt contents and saturation
magnetizations comparable to or higher than those typically obtained using
electron-beam lithography. We characterize the nanomagnets using transmission
electron microscopy and image their stray magnetic field using scanning NV
magnetometry, finding good agreement with micromagnetic simulations. The
magnetometry reveals the presence of magnetic domains and halo side-deposits,
which are common for this fabrication technique. Finally, we estimate dephasing
times for electron spin qubits in the presence of disordered stray fields due
to these side-deposits. | Liza Žaper, Peter Rickhaus, Marcus Wyss, Boris Gross, Martino Poggio, Floris Braakman | 2023-06-11T11:13:02Z | http://arxiv.org/abs/2306.06650v2 | # Scanning NV magnetometry of focused-electron-beam-deposited cobalt nanomagnets
###### Abstract
Focused-electron-beam-induced deposition is a promising technique for patterning nanomagnets for spin qubit control in a single step. We fabricate cobalt nanomagnets in such a process, obtaining cobalt contents and saturation magnetizations comparable to or higher than those typically obtained using electron-beam lithography. We characterize the nanomagnets using transmission electron microscopy and image their stray magnetic field using scanning NV magnetometry, finding good agreement with micromagnetic simulations. The magnetometry reveals the presence of magnetic domains and halo side-deposits, which are common for this fabrication technique. Finally, we estimate dephasing times for electron spin qubits in the presence of disordered stray fields due to these side-deposits.
Nanomagnets with precisely defined geometries are of interest for a variety of applications, including magnetic resonance force microscopy [1, 2], as mediating elements between spins
and mechanical degrees of freedom [3, 4, 5], magnetic memories [6], and for the implementation of quantum logic with spin-based qubits [7, 8] such as electron spins confined in quantum dots.
Such electron spin qubits can be controlled and manipulated using high-frequency voltages applied to metallic gates [9], and selective spin rotation can be implemented by periodically displacing the electron wave function inside a magnetic field gradient resulting from a nearby nanomagnet [7]. Recent experiments have shown successful operation in fault-tolerant regimes with gate fidelities above the required thresholds [10, 11, 12]. Realizing fast spin rotation, while at the same time keeping dephasing and relaxation rates acceptably low, requires precise engineering of strong magnetic field gradients. This places stringent constraints on the geometry, relative location and alignment, and magnetic properties of the used nanomagnets [13, 14]. Furthermore, when scaling up to larger qubit arrays, the variability between a large number of individual nanomagnets will need to be characterized and minimized. Spatial characterization of nanomagnet stray fields is therefore important in order to facilitate qubit device fabrication, precise positioning of quantum dots relative to the nanomagnet, and to correctly assess and minimize qubit decohering mechanisms.
Typically, nanomagnets are patterned using a multi-step procedure, involving resist-coating, electron-beam lithography, metallization, and lift-off. Such a procedure is prone to introducing impurities in the devices due to residual resist particles, as well as to introducing possible misalignment. Furthermore, such techniques are limited to fabrication of 2D patterns.
Here, we use focused-electron-beam-induced deposition (FEBID) to pattern Co nanomagnets in a single step [15]. FEBID is an appealing technique for the fabrication of nanomagnets integrated in qubit devices, since it generates no impurities in the form of residual resist, eases fabrication due to its single-step nature, and allows for the fabrication of 3D geometries [16], opening up new ways of engineering magnetic gradients optimized for spin qubit control. FEBID of Co has been demonstrated as a reliable technique for growing highly magnetic
nanostructures, reaching Co content of up to \(\sim\)96 atomic percent of bulk values [15, 17]. FEBID also allows for patterning with lateral resolution in the nm range [18], approaching the intrinsic limit of the process imposed by the electron beam diameter [19, 20]. For Co nanostructures, lateral resolutions of below 30 nm have thus far been achieved [21].
We characterize the properties of FEBID nanomagnets using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM), high-resolution energy dispersive spectroscopy (EDS) analysis, and atomic force microscopy (AFM). Next, we use scanning NV magnetometry (SNVM) [22] to image the magnetic stray field of the Co deposits, both at an externally applied magnetic field sufficiently high to achieve magnetization saturation, and at zero field. We find good agreement of our measurements with micromagnetic simulations. From our SNVM measurements of the disordered magnetic stray field of unintended deposits surrounding the nanomagnet, we estimate spin qubit dephasing times in the presence of charge noise.
The sketch in Fig.1a illustrates the working principle of our FEBID [23, 24, 25, 26] fabrication technique. First, a precursor molecule containing cobalt, \(\text{Co}_{2}\text{(CO)}_{8}\), is introduced inside a chamber pumped to high vacuum. Irradiating the precursor with an electron beam causes it to decompose, leaving Co deposits on a nearby sample substrate [15]. By directing the beam using a scanning electron microscope, this technique can be used to, in a single step, pattern Co nanodeposits with high resolution [21].

Figure 1: (a) Illustration describing FEBID patterning of Co nanomagnet. (b) SEM image of the Co nanomagnet studied here. (c) HAADF-STEM image of a cross-section of a Co NW nanomagnet, similar to the magnets studied here. (d) EDS analysis along linecut shown in (c).
We use a Thermo Fisher FEI Helios 650 NanoLab FIB/SEM, fitted with a Co\({}_{2}\)(CO)\({}_{8}\) gas injection system. The nanomagnets are patterned on the top surface of a Si substrate covered with 290 nm of thermally grown SiO\({}_{2}\). We fabricate the nanomagnets with a nanowire (NW) shape, in order to obtain a magnetic configuration with a single magnetic easy axis, enabling simple alignment of our scanning probe and straightforward comparison to simulations. To achieve high Co content and high lateral resolution, we used the following FEBID parameters [15]: an acceleration voltage of 10 kV, a beam current of 3.2 nA, a dwell time of 1 us, and a precursor flux corresponding to a vacuum chamber pressure of \(4\times 10^{-6}\) mbar. Using these settings, we have achieved patterning Co structures with lateral widths down to 50 nm and heights down to 15 nm, as measured via AFM (See Supporting Information).
After FEBID fabrication, we use scanning and transmission electron microscopy (SEM and TEM) to characterize the geometry and composition of representative nanomagnets. Fig.1b shows an SEM top-view image of a Co NW deposit (dimensions: 14.4 um length, 230 nm width, and 130 nm height) and Fig.1c shows an HAADF-STEM image of a cross-section of such a deposit. In Fig.1c, the rounded cross-section of the Co NW can be discerned, as well as "halo" side-deposits of nm thickness extending laterally for several microns. EDS mapping along the linecut indicated in Fig.1c reveals a composition consisting mostly of Co (82 \(\pm\) 2.5 %), with additional smaller amounts of C (14 \(\pm\) 2.5 %) and O (4 \(\pm\) 2.5 %) (see Fig.1d). We find that this composition is rather uniform throughout the deposit, including similar proportions in the halo side-deposits (see Supporting Information for additional EDS data).
The halo is commonly deposited as a side-effect in FEBID, produced through precursor dissociation by secondary electrons scattering off the substrate and the pattern that is being grown [27, 28]. Such halo deposits are typically undesirable and various approaches can be used to mitigate their formation. The amount of halo and its composition can vary depending on the deposition parameters, in particular the exact amount of precursor gas present in the chamber. Furthermore, by performing FEBID at low temperatures, halo effects can potentially be reduced. Finally, the halo can in principle be removed by means of argon ion milling (see Supporting Information), although at the same time a layer of the intended deposited structure and surrounding substrate may be removed and charge defects may be introduced into the device.
Compared to other scanning probe magnetometry techniques [29], such as scanning superconducting quantum interference device (SQUID) magnetometry and magnetic force microscopy (MFM), SNVM [2, 30, 31] offers several advantages which make it suitable for our use. Of particular relevance for our application is the high spatial resolution that can be achieved with SNVM, which can reach 15 nm to 25 nm [29, 32, 33], making it possible to image magnetic fields and currents at length scales relevant for spin qubit devices. Also, SNVM yields quantitative measurements of the magnetic fields as the Zeeman energy of a single NV-center defect can be probed directly. Furthermore, due to its high magnetic field sensitivity on the order of \(\mu\)T/\(\sqrt{\mathrm{Hz}}\), SNVM allows imaging the weak fields associated with nanoscale magnetic domains [34] and other spatially inhomogeneous magnetic stray fields, making it a useful tool to study the magnetization properties of FEBID Co halo structures and their impact on spin qubit performance.

Figure 2: (a) Illustration of the SNVM setup. (b) Magnetic stray field produced by a Co NW, with \(B_{ext}=\)202.5 mT. Top panel: SNVM data. Data has been smoothened using a Gaussian filter with \(\sigma\) below pixel size. Regions over the ends of the NW have been blacked out, since here the SNVM measurements are unreliable (see main text). Bottom panel: simulation taken at a height 30 nm above the bottom surface of the magnet geometry. (c) Horizontal (top panel) and vertical (bottom panel) linecuts of the SNVM data, at the positions indicated by black lines in (b). Solid blue lines are corresponding linecuts taken from the simulated data in (b).
Fig.2a illustrates the SNVM setup employed here: a commercial system (Qnami ProteusQ) operating under ambient conditions. We use a diamond cantilever (Qnami Quantilever MX) hosting a single negatively charged NV center embedded inside its protruding tip. The cantilever is attached to a quartz Akiyama tuning fork, allowing for frequency-modulated AFM. For our measurements we use diamond tips hosting an NV center with a spin quantization axis oriented parallel to the principal axis of the NW magnet, i.e. along the \(x\)-axis as defined in Fig.2a. This type of diamond tip is fabricated from 110 diamond blankets [35]. We estimate the distance \(d_{NV}\) of the NV center to the apex of the diamond tips to be 30 nm to 50 nm [34], and a corresponding best achievable lateral spatial resolution of \(0.86\cdot d_{NV}\). During the measurements, an external magnetic field \(B_{ext}\) is applied along the NV quantization axis. This direction coincides with the principal NW axis and its easy magnetic axis. To perform magnetometry, we employ measurements of optically detected electron spin resonance as well as of fluorescence. See e.g. Celano et al. [34] for a more in-depth description of the SNVM setup and measurement techniques used here.
Fig.2b shows an SNVM scan of a Co NW nanomagnet, taken with \(B_{ext}\) = 202.5 mT, which falls within the typical operating range of spin qubits. The scan is taken with a tip-sample distance < 5 nm, and consequently the magnetometry measurements are taken at a distance \(\sim d_{NV}\) from the sample surface. The scan shows the \(x\)-component of the magnetic stray field of the nanowire-shaped magnet, revealing a pole at each end of the magnet. At this value of \(B_{ext}\), the nanomagnet is almost fully saturated along its magnetic easy axis. The associated stray field profile features large regions surrounding the nanomagnet where field components transverse to the quantization axis of the NV center are small. In these regions, relatively little quenching of NV fluorescence [22] occurs and it is straightforward to reconstruct the \(x\)-components of the stray field from the SNVM measurements. Even so, we blacked out regions in Fig.2b where we could not reliably track the Zeeman splitting of the NV center. This can occur when the magnetic stray field is too large, transverse components are too large, or when the optical read-out signal is quenched. Especially at the ends of the NW, we expect strong out-of-plane stray field components. These out-of-plane fields are transverse to the NV axis and lead to a quenching of the NV signal [22].
We compare the SNVM measurement with finite-element simulations of the \(x\)-component of the stray field in the same area around the NW (see Fig.2b, lower panel), using the software package MuMax3 [36, 37]. Here, we simulate the stray field of a rectangular Co box geometry with a width of 250 nm, height of 130 nm, and a length of 14.4 um. We use a typical value of the exchange constant for Co, \(A_{ex}=14\cdot 10^{-12}\)J/m, and a 5x5x5 nm\({}^{3}\) cell size. Using this model, we obtain a \(B_{x}\) stray field profile that qualitatively agrees well with the experiment, as shown in Fig.2b. Fig.2c shows plots of vertical and horizontal linecuts taken at the corresponding lines shown in Fig.2b. We find best agreement between simulation and experiment when we use a saturation magnetization of \(M_{s}=1.2\cdot 10^{6}\)A/m in the simulation. Such a saturation magnetization corresponds to 85% of the bulk value, agreeing well with the atomic fraction of Co measured in our deposit (Fig.1d). We note that exactly aligning the simulation with the experimental data in the \(xy\)-plane is to some degree hindered by
imperfect knowledge of the precise location of the NV center inside the scanning tip, as well as by the pixel size of the scan. Some deviations between simulation and experiment may also result from the fact that in the simulation we do not take the rounded shape of the NW or the halo into account.
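As a side note, the qualitative shape of such \(B_{x}\) linecuts can be reproduced with a much simpler model than a micromagnetic solver: a uniformly magnetized bar approximated by a grid of point dipoles. The sketch below is our illustration, not the MuMax3 computation described above; it reuses the NW dimensions and \(M_{s}=1.2\cdot 10^{6}\) A/m from the text, while the grid resolution and the 30 nm scan height are illustrative choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi                     # vacuum permeability (T m / A)
MS = 1.2e6                             # saturation magnetization from the fit (A/m)
L, W, H = 14.4e-6, 250e-9, 130e-9      # NW dimensions used in the simulation

# Fill the bar with a grid of cells, each carrying a point dipole along +x.
nx, ny, nz = 720, 10, 5
dx, dy, dz = L / nx, W / ny, H / nz
xs = (np.arange(nx) + 0.5) * dx - L / 2
ys = (np.arange(ny) + 0.5) * dy - W / 2
zs = (np.arange(nz) + 0.5) * dz - H / 2
cx, cy, cz = np.meshgrid(xs, ys, zs, indexing="ij")
cells = np.stack([cx.ravel(), cy.ravel(), cz.ravel()], axis=1)
m = np.array([MS * dx * dy * dz, 0.0, 0.0])          # dipole moment per cell (A m^2)

def stray_Bx(r):
    """x-component of the summed dipole fields at observation point r (meters)."""
    d = r - cells                                    # vectors from each cell to r
    dist = np.linalg.norm(d, axis=1)
    mdotd = d @ m
    return (MU0 / (4 * np.pi) * (3 * mdotd * d[:, 0] / dist**5 - m[0] / dist**3)).sum()

# Linecut along the NW axis, 30 nm above the top surface (~ the NV height).
line = np.linspace(-8e-6, 8e-6, 161)
profile = [stray_Bx(np.array([x, 0.0, H / 2 + 30e-9])) for x in line]
```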
We further investigate the presence of magnetic structures of characteristic sizes of 50 nm to 200 nm. Such dimensions are of the same order of magnitude as the length scales relevant for spin qubit devices, such as the typical dimensions of quantum dots, confinement gate electrodes, nanomagnets, and coupling elements [9]. Moreover, also tunneling lengths and typical wave function displacements are of a similar order of magnitude. Of particular relevance for spin qubits are unintended variations of the magnetic stray field on short length scales. In the presence of small displacements of the electron wave function, such variations can translate to magnetic noise, which can limit qubit decoherence [14, 38].
In the Co nanomagnet devices shown here, we find small magnetic structures in the form of magnetic domains inside the NW at low \(B_{ext}\), as well as grain-like stray fields produced by the halo deposits surrounding the NW. We investigate these smaller structures in a Hall bar device consisting of 3 crossing Co NWs fabricated through FEBID using similar parameters as before, see Fig.3a. Fig.3b shows SNVM scans of a part of one of the Co NWs, in the region delineated in Fig.3a. In the upper panel of Fig.3b, SNVM data taken at \(B_{ext}\) = 240 mT is shown, at which field the magnetization of the horizontal NW section is saturated, resulting in a homogeneous stray field above the NW. In the lower panel in Fig.3b, SNVM of the same section is shown at \(B_{ext}\) = 13 mT. In this case, multiple domains of characteristic size of several hundred nanometers can be discerned in the observed stray field of the magnet including its halo.

Figure 3: FEBID Co Hall bar. (a) SEM image of Co Hall bar device patterned through FEBID. Contacts are Ti/Pd, fabricated using electron-beam lithography and lift-off procedure. (b) SNVM maps of Co NW taken at \(B_{ext}\) = 240 mT (upper panel), and \(B_{ext}\) = 13 mT (lower panel). White dashed lines indicate location of NW.
Next, we use SNVM to study the halo side-deposits in more detail. Figs.4a and b show optical microscopy and SNVM images, respectively, of a Co Hall bar structure patterned via FEBID, which exhibits a significant halo side deposit. As can be seen, the halo can be distinguished as a dark shade of inhomogeneous shape in the optical microscopy image. The SNVM image of the same area further reveals that the halo presents a magnetic stray field of grainy composition, see Fig.4b (see Supporting Information for a magnified figure). The grainy pattern follows the same shape as the dark shade discernible in Fig.4a: it surrounds the intended deposit and becomes smooth further away from the deposit. We investigate the size distribution of the grainy structures using a segmentation analysis (Gwyddion) on a subset of the data shown in Fig.4b. We find a typical equivalent square side \(a_{eq}\) of roughly 100 nm, larger than the pixel size of 50x50 nm of the scan. Furthermore, we find associated stray field fluctuations of up to 3 mT.
We estimate the effect of such magnetic stray field fluctuations on the dephasing of electron spin qubits when placed inside the stray field of Fig.4b. Specifically, we consider dephasing as a result of spin qubit displacements inside the halo stray field due to charge noise. Typical rms displacement amplitudes of QD electron spin qubits in this scenario are 1 pm to 10 pm [14, 39], along \(x\) and \(y\). Out-of-plane displacements are typically negligible for
quantum well or MOS quantum dots, since the confinement potential in this direction is much larger than in the \(xy\)-plane. Since such displacements are orders of magnitude smaller than the grain size of the halo stray field, we can restrict our analysis to using the first derivative at each point, and neglect high-order derivatives of the stray field. Taking the spin qubit quantization axis to be parallel to \(x\), the stray field derivatives that are relevant for dephasing are therefore \(dB_{x}/dx\) and \(dB_{x}/dy\). By differentiating the scan of Fig.4b with respect to \(x\) and \(y\), we find that both \(dB_{x}/dx\) and \(dB_{x}/dy\) are largest for positions near the intended Co Hall bar deposit (top left and right corners, slightly above bottom left and right corners of plot in Fig.4b), but do not exceed \(425\mu\)T/nm at any point of the scan.
Using these derivatives of the stray field, we can estimate the inhomogeneous dephasing time \(T_{2}^{*}\) of a spin qubit placed inside the grainy stray field induced by the halo. For each point \((x,y)\) of the scan of Fig.4b, we calculate \(T_{2}^{*}(x,y)\) using \(T_{2}^{*}=\sum\limits_{i}(2\pi\sqrt{2}\cdot\mu_{B}/\hbar\cdot dB_{x}/di\cdot\Delta i)^{-1}\), with \(i\in\{x,y\}\). Here we use \(\Delta x=\Delta y=10\,\mathrm{pm}\) [14], an electron spin Landé g-factor of 2, and we assume a quasi-static \(1/f\)-like spectral density of the charge noise [40]. Fig.4c shows the corresponding map of \(T_{2}^{*}(x,y)\). In this case, \(T_{2}^{*}(x,y)\) decreases from several hundreds of \(\mu\)s on the bottom-right side of the scan, where almost no halo is present, to roughly \(10\,\mu\mathrm{s}\) in the top-left of the scan, where the halo is most intense. We find that \(T_{2}^{*}\) exceeds \(1\,\mu\mathrm{s}\) for each point of the scan. Note that the contours in the plot of Fig.4c have been obtained by smoothing the data. While these contours indicate the trend of decreasing \(T_{2}^{*}\) when approaching the Co deposit, the grainy pattern visible in the colorplot of Fig.4c originates from the disordered halo stray field, and hence should not be ignored.

Figure 4: (a) Optical micrograph of Co Hall bar, with halo distinguishable as dark shape. (b) SNVM map of same area as in (a), taken at \(B_{ext}\) = 13 mT. Dark vertical central line and areas at contacts feature significant field components transverse to the NV quantization axis, preventing straightforward determination of \(B_{x}\) in these areas. (c) Map of estimated \(T_{2}^{*}(x,y)\) for the same area, assuming spin qubit rms displacement amplitude of \(10\,\mathrm{pm}\).
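For concreteness, the estimate described above can be evaluated directly on a measured \(B_{x}\) map. The sketch below is our illustration: we read the formula as the reciprocal of the dephasing rates summed over \(i\in\{x,y\}\) (an assumption about the notation), and reuse the 50 nm pixel size and 10 pm rms displacement quoted in the text.

```python
import numpy as np

MU_B = 9.274e-24      # Bohr magneton (J/T)
HBAR = 1.0546e-34     # reduced Planck constant (J s)
DELTA = 10e-12        # rms displacement along x and y (m), as quoted above

def t2_star_map(Bx, pixel=50e-9):
    """T2*(x,y) from a Bx map (tesla) sampled on a square grid of given pixel size."""
    dBx_dy, dBx_dx = np.gradient(Bx, pixel)   # axis 0 = rows (y), axis 1 = cols (x)
    rate = np.zeros_like(Bx)
    for grad in (dBx_dx, dBx_dy):             # sum the dephasing rates over i in {x, y}
        rate += 2 * np.pi * np.sqrt(2) * MU_B / HBAR * np.abs(grad) * DELTA
    return 1.0 / np.maximum(rate, 1e-12)      # guard against vanishing gradients

# usage: t2 = t2_star_map(Bx_map)  # Bx_map: SNVM scan converted to tesla
```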
The estimated \(T_{2}^{*}(x,y)\) shown in Fig.4c are on par with those found for various kinds of high-quality spin qubits in Si- and Ge-based quantum dots [41], indicating that the spatially inhomogeneous stray fields of the halo side-deposits need not limit coherence more than other factors, such as charge noise in the presence of strong intended field gradients or spin-orbit coupling, and hyperfine interactions. In future work, we aim to also characterize the time-dependent magnetic noise originating from the halo and evaluate its impact on spin qubits. Our TEM and SNVM characterization shows that our FEBID structures have a Co content and saturation magnetization comparable to or higher than what is typically obtained using Co evaporation and standard electron-beam lithography patterning. Moreover, past results have shown that depositions with a Co content in excess of 95 atomic percent can be obtained using FEBID. Such high Co contents, in combination with the ability of FEBID to produce 3D magnet geometries, would enable further optimization of nanomagnets for spin qubit control.
Finally, future research may target cryo-FEBID for the patterning of magnetic nanostructures on sensitive spin qubit devices, since it allows patterning deposits with electron doses of order \(10^{3}\)\(\mu\)C/cm\({}^{2}\), which is \(\sim 10^{4}\) times less than needed for FEBID at room temperature [42, 43], and similar to what is used in electron-beam exposure of resists. Hence, it can be expected that sample damage due to electron irradiation is comparable for cryo-FEBID and resist-based electron-beam lithography techniques.
## Acknowledgement
We thank Prof. Jose Maria De Teresa, Prof. Patrick Maletinsky, and Dr. Monica Schönenberger for useful discussions and for assisting with the AFM measurements. Calculations were performed at the sciCORE ([http://scicore.unibas.ch](http://scicore.unibas.ch)) scientific computing center at University of Basel. We acknowledge funding from the Swiss National Science Foundation via NCCR SPIN as well as Project grant 200020_207933.
|
2301.12682 | Image Contrast Enhancement using Fuzzy Technique with Parameter
Determination using Metaheuristics | In this work, we have presented a way to increase the contrast of an image.
Our target is to find a transformation that will be image specific. We have
used a fuzzy system as our transformation function. To tune the system
according to an image, we have used Genetic Algorithm and Hill Climbing in
multiple ways to evolve the fuzzy system and conducted several experiments.
Different variants of the method are tested on several images and two variants
that are superior to others in terms of fitness are selected. We have also
conducted a survey to assess the visual improvement of the enhancements made by
the two variants. The survey indicates that one of the methods can enhance the
contrast of the images visually. | Mohimenul Kabir, Jaiaid Mobin, Ahmad Hassanat, M. Sohel Rahman | 2023-01-30T06:09:07Z | http://arxiv.org/abs/2301.12682v1 | # Image Contrast Enhancement using Fuzzy Technique with Parameter Determination using Metaheuristics
###### Abstract
In this work, we have presented a way to increase the contrast of an image. Our target is to find a transformation that will be image specific. We have used a fuzzy system as our transformation function. To tune the system according to an image, we have used Genetic Algorithm and Hill Climbing in multiple ways to evolve the fuzzy system and conducted several experiments. Different variants of the method are tested on several images and two variants that are superior to others in terms of fitness are selected. We have also conducted a survey to assess the visual improvement of the enhancements made by the two variants. The survey indicates that one of the methods can enhance the contrast of the images visually.
Keywords: image enhancement, metaheuristics, fuzzy logic, genetic algorithm
## 1 Introduction
Image enhancement is the procedure of improving an image's quality and information content. Image enhancement aims to increase visual differences among its features and make it more suitable for applications (e.g. increasing the brightness of dark images for viewing). Some common image enhancement techniques are sharpening, smoothing, increasing contrast, noise reduction, etc. [3]. Contrast enhancement is a process that is applied to images or videos to increase their dynamic range [4].
Image enhancement has long been treated as an application problem for metaheuristics [1, 2, 13]. Moreover, the blending of fuzzy systems with meta-heuristic algorithms has recently received attention in the Computational Intelligence community [5]. Many works treated the image enhancement problem as an optimization problem and concentrated on altering the image quality fitness function [1, 3, 10], combining several meta-heuristic algorithms [1, 2], optimizing parameters [14], and escaping local optima [16]. One thing that has yet to gain more attention in image enhancement problems is the adaptation of a fuzzy
system. Sandeep and Samrudh [12] have shown that variation in the input membership function in a fuzzy system has a positive impact on image enhancement performance.
This paper addresses the image contrast enhancement problem by using stochastic optimization on the fuzzy logic system. The paper's main contribution is to design a meta-heuristic technique by optimizing the fuzzy logic system. The logical component of the fuzzy logic system is a set of membership functions that are used to describe an intensity transformation function. We implement genetic operators which tweak the fuzzy logic system to enhance the original image to an optimal enhanced image. The fuzzy system of our paper is based on simple contrast enhancement rules described in [6], where the authors have used some image-independent rules and fuzzy sets.
Our main idea of the fuzzy image enhancement technique is as follows: we start with a basic fuzzy rule set and a set of input membership functions. By applying a metaheuristics framework, we try to evolve the fuzzy sets. Finally, we apply the transformation function described by fuzzy sets on the value channel of HSV color space to get the final image. Thus we have converted the problem into an optimization problem. In other words, rather than generating an image, we try to generate a suitable mapping between the input and output color values. However, this is inherently challenging since, even if we consider only 8-bit color, i.e., 256 color values, the number of possible mappings becomes huge. Thus it becomes infeasible to search through the solution space exhaustively, and there is no definite knowledge about how to improve/generate a solution. This motivates us to leverage a metaheuristics framework [7].
## 2 Literature Review
Some contrast manipulation techniques are gamma transformation [22], histogram equalization (HE) [23], etc. HE is very useful in contrast enhancement [8], but while increasing the contrast, it fails to keep the image brightness the same [9]. Bi-Histogram Equalization [24] solves this issue. Another problem of HE is information loss in the image [11]. The gamma and log transforms [25] can also be used, with lower computational complexity. In the work of [20], we find an application of gamma transformation for contrast enhancement that automatically applies different gamma corrections to multiple parts of the pixel set. Unfortunately, these techniques do not work well in complex illumination settings [10], so they cannot be applied without tweaking some parameters.
Fuzzy logic and metaheuristics techniques have been applied previously in image enhancement problems. One basic method is to apply a fuzzy logic-based system from [6] that uses three rules only and trapezoidal and triangular input fuzzy sets. Joshi and Kumar [12] have proposed a similar method, albeit with a more complex rule set (7 rules) and Gaussian fuzzy sets.
To evaluate the fitness of an enhanced image, Munteanu and Rosa [13] have proposed a novel objective function and applied an evolutionary algorithm to search for optimal parameters in a continuous transform function. The same
objective function has also been applied in artificial bee colony optimization [3], the cuckoo search algorithm [15], and the firefly algorithm for UAV-captured image enhancement [14].
## 3 Preliminaries
### Fuzzy Image Processing
Fuzzy image processing is a collection of all approaches that understand, represent, and process the images, their segments, and features as fuzzy sets (see Figure 1).
The coding of image data (fuzzification) and decoding of the results (defuzzification) are steps that make it possible to process images with fuzzy techniques. The main power of fuzzy image processing lies in the middle step (membership modification) in Figure 1 [19].
### Fuzzy Sets for Intensity Transformation
Contrast enhancement is one of the principal applications of intensity transformations. We can state the process of enhancing the contrast of a gray-scale image using the following rules [6]:
* IF a pixel is **dark**, THEN make it **darker**
* IF a pixel is **gray**, THEN make it **gray**
* IF a pixel is **bright**, THEN make it **brighter**
## 4 Problem Statement
Our goal is to manipulate the image contrast to enhance the sharpness of the image. Thus, image features will be more differentiable visually. While this is a subjective matter, the effort has been made to quantify it using a fitness function in the literature [15, 3, 16, 17, 13]. We leverage one such fitness function in our work. This fitness function is used to measure the quality of an image. A transformation function is used to enhance the image, which is further optimized using a metaheuristics approach. So, the input of the problem is an image, and the output is another image which is (expectedly) an enhanced version of the former.

Figure 1: Fuzzy Image Processing. The figure is taken from [18]
We give a more formal definition of the problem below in the context of a gray-scale image (for simplicity). Suppose we have a gray-scale image \(I=f(x,y)\), where \(x\) and \(y\) denote the pixel positions of the image. The image \(I\) is of size \(M\times N\), so \(0\leq x<M\) and \(0\leq y<N\). Now assume that there is a fuzzy logic-based transformation function \(T(I)\) that transforms the gray value of each pixel of the image and outputs another image \(I_{e}=g(x,y)\), and a quality function \(Fitness(I)\):
\[I_{e}=T(I)=T(f(x,y)) \tag{1}\]
\[F=Fitness(I)=log(log(E(I_{s})))*\frac{ne(I_{s})}{M*N}*H(I_{s}) \tag{2}\]
So, our problem is to find a transformation function (Equation 1) by maximizing the quality function in Equation 2. For an explanation of the terms in Equation 2, see Subsection 5.2.
## 5 Methodology
In this section, we present our approaches. We have explored five different metaheuristic approaches. In particular, we have used Hill Climbing (three variants) and Genetic Algorithm (two variants thereof). For the Hill Climbing approach, the variants differ in the mutation functions and input membership functions, as highlighted below. We used one type of input membership function for the Genetic Algorithm, but the difference is in new generation selection strategies (see Section 5.6).
* Hill Climbing
  1. neighborhood generation using simple mutation
  2. neighborhood generation with trapezoidal and triangular input membership set splitting mutation
  3. mixed neighborhood generation with Gaussian input membership set splitting mutation
* Genetic Algorithm
  1. simple neighborhood generation with trapezoidal and triangular input membership set
  2. simple neighborhood generation with Gaussian and sigmoid input membership set
All these variants have some common parts, namely, initialization, representation, and fitness assessment of an optimization session, described below.
Finally, we have presented our methodology of experiments and values of parameters used in algorithms and experiments.
### Population Representation
Every meta-heuristics technique starts with some (usually random) population, and the structure of the population is problem specific. In our case, each individual (in the population) encodes a set of input membership functions such as those shown in Figure 2. So, our population representation holds information on a set of input membership functions. There are at least three functions in each representation. Each membership function is represented by a tuple of three values, regardless of the function type (i.e., trapezoidal, triangular, Gaussian, or Sigmoidal). The first two values of the tuple determine the shape of the membership function, and the third one is used in defuzzification (as will be clear shortly).
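A minimal sketch of this representation in Python (our illustration; the numbers are placeholders, not tuned values from the paper):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# One tuple per membership function: (shape_param_1, shape_param_2, defuzzification_value).
MembershipFn = Tuple[float, float, float]

@dataclass
class Individual:
    """A candidate solution: at least three input membership functions."""
    functions: List[MembershipFn] = field(default_factory=lambda: [
        (60.0, 100.0, 10.0),    # "dark": shoulder end and foot of the trapezoid
        (127.0, 60.0, 127.0),   # "gray": centre and half-width of the triangle
        (160.0, 200.0, 245.0),  # "bright": foot and shoulder start of the trapezoid
    ])
```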
### Fitness Assessment
We will formulate the enhancement's quality/fitness \(F\) using the quality function mentioned in Equation 2. There are some terms in Equation 2 which we will explain now,
\(I_{s}\,=\) image after applying the Sobel filter on \(I\)
\(E(I)\,=\) sum of intensities of image \(I\)
\(ne(I)\,=\) number of edge pixels in \(I\)
\(H(I)\,=\) entropy of image \(I\)
\(M\,=\) width of \(I\)
\(N\,=\) height of \(I\)
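A minimal Python sketch of Equation 2 (our implementation; since the paper does not fix a threshold for deciding which Sobel pixels count as edges, the mean-threshold below is an assumption):

```python
import numpy as np
from scipy import ndimage

def fitness(img):
    """Quality function of Equation 2: log(log(E(I_s))) * ne(I_s)/(M*N) * H(I_s)."""
    img = img.astype(float)
    sob = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))  # I_s
    E = sob.sum()                                  # E(I_s): total edge intensity
    ne = int((sob > sob.mean()).sum())             # ne(I_s): edge-pixel count (assumed threshold)
    hist, _ = np.histogram(sob, bins=256)
    pr = hist[hist > 0] / sob.size
    H = float(-(pr * np.log2(pr)).sum())           # H(I_s): Shannon entropy
    M, N = img.shape
    return np.log(np.log(E)) * ne / (M * N) * H
```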
#### 5.2.1 HSV Conversion
If the input image is a color image, we need to convert it to the HSV format. In the case of a gray-scale image, there is no need for such a conversion.
Figure 2: Input membership functions for fuzzy, rule-based contrast enhancement [6]
#### 5.2.2 Defuzzification
Defuzzification refers to converting the fuzzy value to a single crisp value. For any given input pixel \(z_{0}\), the output crisp value \(v_{0}\) will be as follows.
\[v_{0}=\frac{\sum\limits_{i=1}^{n}\mu_{i}(z_{0})*v}{\sum\limits_{i=1}^{n}\mu_{i}( z_{0})};n\geq 3 \tag{3}\]
Here,
\(\mu_{i}(z_{0})=\) fuzzy membership level of an image pixel with value \(z_{0}\) in the \(i\)-th membership function

\(v_{i}=\) constant/mutable value (depending on the algorithm variant) associated with the \(i\)-th membership function and used in the defuzzification
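Equation 3 translates directly into code. The sketch below assumes Gaussian membership functions and the tuple representation introduced in Subsection 5.1; the fallback for pixel values not covered by any membership function is our own choice:

```python
import numpy as np

def gauss_mu(z, center, width):
    """Gaussian membership level of a pixel value z."""
    return np.exp(-((z - center) ** 2) / (2.0 * width ** 2))

def defuzzify(z0, individual):
    """Map an input gray value z0 to a crisp output value (Equation 3)."""
    mus = np.array([gauss_mu(z0, c, w) for (c, w, v) in individual])
    vs = np.array([v for (_, _, v) in individual])
    denom = mus.sum()
    if denom == 0:          # z0 not covered by any membership function
        return z0
    return (mus * vs).sum() / denom
```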
### Hill Climbing
Hill Climbing stochastically generates candidate solutions and keeps the best solution found so far. Solutions are evaluated using the fitness function of Equation 2. In each generation, we generate a fixed number of neighbor solutions (set to 10) from the current individual. We have tried three variants of Hill Climbing that differ in the neighborhood generation and the input membership function type.
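The overall search procedure is a standard steepest-ascent Hill Climbing loop with a fixed neighborhood size and time budget; a minimal sketch, where `tweak` and `evaluate` stand for the variant-specific operators (with `evaluate(image, individual)` assumed to apply the encoded fuzzy transformation to the image and return the fitness of the result), reads:

```python
import copy
import time

def hill_climbing(image, init, tweak, evaluate, n_neighbors=10, budget=120):
    """Steepest-ascent Hill Climbing with a fixed-size neighborhood."""
    best, best_fit = init, evaluate(image, init)
    start = time.time()
    while time.time() - start < budget:          # PerRunTime seconds
        current = best                           # fixed individual per generation
        for _ in range(n_neighbors):             # 10 neighbors per generation
            cand = tweak(copy.deepcopy(current))
            fit = evaluate(image, cand)
            if fit > best_fit:
                best, best_fit = cand, fit
    return best, best_fit
```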
#### 5.3.1 Simple Neighborhood Generation
In this variant, we initialize a solution with two trapezoidal (\(\mu\_dark,\mu\_bright\)) and one triangular (\(\mu\_gray\)) functions, as in Figure 2. When generating a new solution, only the shapes of the functions (i.e., the width of the triangle and the slope of the trapezoid's oblique side) are tweaked. We use three hyperparameters, namely ChangeProb, MutateMu, and MutateSigma. ChangeProb is the probability with which the shape of a membership function is changed. MutateMu and MutateSigma are, respectively, the mean and variance of the Gaussian distribution from which the random perturbation is drawn when generating a new solution.
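A sketch of this simple shape mutation is shown below; exactly which shape parameter is perturbed, and how the Gaussian sample is applied, are illustrative choices on our part:

```python
import random

def simple_tweak(individual, change_prob=0.5, mutate_mu=3.0, mutate_sigma=2.0):
    """Tweak only the shape of each membership function.

    With probability ChangeProb, a shape parameter (here the width) is
    perturbed by a value drawn from a Gaussian with parameters MutateMu
    and MutateSigma; the defuzzification value v is left untouched.
    """
    tweaked = []
    for (center, width, v) in individual:
        if random.random() < change_prob:
            width = max(1e-6, width + random.gauss(mutate_mu, mutate_sigma))
        tweaked.append((center, width, v))
    return tweaked
```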
#### 5.3.2 Neighborhood Generation with trapezoidal and triangular input membership set splitting mutation
This variant adds a further tweak on top of the previous one: a membership function may split into two membership functions (e.g., one triangle in Figure 2 splits into two). In addition to the previously mentioned parameters, we use MembershipSplitProb, which controls the probability of choosing between a function shape change and an input membership function split.
#### 5.3.3 Neighborhood Generation with Gaussian input membership set splitting mutation
In this case, we have used only the Gaussian input membership function in our solution. All the hyperparameters mentioned in the previous two cases are used.
### Genetic Algorithm
Unlike Hill Climbing, the Genetic Algorithm (GA) starts with a number (say, PopSize) of individuals. We use the same evaluation function of Equation 2 for fitness assessment in our GA approach. To introduce variation, GA selects individuals from the old population and tweaks them to breed a new population of children. To keep the footprints of both populations, it then joins the parent and child populations to form the next generation. In our implementation, we fix PopSize to 30.
### Tweaking Operations
#### 5.5.1 Crossover
The crossover operation mixes and matches multiple individuals (typically two) to form children. There are three classical ways of performing crossover; in our implementation, we use uniform crossover. In the context of our representation, the crossover operation marches down the membership functions of the two parents and swaps corresponding functions whenever a coin toss comes up heads with probability \(p\). For crossover to apply, both individuals must contain the same number of membership functions.
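A minimal sketch of this uniform crossover over our representation (assuming equal-length parents) could read:

```python
import random

def uniform_crossover(parent_a, parent_b, p=0.5):
    """Swap corresponding membership functions with probability p."""
    assert len(parent_a) == len(parent_b), "parents must have the same size"
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(parent_a)):
        if random.random() < p:     # biased coin toss per function
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b
```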
#### 5.5.2 Mutation
Our mutation operation scans each membership function and randomly tweaks the function shape with a certain probability. To balance exploitation and exploration, the mutation draws its perturbations from a Gaussian distribution with a certain mean and variance. Unlike in Hill Climbing, the mutation operation here does not increase or decrease the number of membership functions.
### Joining
The Genetic Algorithm variants differ in how the parent and child populations are joined. In our Genetic Algorithm procedure, we have experimented with both the (P, P) and (P+P) evolution strategies, where \(P\) is the PopSize. In the (P, P) strategy the next generation consists solely of the \(P\) children, whereas in (P+P) the parents and children compete and the best \(P\) survive.
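The two strategies then differ only in the pool from which the next generation is drawn; a minimal sketch with plain truncation selection (our DEAP-based implementation may differ in the selection details) is:

```python
def next_generation(parents, children, fitness_of, pop_size, strategy="P+P"):
    """Form the next generation under the (P, P) or (P+P) strategy.

    (P, P): the next generation consists of the P children only.
    (P+P): parents and children compete and the best P survive.
    `fitness_of` maps an individual to its fitness (Equation 2).
    """
    pool = list(children) if strategy == "P,P" else list(parents) + list(children)
    pool.sort(key=fitness_of, reverse=True)   # keep the fittest individuals
    return pool[:pop_size]
```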
### Experiment Process
In the experimental analysis, we first choose the best variants among all Hill Climbing and Genetic Algorithm variants. We then measure the performance of our image enhancement method with respect to common image enhancement metrics and, finally, compare our results with histogram equalization, one of the simplest image enhancement methods.
#### 5.7.1 Variant Selection
The following steps are used to choose the best variants:
1. Run each variant on every image. A variant is applied to one image a total of NumofTest times, and each application (one stochastic optimization run) lasts PerRunTime seconds. This time budget is the same for all variants, as are the mutation and population initialization parameters.
2. Measure the rate of fitness improvement per generation achieved within the given PerRunTime, then average this rate over the NumofTest runs (see the sketch after this list).
3. Compare the metric from step 2 across all five variants and select the best two; a higher value of this metric indicates better performance.
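Interpreting the metric of step 2 as the fitness gain per generation within one run, averaged over the NumofTest runs, a sketch of its computation could look as follows (the exact bookkeeping in our experiments may differ):

```python
def improvement_rate(fitness_trace):
    """Average fitness improvement per generation for one optimization run."""
    if len(fitness_trace) < 2:
        return 0.0
    return (fitness_trace[-1] - fitness_trace[0]) / (len(fitness_trace) - 1)

def variant_score(traces):
    """Mean improvement rate over the NumofTest runs of one variant."""
    return sum(improvement_rate(t) for t in traces) / len(traces)
```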
The values of the experiment-controlling parameters are listed in Table 1.
## 6 Experimental Results
In this section, we evaluate our proposed approach and present the results of applying our method to different images. We have conducted our experiments on several images used in [3] and on additional images (color, gray-scale, and text images) collected from various sources on the Internet. In total, we have experimented with 18 images.
### Experimental Setup and Environment
To evaluate the efficacy of our approach, we have implemented a prototype in Python. We have used the Python evolutionary computation framework, deap4. To implement the fuzzy logic, we have used the Python fuzzy logic toolbox, skfuzzy5.
Footnote 4: [https://deap.readthedocs.io/en/master/](https://deap.readthedocs.io/en/master/)
Footnote 5: [https://pythonhosted.org/scikit-fuzzy](https://pythonhosted.org/scikit-fuzzy)
All variants of our algorithm were run on an Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz with 6GB RAM, running Ubuntu 18.04 and Python 3.6.8.
Our experiment code is available at [https://bitbucket.org/mahi045/image-enhancement/](https://bitbucket.org/mahi045/image-enhancement/).
| **Hyperparameter name** | **Value** |
| --- | --- |
| NumofTest | 5 |
| PerRunTime | 120 s |
| ChangeProb | 0.5 |
| MutateMu | 3 |
| MutateSigma | 2 |
| MembershipSplitProb | 0.1 |

Table 1: The values of the hyperparameters
### Results
Here, we visually present outputs for a limited number of images due to space limitations. Figures 4, 5, 6, and 7 show output images generated by our technique alongside the reference input images. As an outcome of our technique, the enhanced images are clearer and more natural (Figures 4, 6, 7), the edges are sharper (Figures 5, 7), the histogram is expanded, and the contrast between dark and bright regions is increased. On the other hand, some dark regions (Figures 5, 6) are unnecessarily darkened.
### Survey
#### 6.3.1 Variant Selection to Select Survey Images
First, we grouped the variants according to the underlying metaheuristic, namely Hill Climbing and the Genetic Algorithm. Then, following the method described in Subsection 5.7, we ranked the variants within each group. On this basis, the variants selected from the two groups are:
1. Hill Climbing with simple neighborhood generation (Subsection 5.3)
2. Genetic Algorithm with (P, P) strategy (Subsection 5.6)
#### 6.3.2 Survey Preparation
We have prepared an anonymous survey6 (currently not taking any responses) to collect quantitative opinions on our enhanced images. In the survey, we placed \(36\) enhanced images (produced by our technique) side-by-side with their original images. The first \(18\) images are from Hill Climbing with split-mutation neighborhood generation (see Section 5.3), and the second \(18\) images are the same images enhanced by the Genetic Algorithm with the (P, P) strategy (see Section 5.6). Reviewers were asked to mark each enhanced image on a scale of \(1-9\). As a reference, the mark of the original image was set to \(5\); a mark of \(>5\) therefore indicates enhancement, while \(<5\) indicates degradation. A mark of \(5\) means the processed image looks visually the same as the original (a subjective judgment). The prepared survey form can be found at the following link: [https://forms.gle/9wuLKMYBShjyAaAK59](https://forms.gle/9wuLKMYBShjyAaAK59).
Footnote 6: [https://forms.gle/9wuLKMYBShjyAaAK59](https://forms.gle/9wuLKMYBShjyAaAK59)
#### 6.3.3 Survey Response & Observations
We received a total of \(67\) responses. From these, we filtered out responses with suspicious patterns, such as all-equal marks or uniformly very high or very low marks; this removed two responses. From the remaining \(65\) responses, method \(2\) appears better (mean score \(5.35\)) than method \(1\) (mean score \(4.62\)). Moreover, method \(2\) improves the original image on average (score \(>5\)).
Figure 3 shows the mean (over images) of the quantitative responses of the survey participants. Each data point in Figure 3(b) is the mean response of one participant. Figure 3(a) shows the mean (green triangle) and median (orange horizontal line) of the responses. The red points are outliers according to the 1.5*IQR rule [21].
From Figure 3(a), method 2 achieves a better mean score (over participants) than method 1. From Figure 3(b), we can see that method 1 has a higher chance of degrading an image (responses are dense below score 5), although method 2 received the single lowest score from one participant.
Figure 4: (a) original image and (b) enhanced image generated by simple Hill Climbing
Figure 3: Box plots of the scores given by survey participants for the two methods. Method 1 represents Hill Climbing with split-mutation neighborhood generation (Subsection 5.3) and method 2 represents the Genetic Algorithm with the (P, P) strategy (see Section 5.6)
Figure 5: (a) original image and (b) enhanced image generated by simple Hill Climbing
Figure 6: (a) original image and (b) enhanced image generated by the Genetic Algorithm with the (P, P) strategy
## 7 Conclusion
In this work, we have applied metaheuristics to find a good image-specific fuzzy logic-based transformation function. We have employed a quality function from previous works. As image quality is subjective, we assessed our method's success by conducting a survey that collects subjective opinions on the visual quality of the enhancement, and we reported the results. From this assessment, one variant among the five gives a visual improvement on average.
For future work, more experiments can be done by adding further image processing operations (e.g., gamma transformation) to the processing pipeline. One limitation of our approach (according to visual observation) is that it tends to darken the image. Adding other processing operations with automatic parameter tuning through metaheuristics seems promising for further improvement.
|
2303.04699 | Crossover from attractive to repulsive induced interactions and bound
states of two distinguishable Bose polarons | We study the impact of induced correlations and quasiparticle properties by
immersing two distinguishable impurities in a harmonically trapped bosonic
medium. It is found that when the impurities couple both either repulsively or
attractively to their host, the latter mediates a two-body correlated behavior
between them. In the reverse case, namely the impurities interact oppositely
with the host, they feature anti-bunching. Monitoring the impurities relative
distance and constructing an effective two-body model to be compared with the
full many-body calculations, we are able to associate the induced (anti-)
correlated behavior of the impurities with the presence of attractive
(repulsive) induced interactions. Furthermore, we capture the formation of a
bipolaron and trimer state in the strongly attractive regime. The trimer refers
to the correlated behavior of two impurities and a representative atom of the
bosonic medium and it is characterized by an ellipsoidal shape of the
three-body correlation function. Our results open the way for controlling
polaron induced correlations and creating relevant bound states. | F. Theel, S. I. Mistakidis, P. Schmelcher | 2023-03-08T16:39:59Z | http://arxiv.org/abs/2303.04699v2 | **Crossover from attractive to repulsive induced interactions and bound states of two distinguishable Bose polarons**
## Abstract
**We study the impact of induced correlations and quasiparticle properties by immersing two distinguishable impurities in a harmonically trapped bosonic medium. It is found that when both impurities couple either repulsively or attractively to their host, the latter mediates a two-body correlated behavior between them. In the reverse case, namely when the impurities interact oppositely with the host, they feature anti-bunching. Monitoring the impurities' relative distance and constructing an effective two-body model to be compared with the full many-body calculations, we are able to associate the induced (anti-) correlated behavior of the impurities with the presence of attractive (repulsive) induced interactions. Furthermore, we capture the formation of a bipolaron and trimer state in the strongly attractive regime, where the latter consists of two impurities and a medium atom. Our results open the way for controlling polaron induced correlations and creating relevant bound states.**
###### Contents
* 1 Introduction
* 2 Two distinguishable impurities in a bosonic gas
* 3 Variational wave function approach
* 4 One-body density configurations of the three-component mixture
* 5 Intercomponent (induced) correlations and entanglement
* 5.1 Characteristic correlation patterns
* 5.2 Emergent correlation regimes
* 6 Quantification of impurities induced interactions
* 6.1 Effect of the induced impurity-impurity correlations on their relative distance
* 6.2 Effective two-body model
* 7 Bipolaron formation
* 8 Three-body correlations and trimer state
* 9 Conclusions and perspectives
* A Behavior of the bipartite entanglement
* B Effective mass and trap frequency of a single impurity
* C Modelling the effective impurity interactions with an exponential potential
* D Impact of mass-imbalanced impurities and the atom number of the bosonic gas
* E Estimating the importance of correlations on the many-body wave function
## 1 Introduction
Impurities embedded in a many-body medium, e.g. a Bose-Einstein condensate (BEC), are dressed by its excitations and generate quasiparticles [1, 2]. In the case of a structureless host these refer to polarons [3], while, for instance, utilizing a magnetic environment or in the presence of a cavity, magnetic polarons [4, 5, 6] and polaritons [7, 8] are formed, respectively. Polarons, which we will investigate herein, have been widely studied in cold-atom settings owing to the enormous flexibility, e.g., in terms of controlling the spatial dimension [9, 10, 11], the interparticle interactions [12, 13, 14], as well as the trapping geometry and the number of species [15, 16, 17, 18, 19] and atoms [20, 21]. Depending on the statistics of the medium both Bose [11, 22, 23, 24, 25, 26] and Fermi [1, 27, 28] polarons have been experimentally realized, while theoretically fundamental properties of this type of quasiparticle, including effective mass [29, 30, 31], residue [1, 2], and bound state formation [1, 2, 32] emerging in two-component systems, have been discussed. Interestingly, by immersing at least two impurities into a quantum gas the latter mediates interactions between the former [33, 34, 35], a phenomenon that has been interpreted in terms of a Casimir-type interaction describing the induced interaction between two objects in a fluctuating medium [36, 37, 38]. In particular, induced interactions are solely attractive as long as the impurities are indistinguishable and thus couple in the same way to their medium. The magnitude of this induced attraction, in general, increases for larger impurity-medium coupling strength and, for sufficiently strong attractive couplings, the impurities assemble into a bound state that can be a bipolaron [39, 40, 41, 42, 43, 44, 32] or a trimeron [44]. Notice that besides the above-discussed studies in a homogeneous BEC environment, the attractive nature of induced interactions has been unveiled also for a harmonically confined [45, 46, 47] or a lattice trapped [48] medium.
Interestingly, it was predicted [36, 38] that there is also the possibility of mediating repulsive impurity-impurity interactions when two impurities are coupled with different signs to a bosonic bath. In this sense, the underlying experimentally relevant three-component system [16, 18] allows one to unravel additional polaronic properties, as has also been argued by immersing impurities into a two-component pseudospinor mixture [4, 5, 6, 49, 50, 51, 52] in order to create, for instance, spin-wave excitations and magnetic polarons [4, 5], impurities' diffusive response [50], or to facilitate the detection of the dressing cloud via interferometry [4]. However, quasiparticle formation in three-component systems is largely unexplored, besides the few above-mentioned recent studies. An interesting direction is to exploit the tunability of such mixtures, e.g. in terms of different intercomponent couplings, for devising the ground state quasiparticle properties such as the impurities' effective mass and induced interactions. Here, it is important to understand the interplay of the latter properties and the underlying impurities' correlations. Also, the formation of relevant bound states either solely among the impurities (bipolarons) or between the impurities and the host atoms (trimers) remains elusive. To address these questions, we consider two distinguishable and non-interacting impurities that are embedded into a one-dimensional bosonic gas. The impurities' couplings with the host are individually tuned, spanning the regime from attractive to repulsive interactions. Here, the effective interactions between the impurities can only be mediated in the presence of impurity-medium entanglement, and bound states require the involvement of strong correlations. As such, to account for the relevant inter- and intracomponent correlations we employ the variational multilayer multiconfiguration time-dependent Hartree method for atomic mixtures (ML-MCTDHX) approach [53, 54, 55], which is well established for investigating impurity physics [35].
Inspecting the spatial two-body correlations between the two impurities we reveal that, in general, they are correlated (anti-correlated) when the two impurity-medium coupling strengths possess the same (opposite) sign. To shed more light on the impact of induced impurities' correlations we carefully monitor their relative distance [56], excluding all mean-field type contributions, for varying coupling strengths. A central result of our work is that the impurities' correlated (anti-correlated) behavior is related to a decrease (increase) of their relative distance, thus indicating the presence of an induced attraction (repulsion) between them. This observation is additionally confirmed by constructing an effective two-body model in the weak impurity-medium coupling regime, inspired by the case of indistinguishable impurities [35, 47]. It specifically allows us to assign the impurities' induced interaction strength and sign, as well as other quasiparticle-related properties such as their effective mass and trap frequency.
For strong impurity-medium attractions, we identify the formation of a bipolaron state involving the two distinguishable impurities. This bound quasi-particle state is characterized by the so-called bipolaron energy [32] and by the size of the impurities' dimer state, which decreases exponentially for larger attractions. Proceeding a step further, we find that for such strong attractive impurity-medium interactions the three-body correlation function reveals a trimer state among the two impurities and a corresponding bath atom. To further corroborate the existence of this trimer state, we employ the Jacobi relative distances of the three distinguishable atoms [57], which show an exponentially decreasing trend for increasing impurity-medium attractions.
This work is organized as follows. In Section 2, the three-component setup under consideration is introduced, and in Section 3 we explain the variational method used to obtain the ground state properties of the many-body system. Section 4 elaborates on the possible ground state density configurations upon varying the impurity-medium couplings. The emergence of induced impurity-impurity correlation patterns is explicated in Section 5. The interrelation of the aforementioned induced correlations with the induced attractive and repulsive impurity interactions is provided in Section 6 through monitoring their relative distance and constructing an effective two-body model. Delving into the strongly attractive impurity-medium interaction regime, we demonstrate the formation of a bipolaron state among the two distinguishable impurities in Section 7 and the generation of a trimer state among the impurities and a bath atom in Section 8. We summarize our findings and discuss future perspectives in Section 9. The behavior of the logarithmic negativity, used to quantify the bipartite intercomponent entanglement, is discussed in Appendix A. Appendices B and C provide supplemental information regarding the polaron characteristics and induced effective interactions. In Appendix D we comment on the impact of the impurity mass and the number of bath particles on the ground state properties of the system. Finally, in Appendix E we elaborate on the microscopic excitation processes of the system via a number state analysis.
## 2 Two distinguishable impurities in a bosonic gas
We consider a one-dimensional harmonically trapped three-component mixture. It contains a bosonic medium \(A\) with \(N_{A}=15\) atoms of mass \(m_{A}\) and two distinguishable impurities \(B\) and \(C\), i.e., \(N_{B}=N_{C}=1\), having masses \(m_{B}\) and \(m_{C}\), respectively. The many-body Hamiltonian of this system reads
\[\hat{H}=\sum_{\sigma}\hat{H}_{\sigma}+\sum_{\sigma\neq\sigma^{ \prime}}\hat{H}_{\sigma\sigma^{\prime}}, \tag{1}\]
where \(\hat{H}_{\sigma}\) denotes the Hamiltonian of each component \(\sigma\) and \(\hat{H}_{\sigma\sigma^{\prime}}\) represents the intercomponent interaction contribution with \(\sigma,\sigma^{\prime}\in\{A,B,C\}\). Specifically,
\[\hat{H}_{\sigma}=\sum_{i=1}^{N_{\sigma}}\biggl{(}-\frac{\hbar^{2 }}{2m_{\sigma}}\frac{\partial^{2}}{(\partial x_{i}^{\sigma})^{2}}+\frac{1}{2} m_{\sigma}\omega_{\sigma}^{2}(x_{i}^{\sigma})^{2}+g_{\sigma\sigma}\sum_{i<j} \delta(x_{i}^{\sigma}-x_{j}^{\sigma})\biggr{)}, \tag{2}\] \[\hat{H}_{\sigma\sigma^{\prime}}=g_{\sigma\sigma^{\prime}}\sum_{i= 1}^{N_{\sigma}}\sum_{j=1}^{N_{\sigma^{\prime}}}\delta(x_{i}^{\sigma}-x_{j}^{ \sigma^{\prime}}). \tag{3}\]
Assuming that the system is at ultracold temperatures, it dominantly experiences \(s\)-wave scattering processes that can be described by two-body contact interactions between particles of the same as well as of different species, characterized by the generic strength \(g_{\sigma\sigma^{\prime}}\) [14]. The latter depends on the respective three-dimensional scattering lengths \(a_{\sigma\sigma^{\prime}}^{3D}\) and the transversal confinement frequency \(\omega_{\perp}\), which are experimentally tunable via Feshbach resonances [13, 14] and confinement-induced resonances [12], respectively.
For simplicity, we focus on the mass-balanced case \(m_{\sigma}\equiv m\) (unless stated otherwise) and thus \(\omega_{\sigma}\equiv\omega\). Moreover, we rescale our Hamiltonian in harmonic oscillator units of \(\hbar\omega\), which means that length and interaction strengths are given in units of \(\sqrt{\hbar/m\omega}\) and \(\sqrt{\hbar^{3}\omega/m}\), respectively. Such a three-component system could be experimentally realized [16, 18], e.g., by combining two rubidium isotopes, with \({}^{85}\)Rb emulating the medium and two hyperfine states of \({}^{87}\)Rb [58, 59] representing the impurities. Since our aim is to understand the role of induced interactions between the impurities mediated by the medium in the ground state of the system, it is natural to consider two non-interacting impurities, setting \(g_{BC}=0\), which could be realized, for instance, via magnetic Feshbach resonances [60].
## 3 Variational wave function approach
The ground state of the three-component mixture, described by the Hamiltonian of Eq. (1), is determined within the ML-MCTDHX method [53, 54, 55, 61]. A central aspect of this _ab-initio_ approach is the expansion of the many-body wave function on different layers using a variationally optimized time-dependent many-body basis. This leads to an efficient truncation of the underlying Hilbert space, tailored to capture the relevant inter- and intracomponent correlations. Specifically, the many-body wave function is first expressed in terms of three different sets of \(D_{\sigma}\) species functions as follows
\[\ket{\Psi^{\text{MB}}(t)}=\sum_{i=1}^{D_{A}}\sum_{j=1}^{D_{B}}\sum_{k=1}^{D_{C }}C_{ijk}(t)\ket{\Psi_{i}^{A}(t)}\ket{\Psi_{j}^{B}(t)}\ket{\Psi_{k}^{C}(t)}. \tag{4}\]
The time-dependent coefficients \(C_{ijk}(t)\) bear information about the entanglement between the involved components. For instance, the bipartite entanglement between two components can be analyzed by tracing out the degrees of freedom of the third one and then applying the positive partial transpose criterion on the resulting mixed state [62] (see also Appendix A). Next, the intracomponent correlations are included into the wave function ansatz by expanding each species function as a superposition of permanents \(\ket{\vec{n}(t)}\) weighted by time-dependent expansion coefficients \(C_{i,\vec{n}}^{\sigma}(t)\). In this notation, \(\vec{n}=(n_{1}^{\sigma},\dots,n_{d_{\sigma}}^{\sigma})\) represents the occupation distribution of \(N_{\sigma}\) particles on \(d_{\sigma}\) time-dependent single-particle functions. Additionally, the single-particle functions are expanded into a time-independent discrete variable representation [63] consisting in our case of \(\mathcal{M}_{r}=300\) evenly spaced grid points.
The number of utilized species functions \(D_{\sigma}\) dictates the degree of intercomponent entanglement. For instance, by providing only one species function for each component, i.e., by setting \(D_{A}=D_{B}=D_{C}=1\), the many-body wave function reduces on its top layer to a product state, thereby prohibiting any interspecies entanglement. Such a treatment is commonly referred to as a species mean-field ansatz (sMF) [53]. For two-component mixtures the sMF ansatz is unique; however, in three-component systems various sMF ansatzes can be constructed. As an example, setting \(D_{\sigma}=1\) and \(D_{\sigma^{\prime}},D_{\sigma^{\prime\prime}}>1\), we allow for entanglement generation only between the species \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\), whilst intercomponent correlations with species \(\sigma\) are suppressed. To clearly distinguish among the different possible sMF ansatzes, in the following, we abbreviate as sMF\(\sigma\), where \(\sigma\in\{A,B,C\}\), the ansatz that ignores intercomponent correlations between species \(\sigma\) and the remaining ones. In this sense, the sMFC ansatz is written as
\[\ket{\Psi^{\text{sMFC}}(t)}=\sum_{i=1}^{D_{A}}\sum_{j=1}^{D_{B}}C_{ij1}(t) \ket{\Psi_{i}^{A}(t)}\ket{\Psi_{j}^{B}(t)}\ket{\Psi_{1}^{C}(t)}, \tag{5}\]
where only species \(A\) and \(B\) can become entangled while species \(C\) remains uncorrelated with the other species.
The ground state of the three component mixture is obtained through the imaginary time propagation method. The time-dependent coefficients of each layer, namely the species and single-particle layers, are optimally adapted to the system, e.g. by following the Dirac-Frenkel variational principle [64] in order to determine the underlying ML-MCTDHX equations of motion. The latter correspond to \(D_{A}D_{B}D_{C}\) linear differential equations of motion for the \(C_{ijk}(t)\) coefficients coupled to \(\sum_{\sigma=A,B,C}D_{\sigma}\binom{N_{\sigma}+d_{\sigma}-1}{d_{\sigma}-1}\) nonlinear integrodifferential equations for the species functions and \(d_{A}+d_{B}+d_{C}\) nonlinear integrodifferential equations for the single-particle functions. This
co-moving basis concept minimizes the number of required states for achieving numerical convergence. In this sense, it reduces the computational cost as compared to methods relying on time-independent basis sets, while simultaneously allowing to account for all relevant correlations. The truncation of the Hilbert space is determined by the number of employed species and single-particle functions, defining the numerical configuration space (\(D_{A}\), \(D_{B}\), \(D_{C}\); \(d_{A}\), \(d_{B}\), \(d_{C}\)). For our system, the degree of correlations in the bosonic bath, e.g. as captured by its depletion [65] \(1-n_{0}^{A}\), with \(n_{0}^{A}\) representing the largest eigenvalue of the bath's one-body reduced density matrix, is negligible within the considered interaction strength intervals. This allows us to use only a few orbitals for the medium in order to ensure convergence. On the other hand, the impurities' depletion is in general larger, especially for strongly repulsive interactions, and thus we need to use more orbitals. Herewith, we have checked that employing an orbital configuration (6, 6, 6; 4, 6, 6) results in the convergence of the observables of interest, such as the species densities and intercomponent two-body correlation functions, while the number of equations of motion remains tractable.
## 4 One-body density configurations of the three-component mixture
To investigate the emergent spatial configurations of the three-component impurity setting arising due to different combinations of the involved interactions, we initially employ the \(\sigma\)-component one-body density, which is normalized to unity. Namely, \(\rho_{\sigma}^{(1)}(x)=\langle\Psi^{\text{MB}}|\hat{\Psi}_{\sigma}^{\dagger}(x)\hat{\Psi}_{\sigma}(x)\,|\Psi^{\text{MB}}\rangle\), where \(\hat{\Psi}_{\sigma}^{(\dagger)}\) denotes the bosonic field operator which annihilates (creates) a \(\sigma\)-species atom at position \(x\). In an experiment, the density is routinely detected through _in-situ_ absorption imaging [66, 67, 68]. Our understanding of the mixture's spatial distributions at different interactions is also corroborated by an effective potential picture, which has thus far proven successful in qualitatively explicating various aspects of impurity physics in two-component settings [69, 56, 70]. According to this, each \(\sigma\) component is subjected to an effective potential stemming from the superposition of its external harmonic trap and the density of the complementary components \(\sigma^{\prime}\) weighted by the respective intercomponent interactions, i.e.,
\[V_{\sigma}^{\text{eff}}(x)=V_{\sigma}(x)+\sum_{\sigma^{\prime}\neq\sigma}N_{ \sigma^{\prime}}g_{\sigma\sigma^{\prime}}\rho_{\sigma^{\prime}}^{(1)}(x). \tag{6}\]
Naturally, this is an sMF framework since it ignores intercomponent correlations. Moreover, it is more meaningful for the impurity subsystem, since the impact of the impurity densities on the medium is suppressed. Density profiles of all three components and the impurity effective potentials are provided in Fig. 1 for characteristic impurity-medium interaction configurations, namely \((g_{AB},g_{AC})=(-1.0,-0.2)\), \((1.0,-0.2)\) and \((1.0,1.5)\). The impurities are mutually non-interacting, i.e., \(g_{BC}=0\), and the medium bosons feature throughout \(g_{AA}=0.2\).
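For illustration, Eq. (6) can be evaluated numerically once the one-body densities are known on a spatial grid. The following minimal Python sketch (in harmonic oscillator units, with a placeholder Gaussian bath density standing in for the actual ML-MCTDHX result, and illustrative function names) is not part of the method itself:

```python
import numpy as np

def effective_potential(x, g_couplings, densities, n_atoms, m=1.0, omega=1.0):
    """Evaluate Eq. (6) for one impurity on a spatial grid x.

    g_couplings: {"A": g_sigmaA, ...} couplings to the other components
    densities:   {"A": rho_A(x), ...} one-body densities (normalized to unity)
    n_atoms:     {"A": N_A, ...} particle numbers of the other components
    """
    v_eff = 0.5 * m * omega**2 * x**2            # bare harmonic trap
    for species, g in g_couplings.items():
        v_eff += n_atoms[species] * g * densities[species]
    return v_eff

# Example: B impurity in a bath of N_A = 15 atoms with g_AB = -1.0
x = np.linspace(-10, 10, 300)
rho_a = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # placeholder bath density
v_b = effective_potential(x, {"A": -1.0}, {"A": rho_a}, {"A": 15})
```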
As can be seen, for an overall attractive impurity-medium coupling the bosons of the medium gather in the vicinity of the impurities, which are naturally localized at the trap center [cf. Figure 1(a)]. This distribution of the medium atoms can also be understood in terms of the respective attractive impurity-medium interaction energy \(E_{A\sigma}^{\text{int}}=\langle\Psi^{\text{MB}}|\hat{H}_{A\sigma}|\Psi^{\text{MB}}\rangle\) for \(g_{A\sigma}<0\) with \(\sigma=B,C\). Also, for both \(g_{AB}<0\) and \(g_{AC}<0\) the effective potential of each impurity corresponds to a dipped harmonic trap enforcing its localization, whose degree is, of course, enhanced for stronger attractions [cf. Figure 1(a)]. The value of the attractive interaction determines the degree of spatial localization, i.e., the \(B\) impurity with \(g_{AB}=-1.0\) is more localized than the \(C\) impurity experiencing \(g_{AC}=-0.2\). For sufficiently large attractive impurity-medium couplings (\(|g_{A\sigma}|\gg g_{AA}\)) the impurities form a bipolaron; see the detailed discussion in Section 7.
On the other hand, tuning at least one of the impurity-medium couplings towards the repulsive regime such that \(g_{A\sigma}>g_{AA}\) is satisfied leads to phase-separation among these components, since \(E_{A\sigma}^{\text{int}}>0\). In this case, the impurity forms a shell around the edges of the bath residing around the trap center [51]. Such configurations can be readily observed, for instance, in Figure 1(b), where solely the \(B\) impurity is strongly repulsively coupled with the bath (\(g_{AB}>g_{AA}\)), and also in Figure 1(c), where both impurities phase-separate from the bath due to \(g_{AB}>g_{AA}\) and \(g_{AC}>g_{AA}\). Notice that for strong repulsive impurity-medium couplings the underlying effective potential of the impurity has the form of a double-well potential, which favors the phase-separation among the bath and the corresponding impurity [cf. Figures 1(b) and (c)].
Another interesting phenomenon reflecting the richness of three-component systems arises upon considering distinct interactions between each impurity and the bath. Indeed, varying the impurity-medium coupling for a specific impurity affects the shape of the bath accordingly and, in turn, impacts the distribution of the other impurity. This is visualized in Figures 1(a) and (b), where \(g_{AC}\) is the same while \(g_{AB}\) is modified from attractive to repulsive values, ultimately altering the spatial localization of impurity \(C\); see in particular the peak of \(\rho_{C}^{(1)}(x)\). Therefore, it is possible to implicitly manipulate the distribution of one impurity by adjusting the coupling of the other impurity with the bath, importantly in the absence of direct impurity-impurity interaction. This property, as will be discussed below, can prove crucial for controlling impurity-impurity induced interactions.
## 5 Intercomponent (induced) correlations and entanglement
Next, we shed light on the associated intercomponent correlation patterns with a particular emphasis on the existence of induced correlations between the impurities mediated by the bosonic gas.
Figure 1: One-body \(\sigma\)-species density, \(\rho_{\sigma}^{(1)}(x)\), shown together with the effective potentials [Eq. (6)] of the impurities (see legend). Two distinguishable non-interacting impurities (\(B\), \(C\)) are considered which are individually coupled to a bosonic medium \(A\) with \(g_{AA}=0.2\). The impurity-medium coupling strengths from left to right panels refer to \((g_{AB},g_{AC})=(-1.0,-0.2)\), \((1.0,-0.2)\) and \((1.0,1.5)\). For attractive interactions the medium atoms accumulate in the vicinity of the impurities and their effective potential is attractive. Turning to repulsive couplings a tendency for impurity-medium phase-separation occurs for \(g_{A\sigma}>g_{AA}\).
The intercomponent two-body spatial correlations, or two-body coherence, can be quantified through [68],
\[\mathcal{G}^{(2)}_{\sigma\sigma^{\prime}}(x^{\sigma}_{1},x^{\sigma^{\prime}}_{2} )=\rho^{(2)}_{\sigma\sigma^{\prime}}(x^{\sigma}_{1},x^{\sigma^{\prime}}_{2})- \rho^{(1)}_{\sigma}(x^{\sigma}_{1})\rho^{(1)}_{\sigma^{\prime}}(x^{\sigma^{ \prime}}_{2}). \tag{7}\]
Here, we subtract the probability of independently detecting a \(\sigma\) and a \(\sigma^{\prime}\) atom at positions \(x^{\sigma}_{1}\) and \(x^{\sigma^{\prime}}_{2}\) from the probability to simultaneously measure one at \(x^{\sigma}_{1}\) and the other at \(x^{\sigma^{\prime}}_{2}\). The latter is provided by the reduced two-body density
\[\rho^{(2)}_{\sigma\sigma^{\prime}}(x^{\sigma}_{1},x^{\sigma^{\prime}}_{2})= \langle\Psi^{\text{MB}}|\,\hat{\Psi}^{\dagger}_{\sigma}(x^{\sigma}_{1})\hat{ \Psi}^{\dagger}_{\sigma^{\prime}}(x^{\sigma^{\prime}}_{2})\hat{\Psi}_{\sigma^ {\prime}}(x^{\sigma^{\prime}}_{2})\hat{\Psi}_{\sigma}(x^{\sigma}_{1})|\Psi^{ \text{MB}}\rangle\,, \tag{8}\]
which is normalized to unity. In this sense, the two particles are correlated or bunched (anti-correlated or anti-bunched) if \(\mathcal{G}^{(2)}_{\sigma\sigma^{\prime}}(x^{\sigma}_{1},x^{\sigma^{\prime}}_{2})\) is positive (negative); otherwise, they are referred to as two-body uncorrelated [68, 71].
### Characteristic correlation patterns
First, we study the emergent two-body correlation patterns between the \(B\) impurity and the medium for different intercomponent interactions [Figures 2(a1)-(c1)]. For attractive \(g_{AB}<0\) and \(g_{AC}<0\), the \(B\) impurity is correlated with a bath atom at the same position, see the diagonal of \(\mathcal{G}^{(2)}_{AB}(x^{A}_{1},x^{B}_{2})>0\), while these two particles are anti-correlated when symmetrically placed with respect to the trap center, as shown by the anti-diagonal of \(\mathcal{G}^{(2)}_{AB}(x^{A}_{1},x^{B}_{2})<0\) [Figure 2(a1)]. In this sense, the \(B\) impurity prefers to occupy the same spatial region as the bath.
Figure 2: Two-body correlation function between (a1)-(c1) one bath particle and the \(B\) impurity as well as (a2)-(c2) among the two non-interacting impurities [see Eq. (7)]. Each column corresponds to the same interaction configuration which is from left to right \((g_{AB},g_{AC})=(-1.0,-0.2)\), \((1.0,-0.2)\) and \((1.0,1.5)\). We consider two distinguishable non-interacting impurities and an interacting medium with \(g_{AA}=0.2\). Impurity \(B\) is correlated (anti-correlated) with a bath particle at the same location in the case of attractive (repulsive) \(g_{AB}\), see panel (a1) [(b1), (c1)]. The impurities experience induced correlations when they both couple either repulsively or attractively to the bath [panels (a2), (c2)], while they are anti-correlated when each impurity couples with an opposite sign to the majority species [panel (b2)].
Turning to repulsive \(g_{AB}>0\) and independently of \(g_{AC}\lessgtr 0\), the above-discussed two-body correlation distributions are inverted and the \(B\) impurity features an anti-bunched (bunched) behavior at the same (different) location with a bath particle as can be deduced by the diagonal (anti-diagonal) of \(\mathcal{G}^{(2)}_{AB}(x_{1}^{A},x_{2}^{B})\) [cf. Figures 2(b1) and (c1)]. This trend reflects the impurity-medium phase-separation identified on the density level [Figures 1(b) and (c)].
Let us now discuss the induced correlations among the non-interacting impurities. When both impurities are attractively coupled to their bath they exhibit a bunching tendency which is, of course, mediated by the bosonic gas, see the diagonal of \(\mathcal{G}^{(2)}_{BC}(x_{1}^{B},x_{2}^{C})\) depicted in Figure 2(a2). Otherwise, the impurities are anti-bunched when residing at different locations with respect to \(x=0\). This two-body configuration of the impurities manifests the presence of their attractive induced interactions, regulated by the attractive impurity-medium interactions, as we will discuss in Section 6. Note also that a further increase of the impurity-bath attraction can result in the formation of a bipolaron state, which we analyze in detail within Section 7. A similar two-body impurities' correlation pattern occurs when they both repulsively couple with their bath [Figure 2(c2)]. However, in this case the impurities cluster either at the left or the right side of the bath, while the probability to reside at opposite sides is suppressed [cf. Figure 2(c2)]. This trend, which is inherently related to the impurity-medium phase-separation, has also been observed for two indistinguishable impurities and is known as their coalescence [45]. In sharp contrast, if one impurity couples repulsively and the other attractively to the bath, the reverse of the above-described correlation behavior is observed. Namely, the impurities anti-bunch (bunch) at the same (different) location in terms of the trap center, see Figure 2(b2). This scenario manifests the flexibility offered by three-component mixtures and is connected to the emergence of repulsive impurity-impurity induced interactions, a phenomenon that cannot occur in two-component systems and which we analyze in Section 6.
### Emergent correlation regimes
To provide an overview of the two-body correlation behavior stemming from the interplay of the distinct impurity-medium couplings, we inspect the correlation function spatially integrated over \((-\infty,0]\) (due to symmetry),
\[\mathcal{C}_{\sigma\sigma^{\prime}}=\int_{-\infty}^{0}\,dx_{1}^{\sigma}\int_{ -\infty}^{0}\,dx_{2}^{\sigma^{\prime}}\mathcal{G}^{(2)}_{\sigma\sigma^{\prime }}(x_{1}^{\sigma},x_{2}^{\sigma^{\prime}}). \tag{9}\]
It quantifies the amount of intercomponent correlations or anti-correlations, in the sense that it is positive (negative) when the particles prefer (avoid) to occupy the same region with respect to the trap center1. The phase diagrams of the impurity-medium \(\mathcal{C}_{AB}\) and impurity-impurity \(\mathcal{C}_{BC}\) integrated correlations as a function of \(g_{AB}\) and \(g_{AC}\) are depicted in Figures 3(a) and (b), respectively. Recall that since \(g_{BC}=0\), all emerging impurity correlations are induced by their coupling to the bath.
Footnote 1: Due to parity symmetry the maximum (minimum) value of \(\mathcal{C}_{\sigma\sigma^{\prime}}\) is 0.25 (-0.25) denoting strong bunching (anti-bunching).
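Numerically, Eq. (9) simply amounts to summing \(\mathcal{G}^{(2)}\) over the quadrant \(x_{1},x_{2}<0\); a minimal sketch, assuming the correlation function is given on a uniform grid:

```python
import numpy as np

def integrated_correlation(g2, x):
    """Integrate G^(2)(x1, x2) over x1 < 0 and x2 < 0, cf. Eq. (9).

    g2: 2D array holding G^(2) on the grid x (uniform spacing),
        with axes ordered as (x1, x2).
    """
    dx = x[1] - x[0]
    neg = x < 0                                  # negative half-axis
    return g2[np.ix_(neg, neg)].sum() * dx**2
```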
An anti-correlated (correlated) behavior between the \(B\) impurity and the bath occurs for \(g_{AB}>0\) (\(g_{AB}<0\)) and varying \(g_{AC}\), see also Figures 2(a1)-(c1). Notice also the uncorrelated tendency for strongly attractive \(g_{AC}\) and repulsive \(g_{AB}\) [Figures 3(a), (b)]. Indeed, due to the large \(g_{AC}<0\), both the bath \(A\) and the \(C\) impurity localize at the trap center, minimizing their
spatial overlap with the \(B\) impurity since \(g_{AB}>0\) and thus \(\mathcal{C}_{AB}\) is suppressed. Naturally, a less attractive \(g_{AC}\) enhances the overlap between impurity \(B\) and the bath leading to an anti-correlated behavior. The largest degree of anti-correlation as captured by \(\mathcal{C}_{AB}\) is reached when \(g_{AB}>g_{AA}\) and \(g_{AC}>g_{AA}\) where both impurities form a shell around the bath and coalesce [cf. corresponding region in Figure 3(a)].
Turning to the impurities' correlations, we observe that as long as they both couple either repulsively or attractively to the bath it holds that \(\mathcal{C}_{BC}>0\), implying that they are correlated [see also Figures 2(a2) and (c2)]. However, when the couplings \(g_{AB}\) and \(g_{AC}\) have opposite signs, with one lying in the weak and the other in the strong interaction regime, then mostly \(\mathcal{C}_{BC}<0\), i.e., the impurities are anti-correlated [cf. Figure 2(b2)]. A notable exception takes place if one of the impurities couples strongly repulsively to the bath (e.g. \(g_{AB}>g_{AA}\)) and the other strongly attractively (e.g. \(|g_{AC}|>g_{AA}\)). This leads to a suppressed spatial overlap between the bath and the repulsively interacting impurity2 and thus the bath is only correlated with the attractively coupled impurity, see also the discussion above. Together with the fact that the impurities are spatially separated in this interaction region, if mediated impurity correlations occur they have to be nonlocal. This is indeed the case, since the impurities are found to be anti-correlated, \(\mathcal{C}_{BC}<0\), see the two parameter regimes in Figure 3(b) enclosed by the dashed lines.
Figure 3: (a)-(b) Phase diagram of the intercomponent (see legends) spatially integrated correlation functions \(\mathcal{C}_{\sigma\sigma^{\prime}}\) [Eq. (9)] in the parametric plane of the impurity-medium interaction strengths (\(g_{AB}\), \(g_{AC}\)). A value of \(\mathcal{C}_{\sigma\sigma^{\prime}}<0\) (\(\mathcal{C}_{\sigma\sigma^{\prime}}>0\)) indicates an anti-correlated (correlated) behavior between the atoms of species \(\sigma\) and \(\sigma^{\prime}\), while \(\mathcal{C}_{\sigma\sigma^{\prime}}=0\) denotes the absence of two-body correlations (see also main text). The gray circles correspond to the interaction combinations (\(g_{AB}\), \(g_{AC}\)) depicted in Figures 1 and 2. The regions enclosed by the dashed lines in panel (b) indicate the interaction regions where the impurities do not overlap but are still two-body anti-correlated. The harmonically trapped three component system consists of two non-interacting but distinguishable impurities immersed in a bosonic gas of \(N_{A}=15\) atoms with \(g_{AA}=0.2\).
## 6 Quantification of impurities induced interactions
Below, we examine how the mediated correlations among the distinguishable impurities alter their relative distance and, subsequently, relate the induced impurity-impurity correlation patterns with an effective induced interaction strength. The latter, as will be argued, can be either attractive or repulsive due to the genuine three-component nature of the system, and it is further quantified via an effective two-body model.
### Effect of the induced impurity-impurity correlations on their relative distance
A reliable measure for this purpose, which has also been utilized in two-component settings [56, 71] and can be experimentally monitored via _in-situ_ spin-resolved single-shot measurements [72], is the relative distance between the impurities
\[\langle r_{BC}\rangle=\frac{1}{N_{B}N_{C}}\int\mathrm{d}x_{1}^{B}\mathrm{d}x_ {2}^{C}\left|x_{1}^{B}-x_{2}^{C}\right|\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C}). \tag{10}\]
Specifically, in order to extract the contribution stemming from genuine impurity-medium correlations we estimate the modified relative distance at different correlation levels as dictated by the respective truncation of the many-body (MB) wave function (see also Section 3), namely
\[\Delta\langle r_{BC}\rangle=\langle r_{BC}^{\mathrm{MB}}\rangle-\left[\langle r _{BC}^{\mathrm{sMF}}\rangle+\left(\langle r_{BC}^{\mathrm{sMFB}}\rangle- \langle r_{BC}^{\mathrm{sMF}}\rangle\right)+\left(\langle r_{BC}^{\mathrm{sMFC }}\rangle-\langle r_{BC}^{\mathrm{sMF}}\rangle\right)\right]. \tag{11}\]
Here, sMF stands for the general species mean-field case where all intercomponent correlations are neglected, while sMFB (sMFC) refers to the case in which only intercomponent correlations between the \(B\) (\(C\)) impurity and the medium are ignored [51, 35]. Excluding the sMF contribution as well as the ones corresponding to the entanglement between the bath and impurity \(C\) or \(B\) [cf. last four terms of Eq. (11)] from the relative distance where all correlations are included, i.e., \(\langle r_{BC}^{\mathrm{MB}}\rangle\), we are able to distill the effects originating from the mutual correlation among the impurities and the bosonic gas by tracking \(\Delta\langle r_{BC}\rangle\). As such, \(\Delta\langle r_{BC}\rangle\) captures the genuine effects of the induced correlations as described by \(\mathcal{C}_{BC}\) [Figure 3(b)]. We interpret a value of \(\Delta\langle r_{BC}\rangle\) which is positive (negative) as the signal of emergent repulsive (attractive) induced interactions between the impurities.
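For completeness, Eq. (10) is a double integral over the impurities' two-body density, and Eq. (11) is simple arithmetic on the resulting expectation values; a minimal sketch on a uniform grid (for \(N_{B}=N_{C}=1\)):

```python
import numpy as np

def mean_relative_distance(rho2, x):
    """<r_BC> of Eq. (10) for N_B = N_C = 1, with rho2 on a uniform grid x."""
    dx = x[1] - x[0]
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    return (np.abs(x1 - x2) * rho2).sum() * dx**2

def delta_r(r_mb, r_smf, r_smfb, r_smfc):
    """Modified relative distance of Eq. (11)."""
    return r_mb - (r_smf + (r_smfb - r_smf) + (r_smfc - r_smf))
```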
The modified relative distance, \(\Delta\langle r_{BC}\rangle\), is presented in Figure 4(a) with respect to the \(g_{AC}\) coupling and for characteristic fixed \(g_{AB}\) values. In general, we find an induced attraction between the impurities when they both couple either attractively or repulsively to the medium, while they feature a mediated repulsion if one of them couples attractively and the other repulsively to the bosonic gas. Since \(\Delta\langle r_{BC}\rangle\) is closely related to \(\mathcal{C}_{BC}\), an induced correlation (anti-correlation) between the impurities can be associated with their attractive (repulsive) induced interaction and vice versa [cf. Figures 3(b) and 4(a)]. For instance, considering repulsive \(g_{AB}\) and tuning \(g_{AC}\) to weak attractions, \(\Delta\langle r_{BC}\rangle\) becomes positive, denoting an induced repulsion between the impurities. However, for stronger repulsive \(g_{AC}\), \(\Delta\langle r_{BC}\rangle\) is negative and thus attractive induced interactions occur, maximizing in the coalescence regime where \(g_{AB}\) and \(g_{AC}\) are both strongly repulsive, see also the inset of Figure 3(a). Furthermore, in the case of suppressed mediated correlations between the impurities (\(\mathcal{C}_{BC}\approx 0\)), i.e., in the trivial case where \(g_{AB}=0\) or for strong attractive \(g_{AC}\) and repulsive \(g_{AB}\) [cf. Figure 3(b)], also \(\Delta\langle r_{BC}\rangle\) vanishes (see Figure 4(a) for strong attractive \(g_{AC}\) and \(g_{AB}=0.2,1.0\)). In the last scenario, the gradually increasing \(g_{AC}\) attraction leads to a reduction (enhancement) of the correlation between the bath and the \(B\) (\(C\)) impurity, whose interplay impedes the development of mediated impurity correlations and therefore induced interactions.
In the case of an attractively coupled impurity \(B\), e.g. \(g_{AB}=-1.0,-0.2\), \(\Delta\langle r_{BC}\rangle\) decreases when \(g_{AC}\) is tuned to strong attractive values, a phenomenon also occurring for \(\mathcal{C}_{BC}\) [Figure 3(b)]. Here, increasing the attraction between impurity \(C\) and the bath enhances their correlation, while at sufficiently strong attractive \(g_{AC}\) the correlation between the bath and the impurity \(B\) begins to slightly decrease for constant attractive \(g_{AB}\) (cf. Figure 3). This competition between the different impurity-medium correlations suggests an interesting interplay between the individual intercomponent correlations and could in principle hinder the bath from mediating correlations between the impurities, leading eventually to the observed reduction of the induced impurity-impurity correlation/interaction. Such an interplay of intercomponent correlations is indicative of a more intricate and generic correlation transfer process among the species [35], which is an exciting future perspective but lies beyond the focus of our study. However, note that decreasing \(g_{AB}=g_{AC}\) results in a saturation of the impurity-impurity correlation, a fact that will also become important later in the discussion of the bipolaron formation in Section 7.
Finally, notice that a similar qualitative behavior of the intercomponent correlations, and thus also of \(\Delta\langle r_{BC}\rangle\), takes place when either increasing the number of atoms of the bosonic medium or the bare mass of one of the impurities, see Appendix D. In fact, both scenarios lead, for repulsive \(g_{AB}\) and \(g_{AC}\), to amplified impurity entanglement and a stronger attractive induced interaction.
### Effective two-body model
To determine the strength of induced impurity-impurity interactions, we reduce the three-component many-body system to an effective two-body model consisting of two interacting quasi-particles. This is a common approach to identify polaron properties from many-body simulations and has been successfully applied to two indistinguishable impurities [47], but not to distinguishable ones.
Figure 4: (a) and its inset: Modified relative distance [Eq. (11)] reflecting the effects on \(\langle r_{BC}^{\text{MB}}\rangle\) which are exclusively caused by the induced impurities correlation as a function of \(g_{AC}\) and for different fixed \(g_{AB}\). (b) Induced interaction strength between the two Bose polarons estimated by maximizing the overlap between the two-body correlation functions \(\mathcal{G}_{BC}^{(2),\text{eff}}\) obtained from the effective two-body model and \(\mathcal{G}_{BC}^{(2)}\) predicted within the many-body approach (see main text). (c) Fidelity \(\mathcal{F}_{BC}\) of the impurities wave function as found in the many-body method and the effective two-body model with respect to the impurity-medium couplings \(g_{AB}\) and \(g_{AC}\). We consider two non-interacting but distinguishable impurities immersed in a bosonic gas of \(N_{A}=15\) atoms with \(g_{AA}=0.2\).
Here, the effective two-body model employs the effective potential \(V_{\sigma}^{\rm eff}(x^{\sigma})\) [defined in Eq. (6)] for each impurity and thus neglects impurity-medium correlations. Also, the underlying induced impurity interactions are represented by a contact potential of strength \(g_{BC}^{\rm eff}\) (a treatment with finite-range interactions leads to similar results, as demonstrated in Appendix C). Specifically, the corresponding effective two-body Hamiltonian reads
\[H^{(2),\rm eff}=\sum_{\sigma=B,C}\left(-\frac{\hbar^{2}}{2m_{\sigma}}\frac{ \partial^{2}}{(\partial x^{\sigma})^{2}}+V_{\sigma}^{\rm eff}(x^{\sigma}) \right)+g_{BC}^{\rm eff}\delta(x^{B}-x^{C}). \tag{12}\]
The effective potential accounts for the effective mass and frequency of each impurity [73]. These effective parameters originate from the polaron picture where the impurity becomes dressed by the excitations of the bath, see Appendix B for a more detailed discussion.
In order to deduce the effective interaction strength \(g_{BC}^{\rm eff}\), we minimize \(\Delta\mathcal{G}_{BC}^{(2)}=\int\mathrm{d}x_{B}\mathrm{d}x_{C}\left|\mathcal{G}_{BC}^{(2)}-\mathcal{G}_{BC}^{(2),\rm eff}\right|^{2}\), where \(\mathcal{G}_{BC}^{(2)}\) and \(\mathcal{G}_{BC}^{(2),\rm eff}\) are the impurities' two-body correlation functions calculated from the many-body three-component mixture and the effective two-body model, respectively 3. By estimating the value of \(g_{BC}^{\rm eff}\) which minimizes \(\Delta\mathcal{G}_{BC}^{(2)}\), we are able to associate the emergent induced correlation pattern between the impurities described in Fig. 3(b) with a corresponding induced interaction strength \(g_{BC}^{\rm eff}\). The resulting behavior of \(g_{BC}^{\rm eff}\), provided in Figure 4(b) for fixed \(g_{AB}\) and varying \(g_{AC}\), agrees qualitatively with the observations made for \(\Delta\langle r_{BC}\rangle\). The impurities experience an induced attraction when they both couple either attractively or repulsively to the bath, corresponding to an induced correlation; otherwise they feature an induced repulsion related to their anti-correlated tendency 4. To test the validity range of the effective two-body model [Eq. (12)] for describing the impurities, we calculate the fidelity \(\mathcal{F}_{BC}\) of their ground state wave function as obtained from \(H^{(2),\rm eff}\) (\(\left|\Phi_{\rm eff}^{BC}\right\rangle\)) and the full three-component mixture (\(\left|\hat{\Psi}_{i}^{BC}\right\rangle\)) 5. The fidelity is provided in Figure 4(c) as a function of \(g_{AC}\) and for different fixed values of \(g_{AB}\). It becomes apparent that \(H^{(2),\rm eff}\) is not valid for \(g_{AA}<g_{A\sigma}\), where the respective impurity phase-separates from the bath.
Footnote 3: We find \(\Delta\mathcal{G}_{BC}^{(2)}\lesssim 10^{-5}\) for all considered interaction strengths \(g_{AB}\) and \(g_{AC}\).
Footnote 4: Note that \(g_{BC}^{\rm eff}=0\) if one of the impurities does not interact with the bath which further confirms the validity of the effective model predictions since in this case no correlations are mediated.
Footnote 5: For this reason we use the Schmidt decomposition \(\left|\Psi^{\rm MB}\right\rangle=\sum_{i}\sqrt{\lambda_{i}}\left|\Psi_{i}^{A}\right\rangle\otimes\left|\hat{\Psi}_{i}^{BC}\right\rangle\), where the \(\lambda_{i}\) correspond to the Schmidt coefficients [74, 75]. As such, the fidelity is expressed as \(\mathcal{F}_{BC}=\left|\sum_{i}\sqrt{\lambda_{i}}\,\langle\hat{\Psi}_{i}^{BC}|\Phi_{\rm eff}^{BC}\rangle\right|^{2}\).
## 7 Bipolaron formation
Strong attractive induced interactions between two dressed impurities, commonly occurring for strong attractive impurity-medium direct interactions, eventually lead to the formation of a bound dimer quasi-particle state, the so-called bipolaron [32, 42]. In order to probe the presence of such a dimer impurity bound state in our setup, we study the bipolaron energy,
\[E_{\rm bip}(g_{AB},g_{AC})=E(g_{AB},g_{AC})-E_{1}(g_{AB})-E_{1}(g_{AC})+E_{0}. \tag{13}\]
Here, \(E(g_{AB},g_{AC})\) denotes the total energy of the system including the two distinguishable impurities, \(E_{0}\) is the energy of the bosonic gas in the absence of impurities and \(E_{1}(g_{AB})\), \(E_{1}(g_{AC})\) is the energy of one impurity coupled to the bath. The bipolaron energy is presented in Figure 5(a)
covering a wide range of attractive and repulsive impurity-medium interactions, \(g_{AB}\) and \(g_{AC}\). It features a rapid decrease when both impurities couple attractively to the medium, thereby evincing the formation of a bound state6.
Footnote 6: The bipolaron energy decreases exponentially if both impurity-medium couplings (\(g_{AB}\), \(g_{AC}\)) are equally varied from the non-interacting limit to the strongly attractive regime, i.e., along the diagonal in Figure 5(a).
A complementary observable used for the identification of the bipolaron is the spatial size of this dimer state. This is naturally captured by \(\sigma\sim\sqrt{\langle r_{BC}^{2}\rangle}\), where \(\langle r_{BC}^{2}\rangle\) is the squared relative distance [cf. Eq. (10)] between the impurities \(B\) and \(C\)[32]. Specifically, in the following, we track \(\sqrt{\sigma/\sigma_{0}}\) with \(\sigma_{0}\) being the distance in the uncoupled scenario, i.e., at \(g_{AB}=g_{AC}=0\), such that we explicitly estimate the impact of the impurity-medium interactions on the dimer size. This is depicted in Figure 5(a) as contour dashed lines along which \(\sqrt{\sigma/\sigma_{0}}\) is constant in the \(g_{AB}\)-\(g_{AC}\) plane on top of the bipolaron energy. It can be readily seen that for increasing magnitude of the attractive impurity-medium couplings, i.e., \(g_{AB}\) and \(g_{AC}\), the size of the dimer state shrinks further, see in particular the dashed lines in Figure 5 which from bottom left to top right correspond to \(\sqrt{\sigma/\sigma_{0}}\approx 0.18,0.29,0.65\).
The bipolaron dimer state refers to the bunching behavior of the impurities which manifests in the elongated shape of their two-body density \(\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C})\) along the diagonal. In the non
Figure 5: (a) Bipolaron energy, \(E_{\text{bip}}\), as a function of the intercomponent coupling strengths \(g_{AB}\) and \(g_{AC}\). The dashed lines represent contours along which the size of the dimer state \(\sigma\) remains fixed and in particular from bottom left to top right correspond to \(\sqrt{\sigma/\sigma_{0}}\approx 0.18,0.29,0.65\). (b), (c) Reduced two-body impurities’ density \(\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C})\) for \((g_{AB},g_{AC})=(-0.5,-0.5)\) and \((-1.5,-1.5)\), respectively [see also corresponding gray dots in panel (a)]. The region where \(\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C})=\rho_{\sigma\sigma^{\prime}}^{(2)}(0,0)/2\) is fitted to an ellipse (white dotted line) and shown together with the semi-minor and semi-major axis (black lines). The corresponding eccentricity is depicted in panel (d) assuming \(g_{AB}=g_{AC}\). The transition to a bipolaron state where the eccentricity saturates for increasing impurity-medium attractions and the size of the dimer state is \(\sqrt{\sigma/\sigma_{0}}\approx 0.29\) occurs at \(g_{AC}=-1.5\) (gray dashed line). We consider two non-interacting but distinguishable impurities immersed in a bosonic gas of \(N_{A}=15\) atoms with \(g_{AA}=0.2\).
interacting case, i.e., \(g_{AB}=g_{AC}=0\), \(\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C})\) is circularly symmetric in the \(x_{1}^{B}-x_{2}^{C}\) plane and becomes gradually elongated for larger attractions due to the mediated attraction between the impurities, see e.g. Figures 5(b) and (c) for the cases \((g_{AB},g_{AC})=(-0.5,-0.5)\) and \((-1.5,-1.5)\), respectively, also marked as gray dots in Figure 5(a). To quantify the degree of the aforementioned elongation, we fit the half maximum of the impurities' two-body density7, i.e. \(\rho_{BC}^{(2)}(0,0)/2\), to a rotated ellipse [see white dotted lines in Figures 5(b) and (c)] and determine the corresponding eccentricity \(e=\sqrt{1-b^{2}/a^{2}}\), where \(a\) (\(b\)) denotes the semi-major (semi-minor) axis marked by the black lines of the ellipse8. Evidently, for \(e=0\), \(\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C})\) is circularly symmetric, while in the case of \(0<e<1\) it is elongated, having the shape of an ellipse.
Footnote 7: We remark that choosing \(\rho_{BC}^{(2)}(0,0)/2\) for the fitting is employed for convenience. Indeed, also other density values were used, e.g. \(\rho_{BC}^{(2)}(0,0)/4\), verifying the same behavior of the eccentricity.
Footnote 8: For the fitting we use the general ellipse equation \(\alpha x_{1}^{2}+\beta x_{1}x_{2}+\gamma x_{2}^{2}+\delta x_{1}+\epsilon x_{2}+\phi=0\), which in the frame of the ellipse reduces to \(\bar{x_{1}}^{2}/a^{2}+\bar{x_{2}}^{2}/b^{2}=1\).
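As an illustration of this fitting procedure, the sketch below (assuming scikit-image tooling; the contour-extraction conventions are our own choice, and the global half-maximum used here coincides with \(\rho_{BC}^{(2)}(0,0)/2\) for a density peaked at the trap center) extracts the half-maximum contour of a sampled two-body density and fits it with an ellipse model to obtain the eccentricity.

```python
import numpy as np
from skimage.measure import find_contours, EllipseModel

def density_eccentricity(rho2, x):
    # rho2: two-body density sampled on a square grid, x: 1D coordinate array.
    contours = find_contours(rho2, rho2.max() / 2.0)     # half-maximum level
    if not contours:
        return np.nan
    pts = x[0] + max(contours, key=len) * (x[1] - x[0])  # indices -> coordinates
    ellipse = EllipseModel()
    if not ellipse.estimate(pts):
        return np.nan
    _, _, a, b, _ = ellipse.params       # center, semi-axes, rotation angle
    a, b = max(a, b), min(a, b)          # semi-major / semi-minor axis
    return np.sqrt(1.0 - (b / a) ** 2)   # eccentricity e
```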
The eccentricity of the impurities' two-body density is depicted in Figure 5(d) for \(g_{AB}=g_{AC}\). By tuning the impurity-medium coupling from the non-interacting limit towards strong attractions, \(e\) increases from \(e\approx 0\) at \(g_{AB}=g_{AC}=0\) to finite positive values until it saturates at around \(g_{AB}\approx-1.5\). A larger attraction leads only to an additional shrinking of the dimer size, see in particular the exponential decrease of \(\sqrt{\sigma/\sigma_{0}}\) in Figure 5(d), leaving the shape of \(\rho_{BC}^{(2)}(x_{1}^{B},x_{2}^{C})\) almost unchanged. In this sense, we deduce that the bipolaron state is formed at \(g_{AB}=g_{AC}\approx-1.5\) corresponding to \(\sqrt{\sigma/\sigma_{0}}\approx 0.29\) [vertical gray dashed line in Figure 5(d)]. This observation allows us to generalize our conclusions for the bipolaron formation also in the case of \(g_{AB}\neq g_{AC}\) from the critical size of the dimer state being \(\sqrt{\sigma/\sigma_{0}}\lesssim 0.29\), which corresponds to the central contour dashed line in Figure 5(a).
We remark that the above-described behavior of both \(E_{\rm bip}(g_{AB},g_{AC})\) and \(\sigma/\sigma_{0}\) is in accordance with previously studied two-component systems containing two indistinguishable bosonic impurities that form a bipolaron9 in the strongly attractive coupling regime [32]. However, our results generalize these findings demonstrating the existence of a bipolaron in the case of two distinguishable impurities and suggesting that this bound state is robust to individual variations of \(g_{AB}\) or \(g_{AC}\) as indicated by the contour lines in Figure 5. Another aspect that we have addressed is that increasing the mass of one impurity, e.g. considering \(m_{B}=2\), leads to a faster reduction of the dimer state size as well as the bipolaron energy for decreasing \(g_{AB}=g_{AC}\) while the eccentricity saturates at smaller impurity-medium attractions as compared to the mass-balanced case. This suggests, as expected, that a heavier impurity facilitates bipolaron formation.
Footnote 9: We have also verified that upon considering two indistinguishable bosonic impurities our results regarding the bipolaron energy, dimer size and eccentricity coincide with those of the three-component setup with \(g_{AB}=g_{AC}\).
## 8 Three-body correlations and trimer state
In the following, we aim to shed light on the existence of three-body correlations appearing in the ground state of the two distinguishable impurities embedded into the bosonic gas. For this we resort to a spatially resolved three-body correlation function, which is a straightforward extension of the two-body one defined in Eq. (7) and reads
\[\mathcal{G}_{ABC}^{(3)}(x_{1}^{A},x_{2}^{B},x_{3}^{C})=\rho_{ABC}^{(3)}(x_{1}^{ A},x_{2}^{B},x_{3}^{C})-\rho_{A}^{(1)}(x_{1}^{A})\rho_{B}^{(1)}(x_{2}^{B}) \rho_{C}^{(1)}(x_{3}^{C}). \tag{14}\]
The three particles are correlated (anti-correlated) if \(\mathcal{G}^{(3)}_{ABC}(x_{1}^{A},x_{2}^{B},x_{3}^{C})>0\) (\(\mathcal{G}^{(3)}_{ABC}(x_{1}^{A},x_{2}^{B},x_{3}^{C})<0\)), whilst a vanishing \(\mathcal{G}^{(3)}_{ABC}(x_{1}^{A},x_{2}^{B},x_{3}^{C})=0\) implies that they are three-body uncorrelated. Moreover, the reduced three-body density
\[\rho^{(3)}_{ABC}(x_{1}^{A},x_{2}^{B},x_{3}^{C})=\langle\Psi^{\rm MB }|\,\hat{\Psi}^{\dagger}_{A}(x_{1}^{A})\hat{\Psi}^{\dagger}_{B}(x_{2}^{B})\hat {\Psi}^{\dagger}_{C}(x_{3}^{C})\hat{\Psi}_{C}(x_{3}^{C})\hat{\Psi}_{B}(x_{2}^{ B})\hat{\Psi}_{A}(x_{1}^{A})|\Psi^{\rm MB}\rangle\,, \tag{15}\]
is the normalized and spatially resolved probability of finding at the same time a bath particle at position \(x_{1}^{A}\) and the impurities \(B\) and \(C\) at positions \(x_{2}^{B}\) and \(x_{3}^{C}\)[76].
The three-body correlation function is depicted in Figures 6(a) and (b) for the case of strong repulsion between impurity \(B\) and the bath (\(g_{AB}=1\)) and either weak attractive or repulsive couplings between the bath and the \(C\) impurity, namely \(g_{AC}=-0.2\) and \(0.2\), respectively. Moreover, for visualization purposes and completeness, we additionally showcase within the \(x_{1}^{A}\)-\(x_{2}^{B}\), \(x_{1}^{A}\)-\(x_{3}^{C}\) and \(x_{2}^{B}\)-\(x_{3}^{C}\) planes the underlying two-body correlation functions \(\mathcal{G}^{(2)}_{AB}(x_{1}^{A},x_{2}^{B})\), \(\mathcal{G}^{(2)}_{AC}(x_{1}^{A},x_{3}^{C})\) and \(\mathcal{G}^{(2)}_{BC}(x_{2}^{B},x_{3}^{C})\), respectively10. Focusing on \(g_{AC}=-0.2\), it becomes evident that \(\mathcal{G}^{(3)}_{ABC}(x_{1}^{A},x_{2}^{B},x_{3}^{C})\) fragments into two correlated and two anti-correlated parts. The correlated segments indicate that it is likely for one bath atom and the \(C\) impurity to reside on the same side with respect to the trap center, while the repulsively coupled impurity \(B\) prefers to be on the opposite side. On the other hand, the anti-correlated fragments suggest that a configuration where the impurities and a bath atom are at the same location is not favorable. The spatial arrangement of these fragments is altered in the three-dimensional space if the sign of \(g_{AC}\) is inverted, in the sense that the correlated and anti-correlated regions are rotated by roughly \(90^{\circ}\) around the \(x_{2}^{B}\) direction. In such a configuration the impurities are located on the same side with respect to the trap center and a bath atom lies on the opposite side. The corresponding two-body correlation functions \(\mathcal{G}^{(2)}_{AC}(x_{1}^{A},x_{3}^{C})\) and \(\mathcal{G}^{(2)}_{BC}(x_{2}^{B},x_{3}^{C})\) become inverted, whereas \(\mathcal{G}^{(2)}_{AB}(x_{1}^{A},x_{2}^{B})\) preserves its pattern, see the contours in Figures 6(a) and (b).
Footnote 10: As an example, notice that the contours in the \(x_{1}^{A}\)-\(x_{2}^{B}\) and \(x_{2}^{B}\)-\(x_{3}^{C}\) planes of Figure 6(c) correspond to the \(\mathcal{G}^{(2)}_{AB}(x_{1}^{A},x_{2}^{B})\) and \(\mathcal{G}^{(2)}_{BC}(x_{2}^{B},x_{3}^{C})\) illustrated in Figures 2(b1) and (b2), respectively.
Subsequently, we turn to strongly attractive impurity-medium interactions with \(g_{AB}=g_{AC}\). Here, the three-body density \(\rho^{(3)}_{ABC}(x_{1}^{A},x_{2}^{B},x_{3}^{C})\) becomes elongated exhibiting an ellipsoidal shape, see e.g. Figure 6(d) for \((g_{AB},g_{AC})=(-1.5,-1.5)\). Thereby, the three-body density is stretched along the \((x_{1}^{A},x_{2}^{B},x_{3}^{C})\)-direction, i.e., the diagonal of the coordinate system, demonstrating a bunching behavior of the three particles. In particular, the corresponding three-body correlation function, presented in Figure 6(c), features a correlated pattern along the diagonal around which a shell-like structure consisting of anti-correlated fragments is formed.
To quantify the deformation of the three-body density, we fit its half maximum, i.e., \(\rho^{(3)}_{ABC}(0,\,0,0)/2\), to a rotated ellipsoid (see white dashed lines in Figure 6(d) corresponding to a profile of the ellipsoid). Specifically, we fit the ellipsoid equation \(\tilde{x_{1}}^{2}/a^{2}+\tilde{x_{2}}^{2}/b^{2}+\tilde{x_{3}}^{2}/c^{2}=1\), where \(\tilde{x_{i}}\) refers to the coordinate system of the ellipsoid spanned by its semi-axis with lengths \(a\), \(b\) and \(c\) [green lines in Figure 6(d)]. From the semi-axis we determine three eccentricities, namely \(e_{ab}=\sqrt{1-b^{2}/a^{2}}\), \(e_{ac}=\sqrt{1-c^{2}/a^{2}}\) and \(e_{bc}=\sqrt{1-c^{2}/b^{2}}\) with \(a\geq b\geq c\). These eccentricities are depicted in Figure 6(e) together with the relative deviation, \(err\), from the ellipsoid function for varying \(g_{AB}\) and assuming \(g_{AB}=g_{AC}\). In the non-interacting case, i.e., \(g_{AB}=g_{AC}=0\), the eccentricities are already finite indicating a deviation from a spherical shape, which is in contrast to the bipolaron [cf. Figure 5(d)]. This is attributed to the presence of finite intraspecies interactions among the bath particles causing the observed spatial deformation. Importantly, the eccentricities
show an increasing tendency for stronger attractive values of \(g_{AB}=g_{AC}\), meaning that the elongation of the ellipsoid is enhanced until it saturates at around \(g_{AB}=g_{AC}\approx-1.5\).
A further characterization of the size of the three-body cluster at strong attractions is achieved by inspecting the hyperspherical radius \(\langle r_{\sigma-\sigma^{\prime}-\sigma^{\prime\prime}}^{(3)}\rangle\) and the Jacobi relative distance \(\langle r_{\sigma^{\prime}\sigma^{\prime\prime}-\sigma^{\prime}}^{(3)}\rangle\). The latter denotes the distance between the atom \(\sigma\) and the center-of-mass of the particles \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\)[57, 77, 78]. These observables are defined as
\[\langle r_{A-B-C}^{(3)}\rangle=\frac{1}{N_{A}N_{B}N_{C}}\int\mathrm{d}x_{1}^{A}\mathrm{d}x_{2}^{B}\mathrm{d}x_{3}^{C}\sqrt{(x_{1}^{A})^{2}+(x_{2}^{B})^{2}+(x_{3}^{C})^{2}}\,\rho_{ABC}^{(3)}(x_{1}^{A},x_{2}^{B},x_{3}^{C}), \tag{16}\] \[\langle r_{\sigma^{\prime}\sigma^{\prime\prime}-\sigma}^{(3)}\rangle=\frac{1}{N_{A}N_{B}N_{C}}\int\mathrm{d}x_{1}^{A}\mathrm{d}x_{2}^{B}\mathrm{d}x_{3}^{C}\left|x^{\sigma}-\frac{1}{2}\left(x^{\sigma^{\prime}}+x^{\sigma^{\prime\prime}}\right)\right|\rho_{ABC}^{(3)}(x_{1}^{A},x_{2}^{B},x_{3}^{C}), \tag{17}\]
with \(\sigma,\sigma^{\prime},\sigma^{\prime\prime}\in\{A,B,C\}\) and \(\sigma\neq\sigma^{\prime}\), \(\sigma\neq\sigma^{\prime\prime}\), \(\sigma^{\prime}\neq\sigma^{\prime\prime}\). Note that in the present case \(\langle r_{AB-C}^{(3)}\rangle=\langle r_{AC-B}^{(3)}\rangle\), since impurity \(B\) and \(C\) have identical mass and are coupled with the same strength to the bath. Figure 6(f) reveals that for stronger impurity-medium attractions the hyperspherical radius decreases exponentially implying an exponential shrinking of the size of the three-body cluster. The same exponential decrease is also captured by the expectation values of the Jacobi relative distances where we find \(\langle r_{BC-A}^{(3)}\rangle<\langle r_{AB-C}^{(3)}\rangle\) reflecting the fact that the bath atoms extend over a larger spatial region than the impurities due to the repulsive \(g_{AA}\). The above properties imply the formation of a bound trimer state for couplings \(g_{AB}=g_{AC}\leq-1.5\) corresponding to values where the ellipsoidal structure of the three-body density saturates. In this sense, the formation of a bipolaron is accompanied by the development of a bound trimer state.
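Numerically, both expectation values reduce to simple quadratures over the sampled three-body density. A minimal sketch for the hyperspherical radius of Eq. (16) (the grid and normalization conventions are our assumptions) reads:

```python
import numpy as np

def hyperspherical_radius(rho3, x):
    # rho3: three-body density sampled on a cubic grid, x: 1D coordinate array.
    dx = x[1] - x[0]
    XA, XB, XC = np.meshgrid(x, x, x, indexing="ij")
    r = np.sqrt(XA ** 2 + XB ** 2 + XC ** 2)
    # Normalize via the density's own norm, which equals N_A*N_B*N_C in Eq. (16).
    return np.sum(r * rho3) * dx ** 3 / (np.sum(rho3) * dx ** 3)
```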
## 9 Conclusions and perspectives
We have studied the correlation properties in the ground state of two non-interacting distinguishable impurities immersed in a bosonic bath, with the entire three-component system being harmonically trapped. The impurities become dressed by the excitations of the bosonic gas, generating quasiparticle states, herein Bose polarons, having characteristic properties such as effective mass and featuring induced correlations. In order to appreciate the impact of inter- and intracomponent correlations we rely on the variational ML-MCTDHX method, whose flexible wave function truncation ansatz allows us to operate at different correlation orders. An emphasis is placed on the high tunability of the three-component setting, unveiling rich density and correlation patterns, the manipulation of both the sign and the strength of the impurities' induced interactions, as well as the formation of bound impurity states.
Specifically, we demonstrate that upon varying the involved impurity-medium couplings, both impurities can either localize at the trap center (attractive intercomponent interactions), form a shell around the bosonic gas (repulsive interactions), i.e., phase-separate, or one of them localizes and the other phase-separates (alternating signs of impurity-medium couplings). These density configurations can be understood, at least qualitatively, in terms of an effective potential picture for the impurities which refers to a dipped harmonic oscillator (double-well) for attractive (repulsive) intercomponent interactions.
A detailed characterization of the induced correlations is provided in a wide range of impurity-medium interactions, aiming to expose their intricate role. Inspecting the two-body intercomponent correlation functions we find that the bosonic gas mediates anti-correlations among the impurities if one of them couples repulsively and the other attractively to it. In contrast, induced two-body correlations occur as long as both impurities couple either attractively or repulsively to their medium. The origin of the aforementioned correlation patterns is traced back to the spatial configurations of each component. This means that if the impurities have a finite spatial overlap with the bath, the latter mediates two-body correlations between them. Interestingly, there is also the possibility that the impurities are not overlapping but can still be correlated, implying that non-local correlations are in play. To quantify the strength and sign of the induced interactions we employ the relative two-body distance among the impurities, after subtracting all contributions stemming from mean-field effects. In this sense, it is demonstrated that induced two-body correlations (anti-correlations) are related to mediated attractive (repulsive) impurity interactions. These findings are further supported by an effective two-body model containing the impurities' effective trapping potential and their induced interactions. Importantly, this approach allows us to
determine the strength and sign of the effective interactions mediated between the impurities through a comparison with the full many-body results. Moreover, constructing an effective one-body Hamiltonian enables us to estimate the effective mass and trapping frequency of each distinguishable impurity (polaron), see Appendix B.
Evidence for bipolaron formation is provided when both impurities are strongly attractively coupled to the bosonic gas, in the sense that the bipolaron energy and the size of the underlying dimer state rapidly decrease for stronger attraction. Interestingly, we determine the intercomponent three-body correlation function, according to which overall weak three-body correlations exist and become enhanced for strongly attractive impurity-medium interactions, signaling the formation of trimers among the impurities and an atom of the medium.
In this investigation we have restricted ourselves to the ground state of the three-component mixture. Further understanding of the character of the impurities' induced interactions, in particular their nonlocal character and their dependence on the statistics of the medium, is an interesting perspective. Also, the emulation of spectroscopic schemes that will allow the identification of the ensuing polaron states and excitations [24, 79] constitutes an intriguing direction. Another straightforward extension would be to explore the nonequilibrium impurities' dynamics in order to understand the build-up of induced correlations. An additional fruitful research direction is to understand Bose polaron formation when indistinguishable impurities are immersed in an attractive two-component gas forming a droplet. Certainly, studying correlation effects in particle-balanced three-component settings, with an emphasis on the few- to many-body crossover and in particular close to the pair immiscibility threshold, is worth pursuing.
## Acknowledgements
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- SFB 925 -- project 170620586. S.I.M. gratefully acknowledges financial support from the NSF through a grant for ITAMP at Harvard University.
## Appendix A Behavior of the bipartite entanglement
A standard measure to estimate the bipartite entanglement of mixed states that exist in a multi-component system11 is encapsulated in the logarithmic negativity [81, 82, 62, 83]. It is based on the partial transpose of the two-body species reduced density matrix12, which, e.g. referring to species \(\sigma\) and \(\sigma^{\prime}\), is obtained by integrating out the degrees of freedom of species \(\sigma^{\prime\prime}\) leading to \(\rho_{\sigma\sigma^{\prime}}^{(2),\text{spec}}=\text{Tr}_{\sigma^{\prime \prime}}\left(|\Psi^{\text{MB}}\rangle\langle\Psi^{\text{MB}}|\right)=\sum_{ ijlm}\sum_{k}C_{ijk}C_{lmk}^{*}|\Psi_{i}^{\sigma}\rangle|\Psi_{j}^{ \sigma^{\prime}}\rangle\langle\Psi_{l}^{\sigma}|\langle\Psi_{m}^{\sigma^{ \prime}}|\).
Footnote 11: Notice that, for instance, the von-Neumann entropy as an entanglement measure is well-defined in a two-species setting but it is not applicable in multi-component ones [80].
Footnote 12: This is completely different from the two-body density matrix of two particles given by Eq. (8).
Its partial transpose \(T_{\sigma}\) with respect to species \(\sigma\) is calculated by exchanging the indices \(i\) and \(l\) associated with species \(\sigma\), i.e., \(\left(\rho_{\sigma\sigma^{\prime}}^{(2),\text{spec}}\big|_{ijlm}\right)^{T_{\sigma}}=\rho_{\sigma\sigma^{\prime}}^{(2),\text{spec}}\big|_{ljim}\). Calculating the eigenvalues of \(\left(\rho_{\sigma\sigma^{\prime}}^{(2),\text{spec}}\right)^{T_{\sigma}}\) and in particular summing up its negative eigenvalues \(\mu_{i}\) yields the so-called negativity, \(\mathcal{N}_{\sigma\sigma^{\prime}}=\sum_{i}|\mu_{i}|\). Subsequently, the logarithmic negativity reads
\[\mathcal{E}_{\sigma\sigma^{\prime}}=\log_{2}\left(1+2\mathcal{N}_{\sigma\sigma ^{\prime}}\right). \tag{18}\]
This measure exploits the fact that for a separable mixture, e.g. \(\rho^{(2),\text{spec}}_{\sigma\sigma^{\prime}}=\sum_{i}p_{i}\tilde{\rho}^{(1),\text{spec}}_{\sigma,i}\otimes\tilde{\rho}^{(1),\text{spec}}_{\sigma^{\prime},i}\), the partial transpose does not alter the spectrum of \(\rho^{(2),\text{spec}}_{\sigma\sigma^{\prime}}\) and, hence, all eigenvalues remain positive. In this sense, the presence of negative eigenvalues guarantees the existence of entanglement. However, this statement cannot be inverted, i.e., even if the logarithmic negativity is zero the species \(\sigma\) and \(\sigma^{\prime}\) can still be entangled [80].
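In practice, Eq. (18) can be evaluated directly from a matrix representation of \(\rho^{(2),\text{spec}}_{\sigma\sigma'}\) in the truncated species basis. A minimal sketch (the basis dimensions \(d_{\sigma}\), \(d_{\sigma'}\) count the retained species functions) is:

```python
import numpy as np

def logarithmic_negativity(rho, d_sigma, d_sigma_prime):
    # rho: two-species reduced density matrix in the product basis,
    # shape (d_sigma * d_sigma_prime, d_sigma * d_sigma_prime), Hermitian.
    rho4 = rho.reshape(d_sigma, d_sigma_prime, d_sigma, d_sigma_prime)
    # Partial transpose w.r.t. species sigma: exchange its bra/ket indices i <-> l.
    rho_pt = rho4.transpose(2, 1, 0, 3).reshape(rho.shape)
    mu = np.linalg.eigvalsh(rho_pt)          # real spectrum (still Hermitian)
    negativity = np.abs(mu[mu < 0]).sum()    # N = sum of |negative eigenvalues|
    return np.log2(1.0 + 2.0 * negativity)   # Eq. (18)
```

Since the partial transpose of a Hermitian matrix is again Hermitian, its spectrum is real and the negative part entering the negativity is well defined.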
The logarithmic negativity between the bath and the \(B\) impurity, \(\mathcal{E}_{AB}\), as well as among the impurities, \(\mathcal{E}_{BC}\), is illustrated in Figures 7(a) and (b) respectively within the \(g_{AB}\)-\(g_{AC}\) plane. As expected, it overall captures the main features of the integrated correlation functions shown in Figures 3(a) and (b). For instance, \(\mathcal{E}_{AB}\) vanishes for strongly attractive \(g_{AC}\) and strongly repulsive \(g_{AB}\) [Figure 7(a)], while the parameter region referring to the impurities' coalescence is in a similar way pronounced in \(\mathcal{E}_{BC}\) as it has been observed for \(\mathcal{C}_{BC}\), compare Figures 3(a) and (b) for repulsive \(g_{AB}\) and \(g_{AC}\). Recall that while \(\mathcal{E}_{\sigma\sigma^{\prime}}\) provides only a quantitative diagnostic for the bipartite entanglement and does not describe the correlated or anti-correlated behavior as \(\mathcal{C}_{\sigma\sigma^{\prime}}\) does, it still gives insight into the entanglement content of the many-body system. As such, for large \(g_{AB}<0\) the logarithmic negativity uncovers that the bath and the \(B\) impurity are strongly entangled, especially so in the repulsive \(g_{AC}>0\) region, while varying \(g_{AB}\) towards the weakly attractive regime and for \(|g_{AC}|>1\) entanglement is reduced [Figure 7(a)]. This is attributed to the simultaneous increase of \(\mathcal{E}_{AC}\)[13], unveiling a competition between the intercomponent entanglement of individual impurities with the medium. Finally, in line with the predictions of \(\mathcal{C}_{BC}\), \(\mathcal{E}_{BC}\) demonstrates that entanglement is finite when both impurities are either weakly attractively or strongly repulsively coupled to the medium, see Figure 3(d).
Figure 7: (a)-(b) Diagram of the intercomponent (see legends) logarithmic negativity \(\mathcal{E}_{\sigma\sigma^{\prime}}\) [Eq. (18)] as a function of the impurity-medium couplings (\(g_{AB}\), \(g_{AC}\)). The harmonically trapped three-component system consists of two non-interacting but distinguishable impurities immersed in a bosonic gas of \(N_{A}=15\) atoms with \(g_{AA}=0.2\).
## Appendix B Effective mass and trap frequency of a single impurity
In the following, we approach the three-component impurity setting as a polaron problem since each individual impurity via its coupling to the bosonic gas is dressed by the excitations of the latter. In this sense, we aim to capture the effective behavior of the \(B\) and \(C\) impurity with the effective one-body model [73],
\[\hat{H}_{\sigma}^{(1),\text{ho-eff}}=-\frac{\hbar^{2}}{2m_{\sigma}^{\text{eff}} }\frac{\partial^{2}}{(\partial\,x^{\sigma})^{2}}+\frac{1}{2}m_{\sigma}^{\text {eff}}(\omega_{\sigma}^{\text{eff}})^{2}x^{2}, \tag{19}\]
Figure 8: One-body density of the \(B\) impurity obtained within different approaches (see legend) for the interaction configurations (a) \((g_{AB},g_{AC})=(-0.2,0.1)\) and (b) \((0.2,0.1)\). Specifically, \(\rho_{\text{MB}}^{(1)}\) denotes the one-body distribution of the full three-component many-body system, whereas \(\rho_{B}^{(1),\text{eff}}\) and \(\rho_{B}^{(1),\text{ho-eff}}\) are calculated using the effective one-body Hamiltonians composed of either the effective potential defined in Eq. (6) or an effective harmonic oscillator with an effective mass and frequency [cf. Eq. (19)], respectively. Effective mass and trapping frequency of the dressed (c) \(B\) and (d) \(C\) impurity, respectively, as deduced from the effective polaron model defined in Eq. (19).
where \(m_{\sigma}^{\rm eff}\) and \(\omega_{\sigma}^{\rm eff}\) denote the polaron effective mass and trapping frequency with \(\sigma\in\{B,C\}\)14. To identify the values of the effective mass and frequency, we minimize the cost function
Footnote 14: Recall that within the effective two-body model described by Eq. (12) we implicitly account for the effective mass and frequency via the effective potential \(V_{\rm{B,C}}^{\rm eff}\) [cf. Eq. (6)]. Indeed, beyond-mean-field corrections imprinted on \(\rho_{A}^{(1)}\), and thus appearing in \(V_{\rm{B,C}}^{\rm eff}\), affect the effective mass and frequency [73].
\[\mathcal{L}_{\sigma}=\Delta\rho_{\sigma}^{(1)}+\Delta E_{\sigma}. \tag{20}\]
In this expression, the first term refers to \(\Delta\rho_{\sigma}^{(1)}=\int\mathrm{d}x_{\sigma}\left|\rho_{\sigma}^{(1), \mathrm{MB}}(x_{\sigma})-\rho_{\sigma}^{(1),\mathrm{ho-eff}}(x_{\sigma}) \right|^{2}\) with \(\rho_{\sigma}^{(1),\mathrm{MB}}\) and \(\rho_{\sigma}^{(1),\mathrm{ho-eff}}\) being the one-body density as predicted from the full three-component system and the effective one-body model, respectively. The second contribution of the right-hand side in Eq. (20) designates the energy difference \(\Delta E_{\sigma}=\left|E_{\sigma}^{\mathrm{MB}}-E_{\sigma}^{\mathrm{ho-eff} }\right|^{2}\), where \(E_{\sigma}^{\mathrm{MB}}=\langle\Psi^{\mathrm{MB}}|\hat{H}_{\sigma}|\Psi^{ \mathrm{MB}}\rangle\) is the \(\sigma\) impurity energy and \(E_{\sigma}^{\mathrm{ho-eff}}=\langle\phi|\hat{H}_{\sigma}^{(1),\mathrm{ho- eff}}|\phi\rangle=\frac{1}{2}\omega_{\sigma}^{\rm eff}\) is the energy of the effective one-body model and \(|\phi\rangle\) the corresponding ground state. Note that in order to uniquely estimate \(m_{\sigma}^{\rm eff}\) and \(\omega_{\sigma}^{\rm eff}\) one needs to adequately describe both the density and the energy of the impurity.
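To make the minimization concrete, a brute-force sketch in harmonic-oscillator units (\(\hbar=1\); the search ranges and grid resolution are our assumptions) compares the analytic ground-state density and energy of Eq. (19) with the many-body reference:

```python
import numpy as np

def fit_effective_parameters(x, rho_mb, E_mb):
    # x: spatial grid, rho_mb: many-body one-body density, E_mb: impurity energy.
    dx = x[1] - x[0]
    best = (1.0, 1.0, np.inf)
    for m_eff in np.linspace(0.5, 3.0, 251):
        for w_eff in np.linspace(0.5, 3.0, 251):
            # Ground-state density |psi_0|^2 and energy w/2 of Eq. (19).
            rho_eff = np.sqrt(m_eff * w_eff / np.pi) * np.exp(-m_eff * w_eff * x ** 2)
            cost = np.sum((rho_mb - rho_eff) ** 2) * dx + (E_mb - 0.5 * w_eff) ** 2
            if cost < best[2]:
                best = (m_eff, w_eff, cost)
    m_eff, w_eff, _ = best
    return m_eff, w_eff
```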
Figures 8(a) and (b) showcase the one-body densities \(\rho_{B}^{(1),\mathrm{MB}}\) and \(\rho_{B}^{(1),\mathrm{ho-eff}}\) for the characteristic interaction configurations \((g_{AB},g_{AC})=(-0.2,0.1)\) and \((0.2,0.1)\), respectively. For comparison we additionally provide the one-body density \(\rho_{B}^{(1),\mathrm{eff}}\) obtained from \(\hat{H}_{B}^{(1),\mathrm{eff}}=-\frac{\hbar^{2}}{2m_{B}}\frac{\partial^{2}}{(\partial x^{B})^{2}}+V_{B}^{\mathrm{eff}}\). As can be readily seen, the one-body densities predicted by the two effective one-body models are in excellent agreement with the one corresponding to the full three-component many-body system. Deviations start to become evident for strong repulsive impurity-medium couplings (not shown) where the impurity and the medium phase-separate [73, 35]. Recall that the effective model is by definition valid for weak intercomponent repulsions where the impurity does not probe the edges of the bosonic cloud.
The effective masses and frequencies of the \(B\) and \(C\) impurities after minimization of the cost function given by Eq. (20) are represented in Figures 8(c) and (d) with respect to the impurity-medium couplings. It is important to point out that both the effective mass and frequency of a specific impurity, e.g. the \(B\) one, primarily depend on its coupling with the bath \(g_{AB}\). The interaction strength of the other impurity (\(C\)) with the bath, e.g. \(g_{AC}\), has almost no impact on the effective parameters of impurity \(B\). For instance, this conclusion can be drawn from the nearly constant behavior of \(m_{B}^{\rm eff}\) and \(\omega_{B}^{\rm eff}\) for varying \(g_{AC}\) shown in Figure 8(c), or the fact that \(m_{C}^{\rm eff}\) and \(\omega_{C}^{\rm eff}\) remain almost intact for fixed \(g_{AC}\) and different \(g_{AB}\), see Figure 8(d).
For an attractively coupled impurity with the bosonic gas, the effective mass and frequency become larger than their bare values [gray dashed lines in Figures 8(c) and (d)], see in particular \(m_{B}^{\rm eff}\), \(\omega_{B}^{\rm eff}\) when \(g_{AB}=-0.2\) in Figure 8(c) and \(m_{C}^{\rm eff}\), \(\omega_{C}^{\rm eff}\) for \(g_{AC}<0\) in Figure 8(d). As such, the emergent Bose polaron experiences a narrower trapping potential, thereby, reflecting the localization of the impurity at the trap center [cf. \(\rho_{B}^{(1),\mathrm{ho-eff}}\) and \(V_{B}^{\mathrm{ho-eff}}\) in Figure 8(a)]. On the other hand, in the case of a repulsively coupled impurity the effective trapping frequency is still tighter than the original value, but the effective mass becomes smaller than its bare value [cf. \(m_{B}^{\rm eff}\), \(\omega_{B}^{\rm eff}\) for \(g_{AB}=0.2\) in Figure 8(c) as well as \(m_{C}^{\rm eff}\), \(\omega_{C}^{\rm eff}\) for \(g_{AC}>0\) in Figure 8(d)]. In particular, the effective mass is small enough to compensate the increased effective frequency meaning that the underlying harmonic trap is eventually broadened [cf. \(V_{B}^{\mathrm{ho-eff}}\) in Figure 8(b)]. Additionally, the comparatively smaller effective mass is related to a spatial delocalization of the impurity cloud15. In this way, the effective one-body model captures the effects imprinted on the impurity in the three-component system.
Footnote 15: Indeed, the kinetic energy of, e.g., the impurity \(C\) increases for increasing \(g_{AC}\) while the potential energy remains nearly constant.
## Appendix C Modelling the effective impurity interactions with an exponential potential
To verify the validity of the contact interaction potential for describing the induced interactions between the impurities [Eq. (12)], we next demonstrate that our results do not change if one instead uses an exponential potential. The latter has been derived in Refs. [37, 38] and holds in the homogeneous case and for immobile impurities residing at distances satisfying \(l=|x^{B}-x^{C}|\ll\xi_{A}\), with \(\xi_{A}\approx 1/\sqrt{2m_{A}g_{AA}N_{A}\rho_{A}^{(1)}(0)}\approx 0.6\) being the healing length of the bath. In particular, we replace the interaction term in Eq. (12) with
\[U(l)=-\frac{g_{AB}g_{AC}m_{A}}{\sqrt{\gamma}}e^{-2l/\xi_{A}}, \tag{21}\]
where \(\gamma=\frac{m_{A}g_{AA}}{N_{A}\rho_{A}^{(1)}(0)}\)16. As discussed in Section 6.2, we judge the quality of the effective two-body model by estimating the fidelity, \(\mathcal{F}_{BC}\), between the impurities' two-body wave function as extracted from the full many-body system and from the effective two-body model containing either a contact or an exponential interaction potential. Subsequently, we determine the difference \(\mathcal{F}_{BC}^{\text{exp}}-\mathcal{F}_{BC}^{\text{contact}}\), which as shown in Figure 9(a) reveals deviations of at most order \(10^{-4}\).
Footnote 16: We model the exponential potential with the so-called POTFIT method [84, 85].
Proceeding one step further, we determine the overlap between the respective two-body correlation functions of the impurities determined within the full three-component system and the effective two-body model. Namely, we track \(\Delta\mathcal{G}_{BC}^{(2),\text{exp}}=\int\mathrm{d}x_{B}\mathrm{d}x_{C} \left|\mathcal{G}_{BC}^{(2)}-\mathcal{G}_{BC}^{(2),\text{exp}}\right|^{2}\), where \(\mathcal{G}_{BC}^{(2),\text{exp}}\)
Figure 9: (a) Relative deviation between the fidelities \(\mathcal{F}_{BC}^{\text{exp}}\) and \(\mathcal{F}_{BC}^{\text{contact}}\), which correspond to the overlap of the impurities two-body wave function obtained from the full many-body approach and the effective model of Eq. (12) containing either an exponential or a contact-type interaction potential, respectively. (b) Difference between \(\Delta\mathcal{G}_{BC}^{(2),\text{exp}}\) and \(\Delta\mathcal{G}_{BC}^{(2),\text{contact}}\), referring to the variance of the two-body correlation function calculated within the effective two-body model using either the contact or the exponential interaction potential with respect to the full three-component system. For both quantities the relative deviations are minor, testifying the validity of both effective interaction potentials.
denotes the two-body correlation function obtained within the effective two-body model (see also Section 6.2) with an exponential interaction. To infer the deviations between the exponential and contact effective interactions at the two-body correlation level, we calculate the difference \(\Delta\mathcal{G}_{BC}^{(2),\text{exp}}-\Delta\mathcal{G}_{BC}^{(2),\text{contact}}\), see Figure 9(b). Also here, only small deviations of the order \(10^{-5}\) are identified.
Therefore, the contact and exponential effective interaction potentials lead essentially to the same description regarding the impurities properties. This outcome was not _a-priori_ expected since the exponential potential is originally derived in the homogeneous case.
## Appendix D Impact of mass-imbalanced impurities and the atom number of the bosonic gas
Let us demonstrate the generalization of our results in the main text when the impurities are mass-imbalanced or the bosonic medium contains a larger number of particles. For this purpose, we focus on the behavior of the intercomponent correlations which can be quantified through the integrated correlation function [Eq. (9)] presented in Figure 10 for different system parameters.
In general, increasing the mass of an impurity disturbs the cloud of the bosonic gas to a larger degree, which should eventually lead to an enhanced impurity-medium correlation. This is indeed evident in Figure 10(a) where the integrated correlation function, \(\mathcal{C}_{AB}\), is increased as compared to the mass-balanced case, thus attesting to an overall larger degree of entanglement. Furthermore, since the correlation between the \(C\) impurity and the bath is not affected by the change of \(m_{B}\) [Figure 10(b)], the larger \(\mathcal{C}_{AB}\) leads to a stronger mediated correlation between the impurities, see e.g. \(\mathcal{C}_{BC}\) in Figure 10(c). The latter naturally leads to an amplified induced interaction between the impurities for increasing \(m_{B}\), in particular for \(g_{AB}=0.2\) and strongly repulsive \(g_{AC}\), where \(\mathcal{C}_{BC}\) features the largest increase.
Next, we concentrate on the mass-balanced system but consider a larger number of bath particles, in particular \(N_{A}=30\), while maintaining the same mean-field interaction, i.e., \(N_{A}g_{AA}=\text{const}\). As can be seen, the impurity-medium correlations, as captured by \(\mathcal{C}_{AB}\) and \(\mathcal{C}_{AC}\), are reduced compared to the reference case \(N_{A}=15\), \(g_{AA}=0.2\) [Figure 10(a), (b)]. This is attributed to the smaller intraspecies coupling strength \(g_{AA}=0.1\), resulting in a decrease of the respective intraspecies correlations among the bath particles. However, the mediated correlations among the impurities \(B\) and \(C\) are clearly enhanced when \(g_{AB}\) and \(g_{AC}\) are both repulsive, see Figure 10(c). In this sense, a larger number of bath particles featuring a decreasing intraspecies interaction is associated with a reduction of the intraspecies correlations of the bath and of the impurity-medium ones, but enhances to a certain degree the mediated correlation between the impurities. This behavior hints towards a complicated correlation transfer mechanism to the impurity-impurity subsystem which deserves further future investigations.
## Appendix E Estimating the importance of correlations on the many-body wave function
To expose the impact of intercomponent correlations at different interaction regimes on the level of the many-body wave function we analyze the fidelity \(\left|\langle\Psi^{\text{MF}}|\Psi^{\text{MB}}\rangle\right|^{2}\), see Figure 11(a). Here,
\(|\Psi^{\text{MB}}\rangle\) denotes the full many-body wave function where all emergent inter- and intracomponent correlations are taken into account, while \(|\Psi^{\text{MF}}\rangle\) refers to the species mean-field wave function which ignores all intercomponent correlations. Naturally, the fidelity is unity when the species are non-interacting, i.e., \(g_{AB}=g_{AC}=0\), since in this scenario intercomponent correlations are _a-priori_ prohibited. However, the fidelity decays for increasing impurity-medium coupling strengths as intercomponent correlations are triggered in this case. The largest deviation between the many-body and species mean-field wave functions occurs in the parameter region corresponding to the coalescence of the impurities, i.e., for strongly repulsive \(g_{AB}\) and \(g_{AC}\).
Further understanding of the respective correlation mechanisms can be delivered by identifying the participating microscopic configurations. For this reason we construct the species function eigenbasis \(|\psi_{i}^{A}\rangle|\psi_{j}^{B}\rangle|\psi_{k}^{C}\rangle\) obtained by calculating the eigenfunctions of an effective species Hamiltonian [cf. Eq. (2)] characterized by the effective potential defined in Eq. (6)17. As basis for the bath we take the ground state and the two energetically lowest excited states of the effective potential
Figure 10: Integrated two-body correlation function [Eq. (9)] among (a) the \(B\) impurity and the medium, (b) the \(C\) impurity and the medium and (c) between the impurities as a function of the intercomponent interaction strength \(g_{AC}\). In all panels, we consider fixed \(g_{AB}=-0.2,0.2\) as well as different masses of the \(B\) impurity (simultaneously setting \(\omega_{B}=\sqrt{m_{A}/m_{B}}\)) and atom numbers of the medium (see legend), while keeping constant the mean-field interaction \(N_{A}g_{AA}\). The gray dashed line in panel (c) marks \(\Delta\left\langle r_{BC}\right\rangle=0\).
into account, while for the two impurities we consider the corresponding energetically lowest six eigenstates leading to a total number of 108 three-component basis states \(|\psi_{i}^{A}\rangle|\psi_{j}^{B}\rangle|\psi_{k}^{C}\rangle\).
The respective probability amplitudes \(P_{ijk}=\left|\left(\langle\psi_{i}^{A}|\langle\psi_{j}^{B}|\langle\psi_{k}^{C}|\right)|\Psi^{\text{MB}}\rangle\right|^{2}\), with \(|\Psi^{\text{MB}}\rangle\) being the full many-body wave function, are presented in Figure 11(b) for \(g_{AB}=1.0\) and varying \(g_{AC}\). Notice that the state \(|\psi_{0}^{A}\rangle|\psi_{0}^{B}\rangle|\psi_{0}^{C}\rangle\), denoting the case in which each species occupies the ground state of the effective species Hamiltonian, represents the three-body ground state obtained with a sMF ansatz. Consequently, \(P_{000}=\left|\langle\Psi^{\text{sMF}}|\Psi^{\text{MB}}\rangle\right|^{2}\) (cf. Figures 11(a) and (b) for \(g_{AB}=1.0\)). In general, it is observed that finite interactions yield a non-negligible population of energetically higher-lying excited states. Importantly, this behavior becomes enhanced in the coalescence regime, i.e., for strong repulsive \(g_{AB}\) and \(g_{AC}\). This means that there are several macroscopically occupied basis states reflecting the significant intercomponent entanglement (cf. Figures 2 and 7).
|
2301.03957 | AI based approach to Trailer Generation for Online Educational Courses | In this paper, we propose an AI based approach to Trailer Generation in the
form of short videos for online educational courses. Trailers give an overview
of the course to the learners and help them make an informed choice about the
courses they want to learn. It also helps to generate curiosity and interest
among the learners and encourages them to pursue a course. While it is possible
to manually generate the trailers, it requires extensive human efforts and
skills over a broad spectrum of design, span selection, video editing, domain
knowledge, etc., thus making it time-consuming and expensive, especially in an
academic setting. The framework we propose in this work is a template based
method for video trailer generation, where most of the textual content of the
trailer is auto-generated and the trailer video is automatically generated, by
leveraging Machine Learning and Natural Language Processing techniques. The
proposed trailer is in the form of a timeline consisting of various fragments
created by selecting, para-phrasing or generating content using various
proposed techniques. The fragments are further enhanced by adding voice-over
text, subtitles, animations, etc., to create a holistic experience. Finally, we
perform a user evaluation with 63 human evaluators to assess the trailers
generated by our system and the results obtained were encouraging. | Prakhar Mishra, Chaitali Diwan, Srinath Srinivasa, G. Srinivasaraghavan | 2023-01-10T13:33:08Z | http://arxiv.org/abs/2301.03957v1 | # AI based approach to Trailer Generation for Online Educational Courses
###### Abstract
In this paper, we propose an AI based approach to Trailer Generation in the form of short videos for online educational courses. Trailers give an overview of the course to the learners and help them make an informed choice about the courses they want to learn. It also helps to generate curiosity and interest among the learners and encourages them to pursue a course. While it is possible to manually generate the trailers, it requires extensive human efforts and skills over a broad spectrum of design, span selection, video editing, domain knowledge, etc., thus making it time-consuming and expensive, especially in an academic setting. The framework we propose in this work is a template based method for video trailer generation, where most of the textual content of the trailer is auto-generated and the trailer video is automatically generated, by leveraging Machine Learning and Natural Language Processing techniques. The proposed trailer is in the form of a timeline consisting of various fragments created by selecting, para-phrasing or generating content using various proposed techniques. The fragments are further enhanced by adding voice-over text, subtitles, animations, etc., to create a holistic experience. Finally, we perform a user evaluation with 63 human evaluators to assess the trailers generated by our system, and the results obtained were encouraging.
Video Trailer Generation, Machine Learning, Natural Language Processing
## I Introduction
The growth of the internet has significantly increased the amount of free instructional content. These resources are offered not only by big institutions but also by individual content creators over various platforms such as Coursera, Udemy, YouTube, etc. This increase in content production rate has resulted in the creation of redundant courses and tutoring videos for many topics over time. In spite of advantages like on-demand accessibility, the abundance of options has increased confusion and made it more challenging to select a course that might be in line with a learner's interests. Often, enrolling in a course that doesn't meet the learner's expectations about the curriculum and other aspects, such as the expected level of commitment or the availability of support, causes the learner to lose motivation and eventually drop the course [1, 2].
This problem can be tackled to a certain extent by presenting a video trailer to the learners before the start of the course (learning pathway) to help them quickly glance through the pathway and get an overall idea of the course content and its format [3, 4, 5].
The idea of _Trailers_ is not brand-new, and the film industry has been using them extensively for a while. Trailers, in context of movies are mostly about advertising. They notify viewers about an upcoming movie while generating interest among them. Often the effectiveness of a trailer affects the perception of the movie, even before it is released publicly. The course trailers serve a greater purpose in the educational context than simple course promotion. Before beginning the learning journey, they aid in helping learners set realistic expectations for their learning outcomes and competency mastery.
The concept of trailers might resemble that of summarization [6, 7, 8], but apart from incorporating a few elements of summarization, like shortening and abstracting information from a substantially sized input source, trailers are different in terms of their motivation, purpose and the impact they create on the end users. Unlike summaries, trailers need not be complete in their coverage. Also, they are designed to give glimpses of a few interesting segments of the narrative without revealing the main plot or climax of the underlying narrative [9]. Although there is no clear demarcation of what a climax is in academic narratives, based on our analysis of many academic course trailers in popular MOOCs (Massive Open Online Courses) such as Udemy1 and Coursera2, we see the prevalence of a common pattern in trailer timelines. The timeline starts with an introduction about the course and the instructor and ends with a call-to-action (CTA) which offers an opportunity to the learners to take action or start the course. In between, there are several elements and factoids about the course and its contents that aim to arouse viewer interest.
Footnote 1: [https://www.udemy.com](https://www.udemy.com)
Footnote 2: [https://www.coursera.org](https://www.coursera.org)
The current approach to generating trailers is manual, cumbersome and time-consuming; it requires someone with relevant skills like designing and video editing, and a subject matter expert to help in curating the trailer content. Although there are software products like Apple iMovie3, Windows Movie Maker4 and others that people can use for generating trailers by performing basic editing like cuts, merging frames,
etc. Yet the content to be placed in the trailer has to be curated entirely by a human expert.
In our work, we propose a semi-automatic template based framework for generating video trailers for learning pathways, which are a sequence of related educational documents of various forms [10, 11, 12]. Here, most of the content that is placed in the trailer is auto-generated with a scope for taking inputs from the creator. The framework for trailer generation consists of various essential trailer fragments arranged as a timeline of the trailer. Each fragment is composed of a sequence of frames that are coherent within themselves in terms of the topical information they present. And inherently, each frame is composed of various types of elements and their properties like font size, text styling, image size, etc. Fig. 1 shows the illustration for the same.
Once all the elements are generated and placed at their respective positions within a frame of a trailer fragment, a template is applied to it. The template consists of the multi-modal experiences such as voice-over, subtitles, sounds, animations, etc. It also determines the elements of the trailer design such as the number and ordering of fragments, frames and elements. Fig. 2 shows the visual view of some of the frames for one of the templates with its corresponding elements and their positioning in the frames.
## II Related Work
There are studies that discuss the idea, use and motivation of having trailers for academic courses [3, 4, 5]. Also, there are online educational platforms like Coursera and Udemy which have course trailers. However, we could not find literature on approaches to generating trailers for academic courses. Hence, in the following paragraphs we discuss some of the pioneering works on trailer generation across other domains. Trailer generation can also be seen as a special case of the larger research interest of adding an element of surprise to engage the receiver's attention amidst information overload [13, 14].
Authors in [15, 16, 17, 18] present approaches for automatic trailer generation from movies as input. Hermes et al. [16] create trailers for action movies by analyzing the audio and video signals present in movies, automatically detecting features like faces, scene cuts, sound volume, etc., and using an ontology of the corresponding domain for producing trailers. Irie et al. [17] propose a movie trailer generation method which extracts symbols like the title logo and main theme music and selects impressive shot or speech segments based on clustering methods and the EM algorithm. Brachmann et al. [15] propose an approach to generating action movie trailers using the concept of a trailer grammar, a knowledge base and various ML techniques for analyzing the audio and images present in the movie. Smith et al. [18] propose a system that understands and encodes the patterns and emotions present in horror movies using Convolutional Neural Networks (CNN).
All the above methods use visual and audio cues to derive the trailer frames, whereas we use raw text data and build the necessary discriminative and generative Neural Network models to create frames and its elements to be placed in the trailer.
Hesham et al. in [19] explore the idea of creating movie trailers from their subtitles. They first classify the movie by genre, identify important keywords and then rank important subtitles. The trailer is then generated by stacking the movie time-frames corresponding to the important subtitles. Gaikwad et al. in [20] propose a technique to create previews of movies by utilizing subtitles and finding the most representative scenes by matching them with the plot summaries. Chi et al. [21] propose an approach to automatically create marketing-style short videos for a given product page url by extracting elements and their styles present in the product html page under specified tags.
Unlike the aforementioned works, which primarily focus on generating trailers based on extractive strategies, in our work we develop various modules that comprehend the input document and generate content for the trailer either by paraphrasing or by using a Natural Language Generator based model.
As far as we know, automatic/semi-automatic generation of video trailers for learning pathways is unexplored. Our proposed approach of video trailer generation using Machine Learning, Natural Language Processing and Generation techniques is also unique.
## III Proposed System
We propose a framework for trailer generation consisting of different trailer fragments that form a trailer timeline, generation of the trailer fragments and finally applying templates that determine the look and feel of the trailer. Based on our analysis of multiple trailers for online courses offered on various educational platforms like Coursera and Udemy, we designed and structured our trailer elements, fragments and the overall flow of the trailer.
We propose a trailer timeline consisting of 7 trailer fragments, namely Splash, Trailer Title, Author Details, Outline, Meta-Information, Social Proof and finally the Call-to-Action. Figure 3 shows the timeline of all the above-mentioned fragments in the trailer. Each of these fragments defines a specific part of the trailer, its purpose and its importance in the trailer. We define the fragments in detail later in this section. As discussed earlier, fragments are composed of
Fig. 1: Trailer Structure
a sequence of frames and each frame is composed of various types of elements and their properties.
The overall approach for trailer generation is illustrated in Fig. 4. All the resources mapped to a learning pathway form the input to our _Fragment Data Generator (FDG)_ module. Template constraints that define the elements, fragments and frames also form the input to the _FDG_. Along with the _FDG_ output, content from other sources, such as the creator's input and any images or information from the web or knowledge bases, can be incorporated into the frames or the fragments. Once the elements for all the frames across all the fragments are generated, we pass them to the composition module for adding in other important aspects of the trailer like voice-over, subtitles, sounds, etc., to add to its multi-modal experience.
### _Fragment Data Generation_
Following are the proposed trailer fragments arranged in the order of their appearance in the trailer timeline:
Splash Fragment: The idea of the splash fragment is to display any introductory information related to the trailer, such as credits, software logo, etc., mostly obtained from the creator's input. This optional fragment could also be the last fragment in the trailer, depending on the creator's preference.
Trailer Title Fragment: In this fragment we generate a short yet representative title for the entire trailer, hence giving a quick idea about the topic that summarizes the underlying pathway or the set of resources. We apply the _Hierarchical Title Generation_ model [22] over the resources mapped to the learning pathway to get the list of trailer titles. We select a title among them based on their Term Frequency. In case none of the titles is above a threshold, we fall back on the fact that the first resource in the pathway is a proxy for the introductory resource, and we generate the trailer title for it by applying the _Single Document Title Generator_ [23, 24]. Figure 5 shows the trailer title fragment generation flow.
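A minimal sketch of this selection step is shown below; the token-level term-frequency scoring, the threshold, and the `fallback_title_generator` helper (standing in for the Single Document Title Generator) are illustrative assumptions rather than the exact implementation.

```python
from collections import Counter

def select_trailer_title(candidates, corpus_tokens, tf_threshold,
                         fallback_title_generator):
    # Score each candidate title by the average corpus term frequency
    # of its tokens (illustrative scoring; see lead-in).
    tf = Counter(t.lower() for t in corpus_tokens)

    def score(title):
        toks = title.lower().split()
        return sum(tf[t] for t in toks) / max(len(toks), 1)

    best = max(candidates, key=score)
    if score(best) >= tf_threshold:
        return best
    # Fall back: title generated for the first (introductory) resource.
    return fallback_title_generator()
```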
Author Details Fragment: A quick introduction about the author or the instructor of the learning pathway could help the learners build an implicit connection and trust. The majority of the elements in the _Author Details Fragment_, like author names, affiliations and the author's image, are expected from the creator while creating the trailer. Template constraints, such as addressing multiple authors with different frame elements, handling and fetching relevant images to be put in this fragment, etc., are also obtained from the trailer creator. These inputs and template constraints are plugged into the automation system to fill the overall author frame. Additionally, we crawl the web to get relevant images; for example, we crawl the web for relevant affiliation images and place them at the desired coordinates as defined by the template. Also, for the templates that allow for having only the frontal face of the author, we make use of an open-sourced face recognition model5 to crop the face from the uploaded author image. In case no author image is provided to the system by the creator, we place a dummy caricatured image of the relevant size. Similarly, we have defined defaults for the features, frames and templates in case there is no input from the trailer creator. For example, when multiple authors exist, we display information w.r.t. the first author entered by the creator and treat him/her as the primary instructor, and all the remaining authors are abstracted by placing them under the "and others" category.
Footnote 5: [https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html](https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html)
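A sketch of this face-cropping step using the OpenCV Haar-cascade classifier referenced above could look as follows (detector parameters and file paths are illustrative):

```python
import cv2

def crop_frontal_face(image_path, out_path):
    # Detect the largest frontal face and save the cropped region; return False
    # so the caller can fall back to the default caricatured image.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    cv2.imwrite(out_path, img[y:y + h, x:x + w])
    return True
```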
Outline Fragment: This fragment gives an idea about the specific topics that would be covered in the learning pathway. This could help in setting learners' expectations in terms of the topics covered and in deciding whether the content aligns with their end goals. For this we use the _Single Document
Fig. 3: Trailer Timeline
Fig. 2: Illustration of Frames
_Title Generator_[23, 24] model to generate titles for all the resources in the learning pathway which represents the outline of the learning pathway.
Every template under the outline fragment limits the number of text elements to be listed on the screen, with the aim of balancing aesthetics and information. To adhere to this prior constraint, we design a multi-step process to select a diverse yet impactful set of elements from the relatively larger list of outlines generated in the previous step. Fig. 6 shows the entire pipeline of Outline Text Selection.
Let \(K\) be the number of text elements that the frame requires and \(N\) be the total number of resources we have as input, with \(K<N\). We start with all \(N\) resources given by the user and remove any instances of assessments and short documents, under the assumption that such documents do not hold much informational content. After this, we remove any occurrence of exact and near duplicates in the remaining set and pass the remaining resource list to the title generator to produce a title for every resource.
Next, we fix the first and last positions of the outline with the first and last resource titles. We do this because of the inherent ordering present in the input resources as part of a learning pathway; intuitively, picking the first and last titles sets a bound over the topic space covered by the course.
Finally, on this reduced set, we divide the space into bins of equal size and randomly sample one outline element from each bin to fill the remaining \(K-2\) positions in the outline list. We use threshold-based Jaccard and cosine similarity for filtering syntactic and semantic duplicates, respectively. The Jaccard similarity between any two documents is calculated as the intersection over union of the word sets of both documents; it gives us a sense of the syntactic similarity between documents. For calculating cosine similarity, we vectorise our inputs using pre-trained Sentence Transformers [25] and then measure the semantic closeness between them.

Fig. 4: Trailer Generation Flow

Fig. 5: Trailer Title Fragment Generation Flow
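A minimal sketch of the two similarity measures follows; the specific sentence-transformer checkpoint is an assumption, as none is named here.

```
# Sketch of the two duplicate filters described above; the checkpoint
# name is an assumed placeholder.
from sentence_transformers import SentenceTransformer, util

_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def jaccard_similarity(doc_a: str, doc_b: str) -> float:
    a, b = set(doc_a.lower().split()), set(doc_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    emb = _model.encode([doc_a, doc_b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```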
```
# Algorithm 1 (Duplicates Filter) as runnable Python. `resources` is the
# ordered list of resource texts, `calculate_similarity` is one of the
# similarity functions above, and `threshold` is chosen adaptively.
def duplicates_filter(resources, calculate_similarity, threshold):
    # The first and last resources are always retained.
    remaining = [resources[0], resources[-1]]
    for candidate in resources[1:-1]:
        scores = [calculate_similarity(candidate, kept) for kept in remaining]
        # Keep the candidate only if it is not too similar to anything kept.
        if max(scores) < threshold:
            remaining.append(candidate)
    return remaining
```
**Algorithm 1** Duplicates Filter
Since every pathway is composed of different resources with varying properties like length and style, a single threshold that fits all does not work. Hence, our threshold is adapted in a way that guarantees at least \(K\) items are selected after each of the syntactic and semantic pruning steps. The threshold search space is between 0 and 1, which for efficiency and tractability we quantize in steps of 0.1. For each threshold we compute the remaining resources as defined in Algorithm 1, and finally choose the threshold that guarantees at least \(K\) items while reducing the input set the most.
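The sketch below makes this search explicit; it simply wraps Algorithm 1 and is our own illustration.

```
# Sketch of the adaptive threshold search described above (illustrative).
def find_threshold(resources, calculate_similarity, k):
    best_threshold, best_remaining = 1.0, resources
    for step in range(1, 10):                 # quantized grid 0.1 ... 0.9
        threshold = step / 10.0
        remaining = duplicates_filter(resources, calculate_similarity,
                                      threshold)
        # Prefer the strongest pruning that still guarantees K items.
        if len(remaining) >= k and len(remaining) < len(best_remaining):
            best_threshold, best_remaining = threshold, remaining
    return best_threshold, best_remaining
```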
_Meta-Information Fragment._ The idea of the Meta-Information Fragment is to inform learners about other important aspects of the course, such as course structure, total reading time and total number of resources. We believe this helps learners understand more about the learning pathway beyond the topics covered; such information can also be used by learners to chart out their learning hours and estimate the effort required for successful completion of the course. Some of the elements that we generate automatically as part of this fragment are: topical word clouds6 based on word frequencies after pre-processing such as stop-word removal, estimated total reading time based on average reading speed statistics, and other pathway-level derived statistics like total resources and availability of a discussion forum.
Footnote 6: [https://pypi.org/project/wordcloud/](https://pypi.org/project/wordcloud/)
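A minimal sketch of two of these derived elements is given below; the assumed reading speed of 200 words per minute is a placeholder, not a figure from this work.

```
# Sketch of two meta-information elements: a word cloud and an estimated
# reading time. The 200 words-per-minute rate is an assumed placeholder.
from wordcloud import WordCloud, STOPWORDS

def topical_word_cloud(texts, out_path="wordcloud.png"):
    corpus = " ".join(texts)
    wc = WordCloud(stopwords=STOPWORDS, background_color="white")
    wc.generate(corpus).to_file(out_path)

def estimated_reading_minutes(texts, words_per_minute=200):
    total_words = sum(len(t.split()) for t in texts)
    return round(total_words / words_per_minute)
```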
_Social Proof Fragment._ Social proof is one of the most prominent forms of social influence and is based on the heuristic that users follow others similar to them when uncertain [26]. We collect these statistics from the deployed learning environments. This information is added to the video trailer over time, as different learners take the course and the analytical data becomes available.
_Call-to-Action Fragment._ CTA is a marketing term for content designed to push the audience into taking a desired action. It is an important aspect of any trailer, because all the enthusiasm built up in a learner while watching the trailer is of no use if the learner is not clear on the next actionable item [27, 28]. In our system, we randomly select phrases from a pre-defined list of potential key-phrases to be placed on the screen at a pre-defined location under this fragment. Some of the phrases we use are 'Start your learning today', 'Let's get started' and 'Are you ready?', along with the action that takes the learner to the learning pathway.
### _Additional Elements_
In this subsection, we discuss two other interesting elements that we propose to add to trailers, namely the _Definition Extractor_ and the _Paraphraser_. These are shown as suggestions to the trailer creator, and it is up to the creator to include them and decide their placement in the trailer.
_Definition Extractor._ Definitions are descriptive elements that we believe can help in the introduction of concepts. To select definitions from the learning resources, we propose a discriminative model that classifies a given piece of text into a Definition or Non-Definition class. For building the classifier, we use a dataset7 that contains positive and negative definition candidates extracted from Wikipedia for various topics. Our best performing model is a fine-tuned DistilBERT-base-uncased8 model with a Definition-class F1-score of 0.96 and a Non-Definition-class F1-score of 0.97 on the test set.
Footnote 7: [http://nlp.uniroma1.it/wcl/](http://nlp.uniroma1.it/wcl/)
Footnote 8: [https://huggingface.co/distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
Footnote 9: [https://github.com/ramsrigouthang/Questgen.ai](https://github.com/ramsrigouthang/Questgen.ai)
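For illustration, a hedged sketch of applying such a fine-tuned classifier at inference time is shown below; the checkpoint path and label name are placeholders, since the fine-tuned model is not released with this paper.

```
# Sketch of definition classification with a fine-tuned DistilBERT model.
# "path/to/finetuned-distilbert" and the label name are placeholders; the
# fine-tuned checkpoint is not public, so this only shows the pattern.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="path/to/finetuned-distilbert")

def is_definition(sentence: str) -> bool:
    result = classifier(sentence)[0]
    return result["label"] == "DEFINITION"  # assumed label name
```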
_Paraphraser._ We believe this is a useful utility for the Outline and Trailer Title fragments. It gives the creator the ability to concisely re-write any substantially larger textual content present in any frame. We use a publicly available pre-trained model9 for this task, which fine-tunes a large T5 (Text-to-Text Transfer Transformer) [7] model on a parallel corpus of sentences and their corresponding paraphrases.
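A minimal sketch of invoking such a T5-based paraphraser follows; the checkpoint name is an assumption standing in for the pre-trained model cited above.

```
# Sketch of T5-based paraphrasing (the checkpoint name is an assumption).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("ramsrigouthamg/t5_paraphraser")
model = T5ForConditionalGeneration.from_pretrained(
    "ramsrigouthamg/t5_paraphraser")

def paraphrase(text: str, max_length: int = 64) -> str:
    inputs = tokenizer("paraphrase: " + text, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=max_length,
                             num_beams=5, early_stopping=True)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```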
### _Video Composition_
The Video Composition module is responsible for stitching together all the elements that need to be part of the trailer, such as the frame data, voice-over text and Text-to-Speech (TTS) audio, into a trailer video. Fig. 4 pictorially shows the overall flow of the various components that are part of the video composition. We use Python's MoviePy library10 for video editing and composition of the templates, as it provides all the necessary editing functions, such as inserting text, concatenations and cuts, which we use to draft our templates.
Footnote 10: [https://zulko.github.io/moviepy](https://zulko.github.io/moviepy)
After the frame-level data elements are in place, the next step is to generate voice-over text for each of the frames. Voice-over text is defined as the spoken text that the narrator speaks while a frame is displayed on the screen. For this, we select a grammar from a pre-defined set of slot-based text grammars defined per frame; the slots in the grammar are simply the screen's text elements. Finally, once the voice-over text is generated for every frame, we pass it through IBM Watson's Text-to-Speech (TTS)
API11 with relevant parameters such as voice-type, gender, etc., by choosing from a list of speaker profiles to get the audio files for every frame. Fig. 7 illustrates the flow from grammar selection to voice generation for the Trailer Title Fragment. We then derive the frame duration accordingly to make sure that the visual and audio aspects of the frames are in sync and minimize any kind of lag on either ends. Finally, along with all the above details, we input template constraints like positioning of elements, and styles, user preferences, and some basic animations like fade-in and fade-out settings to come up with the final trailer.
Footnote 11: [https://cloud.ibm.com/catalog/services/text-to-speech](https://cloud.ibm.com/catalog/services/text-to-speech)
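As an illustration, a sketch of the slot-based voice-over generation is given below; the grammar strings are invented examples in the spirit of Fig. 7, not the actual per-frame grammars.

```
# Sketch of slot-based voice-over text generation (grammars are invented
# examples, not the actual per-frame grammar set).
import random

TRAILER_TITLE_GRAMMARS = [
    "Welcome to this course on {title}.",
    "Get ready to explore {title}.",
]

def voice_over_text(frame_elements: dict, grammars) -> str:
    grammar = random.choice(grammars)          # pick one slot grammar
    return grammar.format(**frame_elements)    # fill slots with elements

# e.g. voice_over_text({"title": "Machine Learning"}, TRAILER_TITLE_GRAMMARS)
```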
## IV Experiments
In this section, we describe the dataset, evaluation strategy and results obtained for the trailers generated by our proposed system.
_Dataset._ Apart from the datasets used for training and evaluating the specific modules responsible for generating fragment-relevant data, we created three different learning pathways for our experiments and the evaluation of the generated trailers. The learning pathways differ from each other in the number of resources and stylometry: two of the pathways are based on textbook chapters, differing in the number of resources mapped, and one is based on video lectures. We chose different pathways to evaluate our model's flexibility on different types of learning pathways. The first was created by sampling some chapters sequentially from a freely available Machine Learning textbook [29]. For the second, we chose the speech-to-text transcription of a week's video lectures from an academic course on NLP. Our third learning pathway is the entire ML textbook [29]12. All three corpora are analogous to learning pathways, as they are all semantically coherent, progressive and share the same global topic.
Footnote 12: Datasets can be found at: [https://bit.ly/3ro3JLO](https://bit.ly/3ro3JLO)
_Evaluation and Results._ Trailer generation can be seen as a generative task with an inherent notion of creativity. Objective evaluation is not straightforward, because the effectiveness of a trailer is highly subjective and relies on human perception. However, we think that human evaluation of the various generated trailers can give a good perspective on their quality. We had 63 human evaluators, consisting of Engineering graduates, post-graduates and PhD students well versed in the technical domain represented by our dataset.
We evaluate 6 trailers13 in total, generated from the 3 learning pathways discussed above, i.e., 2 trailers per learning pathway. The two trailers per pathway are based on two templates, T1 and T2, created by us; the templates differ in aesthetics and level of detail (LOD). The evaluation of each trailer was done on a set of 8 questions on a Likert scale from 1 to 5, where 1 means very poor and 5 means very good.
Footnote 13: Sample Trailers: [https://bit.ly/3Hscie9](https://bit.ly/3Hscie9)
There were three separate groups of evaluators, each provided with the 2 trailers based on the 2 templates for the same pathway. We performed this diversification deliberately to simulate a cluster sampling procedure, since showing all 6 trailers to the same evaluators would have induced fatigue, resulting in less accurate evaluations.
Fig. 6: Outline Text Selection
We also encouraged the evaluators to give free-form comments on the trailers they evaluated, as this will help us improve our system in future iterations. Tables I and II list some of the positive comments and improvements suggested by the users. Fig. 8 shows some of the trailer fragments generated by our proposed system14.
Footnote 14: Detailed demo walk-through: [https://www.youtube.com/watch?v=](https://www.youtube.com/watch?v=) 06VVuAFInTk
Following is the list of 8 questions that were asked of the evaluators during the evaluation. The text in _italics_ highlights the broader aspect being evaluated.
Q1. Did you find the trailer to be _self-contained_?
Q2. How were the _fonts and styles_ used in the trailer in terms of readability?
Q3. How did you find the _length and pace_ of the trailer?
Q4. As a user, how _impressed_ are you with this trailer overall?
Q5. Could this trailer _evoke interest_ in someone taking this course? (Ignoring any prior inclination to the topic)
Q6. How was the _average duration_ of each frame?
Q7. Based on the trailer you just saw, do you think you have a good _impression_ of the course now?
Q8. How did you find the _sync between the audio and visuals_ you saw?
As can be seen in Fig. 9, the scores obtained for each of the survey questions are good and far above the average (score of 3) for almost all the trailers generated by our approach. In our study, we also found that both templates performed equally well. However, for Q5, the average score is relatively lower compared to the other questions. On digging deeper, we found that several of the 24 comments we received mentioned the difficulty of the course as the reason for not getting interested in it, which suggests that Q5 is more subjective.
## V Conclusions and Future Work
In this paper, we presented a novel framework for automatically generating video trailers for a learning pathway using ML and NLP techniques. We validated our trailers on multiple corpora of varied granularity with human evaluation, and the results obtained were encouraging. This approach can be adapted to different domains given enough data to train the models involved in the entire process. We believe this approach can lay the foundation for building more advanced versions of trailers.
In the future, we plan to improve the existing system by incorporating the suggestions obtained in the user evaluation and adding more interesting themes, like automatically detecting learning outcomes given the resources. We also intend to create an interactive dashboard to take inputs from the creator and allow the creator to edit the auto-generated content.
## Acknowledgment
We thank the Center of Excellence on Cognitive Computing, funded by Mphasis F1 Foundation for funding this research. We also thank Dr. Prasad Ram and Gooru team ([https://gooru.org](https://gooru.org)) for the topical discussions and encouragement.
|
2306.16443 | Free fermionic webs of heterotic T-folds | Moduli stabilisation is key to obtaining phenomenologically viable string
models. Non-geometric compactifications, like T-duality orbifolds (T-folds),
are capable of freezing many moduli. However, in this Letter we emphasise that
T-folds, admitting free fermionic descriptions, can be associated with a large
number of different T-folds with varying number of moduli, since the fermion
pairings for bosonisation are far from unique. Consequently, in one description
a fermionic construction might appear to be asymmetric, and hence
non-geometric, while in another it admits a symmetric orbifold description. We
introduce the notion of intrinsically asymmetric T-folds for fermionic
constructions that do not admit any symmetric orbifold description after
bosonisation. Finally, we argue that fermion symmetries induce mappings in the
bosonised description that extend the T-duality group. | Alon E. Faraggi, Stefan Groot Nibbelink, Benjamin Percival | 2023-06-28T18:00:01Z | http://arxiv.org/abs/2306.16443v2 | # Free fermionic webs of heterotic T-folds
###### Abstract
Moduli stabilisation is key to obtaining phenomenologically viable string models. Non-geometric compactifications, like T-duality orbifolds (T-folds), are capable of freezing many moduli. However, T-folds admitting free fermionic descriptions can be associated with a large number of different T-folds with varying numbers of moduli, since the fermion pairings for bosonisation are far from unique. Fermion symmetries induce mappings in the bosonised description that extend the T-duality group.
Footnote †: preprint: CERN-TH-2023-115
## I Introduction
String theory realises a unification of gravity, gauge interactions and their charged matter via the properties of Conformal Field Theories (CFTs) residing on its two dimensional (2D) worldsheet. Heterotic strings on toroidal orbifolds [1; 2] led to some of the most realistic string-derived models to date [3; 4; 5]. However, orbifolds and other geometrical backgrounds result in free moduli (such as the metric, B-field or Wilson lines) on which detailed physics, like gauge and Yukawa couplings, depend.
Strings on tori and their orbifolds admit exact quantisation. This was instrumental in the discovery of T-dualities [6], like the famous \(R\to 1/R\) duality, which sets the effective minimum of the radius \(R\) equal to the string scale. Investigations of string backgrounds had a profound impact on mathematics as mirror symmetry showed, which was argued to be a form of T-duality [7].
Modding out T-duality symmetries may lead to exotic non-geometric backgrounds [8; 9], dubbed T-folds. Hence, the landscape of string vacua may be much vaster than suggested by geometrical compactifications alone. Even though non-geometric constructions have been studied far less than their geometric counterparts, they may be vital for phenomenological string explorations as they are capable of freezing many moduli.
Such T-folds may have different actions on their left- and right-moving bosonic coordinate fields, and are thus referred to as asymmetric orbifolds [10; 11]. If only order two symmetries are modded out, an alternative fermionic description may be obtained by bosonisation, a CFT equivalence of chiral bosons and fermions in 2D [12]. This led to a detailed dictionary between these two formulations explicated for symmetric \(\mathbb{Z}_{2}\!\times\!\mathbb{Z}_{2}\) orbifolds [13]. Asymmetric boundary conditions in the fermionic formalism have profound phenomenological consequences such as, doublet-triplet splitting mechanism [14; 15], Yukawa coupling selection [16], and moduli fixing [17].
Although a similar dictionary for asymmetric orbifolds is not this letter's aim, heterotic bosonisation ambiguities suggest identifications of seemingly unrelated T-folds. This sheds new light on non-geometric moduli stabilisation. Fermionic symmetries parameterising these ambiguities, suggest an extension of the T-duality group.
## II Order two bosonic T-fold models
The bosonic formulation of the heterotic string [18] describes \(d\)-dimensional Minkowski space by coordinate fields \(x=(x^{\mu=2\ldots d-1})\) in light-cone gauge. The internal coordinate fields \(X=(X_{\rm R}|X_{\rm L})\), with right- and left-chiral parts, \(X_{\rm R}=(X_{\rm R}^{i=1\ldots D})\) and \(X_{\rm L}=(X_{\rm L}^{\alpha=1\ldots D+16})\), \(D=10-d\), are subject to torus periodicities
\[X\sim X+2\pi\,N\,,\qquad N\in\mathbb{Z}^{2D+16}\,. \tag{1}\]
The worldsheet supersymmetry current,
\[T_{\rm F}(z)=i\,\psi_{\mu}\partial x^{\mu}+i\,\chi^{i}\partial X_{\rm R}^{i}\,, \tag{2}\]
involves the real holomorphic superpartners \(\psi=(\psi^{\mu})\) and \(\chi=(\chi^{i})\) of \(x\) and \(X_{\rm R}\), respectively. Here, \((\bar{\partial})\partial\) denotes (anti-)holomorphic worldsheet derivative and repeated indices are summed over.
An order two generator, defining the orbifold action
\[X\sim e^{2\pi i\,v}\,X-2\pi\,V\,, \tag{3}\]
with \(v=(v_{\rm R}|v_{\rm L}),\,V=(V_{\rm R}|V_{\rm L})\in\frac{1}{2}\,\mathbb{Z}^{2 D+16}\), is called a shift, a twist, or a roto-translation, if \(V\not\equiv 0\equiv v\), \(v\not\equiv 0\equiv V\), or \(v,V\not\equiv 0\), respectively. (\(\equiv\) means equal up to integral vectors.)
An orbifold is called _symmetric_ if there is a basis such that the left- and right-twist parts are equivalent according to
\[v_{\rm L}\equiv(v_{\rm R},0^{16})\,, \tag{4}\]
for all its generators simultaneously [19; 20; 21], and _asymmetric_ if no such basis exists. The addition of \(0^{16}\) is essential as the vectors \(v_{\rm L}\) and \(v_{\rm R}\) have unequal lengths.
## III Real free fermionic models
In the free fermionic formulation [22; 23; 24] the internal degrees of freedom are described by real holomorphic fermions \(f=(y,w)\) with \(y=(y^{i})\) and \(w=(w^{i})\) and real anti-holomorphic fermions \(\bar{f}=(\,\bar{f}^{u=1\ldots 2D+32})\). Worldsheet supersymmetry is realised non-linearly
\[T_{\rm F}(z)=i\psi_{\mu}\partial x^{\mu}+i\chi^{i}y^{i}w^{i}\,. \tag{5}\]
A fermionic model is defined by a set of basis vectors with entries \(0\) or \(1\) for real fermions. Each basis vector \(\mathbf{\beta}=(\beta|\bar{\beta})\) defines boundary conditions
\[f\sim-e^{\pi i\,\beta}\,f\,\qquad\bar{f}\sim-e^{\pi i\,\beta}\,\bar{f}\,. \tag{6}\]
## IV Bosonisations
### Holomorphic bosonisation
Assuming that the fermions \(\chi\) are identical in the supercurrents (2) and (5) and they generate the same worldsheet supersymmetry in the bosonic and fermionic descriptions, bosonisation uniquely relates the currents
\[J^{i}=\,:\!(\lambda^{i})^{*}\lambda^{i}\!:\,=\,:\!y^{i}w^{i}\!:\,\cong\,i\,\partial X^{i}_{\rm R}\,,\tag{7a}\]
and complex fermions
\[\lambda^{i}=\frac{1}{\sqrt{2}}\,(y^{i}+i\,w^{i})\cong\,:\!e^{i\,X^{i}_{\rm R}}\!:\tag{7b}\]
to normal ordered exponentials of chiral bosons. Here \(\cong\) emphasises that these expressions are not identities but rather that both sides have identical operator product expansions in either formulation.
The bosonisation formulae relate the boundary conditions in the two descriptions. The torus periodicities (1) reflect the \(2\pi\) ambiguities of \(X_{\rm R}\) in the complex exponentials (7b). Comparing the orbifold conditions (3) of the right-moving bosons \(X_{\rm R}\) with the boundary conditions (6) of the holomorphic fermions \(y\) and \(w\) in (7) leads to the following identifications:
\[v_{\rm R}=\tfrac{1}{2}\beta(w)-\tfrac{1}{2}\beta(y)\,,\qquad V_{\rm R}=\tfrac {1}{2}(1^{D})-\tfrac{1}{2}\beta(y)\,. \tag{8}\]
### Anti-holomorphic bosonisation
Contrary to the holomorphic side, the pairing of the anti-holomorphic fermions is arbitrary. Associating odd and even fermion labels to the real and imaginary parts of complex fermions results in an anti-holomorphic bosonisation procedure given by:
\[\overline{J}^{a}=\,:\!(\bar{\lambda}^{a})^{*}\bar{\lambda}^{a}\!:\,=\,:\!\bar{f}^{2a-1}\bar{f}^{2a}\!:\,\cong\,i\,\bar{\partial}X^{a}_{\rm L}\,,\tag{9a}\]
with
\[\bar{\lambda}^{a}=\frac{1}{\sqrt{2}}\,(\bar{f}^{2a-1}+i\,\bar{f}^{2a})\cong\,:\!e^{i\,X^{a}_{\rm L}}\!:\,,\tag{9b}\]
for \(a=1,\ldots D+16\).
By similar arguments as above, the torus periodicities (1) for \(X_{\rm L}\) follow. Splitting \(\bar{\beta}=(\bar{\beta}_{\circ},\bar{\beta}_{\rm e})\) into two \((D+16)\)-dimensional vectors, \(\bar{\beta}_{\circ}=(\bar{\beta}^{1,3\ldots 2D+31})\) and \(\bar{\beta}_{\rm e}=(\bar{\beta}^{2,4\ldots 2D+32})\), leads to the identifications:
\[v_{\rm L}=\tfrac{1}{2}\bar{\beta}_{\rm e}-\tfrac{1}{2}\bar{\beta}_{\circ}\,, \qquad V_{\rm L}=\tfrac{1}{2}(1^{D+16})-\tfrac{1}{2}\bar{\beta}_{\circ}\,. \tag{10}\]
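As a quick numerical illustration of this dictionary (our own sketch, not part of the letter), the identifications (8) and (10) can be evaluated directly on a fermionic basis vector:

```
# Sketch evaluating the dictionary (8) and (10): from a fermionic basis
# vector beta, compute twists and shifts (modulo integral vectors).
import numpy as np

def right_moving(beta_y, beta_w):
    beta_y, beta_w = np.asarray(beta_y), np.asarray(beta_w)
    v_R = (beta_w - beta_y) / 2                 # Eq. (8), twist
    V_R = (np.ones_like(v_R) - beta_y) / 2      # Eq. (8), shift
    return v_R % 1, V_R % 1

def left_moving(beta_bar):
    beta_bar = np.asarray(beta_bar)
    b_odd, b_even = beta_bar[0::2], beta_bar[1::2]
    v_L = (b_even - b_odd) / 2                  # Eq. (10), twist
    V_L = (np.ones_like(v_L) - b_odd) / 2       # Eq. (10), shift
    return v_L % 1, V_L % 1

# Example in D = 4: for 1 - b of Table 3, beta(y) = 1^4 and beta(w) = 0^4,
# giving v_R = 1/2 (1^4) and V_R = (0^4), as listed there.
print(right_moving([1, 1, 1, 1], [0, 0, 0, 0]))
```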
## V Extension of the T-duality group
### Fermionic inversions and permutations
On the anti-holomorphic side, the fermionic symmetries contain inversions and permutations: \((u)\) denotes the fermion inversion \(\bar{f}^{u}\to-\bar{f}^{u}\), while the permutation \((u_{1}\ \cdots\ u_{p})\) acts as \(\bar{f}^{u_{1}}\to\bar{f}^{u_{2}}\cdots\to\bar{f}^{u_{p}}\to\bar{f}^{u_{1}}\), leaving the remaining fermions inert.
\begin{table}
\begin{tabular}{c c} Fermionic symmetry & Action on twist and shift entries \\ \hline \((2a\!-\!1\;2b\!-\!1)(2a\;2b)\) & \(v^{a}_{\rm L}\leftrightarrow v^{b}_{\rm L},\ V^{a}_{\rm L}\leftrightarrow V^{b}_{\rm L}\) \\ \((2a)\) & \(v^{a}_{\rm L}\rightarrow-v^{a}_{\rm L}+2\,V^{a}_{\rm L},\ V^{a}_{\rm L}\to V^{a}_{\rm L}\) \\ \hline \((2a\!-\!1\;2a)\) & \(v^{a}_{\rm L}\rightarrow-v^{a}_{\rm L},\ V^{a}_{\rm L}\to V^{a}_{\rm L}-v^{a}_{\rm L}\) \\ \((2a\!-\!1\;2b)\) & \(v^{a}_{\rm L}\to v^{b}_{\rm L}+V^{a}_{\rm L}-V^{b}_{\rm L},\ V^{a}_{\rm L}\to V^{a}_{\rm L}\), \\ & \(v^{b}_{\rm L}\to v^{b}_{\rm L}+V^{b}_{\rm L}-V^{a}_{\rm L},\ V^{b}_{\rm L}\to V^{b}_{\rm L}\) \\ \((2a\!-\!1\;2b\!-\!1)\) & \(v^{a}_{\rm L}\to v^{a}_{\rm L}+V^{a}_{\rm L}-V^{b}_{\rm L},\ V^{a}_{\rm L}\to V^{b}_{\rm L}\), \\ & \(v^{b}_{\rm L}\to v^{b}_{\rm L}+V^{b}_{\rm L}-V^{a}_{\rm L},\ V^{b}_{\rm L}\to V^{a}_{\rm L}\) \\ \((2a\!-\!1)\) & \(v^{a}_{\rm L}\to v^{a}_{\rm L}-2\,V^{a}_{\rm L},\ V^{a}_{\rm L}\to-V^{a}_{\rm L}\) \\ \end{tabular}
\end{table}
Table 2: Fermionic symmetry induced bosonic boundary condition mappings. (Only non–inert entries of the vectors \(v_{\rm L}\) and \(V_{\rm L}\) modulo integral vectors are given.)
\begin{table}
\begin{tabular}{c c} Fermionic symmetry & Action on left–moving bosons \\ \hline \((2a\!\!-\!1\,2b\!\!-\!1)(2a\,2b)\) & \(X^{a}_{\rm L}\leftrightarrow X^{b}_{\rm L}\) \\ \((2a)\) & \(X^{a}_{\rm L}\rightarrow-X^{a}_{\rm L}\) \\ \hline \((2a\!\!-\!1)\) & \(X^{a}_{\rm L}\rightarrow\pi-X^{a}_{\rm L}\) \\ \((2a\!\!-\!1\,2a)\) & \(X^{a}_{\rm L}\rightarrow\frac{1}{2}\pi-X^{a}_{\rm L}\) \\ \end{tabular}
\end{table}
Table 1: Fermionic symmetry induced bosonic coordinate field transformations. (Only non–inert fields are given.)
The permutation group contains elements consisting of multiple such factors, provided their entries are all distinct, and is generated by permutations of two elements \((uv)\). The induced actions of the fermionic symmetries within the bosonic formulation can be identified using the bosonisation (9).
### Induced bosonic coordinate transformations
The fermionic symmetries that leave these fermion bosonisation pairs intact realise mappings of the bosonic coordinate fields \(X_{\rm L}\) to themselves. Their generators and their realisations on the bosonic coordinates are listed in Table 1. The bosonic transformations above the middle line of this table are part of the T-duality group, while those below involve translations as well.
### Induced mappings of bosonic boundary conditions
Other fermionic symmetries break up fermion bosonisation pairs and hence correspond to mappings between different coordinate fields, between which no obvious coordinate transformation exists. However, all fermionic symmetries generated by inversions and permutations map the boundary conditions of one orbifold theory to another. The mappings induced by the generators of the fermionic symmetries are collected in Table 2. The transformations induced by the permutations \((2a\,2b)\) and \((2a\)-\(1\,2b\)-\(1)\) combined (in whatever order) lead to the boundary condition mapping associated with \((2a\)-\(1\,2b\)-\(1)(2a\,2b)\), as the group property suggests. Since some actions can be interpreted as T-duality transformations while others cannot, this hints at an extension of the T-duality group.
The Table 2 mappings \((2a\)-\(1\,2a)\), \((2a\)-\(1\,2b\)-\(1)\) and \((2a\,2b)\) are of special significance: they mix the twist and shift vector entries. The action of \((2a\)-\(1\,2a)\) recalls that the shift part of a roto-translation, in directions where the twist acts non-trivially, can be removed via the associated coordinate transformation in Table 1. The actions \((2a\)-\(1\,2b\)-\(1)\) and \((2a\,2b)\) imply that a pure shift boundary condition can be turned into a roto-translation. By combining these mappings, a web of equivalent (mostly asymmetric) orbifold theories emerges.
Since all these T-folds are just different bosonic representations of the same fermionic theory, their physical properties are identical, even though they may not look alike. For example, their modular invariance conditions may seem to disagree, as the number of non-zero entries in the twist vectors changes under mappings like \((2a\,2b)\). However, since only the part of the shift in (3) on which the twist acts trivially takes part in the modular invariance condition [21], their consistency conditions are numerically identical.
### A free fermionic T-fold web
The basis vectors, \(\mathbf{\beta}\), for a simple illustrative 6D fermionic model are given in Table 3 together with associated twist and shift vectors using the odd-even pairings (9). Within this bosonisation the model is understood as an asymmetric orbifold. The interpretation may change by applying fermionic symmetries.
The permutations \((2\,6)^{p_{1}}(4\,8)^{p_{2}}(10\,14)^{p_{3}}(12\,16)^{p_{4}}\) with \(p_{i}=0,1\), map the twist vector \(v_{\rm L}({\bf 1}-{\bf b})=\frac{1}{2}(0^{20})\to\)
\[|p_{1}p_{2}\,p_{3}p_{4}\rangle=\tfrac{1}{2}(p_{1}p_{2}p_{1}p_{2}\,p_{3}p_{4}p_ {3}p_{4}\,0^{12})\,, \tag{11}\]
while the other twists and shifts remain the same, since \((2a\,2b)\) leave \(V_{\rm L}\) inert (see Table 2). When these permutations are successively switched on, the T-fold web, given in Figure 1, is obtained.
For the cases with two non-zero \(p_{i}\), (11) implies that \(v_{\rm L}=(v_{\rm R},0^{16})\) possibly up to a change of basis. Thus the resulting bosonic models are interpreted as symmetric orbifolds. In particular, the model obtained after the fermionic permutation \((2\,6)(4\,8)\) is conventionally considered as the bosonic representation of this fermionic model in which \(\mathbf{\xi}\) just separates out the \(SO(32)\) gauge group, while for all the other cases with two non-zero \(p_{i}\), \(\mathbf{\xi}\) acts as an asymmetric Wilson line.
Table 4 provides an overview of all inequivalent T-fold models associated with this fermionic model. It indicates in how many directions \({\bf b}\) and \(\mathbf{\xi}\) act as left-moving twists. Apart from the sixteen models depicted in Figure 1 (of which nine are inequivalent), \(\mathbf{\xi}\) can have an asymmetric twist action, as can be inferred from this table. The total number of inequivalent T-fold models associated with the fermionic basis vectors given in Table 3 is 213. This number rapidly increases for fermionic models defined with more basis vectors. For example, for the fermionic model in which \(\mathbf{\xi}\) is split into \(\mathbf{\xi}_{1}\) and \(\mathbf{\xi}_{2}\) the number of inequivalent bosonisations becomes \(11\,273\), and for the NAHE set [25; 26; 27] it is \(85\,735\).
\begin{table}
\begin{tabular}{l|l l} Fermionic basis vectors & Twist and shift vectors \\ \(\mathbf{\beta}\) & \(v\)=\((v_{\rm R}|v_{\rm L})\) & \(V\)=\((V_{\rm R}|V_{\rm L})\) \\ \hline
**1**=\(\{\psi^{1\ldots 4}\chi^{1\ldots 4}y^{1\ldots 4}w^{1\ldots 4}|\bar{f}^{1\ldots 40}\}\) & \((0^{4}|0^{20})\) & \(\tfrac{1}{2}(1^{4}|1^{20})\) \\
**S**=\(\{\psi^{1\ldots 4}\chi^{1\ldots 4}\}\) & \((0^{4}|0^{20})\) & \((0^{4}|0^{20})\) \\
**\(\mathbf{\xi}\)**=\(\{\bar{f}^{9\ldots 40}\}\) & \((0^{4}|0^{20})\) & \(\tfrac{1}{2}(0^{4}|0^{4}1^{16})\) \\
**b**=\(\{\chi^{1\ldots 4}w^{1\ldots 4}|\bar{f}^{1\ldots 4}\bar{f}^{9\ldots 12}\}\) & \(\tfrac{1}{2}(1^{4}|0^{20})\) & \(\tfrac{1}{2}(0^{4}|1^{2}0^{2}1^{2}0^{14})\) \\ \end{tabular}
\end{table}
Table 3: Fermionic basis vectors \(\mathbf{\beta}\) with the twists \(v\) and shifts \(V\) corresponding to **1**–\(\mathbf{\beta}\) obtained via (8) and (10).
## VI Moduli
The unfixed Narain moduli (\(m_{ij}=g_{ij}+b_{ij}\) with metric \(g_{ij}\), B-field \(b_{ij}\) and Wilson lines \(m_{ix=1\dots 16}\)) of a T-fold correspond to the operators,
\[m_{ia}\,\partial X_{\rm R}^{i}\,\bar{\partial}X_{\rm L}^{a}\,, \tag{12}\]
left inert by (3). Symmetric orbifolds always leave at least the diagonal metric moduli \(m_{ii}=g_{ii}\) free; asymmetric orbifolds may fix all moduli.
This would suggest that the number of frozen moduli varies dramatically depending on which bosonic description of a given fermionic model is used. There is no paradox here: the unfixed scalar deformations of the fermionic model can be identified with the Thirring interactions,
\[m_{iuv}\,y^{i}w^{i}\,\bar{f}^{u}\bar{f}^{v}\,, \tag{13}\]
left inert by (6). Thus, the total number of massless untwisted scalars is bosonisation independent, and therefore identical in any bosonic realisation. Which of them are interpreted as free Narain deformations, however, does depend on the choice of bosonisation, as \(X_{\rm L}\) in (12) does.
## VII Intrinsically asymmetric T-folds
The previous section showed that whether a real fermionic model should be considered as a symmetric or asymmetric model is very much bosonisation dependent. A free fermionic model is called _intrinsically asymmetric_ if for any bosonisation it corresponds to an asymmetric orbifold. An intrinsically asymmetric T-fold is a bosonic model associated to an intrinsically asymmetric fermionic model.
In light of the observation below (12), a fermionic model that admits a symmetric interpretation has, for each \(i\), at least one inert Thirring interaction (13) with some \(u\neq v\). If not, the fermionic model is intrinsically asymmetric. This is, in particular, the case when no Thirring interactions (13) are invariant under (6). An example of such a model is given in ref. [28].
Simple examples of intrinsically asymmetric free fermionic models can be obtained by taking basis vectors that act as purely holomorphic twists. (For example, consider the twist basis vector \({\bf b}=\left\{\chi^{1\dots 4},y^{1\dots 4}\right\}\) in 6D or \({\bf b}_{1}=\left\{\chi^{1\dots 4},y^{1\dots 4}\right\}\) and \({\bf b}_{2}=\left\{\chi^{3\dots 6},y^{3\dots 6}\right\}\) in 4D.) As there are no invariant Thirring interactions (13) possible, the corresponding T-fold models are necessarily intrinsically asymmetric.
## VIII Discussion
This letter focused on heterotic T-folds that admit fermionic descriptions. Even though the key observation that bosonisation in a fermionic CFT is not unique is not new, its striking consequences seem not to have been appreciated so far: a single free fermionic model can be associated with a large number of seemingly unrelated bosonic theories. Some may admit a symmetric orbifold interpretation while most others are asymmetric, but in many different ways.
In light of this, studies of non-geometric constructions, and T-folds in particular, may need to be revised, since seemingly different non-geometries may, in fact, be equivalent. In particular, in the bosonic orbifold literature it would be inconceivable that symmetric and asymmetric orbifolds can be identified. Moreover, the number of frozen moduli turns out to be a bosonisation dependent quantity; only the total number of massless untwisted scalars is identical in any description.
In addition, the induced bosonic actions of fermionic symmetries hint at an extension of the T-duality group of toroidal and \(\mathbb{Z}_{2}\) orbifold compactifications.
Figure 1: Web of T–folds associated to the fermionic model given in Table 3 in which only \({\bf b}\) acts as a twist (11).
\begin{table}
\begin{tabular}{c|c c c c c c} \({\bf b}\) & \({\bf 0}\) & \({\bf 2}\) & \({\bf 4}\) & \({\bf 6}\) & \({\bf 8}\) & **Sum** \\ \hline \({\bf 0}\) & 1 & 2 & 3 & 2 & 1 & **9** \\ \({\bf 2}\) & 2 & 11 & 18 & 12 & 3 & **46** \\ \({\bf 4}\) & 3 & 18 & 32 & 19 & 6 & **78** \\ \({\bf 6}\) & 2 & 12 & 19 & 18 & 7 & **58** \\ \({\bf 8}\) & 1 & 3 & 6 & 7 & 5 & **22** \\ \({\bf Sum}\) & **9** & **46** & **78** & **58** & **22** & **213** \\ \end{tabular}
\end{table}
Table 4: The number of \({\bf b}\) and \({\bf\xi}\) twists for the inequivalent T–folds associated to the fermionic model in Table 3.
The findings presented in this letter were derived at free fermionic points. However, the induced transformations of the bosonic boundary conditions by the fermionic symmetries may be considered without referring to any fermionic description at all. Hence, it is an interesting question whether the suggested extension of the T-duality group discussed above is a general duality symmetry of string theory or exists at free fermionic points only.
###### Acknowledgements.
AEF would like to thank the CERN theory division for hospitality and support. SGN would like to thank the organizers of the workshop Gauged Linear Sigma Models @ 30 for their kind invitation and the warm hospitality of the Simons Center at Stony Brook University during this event where part of this work was done. The work of BP is supported by EPSRC grant EP/W522399/1.
|
2303.01939 | Retinal Image Restoration using Transformer and Cycle-Consistent
Generative Adversarial Network | Medical imaging plays a significant role in detecting and treating various
diseases. However, these images often happen to be of too poor quality, leading
to decreased efficiency, extra expenses, and even incorrect diagnoses.
Therefore, we propose a retinal image enhancement method using a vision
transformer and convolutional neural network. It builds a cycle-consistent
generative adversarial network that relies on unpaired datasets. It consists of
two generators that translate images from one domain to another (e.g., low- to
high-quality and vice versa), playing an adversarial game with two
discriminators. Generators produce indistinguishable images for discriminators
that predict the original images from generated ones. Generators are a
combination of vision transformer (ViT) encoder and convolutional neural
network (CNN) decoder. Discriminators include traditional CNN encoders. The
resulting improved images have been tested quantitatively using such evaluation
metrics as peak signal-to-noise ratio (PSNR), structural similarity index
measure (SSIM), and qualitatively, i.e., vessel segmentation. The proposed
method successfully reduces the adverse effects of blurring, noise,
illumination disturbances, and color distortions while significantly preserving
structural and color information. Experimental results show the superiority of
the proposed method. Our testing PSNR is 31.138 dB for the first and 27.798 dB
for the second dataset. Testing SSIM is 0.919 and 0.904, respectively. | Alnur Alimanov, Md Baharul Islam | 2023-03-03T14:10:47Z | http://arxiv.org/abs/2303.01939v1 | # Retinal Image Restoration using Transformer and Cycle-Consistent Generative Adversarial Network
###### Abstract
Medical imaging plays a significant role in detecting and treating various diseases. However, these images often happen to be of too poor quality, leading to decreased efficiency, extra expenses, and even incorrect diagnoses. Therefore, we propose a retinal image enhancement method using a vision transformer and convolutional neural network. It builds a cycle-consistent generative adversarial network that relies on unpaired datasets. It consists of two generators that translate images from one domain to another (e.g., low- to high-quality and vice versa), playing an adversarial game with two discriminators. Generators produce indistinguishable images for discriminators that predict the original images from generated ones. Generators are a combination of vision transformer (ViT) encoder and convolutional neural network (CNN) decoder. Discriminators include traditional CNN encoders. The resulting improved images have been tested quantitatively using such evaluation metrics as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and qualitatively, i.e., vessel segmentation. The proposed method successfully reduces the adverse effects of blurring, noise, illumination disturbances, and color distortions while significantly preserving structural and color information. Experimental results show the superiority of the proposed method. Our testing PSNR is 31.138 dB for the first and 27.798 dB for the second dataset. Testing SSIM is 0.919 and 0.904, respectively. The code is available at [https://github.com/A/Akka/Transformer-Cycle-GAN](https://github.com/A/Akka/Transformer-Cycle-GAN)
Retinal image restoration, deep learning, transformer, generative adversarial network.
## I Introduction
The retina, part of the central nervous system, is responsible for transforming incoming light into a neural signal and sending it to the brain's visual cortex for processing. Therefore, ophthalmologists can diagnose eye, brain, and blood circulation diseases by analyzing retinal images. However, sometimes due to poor imaging device configuration, misoperations during the imaging process, and patient restlessness (mostly applicable to children), these images suffer from such degradation types as out-of-focus blurring, image noise, low, high, and uneven illumination, and color distortion. This has a tremendously negative impact on treatment efficiency, incurs extra time and money expenditure, and can even lead to misdiagnosis.
In recent years, deep learning-based techniques [1, 2, 3] have been used to detect and classify diseases using retinal images. However, these methods rely on high-quality images during analysis, which is not always possible to acquire. Researchers have also proposed traditional methods [4, 5] to enhance these images. However, they often suffer from generalization issues, meaning they cannot be applied to all cases. For example, Zhang et al. [5] took the reflective feature of the fundus camera into account and proposed a double-pass fundus reflection model (DPFR) that improves the contrast of retinal images; however, the presence of image artifacts is noticeable. Many learning-based techniques [6, 7, 8, 9, 10, 11] have been introduced to solve this problem, but most of these works addressed the missing paired data problem by manually degrading original high-quality fundus images. Manually degraded images differ from genuinely low-quality retinal images, and thus these methods perform poorly when enhancing real degradation effects. For instance, the authors of [11] developed a restoration deep learning network trained with simulated cataract-like images, which performs worse when restoring real low-quality images.
This paper introduces a novel deep learning retinal image restoration model based on a cycle-consistent generative adversarial network with modified generators that do not require paired datasets. Our model replaced the traditional CNN encoder with a vision transformer encoder, resulting in faster convergence, superior quantitative and qualitative outcomes, and better structural and color preservation than other methods. The method has been trained and validated using two publicly available datasets [12, 13]. The contributions can be summarized as follows:
* We propose a novel CycleGAN architecture consisting of a vision transformer and convolutional neural network. To the best of our knowledge, this is the first attempt to combine CNN with a transformer in CycleGAN architecture.
* Replacing the transformer decoder with a CNN decoder reduced the computational expense compared to a pure transformer CycleGAN architecture, which allowed us to use a larger image size.
## II Methodology
### _Model Architecture_
Our method is based on CycleGAN, initially proposed by [14]. However, we replaced the CNN encoder in the generator network with a vision transformer encoder, as shown in Fig. 1. Using a CNN as the decoder decreased computational expenses compared to a pure transformer network, which allowed
to increase the image size by a factor of 4, from 64\(\times\)64 to 256\(\times\)256. This network consists of a pair of generator networks (\(G_{H}\) and \(G_{L}\)) and a pair of discriminators (\(D_{H}\) and \(D_{L}\)). The task of each generator is to translate images from one domain to another and to fool the corresponding discriminator. For instance, \(G_{H}\) tries to generate realistic high-quality images from original low-quality ones, while \(D_{H}\) receives this generated image together with an original high-quality image and calculates the probability of a given image being authentic. As a result, the generators and discriminators play an adversarial game.
A discriminator is a CNN-based classifier that consists of such layers as 2D convolution, the LeakyReLU activation function, and instance normalization. First, a color 256\(\times\)256 image is fed into a convolutional layer with output channels, kernel size, stride, and padding equal to 64, 4, 2, and 1, respectively, followed by the LeakyReLU activation function. The result is sent to three consecutive downsampling blocks: convolutional layer, instance normalization, and LeakyReLU. Kernel size and padding are equal to 4 and 1, while the number of output features doubles after each downsampling block, starting from 64 and ending with 512. The stride is 1 in the first and 2 in the last two blocks. Finally, the result is fed into the last convolutional layer with one output channel, stride and padding equal to 1, and kernel size of 4. As a result, we get a 30\(\times\)30 patch and make predictions using the sigmoid activation function.
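A PyTorch sketch following this description is given below; it reflects our reading of the text rather than the authors' released code.

```
# Sketch of the PatchGAN-style discriminator described above
# (our reading of the text, not the authors' released code).
import torch.nn as nn

def conv_block(in_ch, out_ch, stride):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2),
    )

discriminator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),   # 256 -> 128
    nn.LeakyReLU(0.2),
    conv_block(64, 128, stride=1),                           # 128 -> 127
    conv_block(128, 256, stride=2),                          # 127 -> 63
    conv_block(256, 512, stride=2),                          # 63  -> 31
    nn.Conv2d(512, 1, kernel_size=4, stride=1, padding=1),   # 31 -> 30x30
    nn.Sigmoid(),
)
```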
The generator consists of a vision transformer encoder and a CNN decoder. First, each input image of size 256\(\times\)256 is divided into patches of size 8\(\times\)8 using a convolutional projection, as proposed by [15], which provides additional efficiency. Instead of a depth-wise convolutional layer with batch normalization, we only use a convolutional layer with 1024 output channels and kernel and stride sizes of 1. We then flatten the 3D output to 2D and transpose it, which reduces the computation time without affecting the accuracy of the method. These flattened sequences are fed into a standard transformer encoder network, as shown in Fig. 1. Our network's depth and projection dimension are 7 and 1024, respectively. The output shape of this encoder is 1024\(\times\)1024; we therefore reshape it into 1024\(\times\)32\(\times\)32 to feed it into the CNN decoder. The decoder consists of 3 upsampling blocks: transposed convolution, instance normalization, and the ReLU activation function. The number of output channels is halved after each block; kernel size, stride, padding, and output padding in all blocks are 3, 2, 1, and 1, respectively. The resulting tensor of shape 128\(\times\)256\(\times\)256 is converted into a color image using a last convolutional layer with a kernel size of 7, stride of 1, and padding of 3. The final output is activated using a hyperbolic tangent.
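Similarly, the generator can be sketched as follows; the strided-convolution patch embedding and the number of attention heads are our assumptions, since the text does not pin them down.

```
# Sketch of the ViT-encoder / CNN-decoder generator (our reading of the
# text; patch embedding as a strided conv and nhead=8 are assumptions).
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, dim=1024, depth=7):
        super().__init__()
        # 8x8 patches of a 256x256 image -> 32*32 = 1024 tokens of dim 1024
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        def up(in_ch, out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(in_ch, out_ch, 3, stride=2,
                                   padding=1, output_padding=1),
                nn.InstanceNorm2d(out_ch), nn.ReLU())
        self.decoder = nn.Sequential(
            up(dim, 512), up(512, 256), up(256, 128),        # 32 -> 256
            nn.Conv2d(128, 3, kernel_size=7, stride=1, padding=3),
            nn.Tanh())

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)
        tokens = self.encoder(tokens)                        # B, 1024, 1024
        feats = tokens.transpose(1, 2).reshape(-1, 1024, 32, 32)
        return self.decoder(feats)
```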
### _Loss functions_
#### II-B1 Adversarial Loss
Adversarial loss was introduced in the first GAN work [16] and can be formulated as a discriminator loss. Given a low-quality image \(L\), a high-quality image \(H\), a generator \(G_{H}\) that transforms \(L\) to a high-quality \(\hat{H}\), and a generator \(G_{L}\) that degrades images (from \(H\) to \(\hat{L}\)), the discriminators \(D_{H}\) and \(D_{L}\) classify high-quality (\(H\), \(\hat{H}\)) and low-quality images (\(L\) and \(\hat{L}\)), respectively. Mathematically, this loss function is expressed as follows:
\[L_{A}(D_{L},L,\hat{L}) =E_{L}[log(D_{L}(L))]+E_{\hat{L}}[log(1-D_{L}(\hat{L}))]\] \[L_{A}(D_{H},H,\hat{H}) =E_{H}[log(D_{H}(H))]+E_{\hat{H}}[log(1-D_{H}(\hat{H}))] \tag{1}\]
Fig. 1: The workflow of the proposed method. Low-quality images go through Generator H and the result - Generated HQ - is inputted into Generator H and Discriminator H. Next, the Generated LQ image is made from high-quality image using Generator L, then this generated image is passed to both Generator H and Discriminator L. After these steps, losses are calculated.
#### II-B2 Cycle Consistency Loss
Cycle consistency can be formulated as follows: a generated image \(\hat{H}\) translated back to \(\hat{L}\) should not differ from \(L\), and the same applies for \(H\) and a regenerated \(\hat{H}\) from \(\hat{L}\). The difference between regenerated and original images is calculated using the \(L1\) function in the following way:
\[L_{C}(\hat{L},L) =E_{L}[||\hat{L}-L||_{1}]\] \[L_{C}(\hat{H},H) =E_{H}[||\hat{H}-H||_{1}] \tag{2}\]
#### II-B3 Identity loss
To calculate the identity loss, we use generators and images from the same domain. More specifically, \(H\) should not be modified when it is fed to \(G_{H}\); the same applies to \(L\) and \(G_{L}\). This loss is also calculated using the \(L1\) function:
\[L_{I}(L,G_{L}(L)) =E_{L}[||L-G_{L}(L)||_{1}]\] \[L_{I}(H,G_{H}(H)) =E_{H}[||H-G_{H}(H)||_{1}] \tag{3}\]
#### II-B4 Total loss
The total generator and discriminator loss can be formulated as:
\[L(G_{L},G_{H},D_{L},D_{H}) =L_{A}(D_{L},L,\hat{L})+L_{A}(D_{H},H,\hat{H})+\] \[+\lambda_{1}L_{C}(\hat{L},L)+\lambda_{1}L_{C}(\hat{H},H)+\] \[+\lambda_{2}L_{I}(L,G_{L}(L))+\lambda_{2}L_{I}(H,G_{H}(H)) \tag{4}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are weights of cycle consistency and identity losses. The main problem that discriminators and generators are trying to solve can be mathematically expressed in the following equation:
\[G_{L}^{*},G_{H}^{*}=arg\min_{G_{L},G_{H}}\max_{D_{L},D_{H}}L(G_{L},G_{H},D_{L},D_{H}) \tag{5}\]
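A condensed PyTorch sketch of the combined objective (4) is shown below; the weight values and the least-squares form of the adversarial terms (common in CycleGAN implementations) are assumptions, not specifics from this paper.

```
# Sketch of the total CycleGAN generator objective (4); lambda values and
# the least-squares adversarial form are assumptions.
import torch
import torch.nn.functional as F

def generator_loss(G_L, G_H, D_L, D_H, L, H, lam1=10.0, lam2=5.0):
    H_hat, L_hat = G_H(L), G_L(H)
    pred_H, pred_L = D_H(H_hat), D_L(L_hat)
    # Adversarial terms: generators try to make D output "real" (1).
    adv = F.mse_loss(pred_H, torch.ones_like(pred_H)) \
        + F.mse_loss(pred_L, torch.ones_like(pred_L))
    # Cycle-consistency terms, Eq. (2).
    cyc = F.l1_loss(G_L(H_hat), L) + F.l1_loss(G_H(L_hat), H)
    # Identity terms, Eq. (3).
    idt = F.l1_loss(G_L(L), L) + F.l1_loss(G_H(H), H)
    return adv + lam1 * cyc + lam2 * idt
```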
## III Results and Discussion
### _Datasets_
In our work, we used two datasets, EyeQ [12] and Mendeley [13]. The EyeQ dataset has 16,818 "Good", 6,434 "Usable", and 5,538 "Bad" images of size 800\(\times\)800. The Mendeley dataset comprises 1,146 "Artifacts" and 1,060 "No artifacts" images of size 350\(\times\)346. For training, we used all "Good" and "No artifacts" images, 5,455 "Usable" images, and 986 "Artifacts" images. The testing set includes 979 "Usable" and 160 "Artifacts" images. To train the vessel segmentation model, we used the DRIVE [17] dataset, which includes 40 images of size 565\(\times\)584 with corresponding manually annotated segmentations.
### _Experimental Setup_
We implement the proposed model in the PyTorch framework. The hardware configuration is an Intel Core i7-1070f CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 2080 SUPER 8 GB. The model was set to train for 100 epochs with early stopping as soon as the average generator loss stops decreasing. Over the 23 epochs of training, the generator loss decreased while the discriminator loss increased; the generator improved faster than the discriminator until both converged. The total training time was 25 hours for our experiment.
### _Qualitative and Quantitative Performance_
To validate the efficiency of our method, we performed a qualitative analysis of the restored images using vessel segmentation, comparing with 5 state-of-the-art methods: CLAHE [4], CycleGAN [6], Cycle-CBAM [10], ArcNet [11], and DPFR [5]. UNet [18] was trained using pairs of fundus images and corresponding manually segmented images from the DRIVE dataset [17]. Fig. 2 shows the restoration and vessel segmentation results. Our method enhanced the retinal image better than the state-of-the-art. In addition, our result has less noise compared to CycleGAN [6] and Cycle-CBAM [10]. Algorithm-based methods [4], [5] could improve the contrast, making vessels visible, but the produced images have artifacts. ArcNet [11] failed to restore tiny blood vessels in the dark region.
To further evaluate our results, we conducted the quantitative analysis with the same methods. The evaluation metrics include PSNR, SSIM, and single image test time (SITT), as shown in TABLE I. As we can see, our method significantly outperforms all other methods in terms of PSNR and SSIM with the EyeQ dataset. Our approach also shows the best results compared to the state-of-the-art for the Mendeley dataset. In terms of SITT, it shows competitive outcomes taking the third position after CLAHE [4] and original CycleGAN [6].
### _Ablation study_
To conduct an ablation study of our work, we trained and tested the original fully convolutional CycleGAN model with the same datasets. The total training time for the traditional CycleGAN was 40 hours to converge, 15 hours more than ours. Additionally, the testing PSNR values of our method and the traditional one are 31.138 dB and 25.577 dB for the EyeQ dataset, and for the Mendeley dataset the values are 27.798 dB and 26.08 dB, respectively.
Fig. 2: Comparison with state-of-the-art methods. From top to bottom are fundus image, corresponding segmentation and zoomed segmentation. (a) low-quality image, (b) CLAHE [4], (c) CycleGAN [6], (d) CycleCBAM [10], (e) ArcNet [11], (f) DPFR [5], (g) Ours.
The SSIM values are 0.919 and 0.882 for the EyeQ and 0.904 and 0.9 for the Mendeley datasets, respectively. On average, testing one image takes 90 ms for the CNN CycleGAN and 97 ms for our method, meaning that our modification did not add much computational expense. Fig. 3 demonstrates the qualitative comparison of the two methods. Our method is better at restoring tiny, barely visible blood vessels and at preserving the structural and color information of the original images.
## IV Conclusion
In this paper, we combined a vision transformer and a convolutional neural network, resulting in better retinal image enhancement than the fully convolutional CycleGAN and other state-of-the-art techniques. In addition, our goal was to reduce the computational expense of a transformer CycleGAN architecture without compromising performance. This simple yet effective combination leads to better quantitative, qualitative, and computational performance. The model was tested using two datasets, EyeQ [12] and the Mendeley dataset [13]. The testing PSNR is 31.138 dB for the first and 27.798 dB for the second dataset; the testing SSIM is 0.919 and 0.904, respectively. The ablation study demonstrates a significant advantage of our method compared to the original CycleGAN architecture.
|
2310.01801 | Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs | In this study, we introduce adaptive KV cache compression, a plug-and-play
method that reduces the memory footprint of generative inference for Large
Language Models (LLMs). Different from the conventional KV cache that retains
key and value vectors for all context tokens, we conduct targeted profiling to
discern the intrinsic structure of attention modules. Based on the recognized
structure, we then construct the KV cache in an adaptive manner: evicting
long-range contexts on attention heads emphasizing local contexts, discarding
non-special tokens on attention heads centered on special tokens, and only
employing the standard KV cache for attention heads that broadly attend to all
tokens. Moreover, with the lightweight attention profiling used to guide the
construction of the adaptive KV cache, FastGen can be deployed without
resource-intensive fine-tuning or re-training. In our experiments across
various tasks, FastGen demonstrates substantial reduction on GPU memory
consumption with negligible generation quality loss. We will release our code
and the compatible CUDA kernel for reproducibility. | Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao | 2023-10-03T05:17:08Z | http://arxiv.org/abs/2310.01801v3 | # Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
###### Abstract
In this study, we introduce adaptive KV cache compression, a plug-and-play method that reduces the memory footprint of generative inference for Large Language Models (LLMs). Different from the conventional KV cache that retains key and value vectors for all context tokens, we conduct targeted profiling to discern the intrinsic structure of attention modules. Based on the recognized structure, we then construct the KV cache in an adaptive manner: evicting long-range contexts on attention heads emphasizing local contexts, discarding non-special tokens on attention heads centered on special tokens, and only employing the standard KV cache for attention heads that broadly attend to all tokens. Moreover, with the lightweight attention profiling used to guide the construction of the adaptive KV cache, FastGen can be deployed without resource-intensive fine-tuning or re-training. In our experiments across various tasks, FastGen demonstrates substantial reduction on GPU memory consumption with negligible generation quality loss. We will release our code and the compatible CUDA kernel for reproducibility.
## 1 Introduction
Based on the Transformer architecture, autoregressive language models have attracted extensive attention (OpenAI, 2023; Touvron et al., 2023b). Along with the increase of model size, these models present significant challenges in terms of computational complexity and GPU memory consumption (Shazeer et al., 2017). Since these models achieve remarkable success across diverse applications, there is a pressing need for serving these models in an economically feasible manner.
The generative inference of LLMs usually involves the _KV Cache_ mechanism to improve generation speed. The KV cache stores previously computed key/value vectors in the attention calculation and reuses them for the current token generation. As such, it avoids recalculating previous tokens at each token generation step, at the cost of extra memory consumption. Despite being a prominent technique, the memory consumption of the KV cache increases rapidly as the model size and generation length increase, drastically increasing the pressure on device memory.
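To make the mechanism concrete, the following is a generic minimal sketch of KV caching in a single attention head (an illustration of the standard technique, not FastGen's code):

```
# Generic sketch of KV caching in one attention head (not FastGen's code).
import torch
import torch.nn.functional as F

def decode_step(q, k, v, cache):
    # q, k, v: [1, d] projections of the newest token only.
    cache["k"] = torch.cat([cache["k"], k], dim=0)  # reuse history instead
    cache["v"] = torch.cat([cache["v"], v], dim=0)  # of recomputing it
    scores = q @ cache["k"].T / cache["k"].shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ cache["v"], cache

d = 64
cache = {"k": torch.empty(0, d), "v": torch.empty(0, d)}
for _ in range(5):  # five decoding steps
    q, k, v = (torch.randn(1, d) for _ in range(3))
    out, cache = decode_step(q, k, v, cache)
```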
When memory usage exceeds GPU capacity, the generative inference of LLMs typically resorts to offloading (Aminabadi et al., 2022; Sheng et al., 2023). While these methods help mitigate the pressure on scarce GPU memory from using the KV cache, offloading the KV cache to CPU/NVMe can still add non-trivial overhead to generative inference performance due to the limited PCIe bandwidth between the GPU and CPU on many devices. Therefore, it becomes a crucial task to reduce the memory footprint of the KV cache without costly retraining or fine-tuning.
Our study starts from the observation (Figure 1) that there are abundant structures observed in attention modules (Michel et al., 2019; Voita et al., 2019; Clark et al., 2019; Wang et al., 2020; Child et al., 2019), and not all attention modules need to attend to all tokens (Liu et al., 2023b; Zhang et al., 2023; Liu et al., 2023a). Intuitively, harvesting such structures and compressing cached vectors could substantially reduce memory consumption and accelerate text generation.
Based on this intuition, we propose FastGen to _accelerate the generative inference by adaptively compressing the KV cache on the fly_. First, we employ an efficient profiling algorithm to recognize
the structural patterns for attention modules. Under the guidance of this profiling, we then construct the KV cache for various modules adaptively. With this diagnose-before-compress approach, FastGen effectively reduces the memory footprint of KV cache while preserving the model quality.
```
Input: Feasible Policy Set C, Prompt
Output: Adaptive KV Cache
for Attention Head H_i in LLM do
    K_i, Q_i, V_i <- H_i(Prompt)
    A_i <- softmax(Q_i K_i^T)
    C_i <- apply Equation 1 to A_i        /* C_i: optimal policy */
    K_i', V_i' <- f(K_i, V_i, C_i)        /* compress the cache with C_i */
return {C_i, K_i', V_i'}
```
**Algorithm 1** FastGen-Prompt Encoding.

```
Input: Adaptive KV Cache ({C_i, K_i', V_i'})
Output: Generated Text
z_0 <- last prompt token
for j in {1, ..., Max Generate Length} do
    for Attention Head H_i in LLM do
        K_i, Q_i, V_i <- H_i(z_{j-1}, K_i', V_i')
        K_i', V_i' <- f(K_i, V_i, C_i)    /* keep the cache compressed under C_i */
    z_j <- sample from LLM prediction
return {z_j}
```
**Algorithm 2** FastGen-Token Generation.
In our study, FastGen recognizes five fundamental attention structures and applies a tailored compression policy to each. Specifically, some attention modules mostly attend to local contexts, for which we construct a KV cache that evicts long-range contexts; some primarily attend to special tokens or punctuation, for which we create a KV cache that retains only those tokens; some have attention maps that are column-wise sparse, for which we discard the least frequently attended tokens; and some broadly attend to all tokens, for which we employ the standard KV cache and store all tokens.
In this way, FastGen is able to compress the KV cache while retaining the original functionality of attention modules. Remarkably, FastGen does not require any fine-tuning and can be applied in a
Figure 1: Different attention heads usually have different structures. Left: Four common attention structures (more details are elaborated in Section 3 and Section 4). Right: Attention map compositions of three attention heads that are in the same layer.
Figure 2: Performance of Adaptive KV Cache (FastGen) and Fixed KV Cache (Frequency, Local, and Frequency+Local; Zhang et al., 2023 and Liu et al., 2023a) on AlpacaEval.
plug-and-play manner. This is a big advantage of FastGen, because the cost of training extra-large models (Brown et al., 2020) is hardly affordable for many research labs or practitioners.
We evaluate FastGen on LLaMa (Touvron et al., 2023b) with a suite of major benchmarks covering generative tasks in math, code, knowledge, and common sense reasoning. FastGen effectively performs KV cache compression with negligible generation quality loss (i.e., it recovers over 95% of attention scores with 35% of the cache compressed). Notably, for the 30B model in Figure 2, FastGen at 50% cache compression surpasses all fixed KV compression methods at 15% cache compression.
## 2 Related Work
**Token Dropping and KV Cache Compression.** Many efforts have been made to improve the model efficiency of LLMs. For recurrent neural networks, one method is to skip multiple tokens at a given time step (Campos et al., 2017; Seo et al., 2017; Hansen et al., 2019). As Transformer models quickly attracted attention, Goyal et al. (2020) proposes to eliminate redundant words in BERT (Devlin et al., 2019) based on their attention scores, while Dai et al. (2020) compresses the input sequence by adding pooling layers to the encoding modules of the transformer architecture. Recently, Huang et al. (2022) adds a token selection task to the original BERT model that learns to select performance-crucial tokens, and Kim et al. (2022) designs a learnable threshold to detect unimportant tokens to prune. Meanwhile, many efforts have been made to explore the possibility of compressing the hidden state of tokens rather than explicitly reducing the sequence length (Guan et al., 2022; Sun et al., 2022; Zhou et al., 2020).
Nevertheless, these methods can only be applied to non-autoregressive models and typically require an additional re-training phase, making them less suitable for auto-regressive LLMs like ChatGPT and LLaMa. Recognizing this gap, researchers started examining the potential of pruning tokens within the KV cache of auto-regressive LLMs. Mu et al. (2023) learns to compress prompts into a few special tokens to reduce memory pressure during caching. However, the token prediction requires model re-training and could add expensive overhead during inference. Meanwhile, several concurrent methods propose to leverage the accumulated attention score as the criterion to identify important tokens in the KV cache (e.g., Sheng et al., 2023; Zhang et al., 2023; Liu et al., 2023a). Instead of investigating a specific eviction policy, this study aims to synergistically coordinate diverse eviction policies to better align with model-specific attributes.
**Underlying Structure of Attention.** Inspired by the success of Transformer, extensive studies have been conducted to explore the underlying mechanisms of different self-attention heads. Voita et al. (2019) analyzed the self-attention heads in BERT using LRP (Bach et al., 2015) and characterized them into interpretable roles, one of which is attending to adjacent tokens all the time. Michel et al. (2019) demonstrated that heads in the same layer can have different impacts on performance, while the importance of each head changes across tasks. Clark et al. (2019) and Kovaleva et al. (2019) identified such patterns as some heads primarily attending to separator tokens, adjacent tokens, or a combination of these. While most previous studies were mainly done on encoder models, FastGen is motivated by consistent patterns we have observed in decoder-only models. Like previous studies, FastGen also explores the structure of the attention mechanism to improve inference efficiency. But FastGen differs from previous studies by focusing on characterizing the KV cache of different attention heads.
## 3 Adaptive KV Cache Compression
In this section we first introduce the problem formulation, and then present attention profiling and adaptive KV cache compression.
### Generative Inference of Autoregressive LLMs
A typical generative model inference involves two steps: prompt encoding and token generation.
**Prompt Encoding.** When an autoregressive transformer-based LLM generates the \(i\)-th token, the attention module needs to collect contextual information from all the preceding \(i-1\) tokens, i.e., the key and value vectors (KV vectors) of these tokens. To circumvent redundant KV vector
computations when generating succeeding tokens, all KV vectors are stored in the _KV cache_ once they are generated.
**Token Generation.** Once prompt encoding is finished, the LLM generates the output token by token. At each generation step, the LLM needs to encode the new token(s) generated in the previous step. After a new token is generated, its associated KV vectors are appended to the current KV cache. Thus, the size of the KV cache increases linearly with the number of tokens being generated.
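To make these mechanics concrete, the following minimal sketch (our own illustration; the `KVCache` class and its method names are not from the paper) shows how per-head KV vectors accumulate during token generation:

```python
import torch

class KVCache:
    """Toy per-head KV cache: grows by one entry per generated token."""

    def __init__(self, head_dim: int):
        self.keys = torch.empty(0, head_dim)    # (seq_len, head_dim)
        self.values = torch.empty(0, head_dim)

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        # Called once per decoding step; memory grows linearly with length.
        self.keys = torch.cat([self.keys, k], dim=0)
        self.values = torch.cat([self.values, v], dim=0)

    def attend(self, q: torch.Tensor) -> torch.Tensor:
        # Attention over all cached tokens; past KV vectors are reused,
        # never recomputed.
        scores = torch.softmax(
            q @ self.keys.T / self.keys.shape[-1] ** 0.5, dim=-1
        )
        return scores @ self.values

cache = KVCache(head_dim=64)
for _ in range(8):  # simulate 8 decoding steps
    k, v, q = (torch.randn(1, 64) for _ in range(3))
    cache.append(k, v)
    out = cache.attend(q)
print(cache.keys.shape)  # torch.Size([8, 64])
```

FastGen's adaptive cache replaces the unconditional `append` above with the per-head compression policies described next.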
### FastGen
As described in Section 2, many previous studies on compressing the KV cache to improve inference efficiency do not leverage the intricate attention structure in LLMs. As will be detailed in Section 4, attention heads in LLMs often function distinctively, indicating the need to tailor the compression strategy to each individual attention head.

With these insights, we introduce FastGen: a dual-phase algorithm for crafting an adaptive KV cache. During the prompt encoding phase, model profiling is conducted to discern the behavior of various attention heads, so that we can choose the most appropriate compression strategy for each head. Then, in the token generation phase, instead of indiscriminately appending new KV vectors for each newly generated token, we manage the KV cache of each attention head based on its selected compression strategy.
### Model Profiling
Model profiling is conducted based on the result of prompt encoding. Specifically, for a compression policy \(\mathbf{C}\), we mark the corresponding KV cache compression as \(\mathbf{K_{C}}\), \(\mathbf{V_{C}}=f(\mathbf{K},\mathbf{V},\mathbf{C})\), where \(\mathbf{K_{C}}\) and \(\mathbf{V_{C}}\) are the compressed KV cache. Then, for attention map \(\mathbf{A}=\text{softmax}(\mathbf{QK}^{T})\), we pick the optimal policy that can recover \(\mathbf{A}\) with a recovery ratio \(T\) at the minimum memory cost:
\[\mathbf{C}^{*}=\operatorname*{arg\,min}_{\mathbf{C}\in\mathcal{C}}\ \text{CacheMemoryCost}(\mathbf{C})\ \ \text{s.t.}\ \ \ |\mathbf{A}-\text{softmax}(\mathbf{QK}_{\mathbf{C}}^{T})|\leq 1-T, \tag{1}\]
where \(\mathcal{C}\) is the set of all feasible compression policies, \(\text{CacheMemoryCost}(\mathbf{C})\) is the target KV cache budget of the compression policy \(\mathbf{C}\), and \(T\) is a predefined hyper-parameter representing how much we want the policy to recover \(\mathbf{A}\). As discussed in Section 5, FastGen is able to recover over 95% of the attention map at a compression ratio above 40% for a 65B model. The final prompt encoding algorithm that includes model profiling is presented in Algorithm 1.
Intrinsically, our method assumes that the structure of each attention head's map is stable across token positions, so it is sufficient to use only the encoded prompt to select a proper compression policy. It is worth noting that existing literature has provided theoretical justification for using solely the encoded prompt to capture the attention structure of the full context (Zhang et al., 2023; Liu et al., 2023a). In our study, we also verify this empirically, as elaborated in Section 4.
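A minimal sketch of this selection step (our own simplification, not the paper's implementation: candidate policies are boolean keep-masks over prompt tokens, cache cost is the kept-token count, and the recovery constraint of Equation 1 is approximated by the accumulated attention score on kept tokens):

```python
import numpy as np

def recovery(attn: np.ndarray, keep: np.ndarray) -> float:
    # Accumulated attention score on kept tokens, used as a stand-in for
    # the |A - softmax(Q K_C^T)| <= 1 - T constraint in Equation 1.
    return attn[:, keep].sum() / attn.sum()

def profile_head(attn: np.ndarray, policies: dict, T: float = 0.95) -> str:
    # Equation 1: among policies meeting the recovery threshold T, return
    # the one with the smallest cache memory cost (here: kept-token count).
    feasible = [(int(keep.sum()), name) for name, keep in policies.items()
                if recovery(attn, keep) >= T]
    return min(feasible)[1] if feasible else "full"

n = 12
rng = np.random.default_rng(0)
attn = np.tril(rng.random((n, n)))            # toy causal attention map
attn /= attn.sum(axis=1, keepdims=True)
idx = np.arange(n)
policies = {
    "special": idx == 0,                       # keep only <s>
    "special+local": (idx == 0) | (idx >= n - 4),
    "full": np.ones(n, dtype=bool),
}
print(profile_head(attn, policies, T=0.9))     # cheapest policy recovering 90%
```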
### KV Cache Compression Policies
In our experiments, we observe that a large number of attention heads closely follow certain patterns, as detailed in Section 4. Thus, in addition to the conventional full KV cache policy, we also consider four fundamental KV cache compression policies. While we mainly use these four policies for evaluation in this study, FastGen can easily incorporate numerous other strategies. The four KV cache compression policies are:
* **Special Tokens.** We keep in the KV cache only special tokens, such as the begin-of-sentence token \(<\)s\(>\), the instruction token [INST], and so on. This policy is referred to as \(\mathbf{C}_{\text{special}}\).
* **Punctuation.** We keep in the KV cache only punctuation tokens like ".", ",", "?". This policy is referred to as \(\mathbf{C}_{\text{punct.}}\).
* **Locality.** This policy evicts long-range contexts. Once the relative distance between the context token and the current token exceeds a threshold, the KV cache of the context token will be evicted.
The threshold is determined by a pre-defined ratio \(r_{l}\) of the length budget of local context over the input sequence length. This policy is referred to as \(\mathbf{C}_{\text{local}}\).
* **Frequency (Heavy Hitter).** This policy has been used in multiple previous studies (e.g., Sheng et al., 2023; Zhang et al., 2023; Liu et al., 2023a). We monitor for each token its cumulative sum of attention scores, then treat these scores as token _frequency_ and keep only the most frequent tokens in the KV cache. The length budget of frequent tokens over the current sequence length is controlled by a ratio \(r_{f}\). This policy is referred to as \(\mathbf{C}_{\text{frequent}}\).
**Hybrid Policies.** In practice, it is often necessary to use hybrid policies that combine the aforementioned compression policies. Since the total number of possible hybrid policies is high, in our study we use a greedy method to construct a small set of hybrid policies as follows:
\[\mathcal{C}=\{\mathbf{C}_{\text{special}},\ \mathbf{C}_{\text{special+punct.}},\ \mathbf{C}_{\text{special+punct.+frequent}},\ \mathbf{C}_{\text{special+punct.+frequent+local}},\ \mathbf{C}_{\text{full}}\}, \tag{2}\]
where the sum of two compression strategies denotes the union of their compressed KV caches, and \(\mathbf{C}_{\text{full}}\) refers to the full KV cache without compression.
We use \(\mathbf{C}_{\text{special}}\) as a component in all hybrid policies for two reasons: 1) we observe that high attention scores are usually allocated towards special tokens, as detailed in Section 4, indicating that \(\mathbf{C}_{\text{special}}\) is crucial for attention map recovery; 2) the compressed cache of \(\mathbf{C}_{\text{special}}\) is memory-efficient since there are usually fewer than 5 special tokens in a sentence. In other words, always including \(\mathbf{C}_{\text{special}}\) brings little-to-no extra memory cost. Similarly, \(\mathbf{C}_{\text{punct.}}\) is often used as a component of hybrid policies due to its memory efficiency, i.e., the number of punctuation tokens in a sentence is small. The final algorithm for token generation is presented in Algorithm 2.
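To make the greedy construction concrete, the sketch below (our own illustration; the token-ID sets standing in for special and punctuation tokens are arbitrary placeholders) builds each hybrid policy as a union of kept token indices, mirroring Equation 2:

```python
import numpy as np

def keep_special(tokens, special_ids=frozenset({0, 1})):
    return {i for i, t in enumerate(tokens) if t in special_ids}

def keep_punct(tokens, punct_ids=frozenset({5, 6})):
    return {i for i, t in enumerate(tokens) if t in punct_ids}

def keep_frequent(attn, r_f=0.3):
    # Top r_f fraction of tokens by accumulated attention score
    # ("heavy hitters").
    budget = max(1, int(r_f * attn.shape[1]))
    return set(np.argsort(attn.sum(axis=0))[-budget:].tolist())

def keep_local(n, r_l=0.3):
    # Most recent r_l fraction of the sequence.
    budget = max(1, int(r_l * n))
    return set(range(n - budget, n))

def hybrid_policies(tokens, attn):
    """Greedy nesting of Equation 2: each policy is the union ('sum') of
    the previous one and one more component; C_full keeps every token."""
    n = len(tokens)
    s = keep_special(tokens)
    sp = s | keep_punct(tokens)
    spf = sp | keep_frequent(attn)
    spfl = spf | keep_local(n)
    return {
        "special": s,
        "special+punct.": sp,
        "special+punct.+frequent": spf,
        "special+punct.+frequent+local": spfl,
        "full": set(range(n)),
    }
```

Because the policies are nested, the recovered attention score can only increase from one policy to the next, so the profiling step can simply pick the first policy that meets the threshold.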
## 4 Diversity and Stability of Attention Structures
In this section, we present an empirical study to show the effectiveness of adaptive KV cache compression. First, we demonstrate that different attention heads typically possess distinct structures. Then, we show that these attention-head structures remain relatively consistent across decoding steps within a sequence. We do so by analyzing the attention scores of LLaMa 65B on random samples from GSM8k (Cobbe et al., 2021).
### Head Distinctive Attention Structure
**Setting.** We perform model profiling with a recovery threshold of \(0.95\) and compute the distribution of profiling results for layers \(\{1,10,20,30,40,50,60,70,80\}\). The result is shown in Figure 3.
Figure 3: Attention Profiling Result Distribution across different layers.
**Observation.** Figure 3 shows that attention heads in different layers have vastly different structures. Specifically, the initial and final layers have more attention heads assigned to the full KV cache, indicating that attention heads in these layers are likely to attend to all tokens. Meanwhile, for middle layers, the attention map focuses on special tokens, indicating that most attention heads of these layers primarily attend to special tokens (i.e., the accumulated attention score on special tokens is higher than \(0.95\) for these attention heads). Figure 1 shows the structure of different attention heads in the same layer. We see that attention structures differ across layers and across heads.
These results indicate that it is suboptimal to apply the same KV cache to all layers without adaptation, and that it is beneficial to detect the structure of each attention head so as to select the optimal compression policy to construct the KV cache.
### Profile Tends to Be Consistent in One Sequence
The previous section demonstrates the great potential for constructing an adaptive KV cache in accordance with the structure of different attention heads. Here, we show that it is sufficient to leverage only the user-provided prompts and conduct one-shot model profiling, as outlined in Section 3.3. Specifically, we show that the attention structure observed on the user-provided prompt persists throughout the generation process.
**Setting.** Following Figure 1, we compute the accumulated attention score for attention heads in different layers of LLaMa 65B at multiple decoding steps (i.e., 1st, 10th, 20th, 30th). We visualize the resulting accumulated scores in Figure 4.
**Observation.** Despite some fluctuations of accumulated attention scores across time steps, the pattern of the attention maps remains relatively stable. For example, Layer 33 Head 0 and Layer 23 Head 2 almost only attend to the special token, while locality and punctuation play an important role in Layer 23 Head 0. As for Layer 23 Head 3, more than 10% of the attention score is allocated to the others portion, making it suitable for an uncompressed KV cache \(\mathbf{C}_{\text{full}}\).
In addition, we observe that a large portion of attention scores falls on special tokens in all cases. This justifies the greedy method we used to construct hybrid policies, as described in Section 3.4.
Figure 4: Accumulated Attention Score at 1st (prompt encoding), 10th, 20th, 30th decoding steps.
## 5 Experiment
We conduct comprehensive experiments to demonstrate the effectiveness of FastGen at reducing the memory footprint while preserving generation quality. First, we report the trade-off between memory reduction and end-to-end generation quality in Section 5.1 and discuss the compression ratio of FastGen in Section 5.2. Finally, we present ablation studies and discussions in Section 5.3.
### Trade-off between performance and memory reduction
**Backbones.** We conduct experiments with both LLaMa1 (Touvron et al., 2023a) and its fine-tuned variants, with model sizes ranging from 7B to 65B. For fine-tuned variants, we do not choose the open-sourced ChatLLaMa2 (Touvron et al., 2023b) model due to its grouped-query attention technique. Instead, we use the original multi-head attention architecture in this study and leave the integration of grouped-query attention to future work. To prepare a comparable instruction-following model for analysis, we fine-tuned the LLaMa1 model with open-sourced instruction-tuning datasets. Specifically, the fine-tuned variants are trained on LIMA data (Zhou et al., 2023) and Open Assistant data (Kopf et al., 2023).
Footnote 1: [https://huggingface.co/datasets/GAIR/lima](https://huggingface.co/datasets/GAIR/lima).
Footnote 2: [https://huggingface.co/datasets/OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
**Tasks.** We use standard generation tasks to evaluate LLaMa and our fine-tuned LLaMa models. For LLaMa, we choose four different tasks, including Human Eval (Chen et al., 2021), GSM8k (Cobbe et al., 2021), NQ (Kwiatkowski et al., 2019) and TQA (Kembhavi et al., 2017), to evaluate models' abilities in different domains (code, math, question answering and reading
Figure 5: Performance of Adaptive KV Cache (FastGen) and Fixed KV Cache (Frequency+Local; Zhang et al., 2023 and Liu et al., 2023a) of LLaMa on GSM8k, Human Eval, NQ, and TQA.
comprehension). Note that in all four tasks, each test sample is in a generative format, where answers are extracted after model generation finishes. This is crucial for a fair comparison of models' generation quality. We evaluate the instruction-finetuned LLaMa model on the instruction-tuning benchmark AlpacaEval (Li et al., 2023), which consists of 805 question prompts from diverse domains.
**Experiment Setup.** The evaluation of the LLaMa model follows the default setting and evaluation metrics on each benchmark. We calculate F1 scores for GSM8k, NQ and TQA, and use the code execution Pass@1 rate for Human Eval. While evaluating an instruction-tuned model remains challenging, we follow previous work (Zhou et al., 2023; Touvron et al., 2023b) and use GPT4 as an evaluator for pair-wise comparison between two different model generations. For each prompt, we input the FastGen generation and the generation from the same model with the full KV cache as a pair, and ask GPT4 to judge which one is better. We then calculate the win rate of FastGen over the full cache. Hypothetically, the win rate of a lossless method should be around 50%. Aside from full-cache models, we also include non-adaptive KV cache methods for comparison. Specifically, we apply \(\mathbf{C}_{\text{local}}\), \(\mathbf{C}_{\text{frequent}}\), and \(\mathbf{C}_{\text{local+frequent}}\) to all attention heads without any adaptation as baselines. It is worth mentioning that \(\mathbf{C}_{\text{local+frequent}}\) is a very strong baseline as it is identical to the H2O method (Zhang et al., 2023) and the Scissorhands method (Liu et al., 2023a). We set \(r_{l}=0.3\), \(r_{f}=0.3\) in FastGen, and only change the recovery ratio \(T\) to control the pruned KV cache ratio. For generation, we use nucleus sampling (Holtzman et al., 2019) with temperature 0.6 and top-p 0.9. Experiments are conducted on 8 NVIDIA A100 80GB GPUs.
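For completeness, a minimal sketch of nucleus (top-p) sampling as configured above (our own illustrative implementation):

```python
import numpy as np

def nucleus_sample(logits, temperature=0.6, p=0.9, rng=None):
    """Sample from the smallest set of tokens whose cumulative probability
    exceeds p (Holtzman et al., 2019)."""
    rng = rng or np.random.default_rng()
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                    # most likely first
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    top = order[:cutoff]                               # the "nucleus"
    return int(rng.choice(top, p=probs[top] / probs[top].sum()))

print(nucleus_sample(np.array([2.0, 1.0, 0.5, -1.0])))
```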
**Main Results.** In Figure 2 and Figure 5, we present the model quality as a function of the KV cache budget increasing from \(30\%\) to \(100\%\). For 30B models, FastGen (50% cache compressed) surpasses all non-adaptive KV compression methods (15% cache compressed). Also, FastGen achieves a higher KV cache reduction ratio as the model size increases, while preserving the same model quality. For example, at a \(45\%\) win rate, FastGen achieves a pruned ratio of as much as 44.9% on LLaMa-65B, compared to 16.9% on LLaMa-7B. In all settings, FastGen shows consistent and significant improvement over non-adaptive compression methods. The results validate the effectiveness of adaptive KV cache compression using FastGen, despite its simplicity.
### Ablations
For all ablations, we use a fixed targeted recovery ratio \(T=0.98\).
**How does one policy affect all the other policies?** We study the complementary effect of each policy on the combination of all other policies in our framework. We examine changes in the pruned KV cache ratio and win rate while fixing the targeted recovery ratio \(T\). We take the full policy set as our control set \(\mathcal{C}\). For each ablation, we remove one of the policies from all policy combinations in \(\mathcal{C}\). We summarize the results in Table 2, which suggest that \(\mathbf{C}_{\text{frequent}}\) and \(\mathbf{C}_{\text{special}}\) are the most important policies. Removing them incurs a \(3.67\%\) and a \(2.11\%\) win rate drop, respectively. We can also observe from the pruned cache ratios that \(\mathbf{C}_{\text{frequent}}\) and \(\mathbf{C}_{\text{local}}\) reduce more KV cache than the others. However, their standalone non-adaptive deployment yields suboptimal performance, as depicted in Figure 2, further verifying the importance of adapting different compression policies.
**Which policy should we add first (and last)?** As in Section 3.4, we use a greedy method to construct the adaptive KV cache. Here, we examine how the order of introducing each policy affects performance. As in the previous study, we fix the targeted recovery ratio to 0.98 and keep allocating cache budget until the constructed cache hits the recovery ratio. For simplicity, we make every examined order include \(\mathbf{C}_{\text{special}}\) first, as it typically covers the most important tokens at very low memory cost, as suggested in Figure 1. We summarize the results in Table 3. Our current order (as in Equation 2) achieves the highest win rate. Meanwhile, using alternative orders leads to a different trade-off between KV cache compression and generation quality. For example, using \(\mathbf{C}_{\text{frequent}}\rightarrow\mathbf{C}_{\text{local}}\rightarrow\mathbf{C}_{\text{punct.}}\) leads to an improved KV cache compression ratio at the cost of generation quality.
**Sensitivity Study.** We analyze the sensitivity of FastGen to different hyper-parameter choices, as illustrated in Figure 6. We observe that altering these hyper-parameters does not have a visible impact on generation quality, as the model maintains a win rate over 45% in all situations. Meanwhile, it leads to a relatively large change in the compression ratio. For example, changing the ratio
| **Feasible Policy Set** | **Pruned KV Ratio** | **Win Rate** |
| --- | --- | --- |
| \(\mathcal{C}\) | 36.04% | 49.75% |
| \(\{\mathbf{C}_{\text{punct.}},\ \mathbf{C}_{\text{punct.+frequent}},\ \mathbf{C}_{\text{punct.+frequent+local}},\ \mathbf{C}_{\text{full}}\}\) | 31.16% | 47.64% |
| \(\{\mathbf{C}_{\text{special}},\ \mathbf{C}_{\text{special+frequent}},\ \mathbf{C}_{\text{special+frequent+local}},\ \mathbf{C}_{\text{full}}\}\) | 34.23% | 49.56% |
| \(\{\mathbf{C}_{\text{special}},\ \mathbf{C}_{\text{special+punct.}},\ \mathbf{C}_{\text{special+punct.+frequent}},\ \mathbf{C}_{\text{full}}\}\) | 30.18% | 49.06% |
| \(\{\mathbf{C}_{\text{special}},\ \mathbf{C}_{\text{special+punct.}},\ \mathbf{C}_{\text{special+punct.+local}},\ \mathbf{C}_{\text{full}}\}\) | 21.26% | 46.08% |

Table 2: Complementary effects of each policy. Each row after the first removes one policy from the full set \(\mathcal{C}\). We display the win rate of each method over the full-cache setting, evaluating the fine-tuned LLaMa 65B on AlpacaEval with the same parameters.
| **Cache Order** | **Pruned KV Ratio** | **Win Rate** |
| --- | --- | --- |
| \(\mathbf{C}_{\text{special}} \to \mathbf{C}_{\text{punct.}} \to \mathbf{C}_{\text{frequent}} \to \mathbf{C}_{\text{local}}\) | 36.04% | 49.75% |
| \(\mathbf{C}_{\text{special}} \to \mathbf{C}_{\text{frequent}} \to \mathbf{C}_{\text{local}} \to \mathbf{C}_{\text{punct.}}\) | 36.40% | 47.64% |

Table 3: Policy order ablation on fine-tuned LLaMa 65B with AlpacaEval.
Figure 6: Hyper-parameter ablation on fine-tuned LLaMa 65B with AlpacaEval.
for the frequency policy from 0.3 to 0.1 leads to +10% more KV cache. In our experiments, we set the ratio to 0.3 for both \(r_{l}\) and \(r_{f}\).
## 6 Conclusion
We have presented FastGen, a novel method that significantly improves the inference efficiency of LLMs, with no visible quality loss, using lightweight model profiling and adaptive key-value caching. Areas for future explorations include combining FastGen with other model compression techniques, such as quantization and distillation, and other efficient attention architectures, such as grouped-query attention. |
2308.14920 | Matbench Discovery -- A framework to evaluate machine learning crystal
stability predictions | Matbench Discovery simulates the deployment of machine learning (ML) energy
models in a high-throughput search for stable inorganic crystals. We address
the disconnect between (i) thermodynamic stability and formation energy and
(ii) in-domain vs out-of-distribution performance. Alongside this paper, we
publish a Python package to aid with future model submissions and a growing
online leaderboard with further insights into trade-offs between various
performance metrics. To answer the question which ML methodology performs best
at materials discovery, our initial release explores a variety of models
including random forests, graph neural networks (GNN), one-shot predictors,
iterative Bayesian optimizers and universal interatomic potentials (UIP).
Ranked best-to-worst by their test set F1 score on thermodynamic stability
prediction, we find CHGNet > M3GNet > MACE > ALIGNN > MEGNet > CGCNN > CGCNN+P
> Wrenformer > BOWSR > Voronoi tessellation fingerprints with random forest.
The top 3 models are UIPs, the winning methodology for ML-guided materials
discovery, achieving F1 scores of ~0.6 for crystal stability classification and
discovery acceleration factors (DAF) of up to 5x on the first 10k most stable
predictions compared to dummy selection from our test set. We also highlight a
sharp disconnect between commonly used global regression metrics and more
task-relevant classification metrics. Accurate regressors are susceptible to
unexpectedly high false-positive rates if those accurate predictions lie close
to the decision boundary at 0 eV/atom above the convex hull where most
materials are. Our results highlight the need to focus on classification
metrics that actually correlate with improved stability hit rate. | Janosh Riebesell, Rhys E. A. Goodall, Philipp Benner, Yuan Chiang, Bowen Deng, Alpha A. Lee, Anubhav Jain, Kristin A. Persson | 2023-08-28T22:29:57Z | http://arxiv.org/abs/2308.14920v2 | # Matbench Discovery -- A framework to evaluate machine learning crystal stability predictions
###### Abstract
Matbench Discovery simulates the deployment of machine learning (ML) energy models in a high-throughput search for stable inorganic crystals. We address the disconnect between (i) thermodynamic stability and formation energy and (ii) in-domain vs out-of-distribution performance. Alongside this paper, we publish a Python package to aid with future model submissions and a growing online leaderboard with further insights into trade-offs between various performance metrics. To answer the question which ML methodology performs best at materials discovery, our initial release explores a variety of models including random forests, graph neural networks (GNN), one-shot predictors, iterative Bayesian optimizers and universal interatomic potentials (UIP). Ranked best-to-worst by their test set F1 score on thermodynamic stability prediction, we find CHGNet > M3GNet > MACE > ALIGNN > MEGNet > CGCNN > CGCNN+P > Wrenformer > BOWSR > Voronoi tessellation fingerprints with random forest. The top 3 models are UIPs, the winning methodology for ML-guided materials discovery, achieving F1 scores of ~0.6 for crystal stability classification and discovery acceleration factors (DAF) of up to 5x on the first 10k most stable predictions compared to dummy selection from our test set. We also highlight a sharp disconnect between commonly used global regression metrics and more task-relevant classification metrics. Accurate regressors are susceptible to unexpectedly high false-positive rates if those accurate predictions lie close to the decision boundary at 0 eV/atom above the convex hull where most materials are. Our results highlight the need to focus on classification metrics that actually correlate with improved stability hit rate. Finally, we share valuable insights for maintainers of high throughput materials databases by demonstrating that these models have matured enough to play a useful role as triaging tools to more effectively allocate compute budget for DFT relaxations.
Footnote †: Correspondence to [email protected]
## I Introduction
Material science can be viewed as a combinatorial problem of mixing and arranging different atoms to leverage the complex range of properties that emerge. To date, \(\sim\)10\({}^{5}\) combinations have been tested experimentally [1; 2] and \(\sim\)10\({}^{7}\) have been simulated [3; 4; 5; 6]. Davies _et al._[7] identified \(\sim\)10\({}^{10}\) possible quaternary materials allowed by electronegativity and charge-balancing rules. The space of quinaries and higher is even less explored, leaving vast numbers of potentially useful materials to be discovered. The discovery of new materials is a key driver of technological progress and lies on the path to more efficient solar cells, lighter and longer-lived batteries, and smaller and more efficient transistor gates, just to name a few. In light of global warming, these advances cannot come fast enough. Any speed-up new methods might yield should be leveraged to its fullest extent.
Despite significant advances in empirical, theoretical and computational materials science, discovering new materials still requires complex calculations, labor-intensive trial-and-error experimentation, and often happens fortuitously rather than through rational design. Machine learning (ML) methods efficiently extract and distill trends from huge datasets, can handle high dimensionality, multiple objectives, uncertainty, and noisy or sparse data, making them powerful additions to the computational materials science tool set.
ML models are less accurate and reliable but orders of magnitude faster than ab-initio simulation. This makes them most suitable for use in high-throughput (HT) searches to triage more expensive, higher-fidelity simulation methods. The use of neural networks for learning the Kohn-Sham density-functional theory (DFT) potential energy surface (PES) can be traced as far back as [8]. This work kicked off rapid advances and significant efforts to fit ever more sophisticated ML models to known samples of the PES. Initially, most of these models were trained and deployed as interatomic potentials to study known materials of interest, a workflow that requires curating custom training data for each new system of interest [9; 10]. As larger and more diverse datasets have emerged from initiatives like the Materials Project (MP) [3], AFLOW [5] or the Open Quantum Materials Database (OQMD) [4], researchers have begun to train so-called universal models that cover 90 or more of the most application-relevant elements in the periodic table. This opens up the prospect of ML-guided materials discovery to increase the hit rate of stable crystals and speed
up DFT- and expert-driven searches.
Progress in ML for materials is often measured according to performance on standard benchmark data sets. As ML models have grown in complexity and applicability, benchmark datasets need to grow with them to accurately measure their usefulness. However, due to the rapid pace of the field and the variety of possible approaches for framing the discovery problem, no large-scale benchmark yet exists for measuring the ability of ML to accelerate materials discovery. As a result, it is unclear which methodologies or models are best suited for this task. Recent works focusing on prospective computational materials discovery have proposed strategies based on one-shot coordinate-free predictors [11], iterative Bayesian optimizers [12], and universal interatomic potentials (UIP) [13; 14; 15]. These papers deploy their respective model on specific systems and custom datasets to highlight certain strengths but have yet to be compared in a systematic standardized manner that allows them to be ranked by their ability to accelerate materials discovery. Our work aims to identify the state-of-the-art (SOTA) model by proposing a novel evaluation framework that closely simulates a real-world discovery campaign guided by ML models.
Specifically, we designed a benchmark task that tackles three central challenges. We believe these challenges to be requirements when seeking to justify experimental validation of ML predictions:
1. **Mimic Real Applications**: Idealized and overly simplified benchmarks can fail to reflect the real-world challenges a model faces when used in an actual discovery campaign. This can result in a disconnect between benchmark metrics and real-world performance. Possible reasons for this include choosing the wrong target [16] or picking an unrepresentative train/test split [17; 18]. In the case of materials discovery, formation energies - although widely used as regression targets - do not indicate thermodynamic stability. That is determined by the distance to the convex hull spanned by competing phases in the same chemical system. Moreover, ML models relying on relaxed crystal structures as input render any discovery pipeline circular since obtaining relaxed structures requires computationally expensive DFT simulations, thereby depending on the very process we intend to accelerate.
2. **Opportunity cost**: Accurate regressors are susceptible to unexpectedly high false-positive rates if those accurate predictions lie close to the decision boundary. Looking purely at global metrics like MAE, RMSE and \(R^{2}\) can give practitioners a false sense of security about their model's reliability. Failed experiments incur a high opportunity cost by wasting lab time and resources.
3. **Scalability**: Future discovery efforts are likely to encounter large data regimes. Small benchmarks can lack chemical diversity, obfuscate poor scaling relations or poor out-of-distribution (OOD) performance. For instance, random forests achieve excellent performance on small datasets but are typically outperformed by neural networks on large datasets due to the benefits of representation learning [19].
Two prior efforts worth highlighting that have partially addressed the above challenges are Matbench [20] and the Open Catalyst Project (OCP) [21].
By providing a standardized collection of 13 datasets ranging in size from \(\sim\)300 to \(\sim\)132,000 samples from both DFT and experimental sources, Matbench addresses the scalability challenge, highlighting how model performance changes as a function of data regime. Matbench helped focus the field of ML for materials, increase comparability across papers and provide a quantitative measure of progress in the field. Importantly, all tasks were exclusively concerned with the properties of known materials. We believe a task that simulates a materials discovery campaign by requiring materials stability prediction from unrelaxed structures to be a missing piece here.
OCP is a large-scale initiative to discover substrate-adsorbate combinations that catalyze key industrial reactions that process said adsorbates into more useful products. The OCP has released two data sets thus far, OCP20 [21] and OCP22 [22], for training and benchmarking ML models. OCP certainly addressed challenge 1 of closely mimicking a real-world problem. They have recently shown that despite not reaching their target accuracy to entirely replace DFT, using ML in conjunction with confirmatory DFT calculations dramatically speeds up their combinatorial screening workflow [23].
We believe that addressing these three challenges will enable future ML-guided discovery efforts to expand materials databases and to confidently select appropriate models and methodologies. Our initial findings show universal interatomic potentials to outperform all other methodologies we tested, both in accuracy and robustness.
### Evaluation Framework for Materials Discovery
We propose a novel evaluation framework that places no constraints on the type of training data models use. We only enforce that at test time, all models must make predictions on the convex hull distance of the relaxed structure with only the unrelaxed structure as input. This setup avoids circularity in the discovery process, as unrelaxed structures can be cheaply enumerated through elemental substitution methodologies and do not contain information inaccessible in a prospective discovery campaign. We choose to focus on the relaxed structure's convex hull distance as a measure of thermodynamic stability rather than the formation energy as it informs the decision on whether to pursue a potential candidate crystal. This decision was also motivated by Bartel _et al._
[16] who found even composition-only models capable of predicting DFT formation energies with useful accuracy. However, when tasking those same models with predicting decomposition enthalpy, performance deteriorated sharply. This insight meant that ML models are much less useful than DFT for discovering new inorganic solids. Moreover, the qualitative leap in performance from Roost [19], the best compositional model benchmarked, to CGCNN [24], the single structural model they tested, highlights that structure plays a crucial role in determining material stability. Hence our decision to restrict test-time model input to unrelaxed structures to avoid the above-mentioned circularity and measure true prospective utility while still allowing for subtle atomic-configuration-informed energy estimates. We recognize that our ground truth is ignorant of entropic stabilization and metastable states. While these factors influence experimental synthesizability, we have to neglect them when simulating an HT search since the convex hull distance as predicted by zero-temperature DFT is the best proxy of true crystal stability available for large datasets.
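To ground this choice, the sketch below computes the hull distance for a candidate in a binary A-B chemical system (our own illustrative code, not part of the benchmark package; production tools such as pymatgen handle arbitrary chemical systems):

```python
import numpy as np

def e_above_hull_binary(x: float, e_form: float, competing: list) -> float:
    """Convex hull distance (eV/atom) for a candidate in a binary system.

    x: fraction of element B in the candidate; e_form: its formation energy
    per atom; competing: (x_i, e_i) pairs for known phases, which should
    include the elemental references (0, 0.0) and (1, 0.0).
    """
    lower = []  # lower convex hull via Andrew's monotone chain
    for pt in sorted(competing):
        while len(lower) >= 2:
            (ox, oe), (ax, ae) = lower[-2], lower[-1]
            cross = (ax - ox) * (pt[1] - oe) - (ae - oe) * (pt[0] - ox)
            if cross <= 0:  # lower[-1] lies on/above the chord -> drop it
                lower.pop()
            else:
                break
        lower.append(pt)
    xs, es = zip(*lower)
    return e_form - np.interp(x, xs, es)  # > 0: above hull; <= 0: stable

phases = [(0, 0.0), (1, 0.0), (0.25, -0.3), (0.5, -0.8)]
print(round(e_above_hull_binary(0.4, -0.5, phases), 3))  # 0.14
```

A formation-energy regressor alone cannot answer this question: the same predicted formation energy can correspond to a stable or unstable material depending on the competing phases.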
For the first realization of this framework, we use the Materials Project (MP) [3] database release (v2022.10.28) as our training set, and the unrelaxed structures in the WBM dataset [25] as our test set.
#### Materials Project Training Set
The Materials Project is a well-known effort to calculate the properties of all inorganic materials using high-throughput ab-initio methods. Seeded from a subset of the Inorganic Crystal Structure Database (ICSD) [26], the initial release of the database consisted of \(\sim\)9 k crystals. At the time of writing, the Materials Project database [3] has grown to \(\sim\)154 k crystals, covering diverse chemistries and providing relaxed and initial structures as well as the relaxation trajectory for every entry.
Our benchmark defines the training set as all data available from the 2022.10.28 MP release. We recorded a snapshot of energies, forces, stresses and magnetic moments for all MP ionic steps on 2023-03-15 as the canonical training set for v1 of Matbench Discovery, and provide convenience functions through our Python package for easily feeding that data into future model submissions to our benchmark. This flexibility is intended to allow authors to experiment with and exploit the large variety of available data.
#### WBM Test Set
The WBM data set [25] consists of \(\sim\)257 k structures generated via chemical similarity-based elemental substitution of MP source structures followed by DFT relaxation and calculating each crystal's convex hull distance. Which element replaces an existing one in a given source structure was determined by random sampling according to the weights in a chemical similarity matrix data-mined from the ICSD [27].
The WBM authors performed 5 iterations of this substitution process (we refer to these as batches). After each step, the newly generated structures found to be thermodynamically stable after DFT relaxation flow back into the source pool to partake in the next round of substitution. This split of the data into batches of increasing substitution count is a unique and compelling feature of the test set as it allows out-of-distribution (OOD) testing by seeing if model performance degrades with substitution count. A higher number of elemental substitutions on average carries the structure further away from the region of material space covered by the MP training set (see fig. 8 for details). Whilst this batch information makes the WBM data set an exceptionally useful data source for examining the extrapolation performance of ML models, our evaluation primarily looks at metrics that consider all batches as a single test set.
Throughout this work, we define stability as being on or below the convex hull of the MP training set; \(\sim\)42k out of \(\sim\)257k materials in WBM satisfy this criterion. Our code treats the stability threshold as a dynamic parameter for future, more detailed performance analysis at different thresholds. For an initial analysis in this direction, see fig. 3 in the SI.
As WBM explores regions of materials space not well sampled by MP, many of the discovered materials that lie below MP's convex hull are not stable relative to each other. Of the \(\sim\)42k that lie below the MP convex hull less than half, or around \(\sim\)20k, remain on the convex hull when merging the MP and WBM hulls. This observation suggests that many WBM structures are repeated samples in the same new chemical spaces. It also highlights a critical aspect of this benchmark in that we purposely operate on an incomplete convex hull. Only current knowledge is accessible to a real discovery campaign and our metrics are designed to reflect this.
### Models
To test a wide variety of methodologies proposed for learning the potential energy landscape, our initial benchmark release includes 10 models.
1. **CHGNet[14]** (UIP-GNN) - CHGNet is a UIP for charge-informed atomistic modeling. Its distinguishing feature is that it was trained to predict magnetic moments on top of energies, forces and stresses in the MPTrj dataset consisting of relaxation trajectories for \(\sim\)1.5 million MP structures. By modeling magnetic moments, CHGNet learns to accurately represent the orbital occupancy of electrons which allows it to predict both atomic and electronic degrees of freedom.
2. **M3GNet[13]** (UIP-GNN) - M3GNet is a GNN-based universal (as in full periodic table) interatomic potential (UIP) for materials trained on up to 3-body interactions in the initial, middle and final frame of MP DFT relaxations. The model takes the unrelaxed input and emulates structure relaxation before predicting energy for the pseudo-relaxed structure.
3. **MACE**[15] (UIP-GNN) - MACE builds upon the recent advances [28, 29] in equivariant neural network architectures by proposing an approach to computing high N-body order features in an efficient manner via Atomic Cluster Expansion [30]. Unlike the other UIP models considered, MACE was primarily developed for molecular dynamics of single material systems and not the universal use case studied here. It is the only equivariant model we tested.
4. **ALIGNN**[31] (GNN) - The Atomistic Line Graph Neural Network (ALIGNN) is a message-passing GNN architecture that takes as input both the interatomic bond graph and a line graph corresponding to 3-body bond angles. The ALIGNN architecture involves a global pooling operation, which means that it is ill-suited to force-field applications. To address this, the ALIGNN-FF model was later introduced without global pooling [32].
5. **MEGNet**[33] (GNN) - MatErials Graph Network is another GNN similar to CGCNN for material properties of relaxed structures that also updates the edge and global features (like pressure, temperature, entropy) in its message passing operation. This work showed that learned element embeddings encode periodic chemical trends and can be transfer-learned from large data sets (formation energies) to predictions on small data properties (band gaps, elastic moduli).
6. **CGCNN**[24] (GNN) - The Crystal Graph Convolutional Neural Network (CGCNN) was the first neural network model to directly learn 8 different DFT-computed material properties from a graph representing the atoms and bonds in a periodic crystal. CGCNN was among the first to show that just like in other areas of ML, given large enough training sets, neural networks can learn embeddings that outperform human-engineered structure features directly from the data.
7. **CGCNN+P**[34] (GNN) - This work proposes simple, physically motivated structure perturbations to augment stock CGCNN's training data of relaxed structures with structures resembling unrelaxed ones but mapped to the same DFT final energy. Here we chose \(P=5\), meaning the training set is augmented with 5 random perturbations of each relaxed MP structure mapped to the same target energy. In contrast to all other structure-based GNNs considered in this benchmark, CGCNN+P is not attempting to learn the Born-Oppenheimer potential energy surface. The model is instead taught the PES as a step-function that maps each valley to its local minimum. The idea is that during testing on unrelaxed structures, the model will predict the energy of the nearest basin in the PES. The authors confirm this by demonstrating a lowering of the energy error on unrelaxed structures.
| **Model** | **F1** | **DAF** | **Precision** | **Accuracy** | **TPR** | **TNR** | **MAE** | **RMSE** | **R2** | **Training Size** | **Model Class** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **CHGNet** | 0.58 | 3.06 | 0.52 | 0.84 | 0.66 | 0.88 | 0.07 | 0.11 | 0.61 | 1,580,395 | UIP-GNN |
| **M3GNet** | 0.57 | 2.67 | 0.45 | 0.80 | 0.77 | 0.81 | 0.07 | 0.11 | 0.60 | 188,349 | UIP-GNN |
| **MACE** | 0.57 | 2.78 | 0.47 | 0.81 | 0.72 | 0.83 | 0.07 | 0.11 | 0.63 | 1,580,395 | UIP-GNN |
| **ALIGNN** | 0.56 | 2.92 | 0.50 | 0.83 | 0.65 | 0.87 | 0.06 | 0.16 | 0.27 | 154,719 | GNN |
| **MEGNet** | 0.51 | 2.70 | 0.46 | 0.81 | 0.47 | 0.86 | 0.16 | 0.20 | -0.28 | 69,239 | GNN |
| **CGCNN** | 0.51 | 2.63 | 0.45 | 0.81 | 0.89 | 0.85 | 0.16 | 0.22 |  | 154,719 | GNN |
| **CGCNN+P** | 0.51 | 2.40 | 0.54 | 0.78 | 0.67 | 0.80 | 0.48 | 0.08 |  | 154,719 | GNN |
| **Wrenformer** | 0.48 | 0.48 | 0.55 | 0.70 | 0.69 | 0.75 | 0.60 | 0.15 |  | 154,719 | Transformer |
| **BOWSR** | 0.44 | 0.51 | 0.52 | 0.68 | 0.74 | 0.67 | 0.92 | 0.16 | 0.12 | 69,239 | BO-GNN |
| **Voronoi RF** | 0.52 | 1.51 | 0.26 | 0.67 | 0.51 | 0.70 | 0.62 | 0.61 |  | 154,719 | Fingerprint |
| **Dummy** | 0.19 | 1.00 | 0.17 | 0.68 | 0.28 | 0.77 | 0.12 | 0.16 | 0.00 |  |  |

Table 1: Classification and regression metrics for all models tested on our benchmark, ranked by F1 score. The heat map ranges from yellow (best) to blue (worst) performance. DAF = discovery acceleration factor (see text), TPR = true positive rate, TNR = true negative rate, MAE = mean absolute error, RMSE = root mean squared error. The dummy classifier uses the 'scikit-learn' 'stratified' strategy of randomly assigning stable/unstable labels according to the training set prevalence. The dummy regression metrics MAE, RMSE and \(R^{2}\) are attained by always predicting the test set mean. The Voronoi RF, CGCNN and MEGNet models are seen to be worse than the dummy result on regression metrics but better on some of the classification metrics, highlighting the importance of looking at the right metrics for the task at hand to gauge model performance.
8. **Wrenformer** (Transformer) - For this benchmark, we introduce Wrenformer which is a variation on the coordinate-free Wren model [11] constructed using standard QKV-self-attention blocks [35] in place of message-passing layers. This architectural adaptation reduces the memory usage allowing the architecture to scale to structures with greater than 16 Wyckoff positions. Like its predecessor, Wrenformer is a fast coordinate-free model aimed at accelerating screening campaigns where even the unrelaxed structure is a priori unknown. The key idea is that by training on the coordinate anonymized Wyckoff positions (symmetry-related positions in the crystal structure), the model learns to distinguish polymorphs while maintaining discrete and computationally enumerable inputs. The central methodological benefit of an enumerable input is that it allows users to predict the energy of all possible combinations of spacegroup and Wyckoff positions for a given composition and maximum unit cell size. The lowest-ranked prototypes can then be fed into downstream analysis or modeling.
9. **BOWSR**[12] (BO-GNN) - BOWSR combines a symmetry-constrained Bayesian optimizer (BO) with a surrogate energy model to perform an iterative exploration-exploitation-based search of the potential energy landscape. Here we use MEGNet [33] for the energy model as proposed in the original work. The high sample count needed to explore the PES with BO makes this by far the most expensive model tested.
10. **Voronoi RF**[36] (Fingerprint) - A random forest trained to map a combination of composition-based Magpie features [37] and structure-based relaxation-robust Voronoi tessellation features (effective coordination numbers, structural heterogeneity, local environment properties,...) to DFT formation energies. This fingerprint-based model predates most deep learning for materials but significantly improved over earlier fingerprint-based methods such as the Coulomb matrix [38] and partial radial distribution function features [39]. It serves as a baseline model to see how much value the learned featurization of deep learning models can extract from the increasingly large corpus of available training data.
## II Results
Table 1 shows performance metrics for all models included in the initial release of Matbench Discovery. CHGNet takes the top spot on all metrics except true positive rate (TPR) and emerges as the current SOTA for ML-guided materials discovery. The discovery acceleration factor (DAF) measures how many more stable structures a model found compared to the dummy discovery rate of 43k / 257k \(\approx\) 16.7% achieved by randomly selecting test set crystals. The maximum possible DAF is the inverse of the dummy discovery rate, which on our dataset is \(\sim\)6. The current SOTA of 3.06 achieved by CHGNet leaves room for improvement. However, when evaluating models on only their 10k most stable predictions, as shown in table 2, CHGNet achieves an impressive DAF of 4.96, approaching optimal performance.
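Given per-material hull distances, the headline classification metrics reduce to a few lines; the sketch below (function name, variable names and toy data are our own) shows how precision, F1 and the DAF relate:

```python
import numpy as np

def discovery_metrics(e_hull_true, e_hull_pred, threshold=0.0):
    """Stability classification metrics from DFT vs model-predicted
    convex hull distances (eV/atom)."""
    y_true = e_hull_true <= threshold           # actually stable
    y_pred = e_hull_pred <= threshold           # predicted stable
    tp = float(np.sum(y_true & y_pred))
    precision = tp / max(y_pred.sum(), 1)
    recall = tp / max(y_true.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    prevalence = y_true.mean()                  # dummy discovery rate
    daf = precision / prevalence                # discovery acceleration factor
    return {"F1": f1, "precision": precision, "DAF": daf}

rng = np.random.default_rng(0)
e_true = rng.normal(0.1, 0.2, 10_000)           # toy hull-distance distribution
e_pred = e_true + rng.normal(0, 0.1, 10_000)    # imperfect model
print(discovery_metrics(e_true, e_pred))
```

Since DAF = precision / prevalence, the maximum possible DAF is 1 / prevalence, which is the \(\sim\)6 quoted above for our test set.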
We find a large performance gap between models that make one-shot predictions directly from unrelaxed inputs (MEGNet, Wrenformer, CGCNN, CGCNN+P, ALIGNN, Voronoi RF) compared to UIPs that predict forces and use them to emulate DFT relaxation (CHGNet, M3GNet, MACE). While the F1 scores and DAFs of force-free models are less affected, their coefficients of determination (\(R^{2}\)) are significantly worse. Of the force-free models, only ALIGNN, BOWSR and CGCNN+P achieve positive \(R^{2}\). Negative \(R^{2}\) means model predictions explain the observed variation in the data less well than simply predicting the test set mean. In other words, these models are not predictive in a global sense (across the full dataset range). However, even models with negative \(R^{2}\) can be locally good in the positive and negative tails of the test set hull distance distribution. They suffer most in the mode of our hull distance distribution near the stability threshold of 0 eV/atom above the hull. This reveals an important shortcoming of \(R^{2}\) as a metric for classification tasks like stability prediction.
The results for M3GNet and MACE depart from the trend that F1 is rank-correlated with the other classification metrics. Of all models, M3GNet achieves the highest true positive rate (TPR) but an unusually low true negative rate (TNR). A similar trend is seen for MACE. Fig. 2 provides a visual understanding of this observation. M3GNet and MACE have the lowest rolling mean of the absolute errors (rolling MAE) as a function of hull distance for materials below the convex hull (left half of the plot) but incur comparably large errors for materials above the hull (right half of the plot). Since \(\text{TPR}=\frac{\text{TP}}{\text{TP}+\text{FN}}\), lower error for energies below the hull increases TP and decreases FN, resulting in the high TPR values observed.
The reason CGCNN+P achieves better regression metrics than CGCNN but is still worse as a classifier becomes apparent from fig. 7 by noting that the CGCNN+P histogram is more sharply peaked at the 0 hull distance stability threshold. This causes even small errors in the predicted convex hull distance to be large enough to invert a classification. Again, this is evidence of the need to choose carefully which metrics to optimize. Regression metrics are far more prevalent when evaluating energy predictions. However, our benchmark treats energy predictions as merely a means to an end to classify compound stability. Improvements in regression accuracy are of limited use to materials discovery in their own right unless they also improve classification accuracy. Our results demonstrate that this is not a given.
Section II has models rank materials by model-predicted hull distance from most to least stable; materials furthest below the known hull at the top, materials right on the hull at the bottom. For each model, we iterate through that list and calculate at each step the precision and recall of correctly identified stable materials. This simulates exactly how these models would be used in a prospective materials discovery campaign and reveals how a model's performance changes as a function of the discovery campaign length. As a practitioner, you have a certain amount of resources available to validate model predictions. These curves allow you to read off the best model given these conditions and based on the optimal trade-off between fewer false positives (precision) or fewer negatives (recall) for the discovery task at hand. In this case, it so happens that CHGNet achieves the highest precision _and_ recall at any number of screened materials.
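In code, this simulated campaign is a cumulative scan over the ranked candidate list (a sketch with our own naming conventions):

```python
import numpy as np

def cumulative_precision_recall(e_hull_true, e_hull_pred):
    """Precision and recall after validating the k most-stable-ranked
    predictions, for k = 1..n, mimicking a ranked discovery campaign."""
    order = np.argsort(e_hull_pred)              # most stable predicted first
    hits = (e_hull_true[order] <= 0).cumsum()    # true positives found so far
    k = np.arange(1, len(order) + 1)
    precision = hits / k
    recall = hits / max((e_hull_true <= 0).sum(), 1)
    return precision, recall
```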
A line terminates when a model believes there are no more materials in the WBM test set below the MP convex hull. The dashed vertical line shows the actual number of stable structures in our test set. All models are biased towards stability to some degree as they all overestimate this number, most of all BOWSR by 133%, least of all MEGNet by 30%. This is only a problem in practice for exhaustive discovery campaigns that validate _all_ stable predictions from a model. More frequently, model predictions will be ranked most-to-least stable and validation stops after some pre-determined compute budget is spent, say, 10k DFT relaxations. In that case, the concentration of false positive predictions that naturally accumulates near the less stable end of the candidate list can be avoided with no harm to the campaign's overall discovery rate (see table II for a similar metrics table considering only the 10k materials predicted by each model to be furthest below the known convex hull).
The diagonal Optimal Recall line would be achieved if a model never made a false negative prediction and stopped predicting stable crystals exactly when the true number of stable materials is reached. Zooming in on the top-left corner of the precision plot, we observe that CHGNet is the only model without a sudden drop in precision right at the start of the discovery campaign. This means CHGNet is the only model whose first few hundred most stable materials are largely actually stable. MACE in particular suffers from a large number of initial failure cases. These are unstable materials whose energy MACE underpredicts by several eV/atom (see fig. 4), resulting in MACE's initial precision starting at 0 before quickly climbing to \(\sim\)0.8. We experienced this behavior with multiple independently trained MACE variants, and even more so with a checkpoint we received directly from the MACE authors that was trained on the M3GNet training set. The results shown here are from a stronger MACE model we trained ourselves on the much larger MPtrj dataset. Even M3GNet, the other universal potential and 2nd-best model, exhibits the early-on precision drop. As a result, CHGNet has a strong lead over M3GNet until reaching \(\sim\)3k screened materials. From there, CHGNet and M3GNet slowly converge until they almost tie at a precision of 0.52 after \(\sim\)56k screened materials. At that point, CHGNet's list of stable predictions is exhausted while M3GNet continues, dropping in precision to 0.45 at \(\sim\)76k screened materials, attributable to many false positives near the end of the list of stable predictions.
All force-free models exhibit a much worse case of early-on precision drop, falling to 0.6 or less in the first 5k screened materials. Many of these models (all except BOWSR, Wrenformer and Voronoi RF) display an interesting hook shape in their cumulative precision, recovering again slightly in the middle of the simulated campaign between 5k and 30k screened materials before dropping again until the end.
Figure 2 provides a visual representation of the reliability of different models based on the rolling mean absolute error (MAE) of model-predicted hull distances as a function of DFT distance to the Materials Project (MP) convex hull. The red-shaded area, referred to as the 'triangle of peril', emphasizes the zone where the average model error surpasses the distance to the stability threshold at 0 eV. As long as the rolling MAE remains within this triangle, the model is most susceptible to misclassifying structures. Because the average error is larger than the distance to the classification threshold at 0, it is large enough to flip a correct classification into an incorrect one (if the error happens to point toward the stability threshold).
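The rolling MAE underlying fig. 2 can be sketched as a box average of absolute errors binned by the true hull distance. In the snippet below the window half-width and grid are illustrative values of our own, not the ones used for the figure.

```python
import numpy as np

def rolling_mae(e_true, e_pred, half_width=0.02, grid=None):
    """Rolling MAE of hull-distance errors as a function of DFT hull distance."""
    e_true = np.asarray(e_true)
    abs_err = np.abs(np.asarray(e_pred) - e_true)
    if grid is None:
        grid = np.linspace(-0.4, 0.4, 201)  # eV/atom
    mae = np.array([abs_err[np.abs(e_true - c) <= half_width].mean()
                    if np.any(np.abs(e_true - c) <= half_width) else np.nan
                    for c in grid])
    # inside the 'triangle of peril' the mean error exceeds the distance
    # to the 0 eV/atom stability threshold
    in_triangle = mae > np.abs(grid)
    return grid, mae, in_triangle
```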
The sooner a model leaves the triangle on the left side, the less likely it is to incorrectly predict stable structures as unstable, thereby reducing false negatives. On the right side, early exits result in a lower likelihood of predicting unstable structures as stable, leading to fewer false positives.
On the left side, CHGNet exits the triangle first. We can expect fewer false negatives from CHGNet than from other models. On the right side, M3GNet is first to exit, at a remarkably low \(\sim\)40 meV/atom error. M3GNet is overtaken by Wrenformer for highly unstable structures towards the right end of the plot. Therefore, both of these models are expected to produce fewer false positives.
Overall, this visualization underscores the importance of considering a model's average error relative to the distance to the stability threshold when evaluating its performance as a material stability predictor. Low overall hull distance error can be a misleading metric if that error is not particularly low at the decision boundary.
The imbalance between the left and right halves of the plot shows that models are more prone to false negative predictions, even for very stable materials far below the known hull, than to false positive predictions for very unstable materials.
As the convex hull becomes more thoroughly sampled by future discovery, the fraction of unknown stable structures decreases, naturally leading to less enriched future test sets. This has several implications, including
Figure 1: CHGNet achieves the highest cumulative precision and recall at any point during a simulated discovery campaign. A discovery campaign consists of using a model to rank test set materials by their predicted energy above the training set convex hull. A higher fraction of correct stable predictions corresponds to higher precision and fewer stable materials overlooked correspond to higher recall. This figure highlights how different models perform better or worse depending on the length of the discovery campaign. The UIP models (CHGNet, M3GNet, MACE) are seen to offer significantly improved precision on shorter campaigns as they are less prone to early false positive predictions.
Figure 2: Universal potentials are more reliable classifiers because they exit the red triangle earliest. These lines show the rolling MAE on the WBM test set as the energy to the convex hull of the MP training set is varied; lower is better. The red-highlighted ‘triangle of peril’ shows where the models are most likely to misclassify structures. As long as a model’s rolling MAE remains inside the triangle, its mean error is larger than the distance to the convex hull. If the model’s error for a given prediction happens to point towards the stability threshold at 0 eV from the hull (the plot’s center), its average error will change the stability classification of a material from true positive/negative to false negative/positive. The width of the ‘rolling window’ box indicates the width over which hull distance prediction errors were averaged.
allowing for higher maximum DAFs but also skewing the hull distance distribution towards positive values (i.e. the right half of fig. 2) as there are fewer undersampled chemical spaces. Consequently, to have accurate stability classifications, model accuracy will need to improve concomitantly. For fig. 2, this means models need to be much more accurate to exit the shaded triangle in the left half of the plot.
## III Discussion
We have demonstrated the effectiveness of ML-based triage in HT materials discovery and posit that the benefits of including ML in discovery workflows now clearly outweigh the costs. Table I shows in a realistic benchmark scenario that several models achieve a discovery acceleration factor (DAF) greater than 2.5 across the whole dataset, and up to 5 when considering only the 10k most stable predictions from each model (table II). When starting this project, we were unsure which was the most promising ML methodology for HT discovery. Our findings demonstrate a clear superiority in accuracy and extrapolation performance of UIPs like CHGNet, M3GNet and MACE. Modeling forces enables these models to chart a path through atomic configuration space closer to the DFT-relaxed structure from where a more informed final energy prediction is possible. The consistently linear log-log learning curves observed in related literature [40] suggest that further decreases in the error of UIPs can be readily unlocked with increased training data.
Despite its fast pace of progress, the field of universal ML potentials is still in its early stages. To realize the full potential of these models, we will need to pivot significant resources to generate large quantities of higher-than-PBE fidelity training data. That said, we believe our work (e.g. fig. 8) and others [23] have caught early glimpses that sufficiently large training sets allow us to narrow the performance gap between in-domain and out-of-domain performance. This should further increase the discovery acceleration factor, resulting in a virtuous cycle between ML models accelerating the creation of new training data while becoming more accurate and even stronger accelerators after retraining. Meanwhile, the setup cost and learning curve of ML potentials are steadily decreasing, likely with a much lower floor than DFT given the intricacies of HT ab-initio simulation. Combined with their already strong performance, this suggests that UIPs combined with data accumulation over the next few years may unlock our way to a comprehensive yet cheap map of the PES that truly commoditizes HT materials discovery.
Despite this optimistic outlook, the predictive power UIPs have attained for crystal stability in the 3 years since the sobering 2020 analysis of Bartel _et al._[16] has yet to enter common knowledge, and many remaining shortcomings invite further research. For example, all models are biased towards stability, overestimating the correct number of stable materials in our test set by anywhere between 30% and 133%. Moreover, even the best models in our analysis still make large numbers of false positive predictions, even for materials far above the convex hull. This indicates the need to train future models on even more diverse datasets that are less biased towards ground state crystals than the MP training set we used. No such dataset built with MP-compatible DFT settings exists to our knowledge, inviting future efforts in this direction.
Looking beyond mere thermodynamic stability prediction at zero Kelvin for the purpose of materials discovery, future materials science will also require understanding and predicting the properties under varying environmental conditions such as finite temperature and pressure. This is where interatomic potentials can unlock further utility. Their force predictions enable insights into dynamical properties. One open question of particular relevance is the extent to which such models may aid research into the computational prediction of synthesis pathways. Many existing approaches for reaction pathway prediction involve the use of heuristic rules for dealing with the significant added complexity of metastability alongside traditional ground state ab-initio data [41; 42; 43]. These algorithms stand to benefit massively from more efficient estimates of the reaction energy barriers that future UIPs may provide. Provided they reach sufficient accuracy, this may enable timely calculation of approximate reaction barriers [44] and open up a whole new field to high-throughput inquiry.
## IV Acknowledgments
J.R. acknowledges support from the German Academic Scholarship Foundation (Studienstiftung). A.A.L. acknowledges support from the Royal Society. A.J. and K.A.P. acknowledge the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under contract no. DE-AC02-05CH11231 (Materials Project program KC23MP). This work used computational resources provided by the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
Our profound gratitude extends to Hai-Chen **W**ang, Silvana **B**otti and Miguel A. L. **M**arques for their valuable contribution in crafting and freely sharing the WBM data set.
We would like to thank Jason Blake Gibson, Shyue Ping Ong, Chi Chen, Tian Xie, Bowen Deng, Peichen Zhong and Ekin Dogus Cubuk for helpful discussions.
We thank Rokas Elijosius for assisting in the initial implementation of the Wrenformer model.
## V Author contributions
Janosh Riebesell: Methodology, Software, Data Curation, Training and Testing Models, Formal Analysis. Rhys Goodall: Conceptualization, Software, Formal Analysis. Anubhav Jain: Supervision. Philipp Benner: Software, Training Models. Kristin Persson: Supervision. Alpha Lee: Supervision.
## VI Code availability
We welcome further model submissions to our GitHub repo [https://github.com/janosh/matbench-discovery](https://github.com/janosh/matbench-discovery).
## VII Data availability
We chose the latest Materials Project (MP) [3] database release (v2022.10.28 at time of writing) as the training set and the WBM dataset [25] available at [https://figshare.com/articles/dataset/22715158](https://figshare.com/articles/dataset/22715158) as the test set for this benchmark. A snapshot of every ionic step including energies, forces, stresses and magnetic moments in the MP database is available at [https://figshare.com/articles/dataset/23713842](https://figshare.com/articles/dataset/23713842).
|
2302.11803 | A Comprehensive Survey on Source-free Domain Adaptation | Over the past decade, domain adaptation has become a widely studied branch of
transfer learning that aims to improve performance on target domains by
leveraging knowledge from the source domain. Conventional domain adaptation
methods often assume access to both source and target domain data
simultaneously, which may not be feasible in real-world scenarios due to
privacy and confidentiality concerns. As a result, the research of Source-Free
Domain Adaptation (SFDA) has drawn growing attention in recent years, which
only utilizes the source-trained model and unlabeled target data to adapt to
the target domain. Despite the rapid explosion of SFDA work, yet there has no
timely and comprehensive survey in the field. To fill this gap, we provide a
comprehensive survey of recent advances in SFDA and organize them into a
unified categorization scheme based on the framework of transfer learning.
Instead of presenting each approach independently, we modularize several
components of each method to more clearly illustrate their relationships and
mechanics in light of the composite properties of each method. Furthermore, we
compare the results of more than 30 representative SFDA methods on three
popular classification benchmarks, namely Office-31, Office-home, and VisDA, to
explore the effectiveness of various technical routes and the combination
effects among them. Additionally, we briefly introduce the applications of SFDA
and related fields. Drawing from our analysis of the challenges facing SFDA, we
offer some insights into future research directions and potential settings. | Zhiqi Yu, Jingjing Li, Zhekai Du, Lei Zhu, Heng Tao Shen | 2023-02-23T06:32:09Z | http://arxiv.org/abs/2302.11803v1 | # A Comprehensive Survey on
###### Abstract
Over the past decade, domain adaptation has become a widely studied branch of transfer learning that aims to improve performance on target domains by leveraging knowledge from the source domain. Conventional domain adaptation methods often assume access to both source and target domain data simultaneously, which may not be feasible in real-world scenarios due to privacy and confidentiality concerns. As a result, the research of Source-Free Domain Adaptation (SFDA) has drawn growing attention in recent years, which only utilizes the source-trained model and unlabeled target data to adapt to the target domain. Despite the rapid explosion of SFDA work, yet there has no timely and comprehensive survey in the field. To fill this gap, we provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme based on the framework of transfer learning. Instead of presenting each approach independently, we modularize several components of each method to more clearly illustrate their relationships and mechanics in light of the composite properties of each method. Furthermore, we compare the results of more than 30 representative SFDA methods on three popular classification benchmarks, namely Office-31, Office-home, and VisDA, to explore the effectiveness of various technical routes and the combination effects among them. Additionally, we briefly introduce the applications of SFDA and related fields. Drawing from our analysis of the challenges facing SFDA, we offer some insights into future research directions and potential settings.
Domain Adaptation, Transfer Learning, Computer Vision, Data-Free Learning.
## 1 Introduction
Deep neural networks are able to achieve satisfactory performance in supervised learning tasks thanks to their generalization ability. However, collecting sufficient training data can be difficult due to factors such as annotation cost and privacy concerns. For instance, manually annotating a single cityscape image for semantic segmentation can take up to 90 minutes. To alleviate this issue, transfer learning [1, 2] has been proposed to enable cross-domain knowledge transfer under label-deficient conditions.
Domain adaptation (DA) [3, 4], as an important branch of transfer learning, focuses on how to improve model performance on unlabeled target domains with the assistance of labeled source domain data. Under the condition of independent homogeneous distribution, the biggest challenge of domain adaptation is how to reduce the domain shift, which can be mainly classified as conditional shift, covariate shift, label shift and concept shift, and has been widely discussed in the previous DA surveys [5, 6].
In conventional domain adaptation approaches, it is mostly assumed that both the source and target domain data can be accessed during adaptation. However, this is not realistic in many applications. On the one hand, the raw source domain data will be unavailable in some cases due to personal privacy, confidentiality and copyright issues; on the other hand, it is unrealistic to keep the complete source dataset on some resource-limited devices for training. All the above problems have hindered the further development of this field. To relax the dependence of DA methods on source data, Source-Free Domain Adaptation (SFDA) [7, 8] has been proposed in recent years, which has rapidly become the focus of domain adaptation and is widely studied in image classification [9, 10], semantic segmentation [11, 12] and object detection [13, 14].
As shown in Fig. 1, the most significant difference between SFDA and Unsupervised Domain Adaptation (UDA) [15, 16] is that the UDA model can be trained using both the source and target domain data, while SFDA can only utilize the source model to initialize the target model and then update it with the unlabeled target data. Existing mainstream UDA methods can be categorized into two main types of methods: those that align the source and target domain distributions by designing specific metrics [3, 17, 18], and those that learn domain-invariant feature representations through adversarial learning [19, 20, 21]. However, these mainstream UDA methods are not suitable for scenarios without source domain data, highlighting the need to investigate SFDA methods as an alternative. Despite the considerable efforts devoted to SFDA, there has been no comprehensive review of all the works and a summary of the current progress related to SFDA. To fill this gap, we aim to provide a timely and comprehensive review of recent advances in SFDA in this survey.
In this work, we inherit from existing transfer learning
Fig. 1: Comparison of UDA and SFDA settings.
surveys [2, 22] and divide SFDA into two directions: data-based one and model-based one. In addition, we find that conventional UDA methods begin to work in the SFDA setting through some additional steps with the reconstructed source domain, and thus we discuss this regime as the domain-based reconstruction methods to reflect the extensibility of UDA research. We depict the overall topology of SFDA methods in Fig. 2.
From the data-centric perspective, the domain-based reconstruction and the image style translation can be seen as derivatives of UDA. The intuition behind domain-based reconstruction is quite straightforward, i.e., it aims to reconstruct a domain or make further divisions within the target domain to compensate for the missing source domain data, so that UDA approaches can be extended to the SFDA setting. Concretely, it can be subdivided into virtual domain generation, intra-domain adversarial alignment and perturbed domain supervision. Image style translation translates the target domain data into the unseen source style through the Batch Normalization (BN) layers of the source classifier, so that the target domain data become more compatible with the source model. Neighborhood clustering is based on the observation that the underlying intrinsic structure of the target data is embedded in the neighbor relationships, even though the target domain data distribution cannot explicitly align with the source classifier. One assumption is that each category in the target domain will have some neighbors located within the source model's decision boundary, so hard samples can be mapped to the source domain distribution by maintaining consistency among neighbors. Finally, local structure clustering is based on manifold learning [23, 24, 25], and clustering is performed through the selection of neighbor nodes.
From the model-centric perspective, the majority of approaches assume that the source-pretrained model has a certain degree of generalization over the target domain due to the similarity of the source and target domains. Therefore, the model can be fine-tuned by exploring the outputs of the model on the target data. This self-training scheme is inherited from the idea of semi-supervised learning [26, 27, 28], and can be further divided into pseudo-labeling, entropy minimization and contrastive learning. It is noted that we pay additional attention to pseudo-labeling methods since they are the most commonly used in SFDA. Pseudo-labeling is usually achieved by first pseudo-labeling the high-confidence samples in the target domain, and then optimizing the model with the obtained pseudo-labels. Therefore, how to obtain high-quality pseudo-labels is the main concern of this category of methods. In this paper we try to discuss this technical route comprehensively based on each stage of the process, in terms of prototype generation, pseudo-label assignment and pseudo-label filtering.
Moreover, we observed that most of the SFDA methods are comprised of multiple technical components, and each component might correspond to a category in our taxonomy. Therefore, unlike previous surveys on domain adaptation that treated each work independently, we aim to modularize each SFDA method and categorize them to reveal the interrelatedness among different research directions. The goal of this survey is to discuss the problem formulation of SFDA, summarize the current advances in this field, and highlight the distinctive components of SFDA techniques
Fig. 2: Taxonomy of SFDA methods.
to provide a comprehensive understanding of this area and inspire more ideas and applications. The main contributions of this survey can be summarized as follows.
* We propose an overall taxonomy framework for the newly emerged SFDA setting from both data-centric and model-centric perspectives. This survey fills a gap in the existing literature by providing a timely and comprehensive summary of the latest SFDA methods and applications.
* We modularize more than 30 representative works and compare the results on the most widely used classification datasets, i.e., Office-31, Office-home and VisDA. The combination of different components is visually presented and comparatively analyzed, which may be instructive for further research in the SFDA setting.
The remainder of this survey is structured as follows. In Section 2, we introduce the notations and preliminary knowledge related to SFDA. Section 3 and Section 4 provide a comprehensive review of data-centric and model-centric SFDA methods, respectively. Section 5 presents a comparison of existing SFDA methods on three mainstream classification datasets. Section 6 discusses the potential applications of SFDA, while Section 7 presents visions for future directions of SFDA. Finally, we conclude the paper in Section 8.
## 2 Overview
In this section, we introduce some notations and the preliminary knowledge related to SFDA, with a brief description of several settings in SFDA methods.
### _Notations and Definitions_
For consistency among the literature, we used as many notations as possible that are the same as those in the surveys related to transfer learning [22] and deep domain adaptation [6, 29]. We summarize them in Table II. Below are some basic definitions that are closely related with the problem of SFDA.
**Definition 1**.: (Domain in SFDA) A domain usually contains a data set \(\Phi\) and a corresponding label set \(\mathbf{L}\), where the data set contains the instance set \(\mathcal{X}\) drawn from the marginal distribution \(\mathcal{P}\left(\mathcal{X}\right)\) with corresponding dimension \(d\). In SFDA, the source domain \(\mathcal{D}^{s}=\{\{\mathcal{X}^{s},\mathcal{P}\left(\mathcal{X}^{s}\right),d^{s}\},\mathbf{L}^{s}\}\) is usually complete, but note that it is only available during pre-training. The label set of the target domain \(\mathcal{D}^{t}=\{\{\mathcal{X}^{t},\mathcal{P}\left(\mathcal{X}^{t}\right),d^{t}\}\}\) is not available, and its marginal distribution also differs from that of the source domain. In typical SFDA, we assume a closed-set setting, i.e., the label spaces of the source and target domains are the same.
**Definition 2**.: (Task of SFDA) Given a model \(\mathcal{M}\) trained on the source domain \(\mathcal{D}^{s}=\{\left\{\mathcal{X}^{s},\mathcal{P}\left(\mathcal{X}^{s} \right),d^{s}\right\}\mathbf{L}^{s}\}\), the process of SFDA consists of two stages. In the first stage (pre-training), the source model is trained by labeled
TABLE I: Comparison of SFDA with related settings (e.g., data-free knowledge distillation) in terms of: no source data, source model, unlabeled target data, extra source information, and no training.
source data under supervised learning. In the second stage (adaptation), the source domain data is unavailable, and the goal of SFDA is to adapt the source-trained model to the target domain with unlabeled target data \(\Phi^{t}=\{\mathcal{X}^{t},\mathcal{P}\left(\mathcal{X}^{t}\right),d^{t}\}\), i.e., to achieve high accuracy for the outputs \(\mathcal{M}(\mathcal{X}^{t})=P(\mathcal{Y}^{t}|\mathcal{X}^{t})\), where \(\mathcal{P}\left(\mathcal{Y}^{t}|\mathcal{X}^{t}\right)\) is the prediction result for the target data \(\mathcal{X}^{t}\).
It is worth noting that the above definitions may also enable some extensions. For the definition of domain, SFDA may include multi-source domains [30, 31], i.e., \(\mathcal{D}^{s}=\bigcup_{i=1}^{m}\mathcal{D}_{i}^{s}\), where \(m\) is the number of source domains, or there may be non-closed-set cases such as universal source-free domain adaptation [9]; for the definition of the task in SFDA, it is also possible in some extended SFDA settings to use a small portion of labeled target domain data when performing the task (i.e., Active SFDA [32, 33]). We will discuss different variants of SFDA in the next section.
### _Overview of the settings in SFDA_
Generally, domain adaptation can be divided into homogeneous domain adaptation and heterogeneous domain adaptation according to whether the source and target domain feature spaces are identical. In heterogeneous domain adaptation [34], both the distribution space and feature space of the source and target may be different, i.e., \(P(\mathcal{X}^{s})\neq P(\mathcal{X}^{t})\) and \(d^{s}\neq d^{t}\), so both distribution adaptation and feature space adaptation may be required. While in homogeneous domain adaptation, the feature spaces of the source and target domains are identical and only distribution adaptation is performed. Among the existing SFDA methods, most of them only involve homogeneous domain adaptation, which is also the focus of our discussion in this survey.
According to the label-set correspondence between the source and target domains, the settings can be classified as closed-set, partial, and open-set, and the relationship between them are shown in Fig. 3. Based on the number of source and target domains, there are also single-source, multi-source, and multi-task settings. In this survey, we mainly focus on the most general closed-set single-source SFDA setting, and other settings are also briefly outlined.
## 3 Data-based methods
The research of this broad class of data-based approaches can be divided into two routes based on two different motivations. On the one hand, considering the unavailability of the source domain data, one of the most intuitive ideas is to mimic the source domain data through the source domain information implied in the model, or to reconstruct an intermediate domain or another domain to compensate for the absence of the source domain data. On the other hand, considering that the target domain is unannotated, another class of methods tries to explore the potential data structure or clustering information in the unlabeled target domain data, making it possible to perform the domain adaptation task with the target domain data alone. In our categorization, we call the above two directions Domain-based Reconstruction and Image-based Information Extraction, respectively. In the following, we describe each of them in detail.
### _Domain-based Reconstruction_
The core purpose of this class of methods is to reconstruct a new domain to supervise the target domain. In general, there are three directions to construct a new domain, one is to generate a virtual source domain, one is to construct a perturbed target domain for robust learning, and the last one is to divide the target domain into source-like and target-like parts and then perform an intra-domain adversarial to achieve alignment.
#### 3.1.1 Virtual Domain Generation
In unsupervised domain adaptation, a typical paradigm is to supervise the model on the source domain while employing some techniques to directly align the distributions of the source and target domains. The overall training goal of this process can be formulated as:
\[\mathcal{L}_{UDA_{M}}=\mathcal{L}_{cls}\left(\mathcal{C}\left(\mathcal{F}\left( \mathcal{X}^{s}\right)\right),\mathcal{Y}^{s}\right)+\mathcal{L}_{div}\left( \mathcal{F}\left(\mathcal{X}^{s}\right),\mathcal{F}\left(\mathcal{X}^{t} \right)\right), \tag{1}\]
where \(\mathcal{L}_{cls}\) is the supervised loss in the source domain, usually using cross-entropy. \(\mathcal{L}_{div}\) denotes the criterion used for alignment. However, in SFDA, \(\mathcal{X}^{s}\) is not available during adaptation. Therefore, the virtual domain generation methods attempt to construct a domain to compensate for the absence of source data. The above equation can be extended as follows:
\[\mathcal{L}_{SFDA}=\mathcal{L}_{gen}\left(\mathcal{C}\left(\mathcal{F}\left( \mathcal{X}^{v}\right)\right)\right)+\mathcal{L}_{div}\left(\mathcal{F}\left( \mathcal{X}^{v}\right),\mathcal{F}\left(\mathcal{X}^{t}\right)\right), \tag{2}\]
where \(\mathcal{X}^{v}\) denotes the virtual domain and \(\mathcal{L}_{gen}\) denotes the generative loss of the virtual domain. Note that the model has been pre-trained on the source domain, so it is possible to simulate the domain distribution by exploring the outputs of the model.
It can be found that, compared to Eq. 1, SFDA should be concerned with the quality of virtual domain generation in addition to the alignment between domains. Since, once the virtual domain is generated, the alignment problem of SFDA becomes similar to that of UDA, e.g., ADDA [21] is used in VDM-DA [35] and MMD [36] is employed as an
Fig. 3: An illustration of different settings in SFDA.
alignment metric in STDA [37], we concentrate on how to generate the virtual domain in this section. Technically, the virtual domains can be generated in two ways: based on adversarial generation or based on Gaussian distribution.
Drawing on the idea of GAN [38], the adversarial generation can be expressed as:
\[\mathcal{L}\left(\mathcal{C},\mathcal{F},\mathcal{D},G\right)=\mathcal{L}_{adv }+\mathcal{L}_{con}, \tag{3}\]
where \(\mathcal{L}_{con}\) is the consistency loss used to constrain the semantic consistency across domains and can contain multiple perspectives. \(\mathcal{L}_{adv}\) represents the adversarial loss between the generator and the domain discriminator, and is usually based on the following equation:
\[\begin{split}&\mathcal{L}_{adv}\left(G\right)=\mathbb{E}_{y,\bar{z}}\left[\log\left(1-\mathcal{D}\left(G\left(y,\bar{z}\right)\right)\right)\right],\\ &\mathcal{L}_{adv}\left(\mathcal{D}\right)=\mathbb{E}_{x_{t}\sim\mathcal{X}_{t}}\left[\log\mathcal{D}\left(x_{t}\right)\right]+\mathbb{E}_{y,\bar{z}}\left[\log\left(1-\mathcal{D}\left(G\left(y,\bar{z}\right)\right)\right)\right],\end{split} \tag{4}\]
where \(\bar{z}\) stands for the noise vector, and \(G\left(y,\bar{z}\right)\) represents the generator conditioned on a pre-defined label \(y\), which differs from the traditional generator [39, 40].
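A minimal PyTorch-style sketch of this label-conditioned adversarial game is given below. It assumes a generator \(G(y,\bar{z})\) and a discriminator \(\mathcal{D}\) that outputs the probability of its input being a real target sample; for numerical stability the generator uses the common non-saturating surrogate of the \(\log(1-\mathcal{D}(G(y,\bar{z})))\) objective in Eq. 4. All module and variable names here are our own illustrations.

```python
import torch
import torch.nn.functional as F

def adversarial_step(G, D, x_t, num_classes, z_dim=100):
    """One step of the conditional adversarial game of Eq. 4 (sketch)."""
    b = x_t.size(0)
    y = torch.randint(num_classes, (b,))                 # pre-defined labels
    z = torch.randn(b, z_dim)                            # noise vector z
    x_fake = G(F.one_hot(y, num_classes).float(), z)

    # discriminator loss: real target samples vs. generated ones
    loss_d = -(torch.log(D(x_t) + 1e-8).mean()
               + torch.log(1 - D(x_fake.detach()) + 1e-8).mean())
    # generator loss: non-saturating form of log(1 - D(G(y, z)))
    loss_g = -torch.log(D(x_fake) + 1e-8).mean()
    return loss_g, loss_d, y   # y also supervises the consistency loss L_con
```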
The focus of this class of methods lies in the design of \(\mathcal{L}_{con}\). 3C-GAN [41] is a pioneering work in this area, for the generator, it emphasizes the semantic similarity between the generated \(x_{v}=G\left(y,\bar{z}\right)\) and the label \(y\). For the classifier, a deterministic constraint, a weight constraint, and a clustering-based constraint have been proposed to improve the performance. Further, SDDA [42] proposes a consistency loss based on the domain discriminator, which guides the feature extractor to extract domain-invariant features by a binary classification loss. CPGA [43] is inspired by InfoNCE [44] and generates more representative prototypes for each category by adding a contrastive loss.
Another way of virtual domain generation is to simulate the distribution of the source domain based on Gaussian distributions according to the implicit knowledge contained in the pre-trained model. Generally, there are two main views. One view is to sample noise from a standard Gaussian distribution as the input of the generator; then the virtual source data \(\widetilde{x}\) can be generated as:
\[\widetilde{x}=G\left(\bar{z}\right),\quad\bar{z}\sim\mathcal{N}\left(0,1\right). \tag{5}\]
Based on data-free knowledge distillation [45], Liu _et al._[11] propose a batch normalization statistical loss to model the source domain distribution using the mean and variance stored in the BN layer of the source domain model.
An alternative view is to regard the source domain distribution as a mixture of multiple Gaussian distributions. For instance, VDM-DA [35] constructs a Gaussian mixture model to represent the distribution of the virtual domain in the feature space as:
\[D^{v}\left(f_{v}\right)=\sum_{k=1}^{K}\pi_{k}\mathcal{N}\left(f_{v}|\mu_{k}, \sigma^{2}I\right), \tag{6}\]
where \(f_{v}\) stands for the virtual features and \(\pi_{k}\) denotes the mixing coefficients with \(\sum_{k=1}^{K}\pi_{k}=1\). To estimate the parameters of the Gaussian mixture model, VDM-DA empirically sets \(K\) to the number of categories, considers that the weights of the source classifier implicitly contain prototypical information about each category, and derives the mean and standard deviation of each mixture component based on the source classifier. SoFA [46] models the inference process and the generation process, respectively, and derives the mixture of Gaussian distributions from the predicted classes as the reference distribution.
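The Gaussian-mixture view lends itself to a very compact sampler. The sketch below follows the intuition of Eq. 6: one mixture component per class, equal mixing coefficients, class means taken from the normalized source classifier weights, and a shared standard deviation `sigma` whose estimation is left out here. It illustrates the idea rather than the exact VDM-DA procedure.

```python
import torch
import torch.nn.functional as F

def sample_virtual_features(classifier_weight, sigma, n_per_class=64):
    """Draw virtual source-domain features from a class-wise Gaussian mixture."""
    mu = F.normalize(classifier_weight, dim=1)          # (K, d) class prototypes
    K = mu.size(0)
    feats = mu.repeat_interleave(n_per_class, dim=0)    # component means
    feats = feats + sigma * torch.randn_like(feats)     # N(mu_k, sigma^2 I)
    labels = torch.arange(K).repeat_interleave(n_per_class)
    return feats, labels
```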
However, using generative models to synthesize a virtual domain is not only costly but also struggles to model the source distribution well, particularly when the underlying data pattern is complex. Therefore, some approaches attempt to use a non-generative approach, e.g., to construct a virtual source domain by selecting some reliable data directly from the target domain. One common idea is that, by feeding the target domain images into the source model, samples with low prediction entropy (i.e., high confidence) can be considered closer to the source domain distribution and used to represent it to some extent. Another key issue raised by this method, however, is the problem of insufficient data for the virtual source domain. For example, Du _et al._[47] found that the number of samples that can be selected to construct the virtual source domain is only one-tenth of the target domain, which is not sufficient to support a data distribution that represents the entire source domain.
To tackle this problem, a range of approaches focus on how to scale up virtual source domain data. Mixup [48] can be regarded as an effective technique and PS [47] mixes the samples in the virtual source domain with each other, which can be expressed as:
\[\begin{split}&\widetilde{x}_{s,aug}=\lambda\widetilde{x}_{s}^{i}+ \left(1-\lambda\right)\widetilde{x}_{s}^{j},\\ &\widetilde{y}_{s,aug}=\lambda\widetilde{y}_{s}^{i}+\left(1- \lambda\right)\widetilde{y}_{s}^{j},\end{split} \tag{7}\]
where \(\lambda\) denotes the mixup coefficient, \(\widetilde{x}_{s}^{i},\widetilde{x}_{s}^{j}\) are the samples selected by prediction entropy from the virtual source domain, and \(\widetilde{y}_{s}^{i},\widetilde{y}_{s}^{j}\) are their corresponding labels, respectively. In this way, an augmented virtual source domain is obtained as \(\widetilde{D}_{aug}^{s}=\widetilde{D}^{s}\cup\widetilde{D}^{aug}\) with \(\widetilde{D}^{aug}=\{\widetilde{x}_{s,aug},\widetilde{y}_{s,aug}\}\). UIDM [49], instead, first performs a mixup in the target domain, and then takes into account not only the prediction entropy in the sample selection of the virtual source domain,
Fig. 4: An overview of Domain Reconstruction methods.
but also the data uncertainty brought by the mixup operation and the model uncertainty brought by the dropout operation in a more comprehensive manner.
In addition to data augmentation by mixup, another idea is to keep transferring samples from the target domain to the virtual source domain during training. Ye _et al._[50] propose a nonlinear weight entropy minimization loss to continuously reduce the entropy of high-confidence samples while leaving low-confidence samples unaffected during the training process, resulting in more reliable samples. Although the number of samples in the virtual source domain keeps increasing, for some difficult categories the virtual source domain samples may still be severely lacking. ProxyMix [51] raises this issue and constructs a class-balanced proxy source domain using the nearest neighbors of each prototype as well as an intra-domain mixup.
#### 3.1.2 Intra-domain Adversarial Exploration
In the previous section, we introduced methods that handle the unavailable source data by generating a virtual domain, which views SFDA as a cross-domain problem. Instead, some other methods perform a binary split within the target domain and then conduct adversarial learning between the two data populations while maintaining the knowledge of the source model, thereby achieving consistency of the overall data distribution in the target domain; this views SFDA as an intra-domain alignment problem. It is worth noting that the previously introduced virtual source domain methods share a similar confidence-based intuition with this class of methods, but the former emphasizes how to construct the virtual source domain, while the latter focuses on intra-domain distribution alignment in an adversarial manner. Therefore, we categorize the former into the virtual domain reconstruction class.
Technically, intra-domain adversarial alignment mostly uses diverse classifiers and the feature extractor for adversarial training, which can be seen as a variant of the bi-classifier adversarial methods [52, 53, 54] in conventional UDA. Here we first review the generic three-step procedure of the bi-classifier paradigm:
* **Step 1** Source training: \[\min_{\theta_{\mathcal{F}},\theta_{\mathcal{C}_{1}},\theta_{\mathcal{C}_{2}}} \sum_{i=1}^{2}\mathcal{L}_{cls}\left(\mathcal{C}_{i}\left(\mathcal{F}\left( \mathcal{X}^{s}\right)\right),\mathcal{Y}^{s}\right),\] where \(\mathcal{L}_{cls}\) denotes the cross entropy loss, \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) represent the two classifiers, \(\theta_{\mathcal{F}},\theta_{\mathcal{C}_{1}},\theta_{\mathcal{C}_{2}}\) is the model parameters corresponding to the feature extractor and classifiers respectively.
* **Step 2** Maximum classifier prediction discrepancy: \[\min_{\theta_{\mathcal{C}_{1}},\theta_{\mathcal{C}_{2}}}\mathcal{L}_{cls} \left(\mathcal{X}^{s},\mathcal{Y}^{s}\right)-\mathcal{L}_{dis}\left(\mathcal{ Y}_{1}^{t}|\mathcal{X}^{t},\mathcal{Y}_{2}^{t}|\mathcal{X}^{t}\right),\] where the first term is identical to step 1, \(\mathcal{L}_{dis}\) refers to the discrepancy loss of the two classifiers' prediction.
* **Step 3** Minimize classifier prediction discrepancy: \[\min_{\theta_{\mathcal{F}}}\mathcal{L}_{dis}\left(\mathcal{Y}_{1}^{t}| \mathcal{X}^{t},\mathcal{Y}_{2}^{t}|\mathcal{X}^{t}\right).\]
We now revisit this three-step procedure in the SFDA setting. For the first step, the source model is usually trained in the same way. For the second and third steps, the \(\mathcal{L}_{cls}\) term is intractable because of the missing source data, and source knowledge cannot be well maintained. As a result, the prediction discrepancy term \(\mathcal{L}_{dis}\) in this case becomes unanchored.
To solve the above problem in the SFDA setting, a source-specific classifier can be preserved by freezing the parameters of one classifier, while a target-specific classifier is trained on the unlabeled target data. In addition, the target domain can be divided into source-similar and source-dissimilar samples by the source-specific classifier's prediction confidence or some other criteria; thus, the intra-domain adversarial alignment can be achieved through the prediction discrepancy between the target-specific classifier and the source-specific classifier on the two data populations. Formally, the intra-domain adversarial alignment methods can be generalized into the following two steps:
**Step \(1^{*}\)** Update the target-specific classifier by maximizing the prediction discrepancy between the target-specific classifier and the source-specific classifier on the source-dissimilar samples while minimizing the same prediction discrepancy on the source-similar samples:
\[\min_{\theta_{\mathcal{C}_{t}}}\sum_{x\in\mathcal{X}_{h}^{t}}\mathcal{L}_{dis}\left(p^{s}\left(x\right),p^{t}\left(x\right)\right)-\sum_{x\in\mathcal{X}_{l}^{t}}\mathcal{L}_{dis}\left(p^{s}\left(x\right),p^{t}\left(x\right)\right), \tag{8}\]
where \(\mathcal{X}_{h}^{t}\) and \(\mathcal{X}_{l}^{t}\) represent the high-confidence (source-similar) and low-confidence (source-dissimilar) samples in the target domain with \(\mathcal{X}_{h}^{t}\cup\mathcal{X}_{l}^{t}=\mathcal{X}^{t}\), \(p^{s}\left(x\right)\) and \(p^{t}\left(x\right)\) denote the predictions of source-specific classifier and target-specific classifier, respectively.
**Step \(2^{*}\)** Update the feature extractor to pull the target features within the two classifiers' decision boundaries:
\[\min_{\theta_{\mathcal{F}}}\sum_{x\in\mathcal{X}^{t}}\mathcal{L}_{dec}\left(p ^{s}\left(x\right),p^{t}\left(x\right)\right), \tag{9}\]
where \(\mathcal{L}_{dec}\) stands for the decision loss to improve the generalization ability of the feature extractor.
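The two steps above translate almost directly into a training loop. The sketch below uses the symmetric KL divergence as one possible choice of \(\mathcal{L}_{dis}\) (and, for brevity, also of \(\mathcal{L}_{dec}\)); the source-specific classifier is assumed frozen, the confidence threshold is an illustrative choice, and handling of empty splits is omitted.

```python
import torch
import torch.nn.functional as F

def skl(p, q):  # symmetric KL divergence, one choice for L_dis
    return 0.5 * (F.kl_div(q.clamp_min(1e-8).log(), p, reduction="batchmean")
                  + F.kl_div(p.clamp_min(1e-8).log(), q, reduction="batchmean"))

def adaptation_round(feat_ext, src_clf, tgt_clf, opt_c, opt_f, x, thresh=0.5):
    """One round of steps 1* and 2*; src_clf parameters stay frozen."""
    with torch.no_grad():  # split batch into source-similar / -dissimilar
        conf = F.softmax(src_clf(feat_ext(x)), dim=1).max(dim=1).values
    f_sim = feat_ext(x[conf >= thresh]).detach()
    f_dis = feat_ext(x[conf < thresh]).detach()

    # step 1*: update only the target-specific classifier (Eq. 8)
    loss_c = (skl(F.softmax(src_clf(f_sim), 1), F.softmax(tgt_clf(f_sim), 1))
              - skl(F.softmax(src_clf(f_dis), 1), F.softmax(tgt_clf(f_dis), 1)))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # step 2*: update only the feature extractor (Eq. 9)
    f = feat_ext(x)
    loss_f = skl(F.softmax(src_clf(f), 1), F.softmax(tgt_clf(f), 1))
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```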
In general, there are a total of three directions to improve the effect of intra-domain adversarial alignment: the selection of \(\mathcal{L}_{dis}\) and \(\mathcal{L}_{dec}\), the division between \(\mathcal{X}_{h}^{t}\) and \(\mathcal{X}_{l}^{t}\), and the optimization of the network structure. Having summarized the general framework for intra-domain adversarial alignment, we now discuss the relevant methods under this framework.
\(\mathrm{A}^{2}\mathrm{Net}\)[10], as the first work on intra-domain adversarial alignment, is based on a voting strategy that divides the target domain by concatenating the predictions of the source-specific and target-specific classifiers and performing a Softmax operation. Concretely, if the voting score of the source-specific classifier is higher than that of the target-specific classifier, the sample is regarded as source-similar, and vice versa. Notably, it proposes a Soft-Adversarial mechanism to train the target-specific classifier and feature extractor with the exchange of voting scores,
which can be expressed as:
\[\begin{split}\min_{\theta_{\mathcal{C}_{t}}}&-\sum_{i=1 }^{n_{t}}\left(\alpha_{i}^{s}\log\left(\sum_{k=1}^{K}p_{(i)k}^{st}\right)+ \alpha_{i}^{t}\log\left(\sum_{k=K+1}^{2K}p_{(i)k}^{st}\right)\right),\\ \min_{\theta_{\mathcal{F}}}&-\sum_{i=1}^{n_{t}} \left(\alpha_{i}^{t}\log\left(\sum_{k=1}^{K}p_{(i)k}^{st}\right)+\alpha_{i}^{ s}\log\left(\sum_{k=K+1}^{2K}p_{(i)k}^{st}\right)\right),\end{split} \tag{10}\]
where \(\alpha_{i}^{s}\), \(\alpha_{i}^{t}\) represent the voting scores of the source-specific classifier and target-specific classifier respectively, and \(p^{st}=\sigma\left(\left[p^{s},p^{t}\right]^{T}\right)\in\mathbb{R}^{2K}\) is the concatenation of \(p^{s}\) and \(p^{t}\) after the Softmax activation. Although the presentation of Eq. 10 differs from that of Eq. 8, it is still essentially an adversarial training between the feature extractor and the target-specific classifier, so our framework remains applicable.
BAIT [55] is another typical approach that conforms to the in-domain adversarial alignment paradigm, which uses KL divergence as the discrepancy loss as well as a bite loss that raises the output entropy of the two classifiers, denoted as follows:
\[\begin{split}\min_{\theta_{\mathcal{C}_{t}}}&\sum_{x\in\mathcal{X}_{h}^{t}}\mathcal{D}_{SKL}\left(p^{s}\left(x\right),p^{t}\left(x\right)\right)-\sum_{x\in\mathcal{X}_{l}^{t}}\mathcal{D}_{SKL}\left(p^{s}\left(x\right),p^{t}\left(x\right)\right),\\ &\min_{\theta_{\mathcal{F}}}\sum_{i=1}^{n_{t}}\sum_{k=1}^{K}\left[-p_{i,k}^{s}\log p_{i,k}^{t}-p_{i,k}^{t}\log p_{i,k}^{s}\right],\end{split} \tag{11}\]
where \(\mathcal{D}_{SKL}(a,b)=\frac{1}{2}\left(\mathcal{D}_{KL}\left(a\|b\right)+\mathcal{D}_{KL}\left(b\|a\right)\right)\). It is worth mentioning that BAIT empirically found in its experiments that, for the selected thresholds, it is better to divide the source-similar and source-dissimilar sets into equal sizes; we consider this an informative implication.
The KL divergence serves as a relatively good measure of the difference in prediction distributions between two classifiers, but it ignores the determinacy of the classifier outputs, which may lead to ambiguous predictions. Both D-MCD [56] and DAMC [57] address this problem and adopt the CDD distance [58] to measure both the consistency and determinacy of the outputs. In addition, they make some improvements to the network structure. In D-MCD, Chu _et al._ take advantage of the model's tendency to remember simple samples in the early stages of training and design a strong-weak paradigm to jointly filter the samples to avoid incorrect high-confidence samples. DAMC, on the other hand, uses a network structure with more than two classifiers to provide a tighter upper bound on the domain gap and derives that the optimal number of classifiers equals the number of categories. However, these extra network structures also increase the computational overhead.
So far, two major lines of research in UDA have been reflected in SFDA. In the virtual domain generation class, it is possible to use some metrics or adversarial-learning-based strategy to align the virtual and target domain distributions. In the intra-domain adversarial alignment, a variant of bi-classifier paradigm can be used. Both these methods indicate the potential research connection between SFDA and UDA.
#### 3.1.3 Perturbed Domain Supervision
This class of methods is essentially based on the assumption that both source and target domain features originate from a domain-invariant feature space [59], i.e., the source and target domains are actually formed by domain-invariant features plus some domain-biased factors. From the viewpoint of perturbation, it is feasible in UDA to use the target domain data to perturb the source data for semantic augmentation [60], so as to guide the model to learn domain-invariant features. In the absence of source domain data, it is likewise possible to add appropriate domain-related perturbations to the unlabeled target data during model training. If the model can resist these domain-related perturbations, it may acquire domain-invariant features. As shown in Fig. 4(c), the perturbed target domain acts as the supervisory signal, so we refer to this as perturbed target domain supervision. Although there are still few approaches along this technical route, due to its distinct technical characteristics, we consider it a category parallel to the other two classes of approaches in Domain-based Reconstruction.
As a representative method in this category, SOAP [61] argues that the ultimate goal of the model is to find domain-invariant features, and expresses the adjustment direction of the model as a vector and reflects it in UDA and SFDA as:
\[\begin{cases}\overrightarrow{e}\left(\mathcal{D}^{t},\mathcal{D}^{I}\right)_{UDA}=\overrightarrow{e}\left(\mathcal{D}^{t},\mathcal{D}^{s}\right)+\overrightarrow{e}\left(\mathcal{D}^{s},\mathcal{D}^{I}\right),\\ \overrightarrow{e}\left(\mathcal{D}^{t},\mathcal{D}^{I}\right)_{SFDA}=\overrightarrow{e}\left(\mathcal{D}^{t+},\mathcal{D}^{t}\right),\end{cases} \tag{12}\]
where \(\mathcal{D}^{I}\) denotes the domain-invariant feature space, and \(\mathcal{D}^{t+}\) is a super target domain constructed by adding a target-specific perturbation \(\widetilde{N}_{T}\) to the target domain samples as \(X_{T+}^{i}=\beta X_{T}^{i}+(1-\beta)\widetilde{N}_{T}\).
SMT [62] is also based on this strategy and dynamically updates the perturbed target domain. Although the motivation of the super target domain is attractive, its construction is still too simple, e.g., taking the mean value of the target images, which may not be enough to reflect the properties of the target domain.
For the same purpose, FAUST [63] interprets this data perturbation from the perspective of uncertainty [64], which first applies a random augmentation to the target images, and then encourages the feature extractor to extract consistent features over the target images before and after the perturbation by both epistemic uncertainty and aleatoric uncertainty. Unlike perturbing target data, VMP [65] introduces perturbations to the parameters of the model by variational Bayesian inference to maintain the model's discriminative power while performing model adaptation. AAA [66], on the other hand, introduces adversarial perturbations [67] into domain adaptation by designing adversarial examples to attack the model, and the generation process of the adversarial examples can be represented as follows:
\[\begin{split}\widetilde{x}_{t}&=x_{t}+\frac{G\left(x_{t}\right)}{\left\|G\left(x_{t}\right)\right\|_{2}}\epsilon,\\ &\max_{\theta_{G}}\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\mathcal{L}_{cls}\left(\mathcal{C}\left(\mathcal{F}\left(\widetilde{x}_{t}\right)\right),\hat{y}_{t}^{i}\right),\end{split} \tag{13}\]
where \(G\) denotes the generator, \(\left\|\cdot\right\|_{2}\) denotes the Euclidean norm, \(\epsilon>0\) is the perturbation magnitude, constrained so that the perturbation does not destroy the samples' original semantic information, and \(\hat{y}_{t}^{i}\) is the pseudo label corresponding to \(\widetilde{x}_{t}\). The authors argue that if the model defends against these attacks, its generalization ability can be significantly improved in the process. This defense process can be formulated as:
\[\min_{\theta_{\mathcal{F}},\theta_{\mathcal{C}}}\frac{1}{n_{t}}\sum_{i=1}^{n_{t}}\mathcal{L}_{cls}\left(\mathcal{C}\left(\mathcal{F}\left(\widetilde{x}_{t}\right)\right),\hat{y}_{t}^{i}\right). \tag{14}\]
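The attack-defense alternation of Eqs. 13-14 can be sketched as follows. The perturbation budget `eps` and the pseudo-labeling rule are illustrative choices of our own, image-shaped inputs are assumed, and `opt_model` is assumed to cover the parameters of both \(\mathcal{F}\) and \(\mathcal{C}\).

```python
import torch
import torch.nn.functional as F

def attack_defense_round(F_net, C_net, G_net, opt_g, opt_model, x_t, eps=0.03):
    """One adversarial attack (Eq. 13) followed by one defense step (Eq. 14)."""
    with torch.no_grad():
        y_hat = C_net(F_net(x_t)).argmax(dim=1)          # pseudo labels

    # attack: the generator crafts a norm-bounded perturbation that
    # maximizes the classification loss on the pseudo labels
    delta = G_net(x_t)
    norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
    x_adv = x_t + eps * delta / norm
    loss_g = -F.cross_entropy(C_net(F_net(x_adv)), y_hat)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # defense: the model minimizes the same loss on the adversarial examples
    loss_m = F.cross_entropy(C_net(F_net(x_adv.detach())), y_hat)
    opt_model.zero_grad(); loss_m.backward(); opt_model.step()
```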
In general, the key point of this class of methods is the design of perturbations. The first point is that the magnitude of the perturbation should be appropriate, which means enhancing the model's generalization ability without destroying the original properties of the data or the model. The second point is that the perturbation should preferably reflect the characteristics of the target domain to accomplish the adaptation task on the target domain.
### _Image-based Information Extraction_
The information contained in an image can be mainly divided into two parts [68]: one is the content information related to the image label, and the other is the style information that is often domain-specific. Neighborhood Clustering starts from the content information of the target images and builds on the observation that the target data itself has a clear structure and clustering; thus, consistency between neighboring nodes is encouraged to decrease the intra-class distance and increase the inter-class distance in the feature space, so as to reduce the classification difficulty, since misclassification mostly appears at the decision boundary of the model [69]. Image Style Translation starts from the style information of the image and converts the target image to the source style without changing the content information, so that it can be better recognized by the source classifier.
#### 3.2.1 Neighborhood Clustering
The key idea behind Neighborhood Clustering is that although the feature distribution in the target domain may deviate from that in the source domain, the target features can still form clear clusters in the feature space, and features in the same cluster will come from the same category. Therefore, if consistency between neighboring nodes in the feature space is encouraged, feature points from the same cluster move jointly towards a common category.
G-SFDA [70] is a pioneering work in this category, which proposes Local Structure Clustering (LSC) for consistency constraints. The methodology can be represented as follows:
\[\begin{split}\mathcal{L}_{LSC}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K}\log\left[p\left(x_{i}\right)\cdot\mathcal{B}_{S}\left(\mathcal{N}_{k}\right)\right]+\sum_{c=1}^{C}\mathrm{KL}\left(\overline{p}_{c}\,\|\,q_{c}\right),\\ \mathcal{N}_{\{1,\cdots,K\}}=\{\mathcal{B}_{F}^{j}\,|\,top\text{-}K\left(cos\left(\mathcal{F}\left(x_{i}\right),\mathcal{B}_{F}^{j}\right),\forall\mathcal{B}_{F}^{j}\in\mathcal{B}_{F}\right)\},\\ \overline{p}_{c}=\frac{1}{n}\sum_{i=1}^{n}p_{c}\left(x_{i}\right),\ \text{and}\ q_{\{c=1,\cdots,C\}}=\frac{1}{C}.\end{split} \tag{15}\]
From Eq. 15, LSC first finds the top-K features most similar to \(x_{i}\) in the feature bank \(\mathcal{B}_{F}\) by computing cosine similarity, and then constrains the prediction of \(x_{i}\) to be consistent with the prediction scores of these K neighbors stored in the score bank \(\mathcal{B}_{S}\). The second term of \(\mathcal{L}_{LSC}\) prevents the degenerate solution [71, 72] by encouraging prediction balance. CPGA [43] simplifies this step by constraining only the normalized similarity between neighbor nodes as an auxiliary loss, which also helps improve accuracy.
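A minimal sketch of LSC with memory banks is shown below; the neighborhood size and the uniform prior \(q\) follow the text, while the names and the bank update policy are our own simplifications.

```python
import math
import torch
import torch.nn.functional as F

def lsc_loss(feats, probs, feat_bank, score_bank, k=5):
    """Local structure clustering loss of Eq. 15 (sketch)."""
    sim = F.normalize(feats, dim=1) @ F.normalize(feat_bank, dim=1).T
    idx = sim.topk(k, dim=1).indices                 # top-K nearest neighbours
    nbr_scores = score_bank[idx]                     # (B, K, C)
    # first term: agree with the predictions of the K neighbours
    dot = (probs.unsqueeze(1) * nbr_scores).sum(-1).clamp_min(1e-8)
    loss_nbr = -dot.log().sum(dim=1).mean()
    # second term: KL(mean prediction || uniform) against degenerate solutions
    mean_p = probs.mean(dim=0)
    loss_div = (mean_p * (mean_p.clamp_min(1e-8).log()
                          + math.log(probs.size(1)))).sum()
    return loss_nbr + loss_div
```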
Based on this, NRC [73] further refines the neighbor nodes into reciprocal and non-reciprocal neighbors based on whether they are nearest neighbors of each other, i.e., \(\left(j\in\mathcal{N}_{K}^{i}\right)\wedge\left(i\in\mathcal{N}_{K}^{j}\right)\), and builds an expanded neighborhood to aggregate more information in the common structure, thereby weighting the affinity of different neighbors as:
\[\begin{split}\mathcal{L}_{\mathcal{N}}=-\frac{1}{n_{t}}\sum_{i}\sum_{k\in\mathcal{N}_{K}^{i}}A_{ik}\mathcal{B}_{S,k}^{T}p_{i},\\ \mathcal{L}_{E}=-\frac{1}{n_{t}}\sum_{i}\sum_{k\in\mathcal{N}_{K}^{i}}\sum_{m\in E_{M}^{k}}r\mathcal{B}_{S,m}^{T}p_{i},\end{split} \tag{16}\]
where \(A_{ik}=1\) when \(i\) and \(k\) are reciprocal neighbors, and otherwise \(A_{ik}=r=0.1\). Similarly, Tang _et al._[74] construct semantic neighbors on the manifold to portray more complete geometric information, and Tian _et al._[75] combine with pseudo-labeling techniques to obtain structure-preserving pseudo-labels from the weighted average predictions of neighboring nodes. However, previous methods have only considered maintaining the consistency of the same clusters (reducing the intra-class distance) while ignoring the dissimilarity of different clusters (increasing the inter-class distance). AaD [76] achieves both by defining two likelihood functions:
\[\begin{split} P\left(\mathbb{C}_{i}\right)=\prod_{j\in\mathbb{C} _{i}}p_{ij}&=\prod_{j\in\mathbb{C}_{i}}\frac{e^{p_{i}^{T}p_{j}}}{ \sum_{k=1}^{N_{t}}e^{p_{i}^{T}p_{k}}},\\ P\left(\mathbb{B}_{i}\right)=\prod_{j\in\mathbb{B}_{i}}p_{ij}& =\prod_{j\in\mathbb{B}_{i}}\frac{e^{p_{i}^{T}p_{j}}}{\sum_{k=1}^{N_{t}}e^{p_{ i}^{T}p_{k}}},\end{split} \tag{17}\]
where neighbor set \(\mathbb{C}_{i}\) includes \(K\)-nearest neighbors of node \(i\), and background set \(\mathbb{B}_{i}\) includes the nodes which are not the neighbor of node \(i\). Once these two likelihood functions are obtained, constraints can be placed on both intra-cluster and inter-cluster by the negative log-likelihood as \(L_{i}\left(\mathbb{C}_{i},\mathbb{B}_{i}\right)=-\log\frac{P\left(\mathbb{C}_{ i}\right)}{P\left(\mathbb{B}_{i}\right)}\).
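Since the softmax normalizer \(\sum_{k}e^{p_{i}^{T}p_{k}}\) is shared between \(P(\mathbb{C}_{i})\) and \(P(\mathbb{B}_{i})\), the negative log-likelihood ratio reduces, up to a constant, to attracting neighbor predictions and dispersing background ones. The sketch below illustrates this simplified form; the banks and index sets are assumed to be maintained elsewhere, and the dispersion weight `lam` is an illustrative knob.

```python
import torch

def aad_loss(probs, bank_S, neigh_idx, bg_idx, lam=1.0):
    """Sketch of L_i = -log(P(C_i)/P(B_i)) in its simplified dot-product form.

    probs:     (B, C) softmax outputs of the current batch
    bank_S:    (N, C) score bank over the whole target set
    neigh_idx: (B, K) indices of the K-nearest neighbors of each sample
    bg_idx:    (B, M) indices of sampled background (non-neighbor) points
    """
    attract = torch.einsum('bc,bkc->bk', probs, bank_S[neigh_idx]).sum(1)
    disperse = torch.einsum('bc,bmc->bm', probs, bank_S[bg_idx]).sum(1)
    return (-attract + lam * disperse).mean()
```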
In addition, Neighborhood Clustering also appears widely in a new setting called active SFDA, i.e., selectively labeling a small portion of data, which has two main benefits: (1) a small portion of difficult data can be selectively labeled by constructing a neighbor graph, e.g., ELPT [77] selects ungraphable data on the basis of KNN, while MHPL [32] considers data satisfying the properties of neighbor-chaotic, individual-different, and target-like as the most effective choice; (2) the discriminability of the target domain can also be improved by label propagation [78, 79] among neighbor nodes. Overall, the key point of Neighborhood Clustering methods lies in how to mine more structural information from the unlabeled target data, and since unsupervised clustering is inherently suited to the SFDA setting, where source data is unavailable, there remains wide scope for this class of methods in the future.
#### 3.2.2 Image Style Translation
Image Style Translation [80, 81] aims to render the same content in different styles and generally involves two loss functions. The first, the content loss, ensures consistent semantic information before and after translation. The second, the transfer loss, is based on feature statistics of multiple intermediate layers and performs the style translation itself. In previous methods, most feature statistics are computed in the form of a Gram matrix [68] or via instance normalization [82], which requires access to source-domain style data and is therefore not feasible in the SFDA setting. Hou _et al._ instead used the mean \(\mu_{stored}^{n}\) and variance \(\sigma_{stored}^{n}\) of the image batches for the \(n\)-th layer, stored in the batch normalization (BN) layers [83], as a style representative of the source domain and proposed a source-free image style translation as follows:
\[\mathcal{L}_{content}=\left\|\mathcal{F}^{N}\left(\widetilde{x}_{ t}\right)-\mathcal{F}^{N}\left(x_{t}\right)\right\|_{2}, \tag{18}\] \[\mathcal{L}_{style}= \frac{1}{N}\sum_{n=1}^{N}\left\|\mu_{current}^{n}-\mu_{stored}^{n }\right\|_{2}+\left\|\sigma_{current}^{n}-\sigma_{stored}^{n}\right\|_{2},\]
where the feature maps in the source classifier have \(N\) layers, \(\widetilde{x}_{t}\) denotes the source-styled image through the generator.
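A minimal sketch of Eq. 18's style term may help: the source style is represented by the running means and variances frozen in the source model's BN layers, while the current statistics would be collected (e.g., via forward hooks) from the translated image \(\widetilde{x}_{t}\). The data layout is an assumption for illustration.

```python
import torch

def style_loss(current_stats, stored_stats):
    """Eq. 18 style term: match per-layer BN statistics of the source model.

    Both arguments are lists of (mean, std) tensor pairs, one per BN layer;
    `stored_stats` holds the frozen source statistics (running mean/var).
    """
    loss = torch.zeros(())
    for (mu_c, sig_c), (mu_s, sig_s) in zip(current_stats, stored_stats):
        loss = loss + torch.norm(mu_c - mu_s, p=2) + torch.norm(sig_c - sig_s, p=2)
    return loss / len(stored_stats)
```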
CPSS [84], on the other hand, focuses on improving the robustness of the model by augmenting the samples with diverse styles based on AdaIN [82] during training. A model that is insensitive to changes in image style can be obtained and thus has a strong generalization capability. The style-swap operations applied during training can be expressed as:
\[\mathcal{L}_{intra}=\sigma\left(\widetilde{F}_{i,j}\right)\left( \frac{F_{i,j}-\mu\left(F_{i,j}\right)}{\sigma\left(F_{i,j}\right)}\right)+\mu \left(\widetilde{F}_{i,j}\right), \tag{19}\] \[\mathcal{L}_{inter}=\sigma\left(\widetilde{F}_{k,i,j}\right) \left(\frac{F_{k,i,j}-\mu\left(F_{k,i,j}\right)}{\sigma\left(F_{k,i,j}\right) }\right)+\mu\left(\widetilde{F}_{k,i,j}\right),\]
where \(\mathcal{L}_{intra}\) and \(\mathcal{L}_{inter}\) represent the intra-image style swap and inter-image style swap among \(k\) images. \(\widetilde{F}_{i,j}\) and \(\widetilde{F}_{k,i,j}\) denote the shuffled patch that provides the style feature and one image can be divided into \(n_{h}\times n_{w}\) patches as:
\[F=\begin{bmatrix}F_{1,1}&\cdots&F_{1,n_{w}}\\ \vdots&\ddots&\vdots\\ F_{n_{h},1}&\cdots&F_{n_{h},n_{w}}\end{bmatrix}\]
Similar to CPSS, SI-SFDA [85] simulates learning representations from corrupted images in medical image segmentation by randomly masking some patches with a black background. How to make better use of the BN layers is also a concern for this class of methods: Ishii _et al._ [86] implicitly implement style translation by aligning the statistics in the BN layers with the target domain distribution. As of now, this class of methods is still relatively rare and is mostly found in semantic segmentation.
## 4 Model-based methods
Unlike data-based methods, which focus on data generation or on exploring data properties, model-based methods separate the model into several sub-modules and then adjust the parameters of some of them for domain adaptation. In this category, self-training is the dominant paradigm, and indeed the most popular one in SFDA research as a whole. Self-training methods mostly use auxiliary modules, in combination with other techniques, to improve model robustness and preserve source-domain knowledge; we classify them separately to reflect our intention of modularizing the methods for explicit insight in this survey.
### _Self-training_
Since the supervision of the source data is unavailable, self-training, also known as self-supervised learning, makes use of the model's predictions on the unlabeled target domain to refine the model in a self-supervised manner. In SFDA, most self-training methods are carried out based on the ideas of pseudo-labeling, entropy minimization, and contrastive learning.
#### 4.1.1 Pseudo Labeling
Without supervision, the most intuitive idea is to label the target samples based on the predictions of the source model, and then perform self-supervised learning based on these pseudo-labels, e.g., SHOT [7]. Specifically, this process can be divided into three steps: the first step, **Prototype Generation**, generates class prototypes based on the features of some high-confidence samples or of all samples; the second step, **Pseudo-label Assignment**, assigns pseudo-labels by the distance or similarity between the remaining samples and the class prototypes; and the third step, **Pseudo-label Filtering**, filters out noisy labels to keep the label set pure. It is worth noting that neither the first nor the third step is present in every method; for example, a sample can be pseudo-labeled directly by the most probable class predicted by the source classifier. Nevertheless, we argue that the three steps are tightly related and expand on each of them below.
**Prototype Generation.** One of the simplest ways is to select class prototypes by self-entropy [87, 88], i.e., to select samples in each class whose self-entropy is below a certain threshold. Ding _et al._ [51] define the weights of the source classifier as the class prototypes and find that the mean accuracy is higher than with the entropy criterion. However, the class prototypes selected in this way may not be representative. In a similar vein, some methods [89, 77] are inspired by active learning and consider target samples with higher free energy to be more representative of the target domain distribution, using them as class prototypes. Another popular approach is to calculate the centroid [92, 91, 7, 90] of each class based on DeepCluster [93], which can be formulated as follows:
\[c_{k}=\frac{\sum_{x_{t}\in\mathcal{X}_{t}}\delta_{k}\left(\mathcal{F}\circ\mathcal{G}_{t}\left(x_{t}\right)\right)\mathcal{G}_{t}\left(x_{t}\right)}{\sum_{x_{t}\in\mathcal{X}_{t}}\delta_{k}\left(\mathcal{F}\circ\mathcal{G}_{t}\left(x_{t}\right)\right)}, \tag{20}\]
where \(c_{k}\) denotes the centroid of the \(k\)-th class, and \(\delta_{k}\) denotes the \(k\)-th element of the soft-max output. Some recent
works [51, 94] argue that using only one prototype does not fully characterize the class, so multiple prototypes are generated in each class. In general, the aim of this step is to attain a class prototype \(p_{k}\) that can represent the distribution of each category in the target domain.
**Pseudo-label Assignment.** Once the class prototype is obtained, the pseudo-label of each sample can be obtained by comparing it with different class prototypes, either in terms of similarity or distance in the feature space:
\[\widetilde{y}_{t}=\arg\min_{k}\mathcal{L}_{pse}\left(\mathcal{G}\left(x_{t}\right),p_{k}\right), \tag{21}\]
where \(\mathcal{L}_{pse}\) represents the metric function of how the pseudo-label is given. Note that when no class prototype is generated, the equation degenerates to
\[\widetilde{y}_{t}=\arg\max_{k}\delta_{k}\left(\mathcal{F}\circ\mathcal{G}\left(x_{t}\right)\right), \tag{22}\]
where the pseudo-label is assigned directly by the most confident category of the model outputs. Moreover, the pseudo-labeling procedure itself is worth discussing. The pioneering work SHOT [7] must process all target data before assigning pseudo-labels, which is time-consuming; based on this observation, BMD [91] proposes a dynamic pseudo-labeling strategy that updates the pseudo-labels during domain adaptation, and Shen _et al._ [95] label only a subset of the target domain to ensure accuracy. In this labeling process, maintaining the balance between categories is also a major concern: Qu _et al._ [91] use the idea of multiple instance learning [96, 97] to form a balanced global sampling strategy, You _et al._ [98] set category-specific thresholds to keep the number of samples per category as consistent as possible, and Li _et al._ [99] observed that the model may be biased towards majority categories, proposing an imbalanced SFDA strategy with secondary label correction.
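The following sketch combines Eqs. 20-21 in the spirit of SHOT: class centroids are computed as prediction-weighted means of the target features, and each sample receives the label of its nearest centroid under cosine distance. It is a simplified single-pass illustration, not the official implementation (which, e.g., iterates the centroid update).

```python
import torch
import torch.nn.functional as F

def assign_pseudo_labels(feats, probs, eps=1e-8):
    """Centroid-based pseudo-label assignment (Eqs. 20-21), a hedged sketch.

    feats: (N, d) target features, probs: (N, C) softmax predictions.
    """
    # Eq. 20: prediction-weighted class centroids
    centroids = (probs.t() @ feats) / probs.sum(0).unsqueeze(1).clamp_min(eps)  # (C, d)
    # Eq. 21: label of the nearest centroid in cosine distance
    dist = 1 - F.normalize(feats, dim=1) @ F.normalize(centroids, dim=1).t()    # (N, C)
    return dist.argmin(dim=1)
```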
**Pseudo-label Filtering.** Due to the presence of the domain shift, the outputs of the model inevitably contain noise, i.e., incorrect pseudo-labels, as illustrated in Fig. 5. Therefore, false-label filtering [56, 87, 100] and noisy-label learning [94, 101, 102] are two important directions for improving the accuracy of pseudo-labels. False-label filtering rejects unreliable pseudo-labels by designing a suitable mechanism. One option is a rule-based mechanism: Kim _et al._ [87] propose an end-to-end filter that recognizes a sample as reliable only when its Hausdorff distance to its most similar class prototype is smaller than its distance to the second most similar prototype. Another option is a learned filtering network: Chu _et al._ [56] exploit the fact that a weak model is more likely to identify hard samples [103] and propose an additional untrained weak network to filter out incorrect pseudo-labels, while Yang _et al._ [100] examine the stability of negative learning and propose a multi-class negative learning strategy that adaptively learns the filtering threshold for pseudo-label selection. Noisy-label learning, on the other hand, goes beyond filtering wrong pseudo-labels and also extracts useful information from the noisy labels for self-refinement. SHOT++ [102], an extension of SHOT [7], attempts to improve the reliability of low-confidence samples via MixMatch [104] between high- and low-confidence samples for information propagation. NEL [94] introduces ensemble learning [105, 106] into the SFDA setting: it first performs data augmentation on the target samples from multiple angles based on input and feedback, and then uses negative ensemble learning across multiple versions to refine the pseudo-labels. Across the whole pseudo-labeling pipeline, pseudo-label filtering is the key step for improving label accuracy, but it also brings additional resource overhead, so efficient pseudo-label filtering techniques remain worth investigating.
#### 4.1.2 Entropy Minimization
Entropy minimization has profound applications in semi-supervised learning [107, 108] and unsupervised learning [16, 109] methods, and was first introduced to the SFDA setting by [7]. Previous unsupervised domain adaptation mostly improves the adaptive capability of the model in the target domain by aligning the feature distributions in the source and target domains, but this is not feasible without the source domain data. Therefore, another idea is to start directly from the results, assuming that a model with adaptive ability has been obtained, then the model outputs of each sample should be deterministic, i.e., entropy minimization, and this ideal result constraint can be inversely employed to guide the optimization of the model. The entropy minimization loss can be expressed as:
\[\mathcal{L}_{ent}=-\mathbb{E}_{x_{t}\in\mathcal{X}_{t}}\sum_{k=1}^{K}\delta_{k} \left(p_{t}\left(x_{t}\right)\right)\log\delta_{k}\left(p_{t}\left(x_{t}\right) \right), \tag{23}\]
where \(p_{t}\) denotes the model's prediction with respect to sample \(x_{t}\). Since the entropy minimization loss is easy to implement and can readily be combined with other methods, such as pseudo-labeling, as an auxiliary loss term, it has been rapidly adopted for image classification [110, 111], image segmentation [12, 112, 113],
Fig. 5: An illustration of the process in the Pseudo Labeling methods. Circles with different colors indicate samples of different classes. Circles with solid margins indicate samples with pseudo labels (a mismatch between the margin color and the filled color indicates a wrong pseudo label), and triangles indicate class prototypes. The class prototypes are first generated based on the distribution of samples in the feature space, then pseudo-labels are assigned to each sample based on the class prototypes, and finally, the noisy labels are filtered. In particular, some of the methods adjust the position of the class prototypes again in the first step based on the results.
medical image analysis [113] and blind image quality assessment [114] in the SFDA setting. However, some works [115, 116] have found that entropy minimization loss may lead to a trivial solution during training, i.e., all predictions are biased towards a certain class, so the batch entropy maximization loss can also be used to ensure the diversity of predictions:
\[\mathcal{L}_{div}=\sum_{k=1}^{K}p_{k}\log p_{k}, \tag{24}\]
where \(p_{k}=\mathbb{E}_{x_{t}\in\mathcal{X}_{t}}\left[\delta\left(p_{t}^{k}\left(x_{t}\right)\right)\right]\) is the class-wise mean of the soft-max outputs accumulated over one batch. The rationale is that, within a batch, the number of samples in each category should be balanced, and the uniform distribution attains the maximum entropy. In many SFDA methods, Eq. 23 and Eq. 24 are combined to form the information maximization loss:
\[\mathcal{L}_{IM}=\mathcal{L}_{ent}+\mathcal{L}_{div}. \tag{25}\]
Based on \(\mathcal{L}_{IM}\), Ahmed _et al._ [31] weight the different source domains in the multi-source setting to obtain weighted information maximization, and Mao _et al._ [111] incorporate neighbor-node information for graph adaptation. However, beyond information maximization, innovations based on entropy minimization are relatively scarce at present, and most approaches simply use it directly as an auxiliary loss to improve accuracy. The key question in entropy minimization is how to constrain the model's outputs on the target domain at different levels, such as individual versus whole and deterministic versus diverse; which output constraints are most reasonable may be a topic worth more in-depth and detailed study in the future.
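For reference, Eqs. 23-25 amount to only a few lines; a hedged PyTorch sketch:

```python
import torch

def information_maximization(probs, eps=1e-8):
    """L_IM = L_ent + L_div (Eqs. 23-25) on a batch of softmax outputs."""
    p = probs.clamp_min(eps)
    l_ent = -(p * p.log()).sum(dim=1).mean()   # Eq. 23: per-sample entropy
    mean_p = p.mean(dim=0)                     # class marginal over the batch
    l_div = (mean_p * mean_p.log()).sum()      # Eq. 24: negative marginal entropy
    return l_ent + l_div                       # Eq. 25
```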
#### 4.1.3 Contrastive Learning
Self-supervised contrastive learning [117, 118] is a common route for unsupervised representation learning, which focuses on learning discriminative representations by increasing the distance between negative pairs while gathering positive pairs. This approach is characterized by the construction of positive and negative sample pairs for the same sample, and can be mainly categorized as memory-bank based [119], encoder based [117], and mini-batch based [120, 121]. One of the most typical contrastive losses is the InfoNCE loss [44, 117]:
\[\mathcal{L}_{Info}=-\sum_{v^{+}\in V^{+}}\log\frac{\exp\left(u^{T}v^{+}/\tau\right)}{\exp\left(u^{T}v^{+}/\tau\right)+\sum_{v^{-}\in V^{-}}\exp\left(u^{T}v^{-}/\tau\right)}, \tag{26}\]
where \(V^{+}\), \(V^{-}\) are the sets of positive and negative pairs for the same sample \(u\) respectively, and \(\tau\) is the temperature parameter. In unsupervised domain adaptation, positive and negative sample pairs are usually constructed with the target domain sample as the key and the source domain sample as the query. However, the latter is not available in the source-free setting, so how to construct queries that are representative of the source domain is the central problem of this class of SFDA methods, which can be mainly divided into three ways. The first manner is to use the source classifier's weights as prototype features for each class of source samples [90]:
\[\mathcal{L}_{cdc}^{1}=-\sum_{m=1}^{M}\mathbb{I}_{\bar{y}_{u}=m}\log\frac{\exp \left(u^{T}w_{s}^{m}/\tau\right)}{\sum_{j=1}^{M}\exp\left(u^{T}w_{s}^{j}/\tau \right)}, \tag{27}\]
where the classifier weight \(W_{s}=[w_{s}^{1},\cdots,w_{s}^{M}]\), and \(\mathbb{I}_{\bar{y}_{u}=m}\) is the pseudo-label indicator for target sample \(u\). Similarly, [43] uses the generated prototypes for contrastive learning and introduces a weighting loss to enhance the reliability of the pseudo-labels. The second way is to increase the similarity of positive pairs in the current batch for contrastive matching [10]:
\[\mathcal{L}_{cdc}^{2}=-\log\frac{\exp\left(s_{ij}\right)}{\sum_{v=1}^{b} \mathbb{I}_{v\neq i}|\gamma_{iv}|\exp\left(s_{iv}\right)}, \tag{28}\]
where \(s_{iv}=[\sigma(p_{i})]^{T}\sigma(p_{v})\) denotes the similarity of the \(i\)-th and \(v\)-th samples after the soft-max operation on their model outputs \(p_{i}\), \(p_{v}\); \(s_{ij}\) refers to a positive pair, and \(\gamma_{iv}\) is a threshold function used to determine whether two samples belong to the same class. The last class of approaches maintains a memory bank to store the outputs of historical models and thereby constructs positive and negative sample pairs [122, 123]. HCID [122] uses the current model to encode the samples of the current batch as queries \(q^{t}=M^{t}\left(x_{q}\right)\), and uses historical models to encode the previously stored samples as keys \(k_{n}^{t-m}=M^{t-m}\left(x_{k_{n}}\right)\):
\[\mathcal{L}_{cdc}^{3}=-\sum_{x_{q}\in X_{t}}\log\frac{\exp\left((q^{t})^{T}k_{+}^{t-m}/\tau\right)\gamma_{+}^{t-m}}{\sum_{i=0}^{N}\exp\left((q^{t})^{T}k_{i}^{t-m}/\tau\right)\gamma_{i}^{t-m}}, \tag{29}\]
where \(N\) is the number of stored keys, \(\gamma\) is the parameter used to measure the reliability of the keys, and one implementation is the classification entropy. In general, part of the processing of this class of source-free methods is similar to the basic idea of data-based SFDA, i.e., using generative or non-generative approaches to compensate for missing source data.
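As an illustration of the first construction (Eq. 27), the loss can be implemented as a cross-entropy over similarities to the (frozen) classifier weights, which act as source prototypes; the temperature value and all names are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive(feats, pseudo_labels, classifier_weight, tau=0.07):
    """Eq. 27: target features contrasted against classifier-weight prototypes.

    feats: (B, d), pseudo_labels: (B,), classifier_weight: (C, d) rows w_s^m.
    """
    logits = feats @ classifier_weight.t() / tau   # similarity u^T w_s^m / tau
    return F.cross_entropy(logits, pseudo_labels)  # -log softmax at the pseudo label
```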
### _Self-attention_
In computer vision tasks, localizing object regions plays an important role in domain adaptation [124], but traditional CNN models tend to capture local domain-specific information, such as background information, which may not be helpful for focusing on the objects we really care about. Therefore, the self-attention mechanism [125, 126] is introduced into the SFDA setting:
\[\mathrm{Attn}_{self}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V, \tag{30}\]
where \(Q\), \(K\), \(V\) denote the query, key, and value respectively, and \(d_{k}\) is the dimension of \(K\). To alleviate the loss of source-domain context, some methods [122, 77, 112] equip the traditional feature extractor directly with a self-attention module. Eq. 30 can also be extended to cross-domain attention [127]:
\[\mathrm{Attn}_{cross}(Q_{s},K_{t},V_{t})=\mathrm{softmax}\left(\frac{Q_{s}K_{t}^{T}}{\sqrt{d_{k}}}\right)V_{t}, \tag{31}\]
where \(Q_{s}\) is from the source domain, \(K_{t}\) and \(V_{t}\) are from the target domain, and together they form a sample pair. Based on this, CADX [128] divides the target domain images into support images and query images under the source-free setting, and upgrades the original patch-to-patch operation to image-to-image in order to capture holistic representations and reduce the computational burden. In addition, some approaches [11, 129] further process the features from both spatial-attention and channel-attention perspectives to enrich the contextual semantics of the representations. Currently, attention-based SFDA methods are still relatively rare, especially transformer-based ones.
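Both Eq. 30 and Eq. 31 share the same scaled dot-product core; a minimal sketch follows, where the cross-domain variant simply draws \(Q\) and \((K, V)\) from different domains.

```python
import torch

def scaled_dot_product_attention(Q, K, V):
    """Eqs. 30-31: softmax(Q K^T / sqrt(d_k)) V; in the cross-attention case,
    Q may come from a different domain than K and V."""
    d_k = K.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ V
```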
## 5 Comparison
Since classification tasks constitute the main body of existing source-free domain adaptation methods, we compare current leading SFDA methods on the three most widely used classification datasets in this section. In particular, we modularize these methods according to Section 3 and Section 4 to reflect the effectiveness of the different modules.
### _Comparison Datasets_
The classification datasets used in this paper include Digits [39], VisDA-C [130], Office-31 [131], Office-Home [132], and DomainNet [133]; their statistics are aggregated in Table III. However, due to the very little room for improvement on the Digits dataset (98.9% in [7]) and the difficulty of the DomainNet dataset, the other three datasets are more popular in the community. We therefore mainly summarize the experimental results on these three datasets. Here are their details.
**Office-31** is the dominant benchmark dataset in visual transfer learning. It contains 4,652 images of 31 types of objects commonly found in office environments, such as laptops, filing cabinets, and keyboards. The images are drawn from Amazon (online e-commerce images), Webcam (low-resolution images taken by a webcam), and DSLR (high-resolution images taken by a DSLR camera). There are 2,817 images in the Amazon domain, with an average of 90 images per category and a single image background; 795 images in the Webcam domain, exhibiting significant noise, color, and white-balance artifacts; and 498 images in the DSLR domain, with 5 objects per category, each captured an average of 3 times from different viewpoints.
**Office-Home** is a baseline dataset for domain adaptation that contains 4 domains, each consisting of 65 categories. The four domains are Art (artistic images in the form of drawings, paintings, decorations, etc.), Clipart (a collection of clipart images), Product (images of objects without backgrounds), and Real World (images of objects taken with a regular camera). It contains 15,500 images, with an average of about 70 images and a maximum of 99 images per category.
**VisDA-C** is a large-scale dataset for domain adaptation from simulated to real environments, covering 12 common object classes. The source domain consists of synthetic images generated by rendering 3D models, while the target domain consists of real images.
To better represent the modules included in each method, we categorize the SFDA methods into four classes: Domain-based Reconstruction (DR), Image-based Information Extraction (IIE), Self-Training (ST), and Self-Attention (SA). More finely, we then divide these four types of methods into nine modules, which are Virtual Domain Generation (**M\({}_{vdg}\)**), Intra-domain Adversarial Alignment (**M\({}_{iaa}\)**), Perturbed Domain Supervision (**M\({}_{pds}\)**), Neighborhood Clustering (**M\({}_{nc}\)**), Image Style Translation (**M\({}_{ist}\)**), Pseudo Labeling (**M\({}_{pl}\)**), Entropy Minimization (**M\({}_{em}\)**), Contrastive Learning (**M\({}_{cl}\)**) and Self-Attention (**M\({}_{sa}\)**).
**SFDA methods show great potential.** Across the three datasets, the highest accuracies of the SFDA methods are only 3.6% (Kundu _et al._ + SHOT++ [139, 140] on Office), 2.7% (TransDA [124] on Office-Home) and 0.9% (BMD + SHOT++ [91] on VisDA-C) lower than the accuracy under target supervision (oracle). To some extent, SFDA shows a tendency to outperform unsupervised domain adaptation methods, both in terms of the number of works and accuracy, which not only reflects the great potential of this field, but also indicates the need for a comprehensive SFDA survey.
**Self-training is still the most popular research line at the moment.** Most SFDA methods involve a self-training strategy. In addition, entropy minimization [136, 102, 63, 7, 6] as an auxiliary loss can easily be combined with most SFDA methods to improve the discriminability of the target domain representation, while contrastive learning [10, 66, 43] generally works together with the pseudo-labeling module to play the role of domain alignment. In the absence of source data and target labels, self-supervision by means of pseudo-labeling of the target data is the most common method. However, this inevitably introduces the problems of noisy labels and error accumulation. In this regard, some approaches translate the SFDA problem into a noisy-label learning problem [94, 56] to improve model performance, while others [138, 140] try to bypass the noisy-label problem and implicitly regularize the self-training direction of the target domain, achieving satisfying results. From the experimental results on two of the three datasets, the methods that achieve the highest accuracy all use the pseudo-labeling method SHOT++ [102] as the baseline; for instance, Kundu _et al._ [139] achieves 90.7% on Office, and BMD [91] achieves 88.7% on VisDA. These observations show that pseudo-labeling is a versatile and effective technique and a strong baseline for improvement.
**Domain-based reconstruction is commonly used in SFDA.** The intuition behind domain-based reconstruction is quite clear: it aims to reconstruct a new source or target domain to replace the missing source domain for supervision. Among this category, intra-domain adversarial alignment methods such as BAIT [55] and 3C-GAN [41] came first. However, their influence has remained limited, probably because the source-similar samples selected in the target domain contain substantial noise and are not representative enough, which is also why D-MCD [56] targets the de-noising problem. Attention to virtual domain generation is on the rise, probably because virtual domain generation methods [137, 51, 138] can be closely combined with self-training methods that supervise the target domain, together further improving performance, e.g., reaching the highest accuracy of 90.7% (Kundu _et al._ [139]) on the Office-31 dataset.
**Neighborhood clustering seems to be effective in image-based information extraction.** Compared with image style transformation methods [141], neighbor clustering is obviously more effective. For example, SCLM [74] achieves the highest accuracy on Cl\(\rightarrow\)Ar and Cl\(\rightarrow\)Rw transfer tasks on Office-Home. The reason may be that neighbor clustering can better preserve the underlying structural information
\begin{table}
\begin{tabular}{c|cccc|c|cccccc|c} \hline Method (Source\(\rightarrow\)Target) & DR & IIE & ST & SA & Modules & A \(\rightarrow\) W & D \(\rightarrow\) W & W \(\rightarrow\) D & A \(\rightarrow\) D & D \(\rightarrow\) A & W \(\rightarrow\) A & Avg. \\ \hline
Source-only [134] & - & - & - & - & - & 68.4 & 96.7 & 99.3 & 68.9 & 62.5 & 60.7 & 76.1 \\
Target-supervised [102] & - & - & - & - & - & 98.7 & 98.7 & 98.0 & 98.0 & 98.7 & 86.0 & 94.3 \\ \hline
SHOT [7] & & & ✓ & & \(\mathbf{M}_{pl}+\mathbf{M}_{em}\) & 90.1 & 98.4 & 99.9 & 94.0 & 74.7 & 74.3 & 88.6 \\
3C-GAN [41] & ✓ & & & & \(\mathbf{M}_{iaa}\) & 93.7 & 98.5 & 99.8 & 92.7 & 75.3 & 77.8 & 89.6 \\
BAIT [55] & ✓ & & & & \(\mathbf{M}_{iaa}\) & 94.6 & 98.1 & **100.0** & 92.0 & 74.6 & 75.2 & 89.4 \\ \hline
SHOT++ [102] & & & ✓ & & \(\mathbf{M}_{pl}+\mathbf{M}_{em}\) & 90.4 & 98.7 & 99.9 & 94.3 & 76.2 & 75.8 & 89.2 \\
Kim _et al._ [87] & & & ✓ & & \(\mathbf{M}_{pl}\) & 91.1 & 98.2 & 99.5 & 92.2 & 71.0 & 71.2 & 87.2 \\
A\({}^{2}\)Net [10] & ✓ & & ✓ & & \(\mathbf{M}_{iaa}+\mathbf{M}_{cl}\) & 94.0 & 99.2 & **100.0** & 94.5 & 76.7 & 76.1 & 90.1 \\
NRC [73] & & ✓ & & & \(\mathbf{M}_{nc}\) & 90.8 & 99.0 & **100.0** & 96.0 & 75.3 & 75.0 & 89.4 \\
ASL [135] & & & ✓ & & \(\mathbf{M}_{pl}+\mathbf{M}_{em}\) & 94.1 & 98.4 & 99.8 & 93.4 & 76.9 & 75.0 & 89.5 \\
TransDA [124] & & & ✓ & ✓ & \(\mathbf{M}_{pl}+\mathbf{M}_{sa}\) & 95.0 & **99.3** & 99.6 & **97.2** & 73.7 & **79.3** & **90.7** \\
CPGA [43] & & ✓ & ✓ & & \(\mathbf{M}_{nc}+\mathbf{M}_{pl}+\mathbf{M}_{cl}\) & 94.1 & 98.4 & 99.8 & 94.4 & 76.0 & 76.6 & 89.9 \\
AAA [66] & ✓ & & ✓ & & \(\mathbf{M}_{pds}+\mathbf{M}_{pl}+\mathbf{M}_{cl}\) & 94.2 & 98.1 & 99.8 & 95.6 & 75.6 & 76.0 & 89.9 \\
VDM-DA [35] & ✓ & & ✓ & & \(\mathbf{M}_{vdg}+\mathbf{M}_{em}\) & 94.1 & 98.0 & **100.0** & 93.2 & 75.8 & 77.1 & 89.7 \\
HCL+SHOT [122] & & & ✓ & & \(\mathbf{M}_{pl}+\mathbf{M}_{cl}\) & 92.5 & 98.2 & **100.0** & 94.7 & 75.9 & _77.7_ & 89.8 \\ \hline
AaD [76] & & ✓ & & & \(\mathbf{M}_{nc}\) & 92.1 & 99.1 & **100.0** & 96.4 & 75.0 & 76.5 & 89.9 \\
U-SFAN [136] & ✓ & & ✓ & & \(\mathbf{M}_{vdg}+\mathbf{M}_{em}\) & 92.8 & 98.0 & 99.0 & 94.2 & 74.6 & 74.4 & 88.8 \\
CoWA-JMDS [137] & ✓ & & ✓ & & \(\mathbf{M}_{vdg}+\mathbf{M}_{pl}\) & 95.2 & 98.5 & 99.8 & 94.4 & 76.2 & 77.6 & 90.3 \\
CDCL [90] & & & ✓ & & \(\mathbf{M}_{pl}+\mathbf{M}_{cl}\) & 92.1 & 98.5 & **100.0** & 94.4 & 76.4 & 74.1 & 89.3 \\
D-MCD [56] & ✓ & & ✓ & & \(\mathbf{M}_{iaa}+\mathbf{M}_{pl}\) & 93.5 & 98.8 & **100.0** & 94.1 & 76.4 & 76.4 & 89.9 \\
BMD + SHOT++ [91] & & & ✓ & & \(\mathbf{M}_{pl}+\mathbf{M}_{em}\) & 94.2 & 98.0 & **100.0** & 96.2 & 76.0 & 76.0 & 90.1 \\
ProxyMix [51] & ✓ & & ✓ & & \(\mathbf{M}_{vdg}+\mathbf{M}_{pl}\) & **96.7** & 98.5 & 99.8 & 95.4 & 75.1 & 75.4 & 85.6 \\
UTR [138] & & & ✓ & & \(\mathbf{M}_{pl}\) & 93.5 & 99.1 & **100.0** & 95.0 & 76.3 & 78.4 & 90.3 \\
SCLM [74] & & ✓ & ✓ & & \(\mathbf{M}_{nc}+\mathbf{M}_{pl}\) & 90.0 & 98.9 & **100.0** & 95.8 & 75.5 & 76.0 & 89.4 \\
Jing _et al._ [65] & ✓ & & & & \(\mathbf{M}_{pds}\) & 93.3 & 98.6 & **100.0** & 96.2 & 75.4 & 76.9 & 90.0 \\
Kundu _et al._ + SHOT++ [139] & ✓ & ✓ & ✓ & & \(\mathbf{M}_{vdg}+\mathbf{M}_{pl}\) & 93.2 & 98.9 & **100.0** & 94.6 & **78.3** & 78.9 & **90.7** \\ \hline \end{tabular}
\end{table} TABLE IV: Classification Accuracy (%) Comparison for Source-free Domain Adaptation Methods on the Office-31 Dataset (ResNet-50).
in clipart images. Besides, AaD [76] achieves the second highest classification accuracy of 88% on VisDA using only one loss function, which can be used as a simple but strong baseline.
**Self-attention is promising, especially transformer-based SFDA methods.** Although self-attention-based SFDA methods are less common in image classification [70, 124] and mostly seen in semantic segmentation [11, 112], the transformer-based method TransDA [124] achieves the highest accuracy of 79.3% on the Office-Home dataset, which is 4.8% higher than the second highest method [139]. This somewhat suggests that encouraging models to turn their attention to the object region may be quite effective for reducing domain shift. However, TransDA [124] merely injects the transformer into the convolutional network, and it may be an interesting topic to see how the transformer can be better combined with the SFDA setting.
## 6 Application
Domain adaptation aims to reduce the cost of labeling target domain data, while source-free domain adaptation goes a step further to preserve the privacy of source domain data. Therefore, source-free domain adaptation and unsupervised
\begin{table}
\begin{tabular}{c|cccc|c|cccccccccccc|c} \hline \hline Method & DR & IIE & ST & SA & Modules & plane & bcycl & bus & car & horse & knife & mcycl & person & plant & sktbrd & train & truck & Avg. \\ \hline
Source-only [134] & - & - & - & - & - & 55.1 & 53.3 & 61.9 & 59.1 & 80.6 & 77.9 & 79.7 & 31.2 & 81.0 & 26.5 & 72.3 & 8.5 & 52.4 \\
Target-supervised [102] & - & - & - & - & - & 97.0 & 86.6 & 84.3 & 88.7 & 96.3 & 94.4 & 92.0 & 89.4 & 95.5 & 91.8 & 90.7 & 68.7 & 89.6 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Per-class Classification Accuracy (%) on the VisDA-C Dataset.
domain adaptation methods [5, 6, 29] overlap considerably in their application areas. Currently, most SFDA methods are applied in the fields of computer vision and natural language processing.
### _Computer Vision_
Most SFDA methods focus on image classification, the fundamental task in computer vision. With the popularity of large-scale datasets like VisDA [130] and DomainNet [142], the demand for adaptation capability also increases. Image classification extends to various application scenarios such as real-world image dehazing [143], cross-scene hyperspectral image classification [144], and blind image quality assessment [114]. Semantic segmentation methods [11, 12, 50, 146, 129, 145] have also emerged rapidly and are widely used, including multi-organ segmentation [113], cross-modal segmentation [147, 148], cross-device [100] and cross-center segmentation [85], multi-site and lifespan brain skull stripping [77], and road segmentation [98, 149]. Other applications, including object detection [13, 14, 62], person re-identification [150], and video analysis [88, 144], can also benefit from SFDA.
### _Natural Language Processing_
The application of source-free domain adaptation in natural language processing (NLP) is still relatively limited. Related settings and studies in NLP include continuous learning [151, 152] and the generalization capabilities of pre-trained models [153]. Laparra _et al._ designed the SemEval 2021 Task 10 dataset [154] around two tasks, i.e., negation detection and time expression recognition. Su _et al._ [33] extended self-training [155], active learning [156], and data augmentation [157] baselines to the source-free setting for systematic comparison.
### _Other Related Problems_
Domain generalization (DG) [158, 159], which makes a model work on previously unseen target domains, can be seen as a setting related to domain adaptation, and some source-free DG methods [160, 161] have also emerged, which can be seen as variants of the data-based SFDA approach. Source-free zero-shot domain adaptation [162, 163] is proposed to alleviate the requirement for streaming data with small batch sizes and known class distribution in test-time domain adaptation [164]. Ondrej _et al._ [128] investigated feedforward source-free domain adaptation, which is backpropagation-free, to further protect user privacy and reduce overhead. There are also methods that combine the source-free domain adaptation setting with federated learning [149], black-box testing [165], or robust transfer [166] to meet different practical scenarios.
## 7 Future Research Direction
Despite the rapid growth of research on source-free domain adaptation and the resulting evolution on methodology and performance, the problem settings and targets studied in SFDA are still somewhat limited and homogeneous. Specifically, the classification tasks dominate the research of SFDA, which largely outpaces the growth of SFDA on other computer vision tasks. Below we discuss where the research landscape of SFDA can be potentially extended.
### _Enrichment of weak research lines_
Although we have discussed numerous SFDA methods in this paper, most of them focus on pseudo-labeling. Under the categorization in Fig. 2, some kinds of methods are still less explored, such as perturbed domain supervision, neighborhood clustering, and self-attention, especially transformer-based self-attention mechanisms. Contrastive learning [167] and image style translation [168] are still constructed in a relatively homogeneous and simple way, and it is worth introducing some of the latest methods [169, 170] from their respective fields. In addition, entropy minimization, although widely used in SFDA, is mostly applied directly as an auxiliary loss in the form of an entropy or information loss, lacking further innovation. In general, SFDA still has abundant room for methodological enrichment.
### _Further theoretical support_
Ben-David _et al._ [15] derived a general UDA theory that upper-bounds the expected target error by the distribution divergence between the source and target domains, the error of the model on the source domain, and the ideal joint error. However, the domain divergence is hard to measure in the source-free setting, and most current SFDA methods are motivated by intuition and achieve empirical success on current datasets. Liang _et al._ [7] suggested information maximization as a means of achieving deterministic and diverse outputs. While there have been theoretical analyses of SFDA centered on pseudo-labeling [56], multi-source domains [30, 31], and model smoothness [171], these analyses apply only to particular methods. Therefore, theoretical support that is universally applicable to SFDA would be highly beneficial.
### _More Applications_
Regarding computer vision tasks, most of the current SFDA methods focus on image analysis, while video data contains significantly more spatial and temporal semantic information, which presents greater processing challenges. Consequently, there are only a few methods specifically designed for video analysis. Furthermore, unsupervised domain adaptation has found numerous applications in natural language processing [172, 173, 174], time-series data analysis [175, 176], recommendation systems [177, 178], and geosciences [179, 180]. However, these applications have not yet been extended to source-free settings, indicating that SFDA may realize its potential in these areas.
### _Comprehensive datasets and evaluation_
The majority of datasets currently used for SFDA evaluation have balanced categories and clean data, however, some SFDA research has highlighted the significant impact of unbalanced categories [99] and noise [166] on model accuracy. Additionally, some existing datasets are limited in
size and no longer present sufficient challenges to assess the capabilities of modern domain adaptation methods. For example, the W to D task on the Office-31 classification dataset [131] has already reached an accuracy of 100%. Thus, more diverse and challenging datasets are necessary for advancing SFDA research. Furthermore, a more comprehensive evaluation scheme, including model robustness and overhead, would provide a more complete understanding of model characteristics and improve the applicability of SFDA in various scenarios, such as edge devices.
### _Extended settings_
This survey mainly covers the single-source closed-set setting of source-free domain adaptation, which is also the most extensively studied case. Nevertheless, as illustrated in Fig. 3, domain adaptation can be categorized into partial DA, open-set DA, multi-source DA, and multi-task DA, based on the relationship between the source and target domains. Furthermore, depending on various practical situations, the SFDA setting can be combined with test-time DA [164], federated DA [181], and active DA [182]. In the future, SFDA research is expected to become more diverse as researchers explore different settings and scenarios.
## 8 Conclusion
Unsupervised domain adaptation, a crucial subset of transfer learning, facilitates the transfer of knowledge acquired from a labeled domain to an unlabeled one, thereby reducing the need for extensive annotation. The source-free setting, which lacks access to source domain data, satisfies privacy and security requirements in real-world scenarios and has rapidly gained attention since its emergence. In this paper, we provide a comprehensive overview of existing source-free domain adaptation methods and present a unified categorization framework. We analyze the experimental outcomes of more than 30 representative SFDA methods on the three most popular datasets, namely Office-31, Office-Home, and VisDA, and modularize each method to facilitate comparisons. Lastly, based on our analysis and the current state of SFDA research, we suggest potential research directions that could benefit this community.
|
2301.08802 | Impact of PCA-based preprocessing and different CNN structures on
deformable registration of sonograms | Central venous catheters (CVC) are commonly inserted into the large veins of
the neck, e.g. the internal jugular vein (IJV). CVC insertion may cause serious
complications like misplacement into an artery or perforation of cervical
vessels. Placing a CVC under sonographic guidance is an appropriate method to
reduce such adverse events, if anatomical landmarks like venous and arterial
vessels can be detected reliably. This task shall be solved by registration of
patient individual images vs. an anatomically labelled reference image. In this
work, a linear, affine transformation is performed on cervical sonograms,
followed by a non-linear transformation to achieve a more precise registration.
Voxelmorph (VM), a learning-based library for deformable image registration
using a convolutional neural network (CNN) with U-Net structure was used for
non-linear transformation. The impact of principal component analysis
(PCA)-based pre-denoising of patient individual images, as well as the impact
of modified net structures with differing complexities on registration results
were examined visually and quantitatively, the latter using metrics for
deformation and image similarity. Using the PCA-approximated cervical sonograms
resulted in decreased mean deformation lengths between 18% and 66% compared to
their original image counterparts, depending on net structure. In addition,
reducing the number of convolutional layers led to improved image similarity
with PCA images, while worsening in original images. Despite a large reduction
of network parameters, no overall decrease in registration quality was
observed, leading to the conclusion that the original net structure is
oversized for the task at hand. | Christian Schmidt, Heinrich Martin Overhoff | 2023-01-20T21:01:39Z | http://arxiv.org/abs/2301.08802v1 | Impact of PCA-based preprocessing and different CNN structures on deformable registration of sonograms
###### Abstract
Central venous catheters (CVC) are commonly inserted into the large veins of the neck, e.g. the internal jugular vein (IJV). CVC insertion may cause serious complications like misplacement into an artery or perforation of cervical vessels. Placing a CVC under sonographic guidance is an appropriate method to reduce such adverse events, if anatomical landmarks like venous and arterial vessels can be detected reliably. This task shall be solved by registration of patient individual images vs. an anatomically labelled reference image. In this work, a linear, affine transformation is performed on cervical sonograms, followed by a non-linear transformation to achieve a more precise registration. Voxelmorph (VM), a learning-based library for deformable image registration using a convolutional neural network (CNN) with U-Net structure was used for non-linear transformation. The impact of principal component analysis (PCA)-based pre-denoising of patient individual images, as well as the impact of modified net structures with differing complexities on registration results were examined visually and quantitatively, the latter using metrics for deformation and image similarity. Using the PCA-approximated cervical sonograms resulted in decreased mean deformation lengths between 18% and 66% compared to their original image counterparts, depending on net structure. In addition, reducing the number of convolutional layers led to improved image similarity with PCA images, while worsening in original images. Despite a large reduction of network parameters, no overall decrease in registration quality was observed, leading to the conclusion that the original net structure is oversized for the task at hand.
Medical image registration, deformable registration, sonograms, Voxelmorph, CNN
## 1 Introduction
Placement of a central venous catheter (CVC) is a procedure that carries the risk of multiple complications; e.g., arterial puncture of the common carotid artery has an occurrence rate of 6% to 9% [1]. This work aims to further improve ultrasound-guided CVC placement into the internal jugular vein (IJV) (Fig. 1) by detecting the IJV and indicating a needle target position in a manually acquired, patient-individual ultrasound image. The needle target position in such an image is to be determined by automated, computer-based analysis. It is assumed that an optimal needle target position is defined in a reference image. The task at hand is to map the optimal needle target position onto patient-individual images. In order to realize this mapping, the patient-individual images are to be registered vs. the reference image.
Overfitting occurs when a model learns the training data well, but does not generalize the acquired information to new data. This is an issue in medical machine learning applications, since these datasets are usually small compared to the complexity of deep neural network structures, or due to low signal-to-noise ratio in the data.
The hypothesis of this work is: An improvement of the signal-to-noise ratio in image data and a systematic reduction of network size can yield improved registration results with overall less deformation, and thus a more regular registration field. In this work, principal component analysis (PCA) is used for noise reduction, and Voxelmorph, a U-Net-based convolutional neural network (CNN), is used as a reference network structure. To evaluate the hypothesis, three models for both image types are parameterized for a fine registration of affinely pre-registered ultrasound images of the human neck. The main tasks of this work are:
* Reduce the size of original ultrasound images to a region of interest (ROI) that mainly contains the IJV. This shall be done by feature-based image segmentation. Subsequently, apply affine pre-registration to the ROI images. Because only few clinical images are available, and those have varying anatomical structures and image contrast, this procedure shall make the deformable image registration less error-prone.
* Perform a PCA on the image data set and approximate it by linear combination of the most relevant principal components.
* Train neural networks with three different structures and different number of free parameters to register image pairs (non-linear, deformable transformation). For each net structure, train two versions, one for original images, and one for PCA-approximated images.
* Quantitatively analyze the impact of the number of net parameters and the image type on the registration result, using evaluation metrics for deformation and image similarity.
## 2 Related Work
Many different methods for deep learning-based medical image registration have been proposed in the past. For rigid transformations in particular, deep reinforcement learning (RL) techniques [14] have gained some popularity. [13] proposed a RL strategy for rigid 3D-3D registrations in computer tomography images, which is based on finding the optimal sequence of motion actions (rotations and translations) for image alignment. Since RL networks are constrained to low dimensionality of outputs, they have been used almost exclusively for rigid registrations, since those can be expressed by a small number of transformation parameters. With the rise of networks, which can directly estimate deformation vector fields (DVF) and are not constrained to rigid transformations, RL-based methods fell out of favor in recent years [15].
Networks that directly estimate the DVF can be classified into supervised and unsupervised methods. Supervised networks require ground truth transformations (either DVFs, in the case of deformable registration, or rigid transformation parameters). Ground truth transformations can be obtained by artificially de-aligning images with random rotations and translations [12, 1] or by using traditional, non-learning methods to register image pairs and using the resulting DVFs as ground truth
Figure 1: **Example image of an original cervical sonogram. Internal jugular vein (IJV) and common carotid artery (CCA) are labelled.**
Figure 3: **Example sonogram of the IJV before (left) and after (right) affine transformation. Ellipse parameters resulting from the preceding feature-based segmentation are used to translate (x\({}_{OR}\)), rotate (\(-\varphi\)) and scale (\(s_{x}\), \(s_{y}\)) the images to coarsely pre-register them for the subsequent deformable registration by the CNN.**
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \(q\) & 1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 & 17 & 19 \\ \hline \(cEVR\) & 0.12 & 0.31 & 0.46 & 0.55 & 0.62 & 0.67 & 0.71 & 0.75 & 0.77 & 0.79 \\ \end{tabular}
\end{table}
Table 1: **Cumulative explained variance ratio \(cEVR\) by number of first \(q\) principal components.**
Figure 2: \(n\) **largest objects after binarization are delineated with green contours, the object identified as correct is marked with a red ellipse (left). Top row shows objects before, bottom row objects after watershed transformation. Corresponding binary images after thresholding with \(g_{\text{thresh}}\) are shown on the right. Sonograms are displayed with inverted grayscale pixel values for better visibility.**
for training [14]. The lack of medical datasets with known ground truth DVFs led to a rising demand for unsupervised networks. With the introduction of spatial transformer networks [1], calculating image similarity losses during training became possible. This is achieved by warping the input image with the estimated DVF and comparing the resulting image with the reference image. These networks do not require supervision by ground truth annotations and, in addition to the image similarity loss, employ a regularization loss term to ensure smooth and anatomically plausible transformations [15, 2].
Looking at image modalities, DL-based registration of ultrasound (US) images has only played a subordinate role in recent research, despite the high prevalence of sonography in clinical practice. A review of current publications in medical image registration [1] found that US images were used in only about 5% of research papers on DL-based medical image registration, while magnetic resonance imaging (MRI) (52%) and computed tomography (CT) (19%) dominated the field. This is mainly due to the higher availability of public MR training datasets and the fact that the majority of papers examined registration of brain images, which are predominantly recorded as MR and CT scans. US images were most often used in multi-modal registration tasks [13, 14], in which, e.g., pre-procedural MR scans were aligned with intra-procedural US images.
## 3 Proposed Solution
### Segmentation and affine pre-registration
Firstly, the original ultrasound image is cropped and the interface of the ultrasound machine is removed. To segment the image into foreground and background, it is binarized using a threshold value \(g_{\text{thresh}}\). Subsequently, everything but the \(n\) largest objects is removed, filtering out structures that are too small to reasonably be considered. A watershed transform is applied (Fig. 2), since clusters that increasingly merge as \(g_{\text{thresh}}\) grows need to be separated in order to be detected as individual entities. Choosing the correct IJV object out of the remaining \(n\) is done via the \(y\)-coordinate of the object's center and the distance between the common carotid artery (CCA) and the respective object. The object center's \(y\)-coordinates can be utilized because the distal location of the IJV is roughly the same for all subjects. Additionally, since the CCA and IJV are in close anatomical proximity, all objects beyond a certain distance to the CCA can be excluded.
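A hedged scikit-image sketch of this pipeline is shown below. The threshold value, the marker heuristic, and the assumption that vessels appear dark (hence `img < g_thresh`) are illustrative choices, not the exact parameters of this work; the IJV selection rule via depth and CCA distance would then be applied to the returned candidates.

```python
import numpy as np
from scipy import ndimage
from skimage.measure import label, regionprops
from skimage.segmentation import watershed

def ijv_candidates(img, g_thresh=60, n=5):
    """Binarize, split merged clusters by watershed, keep the n largest objects."""
    binary = img < g_thresh                          # vessels appear dark (assumed)
    dist = ndimage.distance_transform_edt(binary)
    markers = label(dist > 0.6 * dist.max())         # seed points inside large blobs
    split = watershed(-dist, markers, mask=binary)   # separate touching clusters
    regions = sorted(regionprops(split), key=lambda r: r.area, reverse=True)
    return regions[:n]                               # candidate IJV objects
```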
To obtain a smoother contour and an object which is geometrically parameterizable, the correctly identified object is approximated with an ellipse. The ellipse parameters major and minor axis length (\(a\), \(b\)), and major axis rotation angle vs. the \(x\)-axis (\(\varphi\)) are extracted from the ellipse approximation. These parameters are subsequently used in the affine transformation (pre-registration).
This affine transformation between object IJV (\(O\)) and reference IJV (\(R\)) (Fig. 3) was performed as follows: At first, the image is translated by \(\mathbf{x}_{OR}\), which aligns the center of the approximated ellipse with the image center. This is also the origin of the new reference coordinate system. Secondly, a rotation of \(-\varphi\) degrees is applied to the image. This alignment of the center and rotation angle of the object IJV and the new reference coordinate system can be described as a 2D rigid body transform. Subsequently, image coordinate axes are scaled using the scaling factors \(s_{x}\) and \(s_{y}\). Values for \(s_{x}=\frac{a_{R}}{a_{O}}\) and \(s_{y}=\frac{b_{R}}{b_{O}}\) are used to match the major and minor axes lengths of object vs. reference IJV ellipses. Finally, a rectangle of size 208 \(\times\) 128 pixels around the object center is cropped, leaving only the relevant parts of the image for later use as training data.
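The translate-rotate-scale-crop chain can be composed as a single scikit-image transform, sketched below under stated assumptions: `center` is the ellipse center in \((x, y)\) order, `phi` the major-axis angle in radians, and `s_x`, `s_y` the scaling factors defined above; skimage's `warp` expects the inverse mapping, so the inverse of the forward transform is passed.

```python
import numpy as np
from skimage.transform import AffineTransform, warp

def affine_pre_register(img, center, phi, s_x, s_y, out_shape=(128, 208)):
    """Translate the IJV ellipse center to the ROI center, rotate by -phi,
    and scale the axes; the 208 x 128 ROI is the output image itself."""
    oy, ox = out_shape[0] / 2.0, out_shape[1] / 2.0
    fwd = (AffineTransform(translation=(-center[0], -center[1]))  # center to origin
           + AffineTransform(rotation=-phi)                       # undo ellipse tilt
           + AffineTransform(scale=(s_x, s_y))                    # match axis lengths
           + AffineTransform(translation=(ox, oy)))               # origin to ROI center
    return warp(img, fwd.inverse, output_shape=out_shape)
```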
### Principal Component Analysis
PCA is used to reduce dimensionality in the ultrasound image data set. For this purpose, \(p=81\) pre-registered
Figure 4: **First \(q=1\dots 8\) principal component images \(\mathbf{G(y_{1})}\) through \(\mathbf{G(y_{8})}\) from performing a PCA on the ultrasound image data set (top row \(\mathbf{G(y_{1})\dots G(y_{4})}\), bottom row \(\mathbf{G(y_{5})\dots G(y_{8})}\)).**
ultrasound images of the human neck (transversal plane) from 14 different subjects (five to six images per subject) of size \(208\times 128\) pixels are investigated.
PCA is a statistical method, which is used for projecting a \(p\)-dimensional data set into a \(q\)-dimensional sub-set \((q<p)\), while preserving characteristic data variability [16, 17]. A data set consists of \(p\) variables \(x_{i}\), \(1\leq i\leq p\), with \(n\) observations. Each variable \(x_{i}\) has a mean \(\mu_{i}\) and a variance \(\sigma_{i}^{2}\) calculated over its \(n\) observations. The sum over the variance of all variables is the total variance
\[\sigma_{\text{total}}^{2}=\sum_{i=1}^{p}\sigma_{i}^{2}\]
Observations \(x_{mi}\) of variable \(x_{i}\), \(1\leq m\leq n\), are noted as an \(n\times 1\) vector \(\mathbf{x}_{i}\). With the observed mean of vector \(\mathbf{x}_{i}\) being \(\mu_{i}\), the PCA is calculated over centered observations \(\mathbf{X}_{i}=\mathbf{x}_{i}-\mu_{i}\). The eigenvalues \(\lambda_{j}\) of the co-variance matrix are indexed in descending order; they represent variances and fulfill
\[\sigma_{\text{total}}^{2}=\sum_{j=1}^{p}\lambda_{j}\]
The PCA yields new variables \(\mathbf{y}_{j}\), the so-called principal components (PCs). PCs are linear combinations of centered observations and have an identical co-variance matrix, i.e., the eigenvalues \(\lambda_{j}\) are the variances of variables \(\mathbf{y}_{j}\). The total variance \(\sigma_{\text{total}}^{2}\) is identical for PCs, original data, and centered observations, but is distributed differently among the variables. Goal of the PCA is to explain a major part of the total variance with a small number of variables \(q\), the cumulative explained variance ratio (\(cEVR\)) is given by
\[cEVR=\frac{\sum\limits_{j=1}^{q}\lambda_{j}}{\sigma_{\text{total}}^{2}}\]
The first \(q\) of all \(p\) new variables \(\mathbf{y}_{1}\dots\mathbf{y}_{q}\) determine the data sub-set, such that
\[\mathbf{x}_{i}\approx\tilde{\mathbf{x}}_{i}=\sum_{j=1}^{q}\beta_{ij}\mathbf{y} _{j}+\mathbf{\mu}\]
To perform the PCA, pre-registered ultrasound images of the human neck \(\mathbf{G}_{i}\) are reshaped into column vectors \(\mathbf{x}\left(\mathbf{G}_{i}\right)\) with \(n=26624\) observations each. After dimensionality reduction, the above-described image vectors can be reorganized as images \(\mathbf{G}(\tilde{\mathbf{x}}_{i})\approx\mathbf{G}_{i}\). The first \(q\) PCs (Fig. 4) are used to approximate the original dataset. In the upcoming sections, \(q=8\) is used, which accounts for about 58% of the data's variance (Table 1) while reducing dimensionality by 90% (from \(208\times 128\times 81\) to \(208\times 128\times 8\)).
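The reduction itself can be sketched with scikit-learn. This is a minimal sketch, assuming the 81 pre-registered images are stacked along the first axis; `pca_approximate` is an illustrative helper, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_approximate(images, q=8):
    """images: array of shape (81, 208, 128) holding the pre-registered set."""
    n_imgs, height, width = images.shape
    X = images.reshape(n_imgs, height * width)   # one 26624-vector per image

    pca = PCA(n_components=q)
    scores = pca.fit_transform(X)                # coefficients beta_ij
    X_hat = pca.inverse_transform(scores)        # sum_j beta_ij y_j + mu

    cevr = pca.explained_variance_ratio_.sum()   # cEVR; ~0.58 for q = 8
    return X_hat.reshape(n_imgs, height, width), cevr
```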
### Voxelmorph and variation of net structures
Voxelmorph (VM) [1], a learning-based library for deformable image registration that uses a U-Net-based [16] net structure, is used to perform the deformable sonogram registrations. An atlas-based registration approach is used in this work; thus, an image pair consists of a varying, patient-specific image (moving image \(m\)) and a reference image (fixed image \(f\)).
Voxelmorph uses a two-part loss function
\[J=\mathcal{L}_{\text{sim}}(f,m\circ\phi)+\gamma\mathcal{L}_{\text{smooth}}(\phi),\]
which consists of a similarity term \(\mathcal{L}_{\text{sim}}(f,m\circ\phi)\) and a deformation term \(\mathcal{L}_{\text{smooth}}(\phi)\). The loss function \(J\) penalizes differences in grayscale values as well as deformations and is minimized by learning optimal convolutional kernels (filters). When registering an image pair, the network yields pixel-wise displacement vectors
\[\mathbf{u}=\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}=\begin{bmatrix}x_{f}\\ y_{f}\end{bmatrix}-\begin{bmatrix}x_{m}\\ y_{m}\end{bmatrix},\,l=\|\mathbf{u}\|\]
between moving image \(m\) and fixed image \(f\). The registration field
\[\phi=Id+\mathbf{u}\]
is formed by adding \(\mathbf{u}\) to the identity transform. The resulting registration field \(\phi\) generates a moved image \(m\circ\phi\), which is similar to \(f\).
By employing a regularization term, Voxelmorph encourages smooth, diffeomorphic deformations, i.e., deformations that are anatomically reasonable. \(\gamma\) serves as the regularization parameter; in this work, \(\gamma=0.001\) is used. As \(\gamma\) increases, deformation becomes more costly and the resulting deformation field therefore becomes more regular (smooth), and vice versa. The resulting registration field \(\phi\) is applied to the moving image \(m\) by a spatial transformer function to obtain the moved image \(m\circ\phi\) (\(m\) warped by \(\phi\)).
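A minimal NumPy/SciPy sketch of the spatial transformer and the two-part loss follows; using mean squared error for \(\mathcal{L}_{\text{sim}}\) is an assumption (Voxelmorph supports several similarity terms), and the displacement convention `u[0]` = rows, `u[1]` = columns is illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, u):
    """Apply the registration field phi = Id + u to the moving image;
    u has shape (2, H, W) and holds per-pixel (row, col) displacements."""
    grid = np.mgrid[0:moving.shape[0], 0:moving.shape[1]].astype(float)
    return map_coordinates(moving, grid + u, order=1)  # m warped by phi

def loss(fixed, moving, u, gamma=0.001):
    moved = warp(moving, u)
    l_sim = np.mean((fixed - moved) ** 2)               # similarity term
    l_smooth = sum(np.mean(np.gradient(u[c], axis=ax) ** 2)
                   for c in range(2) for ax in (0, 1))  # deformation term
    return l_sim + gamma * l_smooth
```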
To examine the effects of reducing the number of free parameters in the CNN by cutting down its size, three net structures are introduced (a builder sketch follows the list):
* The "full" structure proposed in the original VM paper, consisting of four encoder and seven decoder convolutional layers with 16 or 32 filters (convolutional kernels) each (16, 32, 32, 32! 32, 32, 32, 32, 32, 16, 32, 16). This net structure contains about 110,000 parameters.
* The "reduced" structure. Two encoder and decoder layers are removed for the second configuration (16, 32! 32, 32, 32, 16 16), resulting in about 53,000 parameters, a reduction of 52% compared to the full net.
* The "16 filters" structure contains all eleven convolutional layers, with 16 filters in each layer (16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16), resulting in about 33,000 parameters, 70% less than the full net.
For each net structure, two versions are trained:
* Original image data vs. the reference image, i.e., \(\mathbf{G}(\mathbf{x}_{i})=\mathbf{G}(\mathbf{\tilde{x}}_{i}(q=81))\) vs. \(\mathbf{G}(\mathbf{x}_{\text{(Ref)}})=\mathbf{G}(\mathbf{\tilde{x}}_{\text{( Ref)}}(q=81))\)
* PCA-approximated image set vs. the reference image, i.e., \(\mathbf{G}(\mathbf{x}_{i})=\mathbf{G}(\mathbf{\tilde{x}}_{i}(q=8))\) vs. \(\mathbf{G}(\mathbf{x}_{\text{(Ref)}})=\mathbf{G}(\mathbf{\tilde{x}}_{\text{( Ref)}}(q=81))\)
In the upcoming results section, however, PCA images are visually evaluated against the PCA-approximation \(\mathbf{G}(\mathbf{\tilde{x}}_{\text{(Ref)}}(q=8))\) of the reference image, to account for the difference in brightness and contrast between original and PCA images.
### Quantitative analysis
To quantitatively analyze the properties and quality of performed registrations, two evaluation metrics are introduced:
* Let \(0\ \leq\ I\ \leq\ 1\) be the normalized image grayscale intensities, and \(\Delta I=I\left(f\right)-I\left(m\circ\phi\right)\) the pixel-wise differences between intensities of fixed image \(f\) and moved image \(m\circ\phi\). Therefore, \(-1\ \leq\Delta I\leq 1\) holds. We define \(\overline{\Delta I}\) as the mean of absolute grayscale intensity differences \(\Delta I\).
* Mean deformation vector lengths \(\bar{l}\) of the registration field \(\phi\), where \(l\) measures the pixel-wise deformation (in pixels) that is applied to the moving image \(m\).
We use \(\overline{\Delta I}\) and \(\bar{l}\) analogously to the similarity term \(\mathcal{L}_{\text{sim}}(f,m\circ\phi)\) and the deformation term \(\mathcal{L}_{\text{smooth}}(\phi)\) of the Voxelmorph loss function. Since the region around the IJV's contour is of primary importance in this work, \(\overline{\Delta I}\) and \(\bar{l}\) are only evaluated in a belt-like region along the IJV contour.
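The two metrics can be computed as follows; this is a minimal sketch in which the belt construction (dilation minus erosion of an assumed binary vessel mask) is an illustrative choice, since the paper does not specify how the belt is built.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def belt(vessel_mask, width=5):
    """Belt-shaped evaluation region around the IJV contour
    (the exact construction is an assumption)."""
    return binary_dilation(vessel_mask, iterations=width) & \
           ~binary_erosion(vessel_mask, iterations=width)

def metrics(fixed, moved, u, belt_mask):
    delta_i = np.abs(fixed - moved)       # |Delta I| for intensities in [0, 1]
    l = np.sqrt(u[0] ** 2 + u[1] ** 2)    # per-pixel deformation length
    return delta_i[belt_mask].mean(), l[belt_mask].mean()
```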
In addition, significances \(\alpha\) of metric differences between the above-mentioned net and image-pair variants are determined with a two-tailed, paired \(t\)-test. We used a 70/30 split between training and test data. Training the net's parameters on an NVIDIA GeForce RTX 2060 GPU takes 2.5 to 3 minutes, depending on the net structure; registering a single image pair takes 1 to 2 seconds.
## 4 Results
Post-registration means of absolute intensity differences \(\overline{\Delta I}\) for all net structures and image types are shown in Fig. 5. With original images, an increase in the mean \(\overline{\Delta I}\) of around 12% can be observed when using the reduced net instead of the full net (mean \(\overline{\Delta I}\): 0.078 vs. 0.070, \(\alpha=0.027\)). When registering PCA-approximated images, however, a 17% decrease was measured when using the reduced over the full net structure (mean \(\overline{\Delta I}\): 0.079 vs. 0.094, \(\alpha=0.007\)). Comparing the full net structures to their respective 16 filters versions showed no significant change in mean \(\overline{\Delta I}\).
Looking at mean deformation vector lengths \(\bar{l}\) (Fig. 5), networks trained with PCA-approximations showed decreases in mean \(\bar{l}\) vs. their original-image counterparts of 24% for the full, 18% for the reduced, and 66% for the 16 filters net structure. In addition, registrations with PCA-approximated images display the expected smoothing and noise-reducing properties, removing unwanted artifacts from the vessel lumen (Fig. 6).
Figure 5: **Results of registrations with different net structures, using original and PCA-approximated images. Mean of differences of absolute grayscale intensities \(\overline{\Delta I}\) is shown on the left, mean deformation vector length \(\bar{l}\) on the right.**
Figure 6: **Example registration results of original image (top) and PCA-approximation (bottom), using the full net structure. The registration field \(\phi\) is warped with a regular square grid and superimposed over the moving image \(m\), to show the extent and direction of local deformation which is applied to individual image parts. In addition, values of the pixel-wise pre- and post-registration grayscale difference \(\Delta I\) are displayed color-coded, to illustrate the effect of registration on image similarities (colors ranging from dark red for \(\Delta I=1\) to dark blue for \(\Delta I=-1\)).**
Figure 7: **Illustration of negative original image features being transferred to PCA-approximations. Reverberation artifacts of original image of subject A (left) appear in the PCA-approximation of subject B (right), even though no such artifacts are present in the original image of subject B (middle).**
## 5 Conclusion
Despite reducing the number of net parameters by up to 70% compared to the originally proposed full net and reducing the mean deformation vector lengths \(\bar{l}\) by 18% to 66%, no overall reduction in registration quality was measurable in the downscaled net structures. Specifically, for the combination of the reduced net structure with PCA-approximated images, a significant decrease of \(\bar{l}\) (\(\bar{l}=2.32\) vs. \(2.85,\alpha=0.045\)) vs. original images was observed, while \(\overline{\Delta I}\) remained nearly unchanged (\(\overline{\Delta I}=0.079\) vs. \(0.078\)). This confirms the hypothesis described in the introduction section and leads to the conclusion that the full net structure is unnecessarily oversized for the problem at hand.
The net structure can thus be reduced in size to diminish problems like overfitting, while also training up to 15% faster than the full net structure. In the case of images that contain similar, regularly shaped structures, it is recommended to pre-process them with the proposed PCA procedure and to employ reduced net structures, in order to reduce mean deformations and yield more regular registration fields. Since PCA is based on variances, it is highly sensitive to outliers. Thus, noisy images (outliers) in the original data set negatively affect the quality of the principal components, which in turn affects the approximated PCA images (Fig. 7).
## 6 Acknowledgments
We thank the Federal Ministry of Education and Research (BMBF) Germany, which funded this work in the program "Gründungen: Innovative Start-ups für Mensch-Technik-Interaktion", grant no. 16SV8153.
|
2307.05070 | A Logic-Based Analysis of Responsibility | This paper presents a logic-based framework to analyze responsibility, which
I refer to as intentional epistemic act-utilitarian stit theory (IEAUST). To be
precise, IEAUST is used to model and syntactically characterize various modes
of responsibility, where by 'modes of responsibility' I mean instances of
Broersen's three categories of responsibility (causal, informational, and
motivational responsibility), cast against the background of particular deontic
contexts. IEAUST is obtained by integrating a modal language to express the
following components of responsibility on stit models: agency, epistemic
notions, intentionality, and different senses of obligation. With such a
language, I characterize the components of responsibility using particular
formulas. Then, adopting a compositional approach -- where complex modalities
are built out of more basic ones -- these characterizations of the components
are used to formalize the aforementioned modes of responsibility. | Aldo Iván Ramírez Abarca | 2023-07-11T07:14:11Z | http://arxiv.org/abs/2307.05070v1 | # A Logic-Based Analysis of Responsibility
###### Abstract
This paper presents a logic-based framework to analyze responsibility, which I refer to as intentional epistemic act-utilitarian stit theory (IEAUST). To be precise, IEAUST is used to model and syntactically characterize various modes of responsibility, where by'modes of responsibility' I mean instances of Broersen's three categories of responsibility (causal, informational, and motivational responsibility), cast against the background of particular deontic contexts. IEAUST is obtained by integrating a modal language to express the following components of responsibility on stit models: agency, epistemic notions, intentionality, and different senses of obligation. With such a language, I characterize the components of responsibility using particular formulas. Then, adopting a compositional approach--where complex modalities are built out of more basic ones--these characterizations of the components are used to formalize the aforementioned modes of responsibility.
## 1 Introduction
The study of responsibility is a complicated matter. The term is used in different ways in different fields, and it is easy to engage in everyday discussions as to why someone should be considered responsible for something. Typically, the backdrop of these discussions involves social, legal, moral, or philosophical problems, each with slightly different meanings for expressions like _being responsible for..._, _being held responsible for..._, or _having the responsibility of..._, among others. Therefore--to approach such problems efficiently--there is a demand for clear, taxonomical definitions of responsibility.
For instance, suppose that you are a judge in Texas. You are presiding over a trial where the defendant is being charged with first-degree murder. The alleged crime is horrible, and the prosecution seeks capital punishment. The case is as follows: driving her car, the defendant ran over a traffic officer who was holding a stop sign at a crosswalk, while school children were crossing the street. The traffic officer was killed, and some of the children were severely injured. In this highly complicated case, the possibility of a death-penalty sentence means that the life of the defendant is at stake. More than ever, due process is imperative. As the presiding judge, you must abide by the prevailing definitions of criminal liability with precision. In other words, there is little to no room for ambiguity in the ruling, and your handling of the notions associated with responsibility in criminal law should be impeccable.
As this example suggests, a framework with intelligible, realistically applicable definitions of responsibility is paramount in the field of law. However, responsibility-related problems arise across many other disciplines--social psychology, philosophy of emotion, legal theory, and ethics, to name a few [18, 25]. A clear pattern in all these is the intent of issuing standards for when--and to what extent--an agent should be held responsible for a state of affairs.
This is where Logic lends a hand. The development of expressive logics--to reason about agents' decisions in situations with moral consequences--involves devising unequivocal representations of components of behavior that are highly relevant to systematic responsibility attribution and to systematic
blame-or-praise assignment. To put it plainly, expressive syntactic-and-semantic frameworks help us analyze responsibility-related problems in a methodical way.1
Footnote 1: Most likely, this is why the logic-based formalization of responsibility has become such an important topic in, for instance, normative multi-agent systems, responsible autonomous agents, and machine ethics for AI [22, 7]
The main goal of this paper is to present a proposal for a formal theory of responsibility. Such a proposal relies on (a) a _decomposition_ of responsibility into specific components and (b) a functional _classification_ of responsibility, where the different categories directly correlate with the components of the decomposition. As for the decomposition, it is given by the following list:
* **Agency**: the process by which agents bring about states of affairs in the environment. In other words, the phenomenon by which agents choose and perform actions, with accompanying mental states, that change the environment.
* **Knowledge and belief**: mental states that concern the information available in the environment and that explain agents' particular choices of action.
* **Intentions**: mental states that determine whether an action was done with the purpose of bringing about its effects.
* **Ought-to-do's**: the actions that agents should perform, complying with the codes of a normative system. Ought-to-do's make up contexts that provide a criterion for deciding whether an agent should be blamed or praised. I refer to these contexts as the _deontic contexts_ of responsibility.
As for the classification, it is a refinement of Broersen's three categories of responsibility: _causal, informational_, and _motivational_ responsibility [5, 15, 16]. I will discuss these categories at length in Section 2. On the basis of both the decomposition and the classification, here I introduce a very rich stit logic to analyze responsibility, which I refer to as _intentional epistemic act-utilitarian stit theory_ (_IEAUST_). More precisely, I use _IEAUST_ to model and syntactically characterize various modes of responsibility. By'modes of responsibility' I mean combinations of sub-categories of the three ones mentioned above, cast against the background of particular deontic contexts. On the one hand, the sub-categories correspond to the different versions of responsibility that one can consider according to the _active_ and _passive_ forms of the notion: while the active form involves contributions--in terms of explicitly bringing about outcomes--the passive form involves omissions--which are interpreted as the processes by which agents allow that an outcome happens while being able, to some extent, to prevent it. On the other hand, the deontic context of a mode establishes whether and to what degree the combination of sub-categories involves either blameworthiness or praiseworthiness.
The logic _IEAUST_ includes a language that expresses agency, epistemic notions, intentionality, and different senses of obligation. With this language, I characterize the components of responsibility using particular formulas. Then, adopting a compositional approach--where complex modalities are built out of more basic ones--I use these characterizations of the components to formalize the aforementioned modes of responsibility. An outline of the paper is included below.
* Section 2 presents an operational definition for responsibility and addresses the philosophical perspective adopted in my study of the notion.
* Section 3 introduces _IEAUST_ and uses this logic to provide stit-theoretic characterizations of different modes of responsibility.
* Section 4 presents Hilbert-style proof systems both for _IEAUST_ and for a technical extension, addressing the status of their soundness & completeness results.
## 2 Categories of Responsibility
To make a start on formally analyzing responsibility, I identify (a) two _viewpoints_ for the philosophical study of responsibility, (b) three main _categories_ for the viewpoint that I focus on, and (c) two _forms_ in which the elements of the categories can be interpreted.
As for point (a), the philosophical literature on responsibility usually distinguishes two _viewpoints_ on the notion [23]: _backward-looking responsibility_ and _forward-looking responsibility_. By backward-looking responsibility one refers to the viewpoint according to which an agent is considered to have produced a state of affairs that has already ensued and lies in the past. This is the viewpoint taken by a judge when, while trying a murder case, she wants to get to the bottom of things and find out who is responsible for doing the killing. In contrast, by forward-looking responsibility one refers to the viewpoint according to which an agent is expected to comply with the duty of bringing about a state of affairs in the future. When one thinks of a student that has to write an essay before its due date, for instance, this is the view that is being used. In other words, the writing and the handing in of the essay before the deadline are seen as responsibilities of the student.
From here on, I will focus on backward-looking responsibility. I work with the following operational definition: _responsibility_ is a relation between the agents and the states of affairs of an environment, such that an agent is responsible for a state of affairs iff the agent's degree of involvement in the realization of that state of affairs warrants blame or praise (in light of a given normative system). As for point (b), I follow [11] and [15] and distinguish three main _categories_ of responsibility, where each category can be correlated with the components of responsibility that it involves:2
Footnote 2: These categories extend the literature’s common distinction between _causal_ and _agentive_ responsibility [18, 24, 14], and they were derived by [11] on the basis of his analysis of the modes of _mens rea_.
1. _Causal responsibility_: an agent is causally responsible for a state of affairs iff the agent is the material author of such a state of affairs. The component that this category involves is agency.
2. _Informational responsibility_: an agent is informationally responsible for a state of affairs iff the agent is the material author and it behaved knowingly, or consciously, while bringing about the state of affairs. The components that this category involves are agency, knowledge, and belief.
3. _Motivational responsibility_: an agent is motivationally responsible for a state of affairs iff the agent is the material author and it behaved knowingly and intentionally while bringing about the state of affairs. The components that this category involves are agency, knowledge, and intentions.
Finally, as for point (c), the two _forms_ of responsibility are the _active_ form and the _passive_ form. The active form of responsibility concerns contributions, and the passive form of responsibility concerns omissions.
Now, key elements in my operational definition of responsibility are the notions of blame and praise. Intuitively, responsibility can be measured by how much blame or how much praise an agent gets for its participation in bringing about a state of affairs. As mentioned before, _ought-to-do's_ can provide a criterion for deciding when agents should be blamed and when agents should be praised. The main idea is as follows: if agent \(\alpha\) ought to have done \(\phi\), then having seen to it that \(\phi\) makes \(\alpha\) praiseworthy, while having refrained from seeing to it that \(\phi\) makes \(\alpha\) blameworthy. For a given \(\phi\), then, the degrees of \(\alpha\)'s praiseworthiness/blameworthiness correspond to the possible combinations between (a) an agent's ought-to-do's and (b) the active/passive forms of the three categories of responsibility.
## 3 A Logic of Responsibility
We are ready to introduce _intentional epistemic act-utilitarian stit theory_ (_IEAUST_), a stit-theoretic logic of responsibility. Without further ado, let me address the syntax and semantics of this expressive framework.
### Syntax & Semantics
**Definition 3.1** (Syntax of intentional epistemic act-utilitarian stit theory).: Given a finite set \(Ags\) of agent names and a countable set of propositions \(P\), the grammar for the formal language \(\mathcal{L}_{\mathsf{R}}\) is given by
\[\phi::=p\mid\neg\phi\mid\phi\wedge\phi\mid\Box\phi\mid[\alpha]\phi\mid K_{ \alpha}\phi\mid I_{\alpha}\phi\mid\odot_{\alpha}\phi\mid\odot_{\alpha}^{ \mathscr{S}}\phi,\]
where \(p\) ranges over \(P\) and \(\alpha\) ranges over \(Ags\).
In this language, \(\Box\varphi\) is meant to express the historical necessity of \(\varphi\) (\(\Diamond\varphi\) abbreviates \(\neg\Box\neg\varphi\)); \([\alpha]\varphi\) expresses that 'agent \(\alpha\) has seen to it that \(\varphi\)'; \(K_{\alpha}\phi\) expresses that '\(\alpha\) knew \(\varphi\)'; \(I_{\alpha}\phi\) expresses that '\(\alpha\) had a present-directed intention toward the realization of \(\varphi\)'; \(\odot_{\alpha}\phi\) expresses that '\(\alpha\) objectively ought to have seen to it that \(\phi\)'; and \(\odot_{\alpha}^{\mathscr{S}}\phi\) expresses that '\(\alpha\) subjectively ought to have seen to it that \(\phi\)'. As for the semantics, the structures on which the formulas of \(\mathcal{L}_{\mathsf{R}}\) are evaluated are based on what I call _knowledge-intentions-oughts branching-time frames_. Let me first present the formal definition of these frames and then review the intuitions behind the extensions.
**Definition 3.2** (_Kibot_-frames & models).: A tuple \(\left\langle M,\sqsubset,Ags,\mathbf{Choice},\{\sim_{\alpha}\}_{\alpha\in Ags},\tau,\mathbf{Value}\right\rangle\) is called a _knowledge-intentions-oughts branching-time frame_ (_kibot_-frame for short) iff
* \(M\) is a non-empty set of moments and \(\sqsubset\) is a strict partial ordering on \(M\) satisfying 'no backward branching.' Each maximal \(\sqsubset\)-chain of moments is called a history, where each history represents a complete temporal evolution of the world. \(H\) denotes the set of all histories, and for each \(m\in M\), \(H_{m}:=\{h\in H;m\in h\}\). Tuples \(\left\langle m,h\right\rangle\) such that \(m\in M\), \(h\in H\), and \(m\in h\), are called _indices_, and the set of indices is denoted by \(I(M\times H)\). \(\mathbf{Choice}\) is a function that maps each agent \(\alpha\) and moment \(m\) to a partition \(\mathbf{Choice}_{\alpha}^{m}\) of \(H_{m}\), where the cells of such a partition represent \(\alpha\)'s available actions at \(m\). For \(m\in M\) and \(h\in H_{m}\), we denote the equivalence class of \(h\) in \(\mathbf{Choice}_{\alpha}^{m}\) by \(\mathbf{Choice}_{\alpha}^{m}(h)\). \(\mathbf{Choice}\) satisfies two constraints:
* _No choice between undivided histories_: For all \(h,h^{\prime}\in H_{m}\), if \(m^{\prime}\in h\cap h^{\prime}\) for some \(m^{\prime}\sqsupset m\), then \(h\in L\) iff \(h^{\prime}\in L\) for every \(L\in\mathbf{Choice}_{\alpha}^{m}\).
* _Independence of agency_: A function \(s\) on \(Ags\) is called a _selection function_ at \(m\) if it assigns to each \(\alpha\) a member of \(\mathbf{Choice}_{\alpha}^{m}\). If we denote by \(\mathbf{Select}^{m}\) the set of all selection functions at \(m\), then we have that for every \(m\in M\) and \(s\in\mathbf{Select}^{m}\), \(\bigcap_{\alpha\in Ags}s(\alpha)\neq\emptyset\) (see [9] for a discussion of the property).
* For \(\alpha\in Ags\), \(\sim_{\alpha}\) is the epistemic indistinguishability equivalence relation for agent \(\alpha\), which satisfies the following constraints:
* \((\mathtt{OAC})\)_Own action condition_: if \(\left\langle m_{*},h_{*}\right\rangle\sim_{\alpha}\left\langle m,h\right\rangle\), then \(\left\langle m_{*},h^{\prime}_{*}\right\rangle\sim_{\alpha}\left\langle m,h\right\rangle\) for every \(h^{\prime}_{*}\in\mathbf{Choice}_{\alpha}^{m_{*}}(h_{*})\). We refer to this constraint as the 'own action condition' because it implies that agents do not know more than what they perform.
* \((\mathtt{Unif-H})\)_Uniformity of historical possibility_: if \(\left\langle m_{*},h_{*}\right\rangle\sim_{\alpha}\left\langle m,h\right\rangle\), then for every \(h^{\prime}_{*}\in H_{m_{*}}\) there exists \(h^{\prime}\in H_{m}\) such that \(\left\langle m_{*},h^{\prime}_{*}\right\rangle\sim_{\alpha}\left\langle m,h^{ \prime}\right\rangle\). Combined with \((\mathtt{OAC})\), this constraint is meant to capture a notion of uniformity of strategies, where epistemically indistinguishable indices should have the same available actions for the agent to choose upon.
* \(\tau\) is a function that assigns to each agent \(\alpha\in Ags\) and index \(\langle m,h\rangle\) a set \(\tau_{\alpha}^{\langle m,h\rangle}\) of subsets of \(I(M\times H)\), representing the present-directed intentions of \(\alpha\) at \(\langle m,h\rangle\).
* \(\mathbf{Value}\) is a function that assigns to each history \(h\in H\) a real number, representing the utility of \(h\). On its basis, for each \(\alpha\in Ags\) and \(m\in M\), dominance orderings over \(\mathbf{Choice}_{\alpha}^{m}\) single out the sets \(\mathbf{Optimal}_{\alpha}^{m}\) and \(\mathbf{SOptimal}_{\alpha}^{m}\) of objectively, resp. subjectively, optimal actions of \(\alpha\) at \(m\).

A _kibot_-model \(\mathcal{M}\), then, results from adding a valuation function \(\mathcal{V}\) to a _kibot_-frame, where \(\mathcal{V}:P\to 2^{I(M\times H)}\) assigns to each atomic proposition of \(\mathcal{L}_{\mathsf{R}}\) a set of indices.
Therefore, _kibot_-frames allow us to represent the components of responsibility discussed in the introduction: agency, knowledge, intentions, and ought-to-do's. More precisely, they allow us to provide semantics for the modalities of \(\mathcal{L}_{\mathsf{R}}\):
**Definition 3.3** (Evaluation rules for _IEAUST_).: Let \(\mathcal{M}\) be a finite-choice _kibot_-model.4 The semantics on \(\mathcal{M}\) for the formulas of \(\mathcal{L}_{\mathsf{R}}\) are recursively defined by the following truth conditions:
Footnote 4: Finite-choice \(bt\)-models are those for which function **Choice** is such that \(\textbf{Choice}^{m}_{\alpha}\) is finite for every \(\alpha\in Ags\) and \(m\in M\). I focus on finite-choice models to simplify the evaluation rules for objective and subjective ought-to-do’s. The reader is referred to [2] for the evaluation rules in the case of infinite-choice models.
\[\begin{array}{lcl}\mathcal{M},\langle m,h\rangle\models p&\text{iff}&\langle m,h\rangle\in\mathcal{V}(p)\\ \mathcal{M},\langle m,h\rangle\models\neg\phi&\text{iff}&\mathcal{M},\langle m,h\rangle\not\models\phi\\ \mathcal{M},\langle m,h\rangle\models\phi\wedge\psi&\text{iff}&\mathcal{M},\langle m,h\rangle\models\phi\text{ and }\mathcal{M},\langle m,h\rangle\models\psi\\ \mathcal{M},\langle m,h\rangle\models\square\phi&\text{iff}&\text{for all }h^{\prime}\in H_{m},\mathcal{M},\langle m,h^{\prime}\rangle\models\phi\\ \mathcal{M},\langle m,h\rangle\models[\alpha]\phi&\text{iff}&\text{for all }h^{\prime}\in\mathbf{Choice}^{m}_{\alpha}(h),\mathcal{M},\langle m,h^{\prime}\rangle\models\phi\\ \mathcal{M},\langle m,h\rangle\models K_{\alpha}\phi&\text{iff}&\text{for all }\langle m^{\prime},h^{\prime}\rangle\text{ s. t. }\langle m,h\rangle\sim_{\alpha}\langle m^{\prime},h^{\prime}\rangle,\\ &&\mathcal{M},\langle m^{\prime},h^{\prime}\rangle\models\phi\\ \mathcal{M},\langle m,h\rangle\models I_{\alpha}\phi&\text{iff}&\text{there exists }U\in\tau^{\langle m,h\rangle}_{\alpha}\text{ s. t. }U\subseteq\|\phi\|\\ \mathcal{M},\langle m,h\rangle\models\odot_{\alpha}\varphi&\text{iff}&\text{for all }L\in\mathbf{Optimal}^{m}_{\alpha},\mathcal{M},\langle m,h^{\prime}\rangle\models\varphi\\ &&\text{for every }h^{\prime}\in L\\ \mathcal{M},\langle m,h\rangle\models\odot^{\mathcal{S}}_{\alpha}\varphi&\text{iff}&\text{for all }L\in\mathbf{SOptimal}^{m}_{\alpha},\mathcal{M},\langle m^{\prime},h^{\prime}\rangle\models\varphi\\ &&\text{for every }m^{\prime}\text{ s. t. }m\sim_{\alpha}m^{\prime}\text{ and every }h^{\prime}\in[L]^{m^{\prime}}_{\alpha},\end{array}\]
where \(\|\phi\|\) refers to the set \(\{\langle m,h\rangle\in I(M\times H);\mathcal{M},\langle m,h\rangle\models\phi\}\).
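To make the truth conditions concrete, here is a minimal Python sketch of a model checker for the agentive fragment of \(\mathcal{L}_{\mathsf{R}}\) over an explicitly enumerated finite model; the dictionary-based encoding and the tuple syntax for formulas are my own assumptions, and the intention and ought modalities are omitted for brevity.

```python
# An illustrative finite-model encoding (not the paper's formalism):
#   model["indices"]    : all (m, h) pairs
#   model["hist"][m]    : histories passing through moment m
#   model["choice"][a][m] : partition of hist[m] into a's actions (frozensets)
#   model["epist"][a]   : set of pairs of indistinguishable indices
#   model["val"][p]     : set of indices where atom p holds

def sat(model, idx, phi):
    """Evaluate a formula at an index, following Definition 3.3."""
    m, h = idx
    kind = phi[0]
    if kind == "atom":
        return idx in model["val"][phi[1]]
    if kind == "not":
        return not sat(model, idx, phi[1])
    if kind == "and":
        return sat(model, idx, phi[1]) and sat(model, idx, phi[2])
    if kind == "box":                      # historical necessity
        return all(sat(model, (m, h2), phi[1]) for h2 in model["hist"][m])
    if kind == "stit":                     # [alpha]phi
        a, sub = phi[1], phi[2]
        cell = next(c for c in model["choice"][a][m] if h in c)
        return all(sat(model, (m, h2), sub) for h2 in cell)
    if kind == "K":                        # knowledge
        a, sub = phi[1], phi[2]
        return all(sat(model, j, sub) for j in model["indices"]
                   if (idx, j) in model["epist"][a])
    raise ValueError(f"unknown connective: {kind}")
```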
### Formalization of Sub-Categories of Responsibility
The logic introduced in the previous subsection allows us to formalize different modes of responsibility by means of formulas of \(\mathcal{L}_{\mathsf{R}}\). Before diving into the formulas, let me present an operational definition for the expression'mode of responsibility.' For \(\alpha\in Ags\), index \(\langle m,h\rangle\), and \(\phi\) of \(\mathcal{L}_{\mathsf{R}}\), a _mode of \(\alpha\)'s responsibility with respect to \(\phi\) at \(\langle m,h\rangle\)_ is a tuple consisting of three constituents: (1) a set of categories, taken from Broersen's three categories of responsibility, that applies to the relation between \(\alpha\) and \(\phi\) at \(\langle m,h\rangle\), (2) the forms of responsibility--active or passive--that apply to the categories in said set, and (3) a deontic context, determining whether the forms of the categories are either blameworthy, praiseworthy, or neutral. As for constituents (1) and (2), observe that the active and passive forms of the three categories of responsibility lead to sub-categories of the notion. For clarity, first I will introduce the stit-theoretic characterizations of these sub-categories; afterwards, in Subsection 3.3, these sub-categories will be discussed against the backdrop of the deontic contexts that will decide their degree of blameworthiness or praiseworthiness (constituent (3) in a given mode).
A maxim usually endorsed in the philosophical literature on moral responsibility is the _principle of alternate possibilities_. According to this principle, "a person is morally responsible for what he has done only if he could have done otherwise" [16]. Following the example of [18], then, I adopt the intuitions behind deliberative agency and restrict my view on responsibility to situations where agents can be said to actually have had a hand in bringing about states of affairs. Therefore, each sub-category of \(\alpha\)'s responsibility with respect to \(\phi\) at \(\langle m,h\rangle\) will include a positive condition--concerning the realization of \(\phi\)--and a negative condition--concerning the realization of \(\neg\phi\). For \(\alpha\in Ags\) and \(\phi\) of \(\mathcal{L}_{\mathsf{R}}\), the main sub-categories of \(\alpha\)'s responsibility with respect to \(\phi\) are displayed in Table 1.
Let me explain and discuss Table 1. Let \(\mathcal{M}\) be a _kibot_-model. For \(\alpha\in Ags\) and index \(\langle m,h\rangle\), the sub-categories of \(\alpha\)'s responsibility with respect to \(\phi\) at \(\langle m,h\rangle\) are defined as follows:
* \(\alpha\) was _causal-active responsible_ for \(\phi\) at \(\langle m,h\rangle\) iff at \(\langle m,h\rangle\)\(\alpha\) has seen to it that \(\phi\) (the positive condition) and it was possible for \(\alpha\) to prevent \(\phi\) (the negative condition). As such, I refer to state of affairs \(\phi\) as a causal contribution of \(\alpha\) at \(\langle m,h\rangle\). \(\alpha\) was _causal-passive responsible_ for \(\phi\) at \(\langle m,h\rangle\) iff at \(\langle m,h\rangle\)\(\phi\) was the case (the positive condition), and \(\alpha\) refrained from preventing \(\phi\) while it was possible for \(\alpha\) to prevent \(\phi\) (the negative conditions). To clarify, formula \(\phi\rightarrow\neg[\alpha]\neg\phi\) is valid, so that if \(\phi\) was the case then \(\alpha\) refrained from preventing \(\phi\). I refer to \(\neg\phi\) as a causal omission of \(\alpha\) at \(\langle m,h\rangle\).
* \(\alpha\) was _informational-active responsible_ for \(\phi\) at \(\langle m,h\rangle\) iff at \(\langle m,h\rangle\)\(\alpha\) has knowingly seen to it that \(\phi\) (the positive condition) and it was possible for \(\alpha\) to knowingly prevent \(\phi\) (the negative condition). I refer to \(\phi\) as a conscious contribution of \(\alpha\) at \(\langle m,h\rangle\). \(\alpha\) was _informational-passive responsible_ for \(\phi\) at \(\langle m,h\rangle\) iff at \(\langle m,h\rangle\)\(\phi\) was the case (the positive condition), and \(\alpha\) knowingly refrained from preventing \(\phi\) while it was possible for \(\alpha\) to knowingly prevent \(\phi\) (the negative conditions). I refer to \(\neg\phi\) as a conscious omission of \(\alpha\) at \(\langle m,h\rangle\).
* \(\alpha\) was _motivational-active responsible_ for \(\phi\) at \(\langle m,h\rangle\) iff at \(\langle m,h\rangle\)\(\alpha\) has both knowingly and intentionally seen to it that \(\phi\) (the positive conditions) and it was possible for \(\alpha\) to knowingly prevent \(\phi\) (the negative condition). I refer to \(\phi\) as a motivational contribution of \(\alpha\) at \(\langle m,h\rangle\). \(\alpha\) was _motivational-passive responsible_ for \(\phi\) at \(\langle m,h\rangle\) iff at \(\langle m,h\rangle\)\(\phi\) was the case (the positive condition), and \(\alpha\) both knowingly and intentionally refrained from preventing \(\phi\) while it was possible for \(\alpha\) to knowingly prevent \(\phi\) (the negative conditions). I refer to \(\neg\phi\) as a motivational omission of \(\alpha\) at \(\langle m,h\rangle\).
The main reason for setting the negative conditions as stated in Table 1 is that it greatly simplifies the relation between the active and the passive forms of responsibility. That said, it is important to mention that these negative conditions lead to a policy that I call _leniency on blameworthy agents_.
Two important observations concerning the relations between these sub-categories are the following:
1. If \(\alpha\) was informational-active, resp. informational-passive, responsible for \(\phi\) at \(\langle m,h\rangle\), then \(\alpha\) was causal-active, resp. causal-passive, responsible for \(\phi\) at \(\langle m,h\rangle\); similarly, if \(\alpha\) was motivational-active, resp. motivational-passive, responsible for \(\phi\) at \(\langle m,h\rangle\), then \(\alpha\) was informational-active, resp. informational-passive, responsible for \(\phi\) at \(\langle m,h\rangle\). In neither case is the converse true.
2. For all three categories, the active form of responsibility with respect to \(\phi\) implies the passive form.
\begin{table}
\begin{tabular}{|l|l|l|} \hline _Category/Form_ & Active (contributions) & Passive (omissions) \\ \hline Causal & \([\alpha]\phi\wedge\Diamond[\alpha]\neg\phi\) & \(\phi\wedge\Diamond[\alpha]\neg\phi\) \\ \hline Informational & \(K_{\alpha}[\alpha]\phi\wedge\Diamond K_{\alpha}[\alpha]\neg\phi\) & \(\phi\wedge K_{\alpha}\neg[\alpha]\neg\phi\wedge\Diamond K_{\alpha}[\alpha]\neg\phi\) \\ \hline Motivational & \(K_{\alpha}[\alpha]\phi\wedge I_{\alpha}[\alpha]\phi\wedge\Diamond K_{\alpha}[\alpha]\neg\phi\) & \(\phi\wedge K_{\alpha}\neg[\alpha]\neg\phi\wedge I_{\alpha}\neg[\alpha]\neg\phi\wedge\Diamond K_{\alpha}[\alpha]\neg\phi\) \\ \hline \end{tabular}
\end{table}
Table 1: Main sub-categories.
### Formalization of Modes of Responsibility
In Section 2 I explained that obligations provide the deontic contexts of responsibility, which in turn determine degrees of praiseworthiness/blameworthiness for instances of the notion. Let \(\mathcal{M}\) be a _kibot_-model. Take \(\alpha\in Ags\), and let \(\phi\) be a formula of \(\mathcal{L}_{\mathsf{R}}\). For each index \(\langle m,h\rangle\), there are 4 main possibilities for conjunctions of deontic modalities holding at \(\langle m,h\rangle\), according to whether \(\Delta\phi\) or \(\neg\Delta\phi\) is satisfied at the index, where \(\Delta\in\left\{\odot_{\alpha},\odot_{\alpha}^{\mathcal{S}}\right\}\). I refer to any such conjunction as a _deontic context for \(\alpha\)'s responsibility with respect to \(\phi\) at \(\langle m,h\rangle\)_. Thus, these contexts render 4 main levels of praiseworthiness, resp. blameworthiness, under the premise that bringing about \(\phi\) is praiseworthy and refraining from bringing about \(\phi\) is blameworthy. I use numbers 1-4 to refer to these levels, so that _Level_ 1 corresponds to the highest level of praiseworthiness, resp. blameworthiness, and _Level_ 4 corresponds to the lowest level.
_Level 1_: deontic context \(\odot_{\alpha}\phi\wedge\odot_{\alpha}^{\mathcal{S}}\phi\) holds at \(\langle m,h\rangle\), which occurs iff at \(\langle m,h\rangle\) \(\alpha\) both objectively and subjectively ought to have seen to it that \(\phi\).
_Level 2_: deontic context \(\neg\odot_{\alpha}\phi\wedge\odot_{\alpha}^{\mathcal{S}}\phi\) holds at \(\langle m,h\rangle\), which occurs iff at \(\langle m,h\rangle\) \(\alpha\) subjectively ought to have seen to it that \(\phi\), but it is not the case that \(\alpha\) objectively ought to have seen to it that \(\phi\).
_Level 3_: deontic context \(\odot_{\alpha}\phi\wedge\neg\odot_{\alpha}^{\mathcal{S}}\phi\) holds at \(\langle m,h\rangle\), which occurs iff at \(\langle m,h\rangle\) \(\alpha\) objectively ought to have seen to it that \(\phi\), but it is not the case that \(\alpha\) subjectively ought to have seen to it that \(\phi\).
_Level 4_: deontic context \(\neg\odot_{\alpha}\phi\wedge\neg\odot_{\alpha}^{\mathcal{S}}\phi\) holds at \(\langle m,h\rangle\), where, unless \(\alpha\) either objectively or subjectively ought to have seen to it that \(\neg\phi\) at \(\langle m,h\rangle\) (which would imply that a deontic context of the previous levels holds with respect to \(\neg\phi\)), neither bringing about \(\phi\) nor refraining from doing so elicits any interest in terms of blame-or-praise assignment.
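The four contexts can be made explicit as formula builders in the illustrative tuple syntax used earlier; the `ought`/`sought` tags are assumptions of mine (the checker sketched after Definition 3.3 does not evaluate them, since that would require encoding the optimality sets).

```python
def deontic_context(a, phi, obj, subj):
    """Levels 1-4: obj/subj toggle the objective and subjective ought
    conjuncts (True = ought holds, False = its negation holds)."""
    o = ("ought", a, phi)     # objective ought-to-do (illustrative tag)
    s = ("sought", a, phi)    # subjective ought-to-do (illustrative tag)
    neg = lambda f: ("not", f)
    return ("and", o if obj else neg(o), s if subj else neg(s))

# Level 1: deontic_context(a, phi, True, True), ...,
# Level 4: deontic_context(a, phi, False, False)
```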
For each of these deontic contexts, the _basic modes of \(\alpha\)'s active responsibility with respect to \(\phi\) at \(\langle m,h\rangle\)_ are displayed in Table 2, and the _basic modes of \(\alpha\)'s passive responsibility_ are obtained by substituting the term 'passive' for 'active' in such a table.
## 4 Axiomatization
This section is devoted to introducing proof systems for _IEAUST_. More precisely, I present two systems:
* A sound system for _IEAUST_, for which achieving a completeness result is still an open problem.
Table 2: Modes of \(\alpha\)'s active responsibility with respect to \(\phi\), recording, for each degree of attribution (\(Low_{A}\), \(Middle_{A}\), \(High_{A}\)), which of the causal-active, informational-active, and motivational-active sub-categories (for \(\phi\) under praiseworthiness, for \(\neg\phi\) under blameworthiness) apply.
* A sound and complete system for a technical extension of _IEAUST_ that I refer to as _bi-valued IEAUST_. Bi-valued _IEAUST_ was devised with the aim of having a completeness result for a logic that would be reasonably similar to the one presented in Section 3.
As for the first bullet point, a proof system for _IEAUST_ is defined as follows:
**Definition 4.1** (Proof system for _IEAUST_).: Let \(\Lambda_{R}\) be the proof system defined by the following axioms and rules of inference:
* _(Axioms)_ All classical tautologies from propositional logic; the **S5** schemata for \(\square\), \([\alpha]\), and \(K_{\alpha}\); the **KD** schemata for \(I_{\alpha}\); and the schemata given in Table 3.
* _(Rules of inference)_ Modus Ponens, Substitution, and Necessitation for all modal operators.
For a discussion of all these axioms and schemata, the reader is referred to [2, 19, 4]. An important result for \(\Lambda_{R}\), then, is the following proposition, whose proof is relegated to Appendix A.
**Proposition 4.2** (Soundness of \(\Lambda_{R}\)).: _The proof system \(\Lambda_{R}\) is sound with respect to the class of kibot-models._
Unfortunately, the question of whether \(\Lambda_{R}\) is complete with respect to the class of _kibot_-models is still an open problem. Now, in the search for a complete proof system for _IEAUST_, and following a strategy found in my joint works with Jan Broersen [2, 3], I tried to first prove completeness of \(\Lambda_{R}\) with respect to a class of more general models, that I refer to as _bi-valued kibot_-models (Definition 4.3 below). This strategy led to the need of dropping one of the schemata in \(\Lambda_{R}\): (_ConSO_). More precisely, if \(\Lambda^{\prime}_{R}\) is obtained from \(\Lambda_{R}\) by eliminating (_ConSO_) in Definition 4.1, then \(\Lambda^{\prime}_{R}\) turns out to be sound and complete with respect to the class of _bi-valued kibot_-models. The formal statements are included below.
**Definition 4.3** (Bi-valued _kibot_-frames & models).: \(\left\langle M,\sqsubset,\text{Ags},\mathbf{Choice},\{\sim_{\alpha}\}_{\alpha \in\text{Ags}},\tau,\mathbf{Value}_{\mathcal{O}},\mathbf{Value}_{\mathcal{S}}\right\rangle\) is called a _bi-valued kibot_-frame iff
* \(M,\sqsubset,\text{Ags},\mathbf{Choice}\), \(\{\sim_{\alpha}\}_{\alpha\in\text{Ags}}\), and \(\tau\) are defined just as in Definition 3.2.
* \(\mathbf{Value}_{\mathcal{O}}\) and \(\mathbf{Value}_{\mathcal{S}}\) are functions that independently assign to each history \(h\in H\) a real number.
\begin{table}
\begin{tabular}{|c|c|} \hline _Basic-stit-theory schemata:_ & _Schemata for knowledge:_ \\ \(\square\phi\rightarrow[\alpha]\phi\) \((SET)\) & \(K_{\alpha}\phi\rightarrow[\alpha]\phi\) \((OAC)\) \\ For distinct \(\alpha_{1},\ldots,\alpha_{m}\), & \(\Diamond K_{\alpha}\phi\to K_{\alpha}\Diamond\phi\) \((Unif\text{-}H)\) \\ \(\bigwedge_{1\leq i\leq m}\Diamond[\alpha_{i}]\phi_{i}\rightarrow\Diamond\left(\bigwedge_{1\leq i\leq m}[\alpha_{i}]\phi_{i}\right)\) \((IA)\) & \\ \hline _Schemata for objective ought-to-do's:_ & _Schemata for subjective ought-to-do's:_ \\ \(\odot_{\alpha}(\phi\rightarrow\psi)\rightarrow(\odot_{\alpha}\phi\rightarrow\odot_{\alpha}\psi)\) \((A1)\) & \(\odot^{\mathcal{S}}_{\alpha}(\phi\rightarrow\psi)\rightarrow(\odot^{\mathcal{S}}_{\alpha}\phi\rightarrow\odot^{\mathcal{S}}_{\alpha}\psi)\) \((A5)\) \\ \(\square\phi\rightarrow\odot_{\alpha}\phi\) \((A2)\) & \(\odot^{\mathcal{S}}_{\alpha}\phi\rightarrow\odot^{\mathcal{S}}_{\alpha}(K_{\alpha}\phi)\) \((A6)\) \\ \(\odot_{\alpha}\phi\rightarrow\square\odot_{\alpha}\phi\) \((A3)\) & \(K_{\alpha}\square\phi\rightarrow\odot^{\mathcal{S}}_{\alpha}\phi\) \((SuN)\) \\ \(\odot_{\alpha}\phi\rightarrow\odot_{\alpha}([\alpha]\phi)\) \((A4)\) & \(\odot^{\mathcal{S}}_{\alpha}\phi\rightarrow\Diamond K_{\alpha}\phi\) \((s.Oic)\) \\ \(\odot_{\alpha}\phi\rightarrow\Diamond[\alpha]\phi\) \((Oic)\) & \(\odot^{\mathcal{S}}_{\alpha}\phi\to K_{\alpha}\square\odot^{\mathcal{S}}_{\alpha}\phi\) \((s.Cl)\) \\ & \(\odot^{\mathcal{S}}_{\alpha}\phi\rightarrow\neg\odot_{\alpha}\neg\phi\) \((ConSO)\) \\ \hline _Schemata for intentionality:_ & \\ \(\square K_{\alpha}\phi\rightarrow I_{\alpha}\phi\) \((InN)\) & \\ \(I_{\alpha}\phi\rightarrow\square K_{\alpha}I_{\alpha}\phi\) \((KI)\) & \\ \hline \end{tabular}
\end{table}
Table 3: Axioms for the modalities' interactions.
A _bi-valued kibot_-model \(\mathcal{M}\), then, results from adding a valuation function \(\mathcal{V}\) to a bi-valued _kibot_-frame, where \(\mathcal{V}:P\to 2^{I(M\times H)}\) assigns to each atomic proposition of \(\mathcal{L}_{\text{R}}\) a set of indices (recall that \(P\) is the set of propositions in \(\mathcal{L}_{\text{R}}\)).
The two value functions in bi-valued _kibot_-frames allow us to redefine the dominance orderings so that they are independent from one another, something that proves useful in achieving a completeness result in the style of [2]. For \(\alpha\in Ags\) and \(m\in M\), the sets \(\mathbf{Optimal}_{\alpha}^{m}\) and \(\mathbf{SOptimal}_{\alpha}^{m}\) are then defined with respect to \(\mathbf{Value}_{\mathcal{O}}\) and \(\mathbf{Value}_{\mathcal{S}}\), respectively. |
2310.19096 | Circuit Width Estimation via Effect Typing and Linear Dependency (Long
Version) | Circuit description languages are a class of quantum programming languages in
which programs are classical and produce a description of a quantum
computation, in the form of a quantum circuit. Since these programs can
leverage all the expressive power of high-level classical languages, circuit
description languages have been successfully used to describe complex and
practical quantum algorithms, whose circuits, however, may involve many more
qubits and gate applications than current quantum architectures can actually
muster. In this paper, we present Proto-Quipper-R, a circuit description
language endowed with a linear dependent type-and-effect system capable of
deriving parametric upper bounds on the width of the circuits produced by a
program. We prove both the standard type safety results and that the resulting
resource analysis is correct with respect to a big-step operational semantics.
We also show that our approach is expressive enough to verify realistic quantum
algorithms. | Andrea Colledan, Ugo Dal Lago | 2023-10-29T18:10:31Z | http://arxiv.org/abs/2310.19096v2 | # Circuit Width Estimation via Effect Typing
###### Abstract
Circuit description languages are a class of quantum programming languages in which programs are classical and produce a _description_ of a quantum computation, in the form of a _quantum circuit_. Since these programs can leverage all the expressive power of high-level classical languages, circuit description languages have been successfully used to describe complex and practical quantum algorithms, whose circuits, however, may involve many more qubits and gate applications than current quantum architectures can actually muster. In this paper, we present Proto-Quipper-R, a circuit description language endowed with a linear dependent type-and-effect system capable of deriving parametric upper bounds on the width of the circuits produced by a program. We prove both the standard type safety results and that the resulting resource analysis is correct with respect to a big-step operational semantics. We also show that our approach is expressive enough to verify realistic quantum algorithms.
## 1 Introduction
With the promise of providing efficient algorithmic solutions to many problems [28, 32, 12], some of which are traditionally believed to be intractable [54], quantum computing is the subject of intense investigation by various research communities within computer science, not least that of programming language theory [43, 25, 51]. Various proposals for idioms capable of tapping into this new computing paradigm have appeared in the literature since the late 1990s. Some of these approaches turn out to be fundamentally new [52, 1, 49], while many others are strongly inspired by classical languages and traditional programming paradigms [61, 53, 48, 44].
One of the major obstacles to the practical adoption of quantum algorithmic solutions is the fact that, despite huge efforts by scientists and engineers alike, reliable quantum hardware, unlike its classical counterpart, does not seem to scale easily: although quantum architectures with up to a couple hundred qubits have recently seen the light [10, 39, 11], it is not yet clear whether the so-called quantum advantage [45] is a concrete possibility, given the tremendous challenges posed by the quantum decoherence problem [50].
This entails that software which makes use of quantum hardware must be designed with great care: whenever part of a computation has to be run on quantum hardware, the amount of resources it needs, and in particular the amount of qubits it uses, should be kept to a minimum. More generally, fine-grained control over the low-level aspects of the computation, something that we willingly abstract from in most cases when dealing with classical computations, should be exposed to the programmer in the quantum case. This, in turn, has led to the development and adoption of many domain-specific programming languages and libraries in which the programmer _explicitly_ manipulates qubits and quantum circuits, while still making use of all the features of a high-level classical programming language. This is the case for the Qiskit and Cirq libraries [18], but also for the Quipper language [26, 27].
At the fundamental level, Quipper is a circuit description language embedded in Haskell. Because of this, Quipper inherits all the expressiveness of the high-level, higher-order functional
programming language that is its host, but for the same reason it also lacks a formal semantics. Nonetheless, over the past few years, a number of calculi, collectively known as the Proto-Quipper language family, have been developed to formalize interesting fragments and extensions of Quipper and its type system [48, 46]. Extensions include, among others, dynamic lifting [36, 22, 9] and dependent types [23, 21], but resource analysis is still a rather unexplored research direction in the Proto-Quipper community [56].
The goal of this work is to show that type systems indeed enable the possibility of reasoning about the size of the circuits produced by a Proto-Quipper program. Specifically, we show how linear dependent types in the form given by Gaboardi and Dal Lago [13, 24, 15, 16] can be adapted to Proto-Quipper, allowing to derive upper bounds on circuit widths that are parametric on the size of the problem. This enables a form of static analysis of the resource consumption of circuit families and, consequently, of the quantum algorithms described in the language. Technically, a key ingredient of this analysis, besides linear dependency, is a novel form of effect typing in which the quantitative information coming from linear dependency informs the effect system and allows it to keep circuit widths under control.
The rest of the paper is organized as follows. Section 2 informally explores the problem of estimating the width of circuits produced by Quipper, while also introducing the language. Section 3 provides a more formal definition of the Proto-Quipper language. In particular, it gives an overview of the system of simple types due to Selinger and Rios [46], which however is not designed to reason about the size of circuits. We then move on to the most important technical contribution of this work, namely the linear dependent and effectful type system, which is introduced in Section 4 and proven to guarantee both type safety and a form of total correctness in Section 5. Section 6 is dedicated to an example of a practical application of our type and effect system, that is, a program that builds the Quantum Fourier Transform (QFT) circuit [12, 40] and which is verified to do so without any ancillary qubits.
## 2 An Overview on Circuit Width Estimation
Quipper allows programmers to describe quantum circuits in a high-level and elegant way, using both gate-by-gate and circuit transformation approaches. Quipper also supports hierarchical and parametric circuits, thus promoting a view in which circuits become first-class citizens. Quipper has been shown to be scalable, in the sense that it has been effectively used to describe complex quantum algorithms that easily translate to circuits involving trillions of gates applied to millions of qubits. The language allows the programmer to optimize the circuit, e.g. by using ancilla qubits for the sake of reducing the circuit depth, or recycling qubits that are no longer needed.
One feature that Quipper lacks is a methodology for _statically_ proving that important parameters -- such as the width -- of the underlying circuit are below certain limits, which of course can be parametric on the input size of the circuit. If this kind of analysis were available, then it would be possible to derive bounds on the number of qubits needed to solve any instance of a problem, and ultimately to know in advance how big of an instance can be _possibly_ solved given a fixed amount of qubits.
In order to illustrate the kind of scenario we are reasoning about, this section offers some simple examples of Quipper programs, showing in what sense we can think of capturing the quantitative information that we are interested in through types and effect systems and linear dependency. We proceed at a very high level for now, without any ambition of formality.
Let us start with the example of Figure 1. The Quipper function on the left builds the quantum circuit on the right: an (admittedly contrived) implementation of the quantum not operation. The dumbNot function implements negation using a controlled not gate and an ancillary qubit a, which is initialized and discarded within the body of the function. This qubit does not appear in the interface of the circuit, but it clearly adds to its overall width, which is 2.
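Since Figure 1 is rendered only as an image here, the following is a minimal sketch of how dumbNot could be written in Quipper; qinit, qnot, controlled and qdiscard are standard Quipper primitives, but the exact code in the figure may differ.

```haskell
import Quipper

-- A contrived NOT: flip q via a CNOT controlled by an ancilla set to |1>.
dumbNot :: Qubit -> Circ Qubit
dumbNot q = do
  a <- qinit True              -- ancilla: a second wire, so width is 2
  q <- qnot q `controlled` a   -- controlled-not, with a as the control
  qdiscard a                   -- the ancilla never leaves the function
  return q
```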
Consider now the higher-order function in Figure 2. This function takes as input a circuit building function f, an integer n and describes the circuit obtained by applying f's circuit n times to the input qubit q. It is easy to see that the width of the circuit produced in output by
iter dumbNot n is equal to 2, even though, overall, the number of qubits initialized during the computation is equal to \(n\). The point is that each ancilla is created only _after_ the previous one has been discarded, thus enabling a form of qubit recycling.
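Again as a sketch under the same assumptions, iter is plain recursion on the integer argument:

```haskell
-- Apply the circuit-building function f to q, n times in sequence.
iter :: (Qubit -> Circ Qubit) -> Int -> Qubit -> Circ Qubit
iter f n q
  | n <= 0    = return q
  | otherwise = f q >>= iter f (n - 1)
```

Each pass through dumbNot creates its ancilla only after the previous one is gone, so at most two wires are ever alive at once.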
Is it possible to statically analyze the width of the circuit produced in output by iter dumbNot n so as to conclude that it is constant and equal to 2? What techniques can we use? Certainly, the presence of higher-order types complicates the problem, already in itself non-trivial. The approach we propose in this paper is based on two ingredients. The first is the so-called effect typing [41]. In this context, the effect produced by the program is nothing more than the circuit and therefore it is natural to think of an effect system in which the width of such circuit, and only that, is exposed. Therefore, the arrow type \(A\to B\) should be decorated with an expression indicating the width of the circuit produced by the corresponding function when applied to an argument of type \(A\). Of course, the width of an individual circuit is a natural number, so it would make sense to annotate the arrow with a natural number. For technical reasons, however, it will also be necessary to keep track of another natural number, corresponding to the amount of wire resources that the function captures from the surrounding environment. This necessity stems from a need to keep close track of wires even in the presence of data hiding, and will be explained in further detail in Section 4.
Under these premises, the dumbNot function would receive type Qubit \(\rightarrow_{2,0}\) Qubit, meaning that it takes as input a qubit and produces a circuit of width 2 which outputs a qubit. Note that the second annotation is 0, since we do not capture anything from the function's environment, let alone a wire. Consequently, because iter iterates in sequence and because the ancillary qubit in dumbNot can be reused, the type of iter dumbNot n would also be Qubit \(\rightarrow_{2,0}\) Qubit.
Figure 1: An implementation of the quantum not operation using an ancilla.

Figure 2: A higher-order function which iterates a circuit-building function f on an input qubit q and the result of its application to the dumbNot function from Figure 1.

Figure 3: The hadamardN function implements a circuit family where circuits have width linear in their input size.

Let us now consider a slightly different situation, in which the width of the produced circuit is not constant, but rather increases proportionally to the circuit's input size. Figure 3 shows a Quipper function that returns a circuit on \(n\) qubits in which the Hadamard gate is applied to each qubit. This simple circuit represents the preprocessing phase of many quantum algorithms, including Deutsch-Jozsa [8] and Grover's [28]. It is obvious that this function works on inputs of arbitrary size, and therefore we can interpret it as a circuit family, parametrized on the length of the input list of qubits. This quantity, although certainly a natural number, is unknown statically and corresponds precisely to the width of the produced circuit. A question therefore arises as to whether the kind of effect typing we briefly hinted at in the previous paragraph is capable of dealing with such a function. Certainly, the expressions used to annotate arrows cannot be, like in the previous case, mere _constants_, as they clearly depend on the size of the input list. Is there a way to reflect this dependency in types? Certainly, one could go towards a fully-fledged notion of dependent types, like the ones proposed in [23], but a simpler approach, in the style of Dal Lago and Gaboardi's linear dependent types [13, 24, 15, 16] turns out to be enough for this purpose. This is precisely the route that we follow in this paper. In this approach, terms can indeed appear in types, but that is only true for a very restricted class of terms, disjoint from the ordinary ones, called _index terms_. As an example, the type of the function hadamardN above could become \(\mathsf{List}^{i}\,\mathsf{Qubit}\to_{i,0}\mathsf{List}^{i}\,\mathsf{Qubit}\), where \(i\) is an _index variable_. The meaning of the type would thus be that hadamardN takes as input any list of qubits of length \(i\) and produces a circuit of width at most \(i\) which outputs \(i\) qubits. The language of indices is better explained in Section 4, but in general we can say that indices are arithmetical expressions over natural numbers and index variables, and can thus express non-trivial dependencies between input sizes and corresponding circuit widths.
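A plausible Quipper rendering of the hadamardN function of Figure 3, once more as a sketch rather than the paper's exact code:

```haskell
-- Apply a Hadamard gate to every qubit in the list. All n wires are
-- alive at the same time, so the circuit's width equals the list length.
hadamardN :: [Qubit] -> Circ [Qubit]
hadamardN qs = mapM hadamard qs
```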
## 3 The Proto-Quipper Language
This section aims at introducing the Proto-Quipper family of calculi to the non-specialist, without any form of resource analysis. At its core, Proto-Quipper is a linear lambda calculus with bespoke constructs to build and manipulate circuits. Circuits are built as the side-effect of a computation, behind the scenes, but they can also appear and be manipulated as data in the language.
The types of Proto-Quipper are given in Figure 4. Speaking at a high level, we can say that Proto-Quipper types are generally linear. In particular, \(w\in\{\mathsf{Bit},\mathsf{Qubit}\}\) is a _wire type_ and is linear, while \(\multimap\) is the linear arrow constructor. A subset of types, called _parameter types_, represents the values of the language that are _not_ linear and that can therefore be copied. Any term of type \(A\) can be _lifted_ into a duplicable parameter of type \(!A\) if its type derivation does not require the use of linear resources.
Figure 4: Proto-Quipper types.

Figure 5: Proto-Quipper syntax.

Now, let us informally dissect the language as presented in Figure 5, starting with the language of values. The main constructs of interest are _labels_ and _boxed circuits_. A label \(\ell\) represents a reference to a free wire of the underlying circuit being built and is attributed a wire type \(w\in\{\mathsf{Bit},\mathsf{Qubit}\}\). Labels have to be treated linearly due to the no-cloning property of quantum states [40]. Arbitrary structures of labels form a subset of values which we call _wire bundles_ and which are given _bundle types_. On the other hand, a boxed circuit \((\bar{\ell},\mathcal{C},\bar{k})\) represents a circuit object \(\mathcal{C}\) as a datum within the language, together with its input and output interfaces, given as wire bundles \(\bar{\ell}\) and \(\bar{k}\). Such a value is given type \(\mathsf{Circ}(T,U)\), where bundle types \(T\) and \(U\) are the input and output types of the circuit, respectively. Boxed circuits can be copied, manipulated by primitive functions and, more importantly, applied to the underlying circuit. This last operation, which lies at the core of Proto-Quipper's circuit-building capabilities, is possible thanks to the apply operator. This operator takes as first argument a boxed circuit \((\bar{\ell},\mathcal{C},\bar{k})\) and appends \(\mathcal{C}\) to the underlying circuit \(\mathcal{D}\). How does apply know _where_ exactly in \(\mathcal{D}\) to apply \(\mathcal{C}\)? Thanks to a second argument: a bundle of wires \(\bar{t}\) coming from the free output wires of \(\mathcal{D}\), which identifies the exact location where \(\mathcal{C}\) is supposed to be appended.
The language is expected to be endowed with constant boxed circuits corresponding to fundamental gates (e.g. Hadamard, controlled not, etc.), but the programmer can also introduce their own boxed circuits via the box operator. Intuitively, box takes as input a circuit-building function and evaluates it in a sandboxed environment, on dummy arguments, in a way that leaves the underlying circuit unchanged. This evaluation produces a standalone circuit \(\mathcal{C}\), which is then returned by the box operator as a boxed circuit \((\bar{\ell},\mathcal{C},\bar{k})\).
Figure 6 shows the Proto-Quipper term corresponding to the Quipper program in Figure 1, as an example of the use of the language. Note that let\(\langle x,y\rangle=M\) in \(N\) is syntactic sugar for let\(z=M\) in let\(\langle x,y\rangle=z\) in \(N\). The _dumbNot_ function is given type Qubit\(\multimap\) Qubit and builds the circuit shown in Figure 1 when applied to an argument.
On the classical side of things, it is worth mentioning that Proto-Quipper as presented in this section does _not_ support general recursion. A limited form of recursion on lists is instead provided via a primitive fold constructor, which takes as argument a (copiable) step function of type \(\mathsf{l}((B\otimes A)\multimap B)\), an initial value of type \(B\), and constructs a function of type \(\mathsf{List}\,A\multimap B\) implementing the fold of the step function over the input list. Although this workaround is not enough to recover the full power of general recursion, it appears that it is enough to describe many quantum algorithms. Figure 7 shows an example of the use of fold to reverse a list. Note that \(\lambda\langle x,y\rangle_{A\otimes B}.M\) is syntactic sugar for \(\lambda z_{A\otimes B}.\mathsf{let}\,\langle x,y\rangle=z\) in \(M\).
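For intuition about what fold can express, here is the classical Haskell analogue of the list reversal of Figure 7 (our illustration, not the paper's syntax): the accumulator starts empty and each step conses the current element onto it.

```haskell
-- Reverse a list with a fold, mirroring Figure 7.
rev :: [a] -> [a]
rev = foldl (\acc x -> x : acc) []
```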
Figure 6: An example Proto-Quipper program. \(\mathsf{INIT}_{1},\mathsf{CNOT}\) and DISCARD are primitive boxed circuits implementing the corresponding elementary operations.

Figure 7: Function _rev_ reverses a list of qubits.

To conclude this section, we just remark how all of the Quipper programs shown in Section 2 can be encoded in Proto-Quipper. However, Proto-Quipper's system of simple types is unable to tell us anything about the resource consumption of these programs. Of course, one could run hadamardN on a concrete input and examine the size of the circuit produced at run-time, but this amounts to _testing_, not _verifying_ the program, and lacks the qualities of staticity and parametricity that we seek.
## 4 Incepting Linear Dependency and Effect Typing
We are now ready to expand on the informal definition of the Proto-Quipper language given in Section 3, to reach a formal definition of Proto-Quipper-R: a linearly and dependently typed language whose type system supports the derivation of upper bounds on the width of the circuits produced by programs.
### Types and Syntax of Proto-Quipper-R
The types and syntax of Proto-Quipper-R are given in Figure 8. As we mentioned, one of the key ingredients of our type system are the index terms which we annotate standard Proto-Quipper types with. These indices provide quantitative information about the elements of the resulting types, in a manner reminiscent of refinement types [19, 47]. In our case, we are primarily concerned with circuit width, which means that the natural starting point of our extension of Proto-Quipper is precisely the circuit type \(\mathsf{Circ}(T,U)\): \(\mathsf{Circ}^{I}(T,U)\) has elements the boxed circuits of input type \(T\), output type \(U\), _and width bounded by \(I\)_. Term \(I\) is precisely what we call an index, that is, an arithmetical expression denoting a natural number. Looking at the grammar for indices, their interpretation is fairly straightforward: \(n\) is a natural number, \(i\) is an index variable, \(I+J,I\times J\) and \(\mathsf{max}(I,J)\) have their intuitive meaning, \(I-J\) denotes _natural_ subtraction, and \(\mathsf{max}_{i<I}\,J\) is the maximum for \(i\) going from \(0\) (included) to \(I\) (excluded) of \(J\), where \(i\) can occur free in \(J\).
Let \(\Theta\) be a set of index variable names, which we call an _index context_. An index \(I\) is _well-formed under context_\(\Theta\), and we write \(\Theta\vdash I\), when all its free index variables are in \(\Theta\). Figure 9 provides a more formal interpretation of well-formed indices.
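Since Figure 9 is rendered only as an image, the following Haskell sketch spells out the intended interpretation; the datatype, the names, and the convention that a bounded maximum over an empty range is 0 are our assumptions.

```haskell
-- The index language of Figure 8, interpreted in an environment mapping
-- index variables to natural numbers.
data Index = Nat Int | Var String
           | Add Index Index | Mul Index Index
           | Sub Index Index            -- natural (truncated) subtraction
           | Max Index Index
           | BMax String Index Index    -- max_{i < I} J, binding i in J

eval :: [(String, Int)] -> Index -> Int
eval _   (Nat n)      = n
eval env (Var x)      = maybe (error ("unbound " ++ x)) id (lookup x env)
eval env (Add i j)    = eval env i + eval env j
eval env (Mul i j)    = eval env i * eval env j
eval env (Sub i j)    = max 0 (eval env i - eval env j)
eval env (Max i j)    = max (eval env i) (eval env j)
eval env (BMax x i j) =
  maximum (0 : [eval ((x, n) : env) j | n <- [0 .. eval env i - 1]])
```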
While the index in a circuit type denotes an upper bound, the index in a type of the form \(\mathsf{List}^{I}\,A\) denotes the _exact_ length of the lists of that type. While this refinement is quite restrictive in a generic scenario, it allows us to include lists of labels among wire bundles, since they are effectively isomorphic to finite tensors of labels and therefore represent wire bundles of known size. Lastly, as we anticipated in Section 2, an arrow type \(A\multimap_{I,J}\ B\) is annotated with _two_ indices: \(I\) is an upper bound to the width of the circuit built by the function once it is applied to an argument of type \(A\), while \(J\) describes the exact number of wire resources captured in the function's closure. The utility of this last annotation will be clearer in Section 4.3.
The languages for terms and values are almost the same as in Proto-Quipper, with the minor difference that the fold operator now binds the index variable name \(i\) within the scope of its first argument. This variable appears locally in the type of the step function, in such a way as to allow each iteration of the fold to contribute to the overall circuit width in a _different_ way.
Figure 8: Proto-Quipper-R syntax and types.
### A Formal Language for Circuits
The type system of Proto-Quipper-R is designed to reason about the width of circuits. Therefore, before we formally introduce the type system in Section 4.3, we ought to introduce circuits themselves in a formal way. So far, we have only spoken of circuits at a very high and intuitive level, and we have represented them only graphically. Looking at the circuits in Section 2, what do they have in common? At the fundamental level, they are made up of elementary operations applied to specific wires. Of course, the order of these operations matters, as does the order of wires that they are applied to (e.g. a controlled not operation does not have the same semantics if we switch the target and control qubits).
In the existing literature on Proto-Quipper, circuits are usually interpreted as morphisms in a symmetric monoidal category [46], but this approach makes it particularly hard to reason about their intensional properties, such as width. For this reason, we opt for a _concrete_ model of wires and circuits, rather than an abstract one.
Luckily, we already have a datatype modeling ordered structures of wires, that is, the wire bundles that we introduced in the previous sections. We use them as the foundation upon which we build circuits.
Figure 9: Interpretation of well-formed indices.

Figure 10: CRL syntax and types.

That being said, Figure 10 introduces the Circuit Representation Language (CRL) which we use as the target for circuit building in Proto-Quipper-R. Wire bundles are exactly as in Figure 8 and represent arbitrary structures of wires, while circuits themselves are defined very simply as a sequence of elementary operations applied to said structures. We call \(Q\) a _label context_ and define it as a partial mapping from label names to wire types. We use label contexts as a means to keep track of the set of labels available at any point during a computation, alongside their respective types. Circuit \(id_{Q}\) represents the identity circuit taking as input the labels in \(Q\) and returning them unchanged, while \(\mathcal{C};g(\bar{\ell})\rightarrow\bar{k}\) represents the application of the elementary operation \(g\) to the wires identified by \(\bar{\ell}\) among the outputs of \(\mathcal{C}\). Operation \(g\) outputs the wire bundle \(\bar{k}\), whose labels become part of the outputs of the circuit as a whole. Note that an "elementary operation" is usually the application of a gate, but it could also be a measurement, or the initialization or discarding of a wire. Although semantically very different, from the perspective of circuit building these operations are just elementary building blocks in the construction of a more complex structure, and it makes no sense to distinguish between them syntactically. Circuits are amenable to a form of concatenation, defined as follows.
**Definition 1** (Circuit Concatenation).: _We define the concatenation of \(\mathsf{CRL}\) circuits \(\mathcal{C}\) and \(\mathcal{D}\), written \(\mathcal{C}::\mathcal{D}\), as follows_
\[\mathcal{C}::id_{Q} =\mathcal{C} \tag{1}\] \[\mathcal{C}::(\mathcal{D};g(\bar{\ell})\to\bar{k}) =(\mathcal{C}::\mathcal{D});g(\bar{\ell})\to\bar{k} \tag{2}\]
#### 4.2.1 Circuit Typing
Naturally, not all circuits built from the \(\mathsf{CRL}\) grammar make sense. For example \(id_{(\ell:\mathsf{Qubit})};H(k)\to k\) and \(id_{(\ell:\mathsf{Qubit})};\mathit{CNOT}(\langle\ell,\ell\rangle)\to\langle k,t\rangle\) are both syntactically correct, but the first applies a gate to a non-existing wire, while the second violates the no-cloning theorem by duplicating \(\ell\). To rule out such ill-formed circuits, we employ a rudimentary type system for circuits which allows us to derive judgments of the form \(\mathcal{C}:Q\to L\), which informally read "circuit \(\mathcal{C}\) is well-typed with input label context \(Q\) and output label context \(L\)".
The typing rules for \(\mathsf{CRL}\) are given in Figure 11. We call \(Q\vdash_{w}\bar{\ell}:T\) a _wire judgment_, and we use it to give a structured type \(T\) to an otherwise unordered label context \(Q\), by means of a wire bundle \(\bar{\ell}\). Most rules are straightforward, except those for lists, which rely on a judgment of the form \(\vDash I=J\). This is to be read as a semantic judgment asserting that \(I\) and \(J\) are closed and equal when interpreted as natural numbers. Within the typing rules for lists, this judgment reflects the idea that there are many ways to syntactically represent the length of a list. For example, \(\mathsf{nil}\) can be given type \(\mathsf{List}^{0}\,T\), but also \(\mathsf{List}^{1-1}\,T\) or \(\mathsf{List}^{0\times 5}\,T\). This kind of flexibility might seem unwarranted for such a simple language, but it is useful to effectively interface \(\mathsf{CRL}\) and the more complex \(\mathsf{Proto-Quipper-R}\). Speaking of the actual circuit judgments, the _seq_ rule tells us that the application of an elementary operation \(g\) is well-typed whenever \(g\) only acts on labels occurring in the outputs of \(\mathcal{C}\) (those in \(\bar{\ell}\), or equivalently in \(H\)), produces in output labels that do not clash with the remaining outputs of \(\mathcal{C}\) (since \(L,K\) denotes the union of two label contexts with disjoint domains) and has the right signature. This last requirement is expressed as \(g\in\mathscr{G}(T,U)\), where \(\mathscr{G}(T,U)\) is the subset of elementary operations that can be applied to an input of type \(T\) to obtain an output of type \(U\). For example, the Hadamard gate, which acts on a single qubit, is in \(\mathscr{G}(\mathsf{Qubit},\mathsf{Qubit})\), the controlled not gate is in \(\mathscr{G}(\mathsf{Qubit}\otimes\mathsf{Qubit},\mathsf{Qubit}\otimes\mathsf{Qubit})\) and the single-qubit measurement is in \(\mathscr{G}(\mathsf{Qubit},\mathsf{Bit})\).

Figure 11: \(\mathsf{CRL}\) type system.

#### 4.2.2 Circuit Width

Among the many properties of circuits, we are interested in width, so we conclude this section by giving a formal status to this quantity. As we saw in Section 2, when we initialize a new wire, we can reuse previously discarded wires, in such a way that the width of a circuit is not always equal to the number of wires that are initialized. We formalize this intuition in the following definition.
**Definition 2** (Circuit Width).: _We define the width of a CRL circuit \(\mathcal{C}\), written \(\operatorname{width}(\mathcal{C})\), as follows_
\[\operatorname{width}(id_{Q}) =|Q| \tag{3}\] \[\operatorname{width}(\mathcal{C};g(\bar{\ell})\to\bar{k}) =\operatorname{width}(\mathcal{C})+\max(0,\operatorname{new}(g)- \operatorname{discarded}(\mathcal{C})) \tag{4}\]
_where \(\operatorname{discarded}(\mathcal{C})=\operatorname{width}(\mathcal{C})- \operatorname{outputs}(\mathcal{C})\) and_
\[\operatorname{outputs}(id_{Q}) =|Q| \tag{5}\] \[\operatorname{outputs}(\mathcal{C};g(\bar{\ell})\to\bar{k}) =\operatorname{outputs}(\mathcal{C})+\operatorname{new}(g) \tag{6}\]
In the definition above, \(|Q|\) is the number of labels in \(Q\) and \(\operatorname{new}(g)\) represents the net number of new wires initialized by \(g\). If \(g\) outputs fewer wires than it consumes, then \(\operatorname{new}(g)\) is negative. The idea is that whenever we require a new wire in our computation, first we try to reuse a previously discarded wire, in which case the initialization does not add to the total width of the circuit (\(\operatorname{new}(g)\leq\operatorname{discarded}(\mathcal{C})\)), and _only if we cannot do so_ do we actually create a new wire, increasing the overall width of the circuit (\(\operatorname{new}(g)>\operatorname{discarded}(\mathcal{C})\)).
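To make Definitions 1 and 2 concrete, here is a small executable sketch in Haskell; the representation (tracking only wire counts, not labels) and all names are ours:

```haskell
-- Every elementary operation is summarized by the net number of wires
-- it initializes (negative when it discards more than it creates).
data Op = Op { opName :: String, newWires :: Int }

data Circuit = Id Int          -- id_Q, storing only |Q|
             | Seq Circuit Op  -- C; g(l) -> k

-- Definition 1: concatenation replays D's operations after C.
cat :: Circuit -> Circuit -> Circuit
cat c (Id _)    = c
cat c (Seq d g) = Seq (cat c d) g

outputs :: Circuit -> Int                 -- equations (5)-(6)
outputs (Id n)    = n
outputs (Seq c g) = outputs c + newWires g

width :: Circuit -> Int                   -- Definition 2
width (Id n)    = n
width (Seq c g) = width c + max 0 (newWires g - discarded c)
  where discarded d = width d - outputs d

-- Example: the circuit of Figure 1 on one input qubit has width 2.
dumbNotC :: Circuit
dumbNotC = Id 1 `Seq` Op "init" 1 `Seq` Op "cnot" 0 `Seq` Op "discard" (-1)
```

Evaluating width dumbNotC yields 2: the cnot step reuses no wires, and the final discard only increases the pool of reusable wires.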
Now that we have a formal definition of circuit types and width, we can state a fundamental property of the concatenation of well-typed circuits, which is illustrated in Figure 12 and proven in Theorem 1. We use this result pervasively in proving the correctness of Proto-Quipper-R in Section 5.
**Theorem 1** (Crl).: _Given \(\mathcal{C}:Q\to L,H\) and \(\mathcal{D}:H\to K\) such that the labels shared by \(\mathcal{C}\) and \(\mathcal{D}\) are all and only those in \(H\), we have_
1. \(\mathcal{C}::\mathcal{D}:Q\to L,K,\)__
2. \(\operatorname{width}(\mathcal{C}::\mathcal{D})\leq\max(\operatorname{width}( \mathcal{C}),\operatorname{width}(\mathcal{D})+|L|).\)__
Proof.: By induction on the derivation of \(\mathcal{D}:H\to K\).
Figure 12: The concatenation of well-typed circuits \(\mathcal{C}\) and \(\mathcal{D}\).
### Typing Programs
Going back to Proto-Quipper-R, we have already seen how the standard Proto-Quipper types are refined with quantitative information. However, decorating types is not enough for the purposes of width estimation. Recall that, in general, a Proto-Quipper program produces a circuit as a _side effect_ of its evaluation. If we want to reason about the width of said circuit, it is not enough to rely on a regular linear type system, even a dependent one. Rather, we have to introduce the second ingredient of our analysis and turn to a _type-and-effect system_ [41], revolving around a type judgment of the form
\[\Theta;\Gamma;Q\vdash_{c}M:A;I, \tag{7}\]
which intuitively reads "for all values of the index variables in \(\Theta\), under typing context \(\Gamma\) and label context \(Q\), term \(M\) has type \(A\) and produces a circuit of width at most \(I\)". Therefore, the index variables in \(\Theta\) are universally quantified in the rest of the judgment. Context \(\Gamma\) is a typing context for parameter and linear variables alike. When a typing context contains exclusively parameter variables, we write it as \(\Phi\). In this judgment, index \(I\) plays the role of an _effect annotation_, describing a relevant aspect of the side effect produced by the evaluation of \(M\) (i.e. the width of the produced circuit). The attentive reader might wonder why this annotation consists only of one index, whereas when we discussed arrow types in previous sections we needed two. The reason is that the second index, which we use to keep track of the number of wires captured by a function, is redundant in a typing judgment where the same quantity can be inferred directly from the environments \(\Gamma\) and \(Q\). A similar typing judgment is introduced for values, which are effect-less:
\[\Theta;\Gamma;Q\vdash_{v}V:A. \tag{8}\]
The rules for deriving typing judgments are those in Figure 13, where \(\Gamma_{1},\Gamma_{2}\) and \(Q_{1},Q_{2}\) denote the union of two contexts with disjoint domains. The well-formedness judgment \(\Theta\vdash I\) is extended to types as shown in Figure 14 and then lifted to typing contexts in the natural way. Among interesting typing rules, we can see how the _circ_ rule bridges between CRL and Proto-Quipper-R. A boxed circuit \((\bar{\ell},\mathcal{C},\bar{k})\) is well typed with type \(\mathsf{Circ}^{I}(T,U)\) when \(\mathcal{C}\) is no wider than the quantity denoted by \(I\), \(\mathcal{C}:Q\to L\) and \(\bar{\ell},\bar{k}\) contain all and only the labels in \(Q\) and \(L\), respectively, acting as a language-level interface to \(\mathcal{C}\).
Figure 13: Proto-Quipper-R type system.
The two main constructs that interact with circuits are apply and box. The _apply_ rule is the foremost place where effects enter the type derivation:
\[\text{\emph{apply}}\,\frac{\Theta;\Phi,\Gamma_{1};Q_{1}\vdash_{v}V:\mathsf{ Circ}^{I}(T,U)\ \ \ \ \ \Theta;\Phi,\Gamma_{2};Q_{2}\vdash_{v}W:T}{\Theta;\Phi,\Gamma_{1}, \Gamma_{2};Q_{1},Q_{2}\vdash_{c}\text{\emph{apply}}(V,W):U;I}\]
Since \(V\) represents some boxed circuit of width at most \(I\), its application to an appropriate wire bundle \(W\) produces exactly a circuit of width at most \(I\). The _box_ rule, on the other hand, works more or less in the opposite direction:
\[\text{\emph{box}}\,\frac{\Theta;\Phi;\emptyset\vdash_{v}V:\mathsf{l}(T\multimap _{I,J}U)}{\Theta;\Phi;\emptyset\vdash_{c}\text{\emph{box}}_{T}V:\mathsf{Circ} ^{I}(T,U);0}\]
If \(V\) is a circuit building function that, once applied to an input of type \(T\), would build a circuit of output type \(U\) and width at most \(I\), then boxing it means turning it into a boxed circuit with the same characteristics. Note that the _box_ rule requires that the typing context be devoid of linear variables. This reflects the idea that \(V\) is meant to be executed in complete isolation, to build a standalone, replicable circuit, and therefore it should not capture any linear resource (e.g. a label) from the surrounding environment.
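As a concrete instance (our own, combining the _box_ rule with the type given to dumbNot in Section 2), boxing the lifted dumbNot yields a boxed circuit of width at most 2:

\[\textit{box}\,\frac{\emptyset;\emptyset;\emptyset\vdash_{v}\mathsf{lift}\ \textit{dumbNot}:\,!(\mathsf{Qubit}\multimap_{2,0}\mathsf{Qubit})}{\emptyset;\emptyset;\emptyset\vdash_{c}\mathsf{box}_{\mathsf{Qubit}}(\mathsf{lift}\ \textit{dumbNot}):\mathsf{Circ}^{2}(\mathsf{Qubit},\mathsf{Qubit});0}\]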
#### 4.3.1 Wire Count
Notice that many rules rely on an operator written \(\#(\cdot)\), which we call the _wire count_ operator. Intuitively, this operator returns the number of wire resources (in our case, bits or qubits) represented by a type or context. To understand how this is important, consider the _return_ rule:
\[\text{\emph{return}}\,\frac{\Theta;\Gamma;Q\vdash_{v}V:A}{\Theta;\Gamma;Q \vdash_{c}\text{\emph{return}}\,V:A;\#(\Gamma;Q)}\]
The return operator turns a value \(V\) into a trivial computation that evaluates immediately to \(V\), and therefore it would be tempting to give it an effect annotation of \(0\). However, \(V\) is not necessarily a closed value. In fact, it might very well contain many bits and qubits, coming both from the typing context \(\Gamma\) and the label context \(Q\). Although nothing happens to these bits and qubits, they still correspond to wires in the underlying circuit, and these wires have a width which must be accounted for in the judgment for the otherwise trivial computation. The _return_ rule therefore produces an effect annotation of the form \(\#(\Gamma;Q)\), which corresponds exactly to this quantity. A formal description of the wire count operator on types is given in the following Definition 3.
**Definition 3** (Wire Count).: _We define the wire count of a type \(A\), written \(\#(A)\), as a function \(\#(\cdot):\text{TYPE}\to\text{INDEX}\) such that_
\[\#(\mathbbm{1})=\#(!A)=\#(\mathsf{Circ}^{I}(T,U))=0\qquad\#(w)=1\qquad\#(A\otimes B)=\#(A)+\#(B)\]
\[\#(A\multimap_{I,J}B)=J\qquad\#(\mathsf{List}^{I}\,A)=I\times\#(A)\]
This definition is lifted to typing and label contexts in the natural way. Note that, for any label context \(Q\), we have \(\#(Q)=|Q|\). Annotation \(\#(\Gamma;Q)\) is then shorthand for \(\#(\Gamma)+|Q|\). This definition is fairly straightforward, except for the arrow case. By itself, an arrow type does not give us any information about the amount of qubits or bits captured in the corresponding closure. This is precisely where the second index \(J\), which keeps track exactly of this quantity, comes into play. This annotation is introduced by the _abs_ rule and allows our analysis to circumvent data hiding.
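A sketch of the wire count of Definition 3 in Haskell, with indices already instantiated to concrete naturals; the datatype is our own rendering of the types of Figure 8.

```haskell
data Ty = Unit | Wire                 -- 1, and Bit/Qubit
        | Bang Ty                     -- !A
        | CircT Int Ty Ty             -- Circ^I(T, U)
        | Tensor Ty Ty                -- A (x) B
        | Arrow Ty Ty Int Int         -- A -o_{I,J} B, carrying I and J
        | ListT Int Ty                -- List^I A

wireCount :: Ty -> Int
wireCount Unit            = 0
wireCount Wire            = 1
wireCount (Bang _)        = 0
wireCount (CircT _ _ _)   = 0
wireCount (Tensor a b)    = wireCount a + wireCount b
wireCount (Arrow _ _ _ j) = j              -- captured wires, read off J
wireCount (ListT i a)     = i * wireCount a
```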
The _let_ rule is another rule in which wire counts are essential:
\[\textit{let}\,\frac{\Theta;\Phi,\Gamma_{1};Q_{1}\vdash_{c}M:A;I\qquad\Theta;\Phi,\Gamma_{2},x:A;Q_{2}\vdash_{c}N:B;J}{\Theta;\Phi,\Gamma_{1},\Gamma_{2};Q_{1},Q_{2}\vdash_{c}\mathsf{let}\ x=M\ \mathsf{in}\ N:B;\mathsf{max}(I+\#(\Gamma_{2};Q_{2}),J)}\]
The two terms \(M\) and \(N\) build the circuits \(\mathcal{C}_{M}\) and \(\mathcal{C}_{N}\), whose widths are bounded by \(I\) and \(J\), respectively. Once again, it might be tempting to conclude that the overall circuit built by the let construct has width bounded by \(\mathsf{max}(I,J)\), but this fails to take into account the fact that while \(M\) is building \(\mathcal{C}_{M}\) starting from the wires contained in \(\Gamma_{1}\) and \(Q_{1}\), we must keep aside the wires contained in \(\Gamma_{2}\) and \(Q_{2}\), which will be used by \(N\) to build \(\mathcal{C}_{N}\). These wires must flow alongside \(\mathcal{C}_{M}\) and their width, i.e. \(\#(\Gamma_{2};Q_{2})\), adds up to the total width of the left-hand side of the let construct, leading to an overall width upper bound of \(\mathsf{max}(I+\#(\Gamma_{2};Q_{2}),J)\). This situation is better illustrated in Figure 15.
The last rule that makes substantial use of wire counts is _fold_, arguably the most complex rule in the system:
\[\textit{fold}\,\frac{\begin{array}{c}\Theta;\Phi,\Gamma;Q\vdash_{v}W:B\{0/i\}\qquad\Theta,i;\Phi;\emptyset\vdash_{v}V:\,!((B\otimes A)\multimap_{J,J^{\prime}}B\{i+1/i\})\\ \Theta\vdash I\qquad\Theta\vdash A\end{array}}{\Theta;\Phi,\Gamma;Q\vdash_{v}\mathsf{fold}_{i}\,V\,W:\mathsf{List}^{I}\,A\multimap_{E,\#(\Gamma;Q)}B\{I/i\}}\]

where \(E=\mathsf{max}(\#(\Gamma;Q),\mathsf{max}_{i<I}\,J+(I-1-i)\times\#(A))\) and \(B\{I/i\}\) denotes the substitution of the index \(I\) for the index variable \(i\), in the same way as \(M[V/x]\) denotes the substitution of the value \(V\) for the variable \(x\). Intuitively, if the accumulator has initially type \(B\{0/i\}\) and each application of the step function increases \(i\) by one, then when we fold over a list of length \(I\) we get an output of type \(B\{I/i\}\). Index \(E\) is the upper bound to the width of the overall circuit built by the fold: if the input list is empty, then the width of the circuit is just the number of wires contained in the initial accumulator, that is, \(\#(\Gamma;Q)\). If the input list is non-empty, on the other hand, things get slightly more complicated. At each step \(i\), the step function builds a circuit \(\mathcal{C}_{i}\) of width bounded by \(J\), where \(J\) might depend on \(i\). This circuit takes as input all the wires in the accumulator, as well as the wires contained in the first element of the input list, which are \(\#(A)\). The wires contained in the remaining \(I-1-i\) elements have to flow alongside \(\mathcal{C}_{i}\), giving a width upper bound of \(J+(I-1-i)\times\#(A)\) at each step \(i\). The overall width upper bound is then the maximum for \(i\) going from \(0\) to \(I-1\) of this quantity, i.e. precisely \(\mathsf{max}_{i<I}\,J+(I-1-i)\times\#(A)\). Once again, a graphical representation of this scenario is given in Figure 16.
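As a sanity check (our own instantiation, not an example from the paper), consider typing hadamardN from Section 2 as a fold: \(A=\mathsf{Qubit}\), so \(\#(A)=1\), the initial accumulator is empty, and step \(i\) can be given the width bound \(J=i+1\) (the \(i\) qubits already processed plus the one currently being acted upon). Then

\[E=\mathsf{max}\big(0,\ \mathsf{max}_{i<I}\,(i+1)+(I-1-i)\times 1\big)=\mathsf{max}\big(0,\ \mathsf{max}_{i<I}\,I\big)=I,\]

recovering the linear bound \(\mathsf{List}^{I}\,\mathsf{Qubit}\multimap_{I,0}\mathsf{List}^{I}\,\mathsf{Qubit}\) announced in Section 2.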
Figure 15: The shape of a circuit built by a let construct.

Figure 16: The shape of a circuit built by a fold applied to an input list of type \(\mathsf{List}^{I}\,A\).

#### 4.3.2 Subtyping

Notice that Proto-Quipper-R's type system includes two rules for subtyping, which are effectively the same rule for terms and values, respectively: _csub_ and _vsub_. We mentioned that our type system resembles a refinement type system, and all such systems induce a subtyping relation between types, where \(A\) is a subtype of \(B\) whenever the former is "at least as refined" as the latter. In our case, a subtyping judgment such as \(\Theta\vdash_{s}A<:B\) means that for all natural values of the index variables in \(\Theta\), \(A\) is a subtype of \(B\).

We derive this kind of judgment by the rules in Figure 17. Note that \(\Theta\vdash_{s}A<:>B\) is shorthand for "\(\Theta\vdash_{s}A<:B\) and \(\Theta\vdash_{s}B<:A\)". Subtyping relies in turn on a judgment of the form \(\Theta\vDash I\leq J\), which is a generalization of the semantic judgment that we used in the CRL type system in Section 4.2. Such a judgment asserts that for all values of the index variables in \(\Theta\), \(I\) is less than or equal to \(J\). More formally, the meaning of \(\Theta\vDash I\leq J\) is that \(\Theta\vdash I\), \(\Theta\vdash J\) and for all \(n_{1},\ldots,n_{|\Theta|}\): \([\![\Theta\vdash I]\!](n_{1},\ldots,n_{|\Theta|})\leq[\![\Theta\vdash J]\!](n_{1},\ldots,n_{|\Theta|})\). Consequently, \(\vDash I=J\) is just shorthand for \(\emptyset\vDash I=J\), which in turn is shorthand for "\(\emptyset\vDash I\leq J\) and \(\emptyset\vDash J\leq I\)". We purposefully leave the decision procedure for this kind of judgment unspecified, with the prospect that, in a practical scenario, it could be delegated to an SMT solver [7].
### Operational Semantics
Operationally speaking, it does not make sense, in the Proto-Quipper languages, to speak of the semantics of a term _in isolation_: a term is always evaluated in the context of an underlying circuit that supplies all of the term's free labels. We therefore define the operational semantics of Proto-Quipper-R as a big-step evaluation relation \(\Downarrow\) on _configurations_, i.e. circuits paired with either terms or values. Intuitively, \((\mathcal{C},M)\Downarrow(\mathcal{D},V)\) means that \(M\) evaluates to \(V\) and updates \(\mathcal{C}\) to \(\mathcal{D}\) as a side effect.
Figure 17: Proto-Quipper-R subtyping rules.

Figure 18: Proto-Quipper-R big-step operational semantics.

The rules for evaluating configurations are given in Figure 18, where \(\mathcal{C},\mathcal{D}\) and \(\mathcal{E}\) are circuits, \(M\) and \(N\) are terms, while \(V,W,X,Y\) and \(Z\) are values. Most evaluation rules are straightforward, with the exception perhaps of _apply_, _box_ and _fold-step_. Being the fundamental block of circuit-building, the semantics of apply lies almost entirely in the way it updates the underlying circuit:
\[\mbox{\scriptsize\emph{apply}}\,\frac{(\mathcal{E},\bar{q})=\mbox{\scriptsize append }(\mathcal{C},\bar{t},(\bar{\ell},\mathcal{D},\bar{k}))}{(\mathcal{C},\mbox{ \scriptsize\emph{apply}}((\bar{\ell},\mathcal{D},\bar{k}),\bar{t}))\Downarrow( \mathcal{E},\bar{q})}\]
The concatenation of the underlying circuit \(\mathcal{C}\) and the applicand \(\mathcal{D}\) is delegated entirely to the append function, which is defined as follows.
**Definition 4** (append).: _We define the append of \((\bar{\ell},\mathcal{D},\bar{k})\) to \(\mathcal{C}\) on \(\bar{t}\), written \(\mbox{\scriptsize append}(\mathcal{C},\bar{t},(\bar{\ell},\mathcal{D},\bar{k}))\), as the function that performs the following steps:_
1. _Finds_ \((\bar{t},\mathcal{D}^{\prime},\bar{q})\) _equivalent to_ \((\bar{\ell},\mathcal{D},\bar{k})\) _such that the labels shared by_ \(\mathcal{C}\) _and_ \(\mathcal{D}^{\prime}\) _are all and only those in_ \(\bar{t}\)_,_
2. _Computes_ \(\mathcal{E}=\mathcal{C}::\mathcal{D}^{\prime}\)_,_
3. _Returns_ \((\mathcal{E},\bar{q})\)_._
Note that two circuits are _equivalent_ when they only differ by a renaming of labels, that is, when they have the same fundamental structure. What the renaming does, in this case, is instantiate the generic input interface \(\bar{\ell}\) of circuit \(\mathcal{D}\) with the actual labels that it is going to be appended to, namely \(\bar{t}\), and ensure that there are no name clashes between the labels occurring in the resulting \(\mathcal{D}^{\prime}\) and those occurring in \(\mathcal{C}\).
On the other hand, the semantics of a term of the form \(\mbox{\scriptsize\emph{box}}_{T}(\mbox{\scriptsize\emph{lift}}\,M)\) relies on the freshlabels function:
\[\mbox{\scriptsize\emph{box}}\,\frac{(Q,\bar{\ell})=\mbox{\scriptsize freshlabels }(T)\quad\left(id_{Q},M\right)\Downarrow(id_{Q},V)\quad\left(id_{Q},V\,\bar{ \ell}\right)\Downarrow(\mathcal{D},\bar{k})}{(\mathcal{C},\mbox{\scriptsize \emph{box}}_{T}(\mbox{\scriptsize\emph{lift}}\,M))\Downarrow(\mathcal{C},( \bar{\ell},\mathcal{D},\bar{k}))}\]
What freshlabels does is take as input a bundle type \(T\) and instantiate fresh \(Q,\bar{\ell}\) such that \(Q\vdash_{w}\bar{\ell}:T\). The wire bundle \(\bar{\ell}\) is then used as a dummy argument to \(V\), the circuit-building function resulting from the evaluation of \(M\). This function application is evaluated in the context of the identity circuit \(id_{Q}\) and eventually produces a circuit \(\mathcal{D}\), together with its output labels \(\bar{k}\). Finally, \(\bar{\ell}\) and \(\bar{k}\) become respectively the input and output interfaces of the resulting boxed circuit \((\bar{\ell},\mathcal{D},\bar{k})\). Note, at this point, that \(T\) controls how many labels are initialized by the freshlabels function. Because \(T\) can contain indices (e.g. it could be that \(T\equiv\mbox{\scriptsize\emph{List}}^{3}\,\mbox{\scriptsize\emph{Qubit}}\)), it follows that in Proto-Quipper-R indices are not only relevant to typing, but they also have operational value. For this reason, the semantics of Proto-Quipper-R is well-defined only on terms closed both in the sense of regular variables _and_ index variables, since a circuit-building function of input type, e.g., \(\mbox{\scriptsize\emph{List}}^{i}\,\mbox{\scriptsize\emph{Qubit}}\) does not correspond to any individual circuit, and therefore it makes no sense to try and box it.
The operational significance of indices is also apparent in the _fold-step_ rule:
\[\textit{fold-step}\,\frac{\begin{array}{c}(\mathcal{C},M\{0/i\})\Downarrow(\mathcal{C},Y)\qquad(\mathcal{C},Y\,\langle V,W\rangle)\Downarrow(\mathcal{E},Z)\\ (\mathcal{E},(\mathsf{fold}_{i}\,(\mathsf{lift}\,M\{i+1/i\})\,Z)\,W^{\prime})\Downarrow(\mathcal{D},X)\end{array}}{(\mathcal{C},(\mathsf{fold}_{i}\,(\mathsf{lift}\,M)\,V)\,(\mathsf{cons}\,W\,W^{\prime}))\Downarrow(\mathcal{D},X)}\]
Here, the index variable \(i\) occurring free in \(M\) is instantiated to \(0\) before evaluating \(M\) to obtain the step function \(Y\). Next, the new accumulator \(Z\) is computed. Then, before evaluating the next iteration, \(i\) is replaced with \(i+1\) in \(M\). This way, each time \(M\) is evaluated, \(i\) is equal to the number of the current iteration, and the evaluation can result in a function \(Y\) which is operationally distinct for each iteration.
## 5 Type Safety and Correctness
Because the operational semantics of Proto-Quipper-R is based on configurations, we ought to adopt a notion of well-typedness which is also based on configurations. The following definition of _well-typed configuration_ is thus central to our type-safety analysis.
**Definition 5** (Well-typed Configuration).: _We say that configuration \((\mathcal{C},M)\) is well-typed with input \(Q\), type \(A\), width \(I\) and output \(L\), and we write \(Q\vdash(\mathcal{C},M):A;I;L\), whenever \(\mathcal{C}:Q\to L,H\) for some \(H\) such that \(\emptyset;\emptyset;H\vdash_{c}M:A;I\). We write \(Q\vdash(\mathcal{C},V):A;L\) whenever \(\mathcal{C}:Q\to L,H\) for some \(H\) such that \(\emptyset;\emptyset;H\vdash_{v}V:A\)._
The three results that we want to show in this section are that any well-typed term configuration \(Q\vdash(\mathcal{C},M):A;I;L\) evaluates to some configuration \((\mathcal{D},V)\), that \(Q\vdash(\mathcal{D},V):A;L\) and that \(\mathcal{D}\) is obtained from \(\mathcal{C}\) by extending it with a sub-circuit of width at most \(I\). These claims correspond to the _subject reduction_ and _total correctness_ properties that we will prove at the end of this section. However, both these results rely on a central lemma and on the mutual notions of _realization_ and _reducibility_, which we first give formally.
**Definition 6** (Realization).: _We define \(V\Vdash_{Q}A\), which reads \(V\) realizes \(A\) under \(Q\), as the smallest relation such that_
\[\begin{array}{l}*\Vdash_{\emptyset}\mathbbm{1}\\ \ell\Vdash_{\ell:w}w\\ V\Vdash_{Q}A\multimap_{I,J}B\text{ iff }\vDash J=|Q|\text{ and }\forall W:W\Vdash_{L}A\implies VW\Vdash_{Q,L}^{I}B\\ \mathsf{lift}\,M\Vdash_{\emptyset}\,!A\text{ iff }M\Vdash_{\emptyset}^{0}A\\ \langle V,W\rangle\Vdash_{Q,L}A\otimes B\text{ iff }V\Vdash_{Q}A\text{ and }W\Vdash_{L}B\\ \mathsf{nil}\Vdash_{\emptyset}\mathsf{List}^{I}\,A\text{ iff }\vDash I=0\\ \mathsf{cons}\,V\,W\Vdash_{Q,L}\mathsf{List}^{I}\,A\text{ iff }\vDash I=J+1\text{ and }V\Vdash_{Q}A\text{ and }W\Vdash_{L}\mathsf{List}^{J}\,A\\ (\bar{\ell},\mathcal{C},\bar{k})\Vdash_{\emptyset}\mathsf{Circ}^{I}(T,U)\text{ iff }\mathcal{C}:Q\to L\text{ and }Q\vdash_{w}\bar{\ell}:T\text{ and }L\vdash_{w}\bar{k}:U\text{ and }\vDash\operatorname{width}(\mathcal{C})\leq I\end{array}\]
**Definition 7** (Reducibility).: _We say that \(M\) is reducible under \(Q\) with type \(A\) and width \(I\), and we write \(M\Vdash_{Q}^{I}A\), if, for all \(\mathcal{C}\) such that \(\mathcal{C}:L\to Q,H\), there exist \(\mathcal{D},V\) such that_
1. \((\mathcal{C},M)\Downarrow(\mathcal{C}::\mathcal{D},V)\)_,_
2. \(\vDash\operatorname{width}(\mathcal{D})\leq I\),
3. \(\mathcal{D}:Q\to K\) _for some_ \(K\) _such that_ \(V\Vdash_{K}A\)_._
Both relations, and in particular reducibility, are given in the form of unary logical relations [55]. The intuition is pretty straightforward: a term is reducible with width \(I\) if it evaluates correctly when paired with any circuit \(\mathcal{C}\) which provides its free labels and if it extends \(\mathcal{C}\) with a sub-circuit \(\mathcal{D}\) whose width is bounded by \(I\). Realization, on the other hand, is less immediate. For most cases, realizing type \(A\) loosely corresponds to being closed and well-typed with type \(A\), but a value realizes an arrow type \(A\multimap_{I,J}B\) when its application to a value realizing \(A\) is reducible with type \(B\) and width \(I\).
By themselves, realization and reducibility are defined only on terms and values closed in the sense both of regular and index variables. To extend these notions to open terms and values, we adopt the standard approach of reasoning explicitly about the substitutions that could render them closed.
**Definition 8** (Closing Substitution).: _We define the set VSUB of closing value substitutions as the smallest subset of \(\text{VAL}\cup\text{TERM}\rightarrow\text{VAL}\cup\text{TERM}\) such that_
* \(\emptyset\in\text{VSUB}\) _with_ \(\emptyset(M)=M\)_._
* _If_ \(\gamma\in\text{VSUB}\)_,_ \(x\) _is a variable name and_ \(V\in\text{VAL}\) _is closed, then_ \(\gamma[x\mapsto V]\in\text{VSUB}\) _with_ \(\gamma[x\mapsto V](M)=\gamma(M[V/x])\)_._
_We define the set ISUB of closing index substitutions as the smallest subset of \(\text{INDEX}\cup\text{TYPE}\cup\text{VAL}\cup\text{TERM}\rightarrow\text{INDEX}\cup\text{TYPE}\cup\text{VAL}\cup\text{TERM}\) such that_
* \(\emptyset\in\) _ISUB with_ \(\emptyset(M)=M\)_._
* _If_ \(\theta\in\) _ISUB,_ \(i\) _is an index variable name and_ \(I\in\) _INDEX is closed, then_ \(\theta[i\mapsto I]\in\) _ISUB with_ \(\theta[i\mapsto I](M)=\theta(M\{I/i\})\)_._
We say that \(\gamma\)_implements_ a typing context \(\Gamma\) using label context \(Q\), and we write \(\gamma\vDash_{Q}\Gamma\), when it replaces every variable \(x_{i}\) in the domain of \(\Gamma\) with a value \(V_{i}\) such that \(V_{i}\vDash_{Q_{i}}\Gamma(x_{i})\) and \(Q=\biguplus_{x_{i}\in\operatorname{dom}(\Gamma)}Q_{i}\). Similarly, we say that \(\theta\) implements an index context \(\Theta\), and we write \(\theta\vDash\Theta\), when it replaces every index variable in \(\Theta\) with a closed index term. This allows us to give the following fundamental lemma, which will be used while proving all other claims.
**Lemma 1** (Core Correctness).: _Let \(\Pi\) be a type derivation. For all \(\theta\vDash\Theta\) and \(\gamma\vDash_{Q}\theta(\Gamma)\), we have that_
\[\Pi\vDash\Theta;\Gamma;L\vdash_{c}M:A;I\implies\gamma(\theta(M))\Vdash_{Q,L}^{\theta(I)}\theta(A)\]
\[\Pi\vDash\Theta;\Gamma;L\vdash_{v}V:A\implies\gamma(\theta(V))\Vdash_{Q,L}\theta(A)\]
Proof.: By induction on the size of \(\Pi\), making use of Theorem 1.
Lemma 1 tells us that any well-typed term (resp. value) is reducible (resp. realizes its type) when we instantiate its free variables according to its contexts. Now that we have Lemma 1, we can proceed to proving the aforementioned results of subject reduction and total correctness. We start with the former, which unsurprisingly requires the following substitution lemmata.
**Lemma 2** (Index Substitution).: _Let \(\Pi\) be a type derivation and let \(I\) be an index such that \(\Theta\vdash I\). We have that_
\[\Pi\vDash\Theta,i;\Gamma;Q\vdash_{c}M:A;J \implies\Theta;\Gamma\{I/i\};Q\vdash_{c}M\{I/i\}:A\{I/i\};J\{I/i\},\] \[\Pi\vDash\Theta,i;\Gamma;Q\vdash_{v}V:A \implies\Theta;\Gamma\{I/i\};Q\vdash_{v}V\{I/i\}:A\{I/i\}.\]
Proof.: By induction on the size of \(\Pi\).
**Lemma 3** (Value Substitution).: _Let \(\Pi\) be a type derivation and let \(V\) be a value such that \(\Theta;\Phi,\Gamma_{1};Q_{1}\vDash_{v}V:A\). We have that_
\[\Pi\vDash\Theta;\Phi,\Gamma_{2},x:A;Q_{2}\vdash_{c}M:B;I \implies\Theta;\Phi,\Gamma_{1},\Gamma_{2};Q_{1},Q_{2}\vdash_{c}M[V/x]:B;I,\] \[\Pi\vDash\Theta;\Phi,\Gamma_{2},x:A;Q_{2}\vdash_{v}W:B \implies\Theta;\Phi,\Gamma_{1},\Gamma_{2};Q_{1},Q_{2}\vdash_{v}W[V/x]:B.\]
Proof.: By induction on the size of \(\Pi\).
The main result is then stated in the following theorem.
**Theorem 2** (Subject Reduction).: _If \(Q\vdash(\mathcal{C},M):A;I;L\) and \((\mathcal{C},M)\Downarrow(\mathcal{D},V)\), then \(Q\vdash(\mathcal{D},V):A;L\)._
Proof.: By induction on the derivation of \((\mathcal{C},M)\Downarrow(\mathcal{D},V)\) and case analysis on the last rule used in its derivation. Lemma 3 is essential to the _app_, _dest_ and _let_ cases, while Lemma 2 is used in the _fold-step_ case. Lemma 1 is essential to the _box_ case, as it is the only case in which the side effect of the evaluation (the circuit built by the function being boxed), whose preservation is a matter of correctness, becomes a value (the resulting boxed circuit).
Of course, type soundness is not enough: we also want the resource analysis carried out by our type system to be correct, as stated in the following theorem.
**Theorem 3** (Total Correctness).: _If \(Q\vdash(\mathcal{C},M):A;I;L\), then there exist \(\mathcal{D},V\) such that \((\mathcal{C},M)\Downarrow(\mathcal{C}::\mathcal{D},V)\) and \(\vDash\operatorname{width}(\mathcal{D})\leq I\)._
Proof.: By definition, \(Q\vdash(\mathcal{C},M):A;I;L\) entails that \(\mathcal{C}:Q\to L,H\) and \(\emptyset;\emptyset;H\vdash_{c}M:A;I\). Since an empty context is trivially implemented by an empty closing substitution, by Lemma 1 we get \(M\Vdash_{H}^{I}A\), which by definition entails that there exist \(\mathcal{D},V\) such that \((\mathcal{C},M)\Downarrow(\mathcal{C}::\mathcal{D},V)\) and \(\vDash\operatorname{width}(\mathcal{D})\leq I\).
## 6 A Practical Example
This section provides an example of how Proto-Quipper-R can be used to verify the resource usage of realistic quantum algorithms. In particular, we use our language to implement the QFT algorithm [12, 40] and verify that the circuits it produces have width no greater than the size of their input, i.e. that the QFT algorithm does not use additional ancillary qubits overall.
The Proto-Quipper-R implementation of the QFT algorithm is given in Figure 19. As we walk through the various parts of the program, be aware that we will focus on the resource aspects of the algorithm, ignoring much of its actual meaning. Starting bottom-up, we assume that we have an encoding of naturals in the language and that we can perform arithmetic on them. We also assume some primitive gates and gate families: \(\mathsf{H}\) is the boxed circuit corresponding to the Hadamard gate and has type \(\mathsf{Circ}^{1}(\mathsf{Qubit},\mathsf{Qubit})\), whereas the makeRGate function has type \(\mathsf{Nat}\multimap_{0,0}\,\mathsf{Circ}^{2}(\mathsf{Qubit}\otimes\mathsf{ Qubit},\mathsf{Qubit}\otimes\mathsf{Qubit})\) and produces instances of the parametric controlled \(R_{n}\) gate.
Figure 19: A Proto-Quipper-R implementation of the Quantum Fourier Transform circuit family. The usual syntactic sugar is employed.
On the other hand, _qlen_ and _rev_ stand for regular language terms which implement, respectively, the linear list length and reverse functions. Their implementation is given in Figure 20, and in our type system they have types \(\textit{qlen}:\mathsf{List}^{i}\,\mathsf{Qubit}\multimap_{i,0}(\mathsf{Nat}\otimes\mathsf{List}^{i}\,\mathsf{Qubit})\) and \(\textit{rev}:\mathsf{List}^{i}\,\mathsf{Qubit}\multimap_{i,0}\mathsf{List}^{i}\,\mathsf{Qubit}\). We now turn our attention to the actual QFT algorithm. Function _qftStep_ builds a single step of the QFT circuit. The width of the circuit produced at step \(j\) is dominated by the folding of the _rotate_\(n\) function, which applies controlled rotations between appropriate pairs of qubits and has type
\[(\mathsf{Qubit}\otimes\mathsf{List}^{e}\,\mathsf{Qubit})\otimes\mathsf{Qubit }\multimap_{e+2,0}\,\mathsf{Qubit}\otimes\mathsf{List}^{e+1}\,\mathsf{Qubit}, \tag{9}\]
meaning that _rotate_\(n\) rearranges the structure of its inputs, but overall does not introduce any new wire. We fold this function starting from an accumulator \(\langle q,\mathsf{nil}\rangle\), meaning that we can give \(\mathsf{fold}_{j}\) (\(\mathsf{lift}(\textit{rotate}\,n)\)) \(\langle q,\mathsf{nil}\rangle\) type as follows:
\[\textit{fold}\ \frac{\begin{array}{c}i,j,e;n:\mathsf{Nat};\emptyset\vdash_{v}\mathsf{lift}(\textit{rotate}\,n):\mathsf{!}((\mathsf{Q}\otimes\mathsf{List}^{e}\,\mathsf{Q})\otimes\mathsf{Q}\multimap_{e+2,0}\mathsf{Q}\otimes\mathsf{List}^{e+1}\,\mathsf{Q})\\ i,j;q:\mathsf{Q};\emptyset\vdash_{v}\langle q,\mathsf{nil}\rangle:\mathsf{Q}\otimes\mathsf{List}^{0}\,\mathsf{Q}\qquad i,j\vdash j\qquad i,j\vdash\mathsf{Q}\end{array}}{i,j;n:\mathsf{Nat},q:\mathsf{Q};\emptyset\vdash_{v}\mathsf{fold}_{e}\,(\mathsf{lift}(\textit{rotate}\,n))\,\langle q,\mathsf{nil}\rangle:\mathsf{List}^{j}\,\mathsf{Q}\multimap_{j+1,1}\mathsf{Q}\otimes\mathsf{List}^{j}\,\mathsf{Q}} \tag{10}\]
Figure 19: A Proto-Quipper-R implementation of the Quantum Fourier Transform circuit family. The usual syntactic sugar is employed.
where \(\mathsf{Q}\) is shorthand for \(\mathsf{Qubit}\) and where we implicitly use the fact that \(i,j\vDash\mathsf{max}(1,\mathsf{max}_{e<j}\,e+2+(j-1-e)\times 1)=j+1\) to simplify the arrow's width annotation using _vsub_ and the _arrow_ subtyping rule. Next, we fold over _revQs_, which has the same elements as _qs_ and thus has length \(j\), and we obtain that the fold produces a circuit whose width is bounded by \(j+1\). Therefore, _qftStep_ has type
\[\mathsf{!}((\mathsf{List}^{j}\,\mathsf{Qubit}\otimes\mathsf{Qubit})\multimap_{j+1,0}\mathsf{List}^{j+1}\,\mathsf{Qubit}), \tag{11}\]
which entails that when we pass it as an argument to the topmost \(\mathsf{fold}\) together with \(\mathsf{nil}\) we can conclude that the type of the _qft_ function is
\[\textit{fold}\ \frac{i,j;\emptyset;\emptyset\vdash_{v}\textit{qftStep}:\mathsf{!}((\mathsf{List}^{j}\,\mathsf{Qubit}\otimes\mathsf{Qubit})\multimap_{j+1,0}\mathsf{List}^{j+1}\,\mathsf{Qubit})\qquad i;\emptyset;\emptyset\vdash_{v}\mathsf{nil}:\mathsf{List}^{0}\,\mathsf{Qubit}\qquad i\vdash i\qquad i\vdash\mathsf{Qubit}}{i;\emptyset;\emptyset\vdash_{v}\mathsf{fold}_{j}\,\textit{qftStep}\,\mathsf{nil}:\mathsf{List}^{i}\,\mathsf{Qubit}\multimap_{i,0}\mathsf{List}^{i}\,\mathsf{Qubit}} \tag{12}\]
where we once again implicitly simplify the arrow type using the fact that \(i\vDash\mathsf{max}(0,\mathsf{max}_{j<i}\,j+1+(i-1-j)\times 1)=i\). This concludes our analysis and the resulting type tells us that _qft_ produces a circuit of width at most \(i\) on inputs of size \(i\), without overall using any additional wires. If we instantiate \(i\) to \(3\), for example, we can apply _qft_ to a list of \(3\) qubits to obtain the circuit shown in Figure 21, whose width is exactly \(3\).
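Both width simplifications used above boil down to the same elementary arithmetic, which we spell out for convenience: for every \(e<j\),

\[e+2+(j-1-e)\times 1=j+1,\]

so the inner maximum is the constant \(j+1\), and \(\mathsf{max}(1,\mathsf{max}_{e<j}\,e+2+(j-1-e)\times 1)=j+1\) for every \(j\geq 1\) (the outer bound \(1\) covers the empty case \(j=0\)). Analogously, \(j+1+(i-1-j)\times 1=i\) for every \(j<i\), whence \(\mathsf{max}(0,\mathsf{max}_{j<i}\,j+1+(i-1-j)\times 1)=i\).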
To conclude this section, note that for ease of exposition _qft_ actually produces the _reversed_ QFT circuit. This is not a problem, since the two circuits are equivalent resource-wise and the actual QFT circuit can be recovered by boxing the result of _qft_ and reversing it via a primitive operator. Besides, note that Quipper's internal implementation of the QFT is also reversed [17].

Figure 20: The implementation of the auxiliary functions _qlen_ and _rev_.

Figure 21: The circuit of input size \(3\) produced by _qft_ (\(\mathsf{cons}\,q_{1}\,\mathsf{cons}\,q_{2}\,\mathsf{cons}\,q_{3}\,\mathsf{nil}\)).
## 7 Related Work
The metatheory of quantum circuit description languages, and in particular of Quipper-style languages, has been the subject of quite some work in recent years, starting with Ross's thesis on Proto-Quipper-S[48] and going forward with Selinger and Rios's Proto-Quipper-M[46]. In the last five years, some proposals have also appeared for more expressive type systems or for language extensions that can handle non-standard language features, such as the so-called _dynamic lifting_[36, 22, 9], available in the Quipper language, or dependent types [23]. Although some embryonic contributions in the direction of analyzing the size of circuits produced using Quipper have been given [56], no contribution tackles the problem of deriving resource bounds _parametric_ on the size of the input. In this, the ability to have types which depend on the input, certainly a feature of Proto-Quipper-D[23], is not useful for the analysis of intensional attributes of the underlying circuit, simply because such attributes are not visible in types.
If we broaden the horizon to quantum programming languages other than Quipper, we come across, for example, the recent works of Avanzini et al. [5] and Liu et al. [37] on adapting the classic weakest precondition technique to the cost analysis of quantum programs, which however focus on programs in an imperative language. The work of Dal Lago et al. [14] on a quantum language which characterizes complexity classes for quantum polynomial time should certainly be remembered: even though the language allows the use of higher-order functions, the manipulation of quantum data occurs directly and not through circuits. Similar considerations hold for the recent work of Hainry et al. [30] and Yamakami's algebra of functions [59] in the style of Bellantoni and Cook [6], both characterizing quantum polynomial time.
If we broaden our scope further and become interested in the analysis of the cost of classical or probabilistic programs, we face a vast literature, with contributions employing a variety of techniques on heterogeneous languages and calculi: from functional programs [2, 34, 33] and term rewriting systems [3, 4, 42] to probabilistic [35] and object-oriented programs [29, 20]. In this context, the resource under analysis is often assumed to be computation _time_, which is relatively easy to analyze given its strictly monotonic nature. Circuit width, although monotonically non-decreasing, evolves in a way that depends on a non-monotonic quantity, i.e. the number of wires discarded by a circuit. As a result, width has the flavor of space and its analysis is less straightforward.
It is also worth mentioning that linear dependent types can be seen as a specialized version of refinement types [19], which have been used extensively in the literature to automatically verify interesting properties of programs [60, 38]. In particular, the work of Vazou et al. on Liquid Haskell[58, 57] has been of particular inspiration, on account of Quipper being embedded precisely in Haskell. The liquid type system [47] of Liquid Haskell relies on SMT solvers to discharge proof obligations and has been used fruitfully to reason about both the correctness and the resource consumption (mainly time complexity) of concrete, practical programs [31].
## 8 Generalization to Other Resource Types
This work focuses on estimating the _width_ of the circuits produced by Quipper programs. This choice is dictated by the fact that the width of a circuit corresponds to the maximum number of distinct wires, and therefore individual qubits, required to execute it. Nowadays, this is considered as one of the most precious resources in quantum computing, and as such must be kept under control. However, this does not mean that our system could not be adapted to the estimation of other parameters. This section outlines how this may be possible.
First, estimating strictly monotonic resources, such as the total _number of gates_ in a circuit, is possible and in fact simpler than estimating width. A _single_ index term \(I\) that measures the
number of gates in the circuit built by a computation would be enough to carry out this analysis. This index would be appropriately increased any time an apply instruction is executed, while sequencing two terms via let would simply add together the respective indices.
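As a rough sketch of ours (not part of the system defined in this paper), reinterpreting the annotations on boxed circuit types as gate counts, the two relevant rules could read

\[\frac{\Theta;\Gamma;Q\vdash_{v}V:\mathsf{Circ}^{J}(T,U)\qquad\Theta;\Gamma';Q'\vdash_{v}W:T}{\Theta;\Gamma,\Gamma';Q,Q'\vdash_{c}\mathsf{apply}(V,W):U;J}\qquad\frac{\Theta;\Gamma;Q\vdash_{c}M:A;I\qquad\Theta;\Gamma',x:A;Q'\vdash_{c}N:B;J}{\Theta;\Gamma,\Gamma';Q,Q'\vdash_{c}\mathsf{let}\,x=M\,\mathsf{in}\,N:B;I+J}\]

so that applying a boxed circuit contributes its own gate count, while sequencing simply adds the counts of the two computations.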
If the parameter of interest were instead the _depth_ of the circuit, then the approach would have to be slightly different. Although in principle it would be possible to still rely on a single index \(I\), this would give rise to a very coarse approximation, effectively collapsing the analysis of depth to a gate count analysis. A more precise approximation could instead be obtained by keeping track of depth _locally_ rather than _globally_. More specifically, it would be sufficient to decorate each occurrence of a wire type \(w\) with an index term \(I\) so that if a label \(\ell\) were typed with \(w^{I}\), it would mean that the sub-circuit rooted in \(\ell\) has a depth at most equal to \(I\).
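For instance, in such a sketch of ours, applying a gate to two wires typed \(w^{I}\) and \(w^{J}\) could give output wires typed \(w^{\mathsf{max}(I,J)+1}\): the new gate can only be scheduled once its deepest input is available, so the sub-circuits rooted in its outputs have depth at most \(\mathsf{max}(I,J)+1\).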
Finally, it should be mentioned that the resources considered, i.e. the depth, width, and gate count of a circuit, can be further refined so as to take into account only _some_ kinds of wires and gates. For instance, one could want to keep track of the maximum number of _qubits_ needed, ignoring the number of classical bits, or at least distinguishing the two parameters, which of course have distinct levels of criticality in current quantum hardware.
## 9 Conclusion and Future Work
In this paper we introduced a linear dependent type system based on index refinements and effect typing for the paradigmatic calculus Proto-Quipper, with the purpose of using it to derive upper bounds on the width of the circuits produced by programs. We proved not only the classic type safety properties, but also that the upper bounds derived via the system are correct. We also showed how our system can verify a realistic quantum algorithm and elaborated on some ideas on how our technique could be adapted to other crucial resource types, like gate count and circuit depth. Ours is the first type system designed specifically for the purpose of resource analysis to target circuit description languages such as Quipper. Technically, the main novelties are the smooth combination of effect typing and index refinements, but also the proof of correctness, in which reducibility and effects are shown to play well together.
Among topics for further work, we can identify three main research directions. First and foremost, it would be valuable to investigate the ideas presented in this paper from a more practical perspective, that is, to provide a prototype implementation of the language and, more importantly, of the type-checking procedure. This would require understanding the role that SMT solvers may play in discharging the semantic judgments which we use pervasively in our approach.
Staying instead on the theoretical side of things, on the one hand we have the prospect of denotational semantics: most incarnations of Proto-Quipper are endowed with categorical semantics that model both circuits and the terms of the language that build them [46, 36, 23, 22]. We already mentioned how the intensional nature of the quantity under analysis renders the formulation of an abstract categorical semantics for Proto-Quipper-R and its circuits a nontrivial task, but we believe that one such semantics would help Proto-Quipper-R fit better in the Proto-Quipper landscape.
On the other hand, in Section 8 we briefly discussed how our system could be modified to handle the analysis of different resource types. It would be interesting to test this path and to investigate the possibility of _actually generalizing_ our resource analysis, that is, of making it parametric on the kind of resource being analyzed. This would allow for the same program in the same language to be amenable to different forms of verification, in a very flexible fashion.
|
2304.05520 | Analyzing the Impact of Elusive Faults on Blockchain Reliability | Blockchain recently became very popular due to its use in cryptocurrencies
and potential application in various domains (e.g., retail, healthcare,
insurance). The smart contract is a key part of blockchain systems and
specifies an agreement between transaction participants. Nowadays, smart
contracts are being deployed carrying residual faults, including severe
vulnerabilities that lead to different types of failures at runtime. Fault
detection tools can be used to detect faults that may then be removed from the
code before deployment. However, in the case of smart contracts, the common
opinion is that tools are immature and ineffective. In this work, we carry out
a fault injection campaign to empirically analyze the runtime impact that
realistic faults present in smart contracts may have on the reliability of
blockchain systems. We place particular attention on the faults that elude
popular smart contract verification tools and show if and in which ways the
faults lead the blockchain system to fail at runtime. Results show general poor
detection and, to some extent, complementary performance by the three tools
used. The results also show that several elusive faults are responsible for
severe blockchain failures. | Fernando Richter Vidal, Naghmeh Ivaki, Nuno Laranjeiro | 2023-04-11T22:01:27Z | http://arxiv.org/abs/2304.05520v1 | # Analyzing the Impact of Elusive Faults on Blockchain Reliability
###### Abstract
Blockchain recently became very popular due to its use in cryptocurrencies and potential application in various domains (e.g., retail, healthcare, insurance). The smart contract is a key part of blockchain systems and specifies an agreement between transaction participants. Nowadays, smart contracts are being deployed carrying residual faults, including severe vulnerabilities that lead to different types of failures at runtime. Fault detection tools can be used to detect faults that may then be removed from the code before deployment. However, in the case of smart contracts, the common opinion is that tools are immature and ineffective. In this work, we carry out a fault injection campaign to empirically analyze the runtime impact that realistic faults present in smart contracts may have on the reliability of blockchain systems. We place particular attention on the faults that elude popular smart contract verification tools and show if and in which ways the faults lead the blockchain system to fail at runtime. Results show generally poor detection and, to some extent, complementary performance by the three tools used. The results also show that several elusive faults are responsible for severe blockchain failures.
Blockchain, Smart Contract, Software Faults, Security Vulnerability, Fault Injection, Verification Tools
## 1 Introduction
Blockchain systems can be described as an implementation of a distributed ledger, in which transaction records are stored and linked together using a cryptographic method. Such systems are supported by a peer-to-peer network, which peers can join to obtain a copy of the state of the blockchain. When a new transaction arrives, a consensus protocol is run among the system peers, which will collectively accept or reject the transaction [51].
At the core of a blockchain system, we find the smart contract, a programmed application stored and executed by the blockchain middleware that contains the logic pertaining to a certain transaction, including pre-conditions that should be fulfilled so that the transaction concludes successfully [3]. Once a smart contract is deployed on the blockchain, it cannot be modified. Indeed, a faulty contract can only be terminated, and a new one must be put in place, which may have serious consequences (e.g., security, financial) for the involved parties [38].
Smart contracts are being deployed carrying residual bugs, including severe security vulnerabilities. This is due to several reasons, including a lack of developer expertise on the blockchain, the use of new programming languages (e.g., Solidity is a popular choice for programming smart contracts), the use of new coding tools, and the lack of mature defect verification tools. All of these factors combine and lead to the deployment of faulty contracts, which, at some point in time, see their faults being activated, with diverse consequences, including performance degradation, ledger integrity violations, and increased resource usage leading to gas depletion, among many others.
In this paper, we carry out a fault injection campaign to show the impact realistic faults can have on the reliability of a blockchain system, with particular attention to the types of faults that may elude popular smart contract verification tools. Notice that we use the term _blockchain reliability_ to generally refer to issues that represent a deviation from the correct execution of transactions. In particular, we refer to the occurrence of the following types of issues and failures: _revert_, _abort_, _out-of-gas_, _correctness_, _integrity_ and _latent integrity_. By _elusive_ we refer to the particular cases of faults that escape detection by smart contract verification tools and whose impact we analyze by the end of this paper. In summary, our proposal includes going through the following three steps:
1. _Step 1:_ We begin by resorting to an existing fault model that we have created in a previous study [15], based on faults observed in real smart contracts in the field (i.e., in this sense, representative of issues occurring in the context of blockchain), and implement injectors for a total of 36 faults.
2. _Step 2:_ We inject faults in a set of 400 smart contracts, which have been randomly extracted from [18], to achieve a total of 15,494 faulty contracts (the injection of a fault may result in more than one faulty contract, as some faults can be injected at several points in a contract). We then deploy the original contracts and the faulty ones and run them against user transactions (a total of \(10,925,749\) transactions are executed in this study over several months) so that we respectively understand expected/normal behavior and possible deviations (abnormal behavior), namely failures (e.g., aborted transactions, ledger integrity corruption) and performance degradation (e.g., high memory consumption and high CPU usage).
3. _Step 3:_ We map the observations to the fault detection capabilities of three state of the art fault detection tools, namely Mythril v0.22.19 [13], Slither v0.8.0 [17], and Securify v0.0.1 [46]. The goal is to illustrate the overall effectiveness of the tools and especially signal the types of faults that tend not to be detected by the verification tools, along with their impact on the blockchain system.
The main results of this work show that: i) the injected faults are capable of generating diverse types of failures at runtime; ii) the verification tools show low effectiveness, even for faults that seem easy to detect (e.g., _Missing Compiler Version_), and are, to some extent, complementary in nature; iii) there are indeed elusive faults, of which some are strongly connected to severe types of failures in blockchain systems, including _latent integrity_, _integrity_, and _correctness_ failures. The contributions of this paper are as follows:
* The implementation of a set composed of 36 faults that are specific of smart contracts, made available via a fault injection tool at [47].
* A large-scale analysis of the impact of different types of realistic faults injected on an initial set of 400 real smart contracts and resulting in 15,494 mutated contracts (i.e., for every single fault, each contract has at least one faulty mutant).
* The evaluation of the fault detection effectiveness of three state of the art smart contract verification tools (i.e., Securify2 [46], Mythril [13], and Slither [17]), in presence of the different types of faults.
* An analysis of the impact of faults that tend to elude verification tools on the reliability of the blockchain system.
This paper is organized as follows. Section II presents background and related work and Section III presents the design of our experimental study. Section IV discusses the results and Section V presents the main threats to the validity of this work. Finally, Section VI concludes this paper.
## 2 Background and Related Work
This section first presents background on smart contract security issues. It then reviews security verification tools and techniques. Finally, it presents the works evaluating such security verification tools.
### _Smart Contracts' Security Vulnerabilities_
To characterize the vulnerabilities in smart contracts, there are mostly two classifications, namely Decentralized Application Security Project (DASP) [35] and Smart Contract Weakness Classification (SWC) [33]. DASP is a collaborative project of the NCC Group [35] to characterize the smart contract vulnerabilities. It includes 9 classes of vulnerabilities. SWC is a classification supported by the Mythx team to characterize the smart contract vulnerabilities and is based on Common Weakness Enumeration (CWE). It includes 37 classes of vulnerabilities.
In addition to these two classifications, we can find specific sets of vulnerabilities (including other types) in some research papers. Harz and Knottenbelt [22] present a survey focused on the security aspects of smart contract programming languages. It provides an overview of the current programming languages for implementing smart contracts (a total of 19 programming languages), their security features, and other information like the paradigm, instruction set, semantics, and metering. Praitheeshan et al. [38] present a list of 16 Ethereum smart contract vulnerabilities and 19 software security issues. Peng et al. [37] review the use of smart contracts in the context of IoT applications, mentioning the main security challenges due to the vulnerabilities identified in the programs. The authors' finding is that real-world applications are still in their infancy and that, despite the existing security auditing tools, only a fraction of known vulnerabilities can be detected.
### _Smart Contract Security Assessment Tools and Techniques_
In this section, we analyze the techniques and tools identified during our review of state of the art in smart contract security verification and security assessment. The identified works fall into four main categories: **formal verification techniques**[6], **static analysis techniques**[10], **software testing techniques**, and **machine learning-based techniques**[31].
The Formal Verification category includes methods based on formal proofs or abstract mathematical models of a certain system or part of the system to prove its correctness (e.g., mainly functional correctness) [43]. The specific techniques in this category are Model Checking [25, 27] and Theorem Proving [42].
The Static Analysis category includes methods that do not require the code to be executed and rely on the inspection of code by various means (e.g., pattern recognition, taint analysis) to discover software defects [41]. Specific techniques in this category are Abstract Interpretation [14], Taint Analysis [50], Pattern Recognition [54].
The Software Testing category includes methods that rely on software execution with the intention of finding defects [32]. The specific techniques in this category are Fuzzing [8], Mutation Testing [49], or Symbolic Execution [5, 26, 44].
The Machine Learning category includes artificial intelligence-based techniques that focus on the study of algorithms that can learn from experience and historical data to detect anomalies or predict new output values (i.e., vulnerabilities). In this context, most techniques are based on supervised learning, which involves using labeled datasets to train algorithms that classify data or predict outcomes for a particular output. The specific techniques in this category are Classical Machine Learning Models [48], Deep Learning Models [55], and Ensemble Learning Models [53]. Notice that some tools use more than one technique to achieve their goals. For instance, Smartian [11] uses a combination of static and dynamic analysis techniques for fuzzing smart contracts.
At this point, we highlight three verification tools, namely Slither, Securify, and Mythril, which are the subject of analysis in this work. **Slither**[17] is a static analysis tool that uses taint analysis. The tool compiles the Solidity smart contract source code into a Solidity Abstract Syntax Tree (AST) to extract the contract's inheritance graph, the control flow graph (CFG), and the list of expressions. Then it transforms the code of the contract to SlithIR, its internal representation language, which uses static single assignment (SSA) to facilitate the computation of a variety of code analyses. It also includes a graph for code understanding and assisted code review. The authors evaluated the tool on the 1,000 most used smart contracts (those with the largest number of transactions), finding that it outperforms three other popular tools (Solhint [39], SmartCheck [45], Securify [46]) in terms of performance, robustness (i.e., whether the tool completed the execution), and accuracy (i.e., false positives reported).
**Securify**[46] is a static analysis tool that uses Souffle, a scalable Datalog solver, to symbolically analyze the EVM bytecode of the smart contract and extract semantic facts, and then checks those semantics against violation patterns. Thus, the tool was developed based on a set of compliance and violation security patterns that capture sufficient conditions to prove and disprove practical security properties. To foster extensibility, the patterns are specified in a language that is domain-specific.
**Mythril** is an open-source tool for analyzing the security of smart contracts, based on symbolic execution, SMT solving and taint analysis. It is able to detect software defects, namely various security vulnerabilities in smart contracts implemented for Ethereum and several other EVM-compatible blockchains. The tool is often found being used in research related with smart contract evaluation, e.g., [20, 36, 2].
### _Verification Tools Assessment_
In [36] the authors evaluate Oyente, Securify, Mythril and Smartcheck using ten contracts and express the tools' effectiveness by performing a Receiver Operating Characteristic (ROC) analysis; they also analyze accuracy, revealing differences and gaps in the tools' effectiveness. A framework for analyzing and testing smart contracts is presented in [2]. The authors evaluate their proposal with Oyente, Securify, Maian, SmartCheck and Mythril against 1,838 contracts and 8 faults, which are used to produce 12,866 mutated contracts. Precision and recall are used to characterize the tools' detection capabilities.
A bug benchmark is proposed in [52], which is then demonstrated by running Oyente and Slither against 1,010 contracts randomly selected from etherscan.io, highlighting the detection deficiencies of the tools in the presence of well-known vulnerabilities. The authors in [20] evaluate the bug detection effectiveness of several static analysis tools, namely Oyente, Securify, Mythril, Smartcheck, Manticore, and Slither. The approach is based on the injection of bugs in contracts, based on known bug patterns. Injection is performed with types of bugs that the tools indicate they are able to detect, and typical metrics, such as false-negative and false-positive rates, are used to assess the tools' performance.
Nine analysis tools for smart contracts are evaluated in [16]. The authors use 47,587 Ethereum smart contracts, highlighting clear deficiencies in the tools' detection capabilities, with even the tool marked as most accurate (Mythril) able to detect only 27% of the vulnerabilities. In [40], the authors present an empirical evaluation of 9 contract verification tools against 46,186 smart contracts. The main findings include the recommendation of a set of diverse test suites; a unified execution environment with suitable runtime parameters; and more quantitative and multi-dimensional performance metrics.
We, in a previous work [15], presented a fault-injection approach to analyze the effectiveness of three static verification tools, Mythril, Securify, and Slither. However, the work was a proof of concept, and the study was limited to a small number of contracts and a small number of vulnerabilities. In this work, we consider a large number of smart contracts with diverse types of vulnerabilities.
Moreover, the works mentioned above for evaluation of verification tools generally share the view that smart contract verification tools are immature, reflected in their detection capabilities. However, such a vision is generally not put in perspective with the runtime effect of smart-contract specific faults. We, in another previous work [21], tried to analyze the effect of faults on blockchain systems, but it was done on a very small scale (with only 5 contracts). The other studies tend not to analyze in depth the faults (and their effects) that elude smart contract verification tools, which is one of the main objectives of this work.
## 3 Approach
This section presents the approach followed in this work. The next subsections go through the following main steps:
1. The fault injection approach, which consists of the injection of faults in a set of smart contracts;
2. The procedure for analyzing the impact the injected faults have on each contract;
3. The procedure to analyze the effectiveness of smart contract fault detection tools;
4. The analysis of the impact of faults that tend to elude smart contract verification tools.
### _Fault Injection Approach_
The starting point for this work is the ability to inject faults in smart contracts. For this purpose, we follow the long-established tradition of software fault injection in which, based on a model that represents real faults (i.e., faults observed in real systems in the field), 'probable' software faults are artificially introduced in a certain component of a larger system [34]. This allows understanding the effect that a certain type of fault can have on a system once it is activated (e.g., does the system fail catastrophically, does it have its performance degraded) [30]. This type of technique can be used towards various goals, namely for evaluating system behavior [34], test suite effectiveness (i.e., in the case of mutation testing approaches) [28], tools that act over the systems, e.g., vulnerability scanners [19], or even for failure prediction [24]. We apply a software-implemented fault injection (SWIFI) technique [34], which we have successfully used in the past [21], although in a much narrower scope (a different set of faults was used in 5 contracts). For completeness, we conceptually overview the technique in the next paragraphs.
Figure 1 presents our fault injection process. The first phase (on the left side of the image) consists of transforming the original code (i.e., Solidity format) into AST format. To perform this task, we use native Solidity compiler functions (i.e., _solc_ with the _--ast-json_ argument) to first compile the code (step 1) and then transform it into an AST (step 2). This way, ASTs are generated only for the contracts that are successfully compiled.
The second phase (in the middle) transforms the previously created ASTs into faulty smart contracts (step 3). For each fault existing in our fault model, at least one faulty contract will be generated for a given AST. This is because a fault can be injected in different forms (e.g., a wrong arithmetic expression may take various forms) or injected in different places within the AST. Step 3 results in a set of faulty ASTs. In the next step, step 4, all faulty ASTs are transformed back into their original format (i.e., Solidity code). To achieve this, we implemented the necessary code for applying the transformation and verified its correctness by manually inspecting the resulting file and comparing it to the original code. Two Early Stage Researchers were involved in this verification process. This final step ends with a set of faulty contracts called mutants.
The last phase is focused on the deployment of the mutants. In step 5, we verify whether the mutants are still valid executable programs (i.e., by compiling them). The successfully compiled programs are then deployed into a Hyperledger Fabric network (step 6), and the remaining contracts are removed from the analysis (step 7).
Figure 2 shows an example with a contract (on the left-hand side, named original) and the corresponding faulty version (on the right-hand side). The faulty version represents a mutation of the original contract that holds a specific vulnerability named _Uninitialized Storage Pointer_ (SWC-109) [33]. This vulnerability refers to a situation where a local storage variable remains uninitialized and may be used to point to an unexpected storage location in the contract. This may lead to unexpected behavior, whether caused intentionally by an attacker or unintentionally. The program allows authorized people to transfer money to suppliers and has been inspired by the examples in [4]. The concurrent payments are controlled by the _unlocked_ variable and the _require_ instruction, which only allows payments when the contract is unlocked. In the original contract, there is a local variable, namely _newTransfer_, that is appropriately initialized by marking the variable's storing location explicitly with the _memory_ attribute (line 8). In contrast, the faulty contract does not explicitly mark its storing location, and the variable remains uninitialized as a local variable (it is instead a global storage variable).
As discussed, in order to inject the SWC-109 vulnerability, we convert the original code into an AST representation and then inject the vulnerability. Figure 3 shows an illustrative Python example of the injector of this vulnerability. The injector searches the AST for a condition (implemented in the function condition). When the condition matches, the node/attribute of the tree is localized and the changes are applied (see the function changeTre), resulting in a faulty AST. In this specific example, the injector looks for the _memory_ attribute used for the initialization of a local storage variable and replaces it with "storage". At the end, the faulty AST is transformed back into the faulty contract code. The implementation of the whole set of 36 faults is available at [47].
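Since Figure 3 is only sketched in this document, a minimal self-contained version of such an injector is given below. It is an illustration under our own assumptions, not the actual tool: the AST is assumed to be the compact JSON produced by the Solidity compiler (whose nodes carry nodeType and storageLocation fields), and the file name is purely illustrative.

```python
import json

def condition(node):
    # Match local variable declarations whose storing location is
    # explicitly marked as 'memory' (the pattern targeted by SWC-109).
    return (isinstance(node, dict)
            and node.get("nodeType") == "VariableDeclaration"
            and node.get("storageLocation") == "memory")

def change_tree(node):
    # Replace 'memory' with 'storage', turning the declaration into an
    # (uninitialized) storage pointer, as in the faulty contract of Figure 2.
    node["storageLocation"] = "storage"

def inject(node):
    # Recursively walk the JSON AST and apply the mutation wherever the
    # condition matches; returns the number of injection points found.
    hits = 0
    if isinstance(node, dict):
        if condition(node):
            change_tree(node)
            hits += 1
        children = list(node.values())
    elif isinstance(node, list):
        children = node
    else:
        return 0
    for child in children:
        hits += inject(child)
    return hits

with open("contract.ast.json") as f:  # AST previously emitted by solc
    ast = json.load(f)
print(inject(ast), "injection point(s) mutated")
```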
Fig. 1: Fault injection process.
After injecting the faults, we will have a list of fault-free smart contracts (i.e., the original contracts, without known faults) and their corresponding faulty smart contracts to be used in our evaluation, which is composed of the following studies:
* Faults' Impact**: we evaluate the behavior of the blockchain system in presence of the injected faults. For this, we execute both fault-free and their respective faulty versions individually on an isolated environment of the blockchain. We compare the outcome of the fault-free runs and the faulty runs. By having the fault-free runs we have a reference to evaluate the impact of each fault.
* Effectiveness of Verification Tools**: we evaluate the effectiveness of smart contract verification tools, namely the tools fault detection capabilities against a set of faulty contracts generated by the fault injection tool, based on the faults described on our fault model.
* Impact of Elusive Faults**: We analyze the impact of the faults that tend to escape detection by the verification tools.
#### 3.1.1 Fault Model
In this work, we opted to resort to an existing fault model created in our previous work [15] for selecting smart contract-specific faults (for implementation in the fault injection process). The defects are organized based on the Orthogonal Defect Classification (ODC) scheme [23], and we tried to implement at least one fault for each ODC class and each defect type, balancing, at the same time, the effort required (some faults of the same class and type are very similar, and, in this sense, trivial to implement). We reached a total of 36 implemented faults, which we present in Table I.
The set of 36 implemented faults is a subset of all faults that may affect a blockchain system (e.g., for the time being we did not implement reentrancy faults). Despite this, it is important to mention that the focus of this work is not on the definition of a fault model but is instead on the overall method proposed that ends up in the analysis of the effect of a subset of faults that tend to escape detection tools. In this sense, the specific faults used may vary as well as the specific tools used for this purpose.
Fig. 3: Code example for the injection of the SWC-109 vulnerability.
Fig. 2: A faulty contract example.
#### 3.1.2 Smart Contracts Dataset
In order to **identify a set of contracts** that could be used as input for these experiments, we randomly selected 400 of the contracts used in the work by Durieux et al. [16], which are available at [47]. Next, each contract is passed to the fault injection tool, which determines which of the 36 faults can be injected and in how many code locations of that contract. Then the tool iteratively generates the respective faulty contracts. This process resulted in a total of 15,494 faulty contracts (each faulty contract carries exactly one fault).
### _Study 1 - Faults' Impact_
This section presents the approach followed to study the effect that our injected faults may have in smart contracts. Figure 4 illustrates the approach, which in practice, consists of the following steps:
1. Generation of the smart contracts' workload.
2. Execution of both fault-free and faulty smart contracts in a private network.
3. Analysis of the results, based on a set of metrics of interest.
_Step 1_) refers to the **smart contract workload generation**, which will allow activating the injected faults during the calls to the smart contract operations (i.e., the execution of transactions). We follow a simple workload generation procedure, with the goal of being able to execute most of the contracts, knowing that, in some cases, the generation of calls would require a more complex implementation of this procedure. We emphasize that the goal is to generate valid calls (as opposed to generating invalid or malicious calls, as, for instance, a fuzzer would do). In practice, we search for functions in the smart contracts and generate values for the corresponding input parameters based on their _type_, _literals_ that appear in the code, and _randomly_, as follows (a minimal sketch of this generation procedure is given after the list):
| Defect Class | Defect Nature | Defect Name | Defect Identifier |
| --- | --- | --- | --- |
| Assignment | Missing | Initialization of Storage Variables/Pointers (Uninitialized Storage Pointer) (MISP) | A_MISP |
| | | Initialization of Local Variable (MILV) | A_MILV |
| | | Initialization of State Variables (MISV) | A_MISV |
| | | Constructor (MC) | A_MC |
| | | Compiler Version (MCV) | A_MCV |
| | Wrong | Arithmetic Expression Used In Assignment (WVAE) | A_WVAE |
| | | Integer Sign (WIS) | A_WIS |
| | | Integer Truncation (WIT) | A_WIT |
| | | Value Assignment With Too Many Digits (WVATMD) | A_WVATMD |
| | | Value Assigned To Contract Address (WVAA) | A_WVAA |
| | | Constructor Name (WCN) | A_WCN |
| | | Variable Type (e.g., byte[]) (WVT) | A_WVT |
| | | Declaration Of Invariant State Variable (WDISV) | A_WDISV |
| | | Variable Name (Variable Shadowing) (WVN) | A_WVN |
| Checking | Missing | "require" On Transaction Sender (MRTS) | CH_MRTS |
| | | "require" On Input Variable(s) (MRIV) | CH_MRIV |
| | | "require" OR Subexpression On Transaction Sender (MROTS) | CH_MROTS |
| | | "require" OR Subexpression On Input Variable(s) (MROIV) | CH_MROIV |
| | | "require" AND Subexpression On Transaction Sender (MRATS) | CH_MRATS |
| | | "require" AND Subexpression On Input Variable(s) (MRAV) | CH_MRAV |
| | | Check On Gas Limit (MCHGL) | CH_MCHGL |
| | | Check On Arithmetic Operation (MCHAO) | CH_MCHAO |
| | | Check On Suicide Functionality (MCHSF) | CH_MCHSF |
| | Wrong | "require" For Authorization (Authorization Through tx.origin) (WRA) | CH_WRA |
| Interface | Missing | Visibility modifier of state variables (implicit visibility) (MVMSV) | I_MVMSV |
| | | Function Visibility Modifier (MFVM) | I_MFVM |
| | Wrong | Visibility (public) for private/internal function (WVPF) | I_WVPF |
| Algorithm | Missing | "if" construct on transaction sender plus statements (MITSS) | AL_MITSS |
| | | "if" construct on input variable(s) plus statements (MIIVS) | AL_MIIVS |
| | Wrong | Use of require, assert, and revert (WRAR) | AL_WRAR |
| | | Exception Handling (WEH) | AL_WEH |
| | Extraneous | Continue-statements in do-while-statements or for (ECSWS) | AL_ECSWS |
| Function | Missing | Withdraw function (MWF) | F_MWF |
| | | Inheritance (MINHERITANCE) | F_MINHERITANCE |
| | Wrong | Inheritance and Inheritance Order (WIO) | F_WIO |
| | Extraneous | Inheritance (EINHERITANCE) | F_EINHERITANCE |

TABLE I: Smart contract defect classification [15]
1. _Type-based_: We generate values according to the data type of the argument, e.g., true and false for booleans, minimum, maximum or zero for integers. Regarding arrays and strings, the values are recursively generated considering various lengths, including zero.
2. _Literal-based_: Literals present in the function are used as input for any arguments that match the literal type. This is done because inputs are often compared with literals, which then determine the control flow inside the function code.
3. _Randomly_: Random values are also generated, namely for arguments of type Integer (within the minimum and maximum range) and Strings (with random characters and at random sizes). Arrays are generated with random elements of their type and also using different lengths.
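A minimal sketch of the three strategies is shown below; it is our own illustration (names and boundary values are assumptions, not the actual tool code):

```python
import random
import string

UINT256_MAX = 2**256 - 1  # assumed upper bound for unsigned integers

def type_based(sol_type):
    # Boundary values derived from the declared Solidity type.
    if sol_type == "bool":
        return [True, False]
    if sol_type.startswith("uint"):
        return [0, 1, UINT256_MAX]
    if sol_type.startswith("int"):
        return [-(2**255), -1, 0, 1, 2**255 - 1]
    if sol_type == "string":
        return ["", "a", "x" * 64]
    return []

def literal_based(sol_type, literals):
    # Reuse literals found in the function body whose type matches, since
    # inputs are often compared against them inside the function.
    return [value for (lit_type, value) in literals if lit_type == sol_type]

def random_based(sol_type, n=10):
    # Random values within the admissible range of the type.
    if sol_type.startswith("uint"):
        return [random.randrange(0, UINT256_MAX + 1) for _ in range(n)]
    if sol_type == "string":
        return ["".join(random.choices(string.ascii_letters,
                                       k=random.randrange(0, 32)))
                for _ in range(n)]
    return []

def arguments_for(sol_type, literals):
    # Combine the three strategies into one pool of candidate inputs.
    return (type_based(sol_type)
            + literal_based(sol_type, literals)
            + random_based(sol_type))
```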
We currently allow for up to 1500 function calls per function, which we found to be generally sufficient in terms of transaction diversity while maintaining the total number of transactions generated at reasonable levels (for data analysis). This workload generation process only takes place for the set of original smart contracts, and then each generated faulty contract is executed against the workload generated for the original contract. This way, we are able to compare the runtime behavior observed during the execution of each faulty contract with that of its corresponding fault-free run (i.e., a golden run). The metrics considered for the comparison are discussed later in this subsection.
In _Step 2_, we **execute the smart contracts in a private network**. In terms of environment, we resort to a deployment of Hyperledger Fabric, with the Ethereum Virtual Machine (EVM) version of Hyperledger Burrow. We then use Hyperledger Caliper, a blockchain benchmarking tool that allows users to measure the performance of a blockchain implementation against some predefined use cases [1], to perform the test runs. The test runs are carried out by executing the respective workload on both the fault-free contracts and the corresponding generated faulty versions. The execution of a test run follows the next order:
1. Hyperledger is set in a clean initial state, which means that the respective nodes (orderers and endorsers) are set (or reset) to an empty blockchain.
2. The contract under evaluation is deployed onto the blockchain (in this case, onto the endorsing peers).
3. The workload generated for the contract is executed and metrics about each transaction are collected.
During the test runs, Caliper provides multiple transaction details, such as the timing of each transaction phase, side effects returned by the platform, and other status information. In the end, the collected data is post-processed to match and compare the information of each transaction that occurred in the faulty contracts with the corresponding transaction in the reference contracts.
The choice of the Hyperledger platform for the experimental setup is mostly related to the fact that it offers easy means to collect the various metrics needed (e.g., transaction execution time, reverted transactions, CPU/memory usage). As far as performance metrics are concerned, note that the goal is not to obtain absolute performance values, but to understand the relative impact under realistic conditions. Thus, the setup is similar to that of other studies where performance has been analyzed [12, 29]. As we had the goal of creating an injector that is independent of the programming language, and since Hyperledger includes modular blockchain frameworks, the decision of using Hyperledger is beneficial for future work, where the infrastructure may easily be reused to run programs on different blockchains.

Fig. 4: Approach for analyzing the faults' impact on the blockchain system.
After finishing the runs, in _Step 3_ we **analyze the results**. We compare the reference data (i.e., the outcome of the fault-free runs) with the mutation data (i.e., the outcome of the faulty smart contract runs). For this, we consider the successful commit of the transactions performed in the test cases, as well as the differences and failures that arise in the transactions. In each execution, a _transaction is only deemed successful_ if i) all of its endorsements are successful and matching and ii) it is successfully ordered and reported as committed by all endorsing peers. In previous work we identified several different types of blockchain failures [21], which also fit the types of failures discussed in related work, e.g., [7][9]. Based on this, and on our own empirical analysis of the different failures during the experiments, we match our observations to the following failure modes (a minimal classification sketch is given after the list):
* **Revert failure**: When a revert occurs, the execution of the transaction is stopped, and all state changes are rolled back. The reverted transaction consumes the gas used up to the point where the transaction is reverted. This failure mode, in our context, indicates whether there was at least one transaction in the faulty contract that was reverted while its reference instance was not.
* **Abort failure**: Like the _Revert Failure_, when an abort occurs, the execution of the transaction is stopped, and all state changes are rolled back. The difference is that the aborted transaction consumes all gas up to the maximum allowance of the transaction. This failure mode indicates whether at least one transaction in the faulty contract was aborted while its reference instance did not fail.
* **Out-of-Gas failure**: Indicates whether there was at least one transaction in the faulty contract that failed due to gas depletion, while in its reference case it did not happen.
* **Correctness failure**: Indicates whether there was at least one transaction in the faulty contract that outputted a different result or return value than its reference fault-free contract. Correctness allows us to observe failures that can be seen by the client during output invariant checks.
* **Integrity failure**: Indicates whether there was at least one transaction in the faulty contract that outputted a different result or return value, and also a different read-write set than the reference fault-free contract (i.e., that transaction modified the state of the blockchain differently than the one from the reference contract). Integrity allows us to observe failures to the ledger integrity that can be seen by the client.
* **Latent integrity failure**: Indicates whether there was at least one transaction in the faulty contract that outputted the same result or return value as its reference fault-free contract, but with a different read-write set than the reference contract. The aim here is to observe errors that stay hidden and cannot be directly seen by the client, as it receives the expected result or return value.
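To make the mapping from observations to failure modes concrete, the sketch below (our own illustration, with assumed flag names) classifies a single transaction of a faulty run against its fault-free reference, following the definitions above:

```python
def classify(concluded, gas_depleted, aborted, same_result, same_rw_set):
    # `concluded`: the transaction committed; `same_result`/`same_rw_set`:
    # the result and the read-write set match the fault-free reference run.
    if not concluded:
        if gas_depleted:
            return "out-of-gas failure"
        return "abort failure" if aborted else "revert failure"
    if not same_result:
        return "integrity failure" if not same_rw_set else "correctness failure"
    if not same_rw_set:
        return "latent integrity failure"
    return "no observed effect"
```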
Table II overviews the failure model considered in this work for analysis of the results.
In order to characterize the failures, we see whether the transaction is concluded, whether the result of a transaction (return value) is correct, and finally, whether the ledger state is correct. As shown in the table, when _Abort and Revert failures_ occur, neither a value (transaction result) is returned to the client nor any changes are made to the state of the ledger. The transaction fails, and some error or exception is delivered to the client. The only difference between the _Abort Failure_ and _Revert Failure_ is related to the gas consumption. A reverted transaction consumes the gas used up to the point where the transaction is reverted, while an aborted transaction consumes all gas up to the maximum allowance of the transaction.
In the case of _Out-of-Gas failures_, similar to the previous cases, no value is returned, and no changes to the ledger state are made. However, the transaction is not concluded due to gas depletion (e.g., a fault may cause the transaction to spend more resources). In a _Correctness failure_, the transaction is successfully concluded, but the transaction result is different from the reference run. In this case, the state of the ledger remains intact. In contrast, in the case of an _Integrity failure_, in addition to incorrect returned values, the integrity of the ledger's state is disrupted too. Finally, a _Latent integrity failure_ relates to changes in the integrity of the ledger state, although the transaction result (values returned to the client) is correct, which means that a client cannot detect the problem. Although _Correctness_ and _Integrity_ failures are both severe, being undetectable makes the _Latent Integrity Failure_ the most severe failure in our failure model.

| Failure Modes | Transaction not concluded | Incorrect return value or transaction result | Incorrect ledger state |
| --- | --- | --- | --- |
| Abort | \(\bullet\) | | |
| Revert | \(\bullet\) | | |
| Out-of-gas | \(\bullet\) | | |
| Correctness | | \(\bullet\) | |
| Integrity | | \(\bullet\) | \(\bullet\) |
| Latent integrity | | | \(\bullet\) |

TABLE II: Failure modes and their characteristics
### _Study 2 - Effectiveness of Verification Tools_
This section presents the study for assessing the detection capabilities of the verification tools (i.e., Mythril, Securify2 and Slither), which is depicted in Figure 5. In practice, we go through the following steps:
1. Selection of smart contract verification tools;
2. Execution of the tools against faulty smart contracts (generated by fault injection);
3. Results analysis, based on a set of metrics of interest.
_Step 1_) involves the **selection of smart contract verification tools**. We aimed at popular tools, actively maintained, and of different operational natures. Namely, we selected an abstract interpretation tool (Securify2 version 0.0.1), a static analysis tool (Slither version 0.8.0), and a tool that uses symbolic execution (Mythril version 0.22.19). In the perspective of our approach, this is a variable set of tools and, at this point, other tools could be used (e.g., Zeus [25], Oyente [26], Smartest [44], Smartian [11]). The specific selection of tools will depend on various factors, such as the available time to run the tools and to analyze results (e.g., some tools require more time to execute, other tools have high false positive rates), computational resources, and the overall requirements of the user executing the approach.
In _Step 2_), **we execute the tools** against the generated set of 15,494 faulty smart contracts, collect their output and then store and process the results produced by the tools, mapping the detected vulnerabilities to the analyzed contracts. The tools are run using their default parameters, with no particular configuration towards specific types of faults.
In _Step 3_), **we analyze the results**; right before that, the tools' output reports are converted into a unified data format through a _merge function_ we implemented. This way, no changes in the analysis process are required when a new tool is added. While analyzing the results, all cases of potential true-positives (i.e., software faults signaled by the tool that do exist) are manually verified to check if the signaled defect really is present in the contract (also as a way of understanding if the fault injector is correctly injecting the faults). We focus on evaluating the tools' overall effectiveness in detecting the injected faults, which should be present in all contracts under analysis. Other potential faults (i.e., previously unknown vulnerabilities) are out of the scope of this work.
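As an illustration of the merge function mentioned above (a sketch of ours; field names are assumptions, not the actual implementation), each tool's parsed report could be reduced to records of a common shape:

```python
def merge_reports(parsed_reports):
    # `parsed_reports` maps a tool name (e.g., "slither") to the list of
    # findings already parsed from that tool's native output format.
    unified = []
    for tool, findings in parsed_reports.items():
        for finding in findings:
            unified.append({
                "tool": tool,
                "contract": finding["contract"],
                "location": finding.get("line"),  # injection point, if any
                "detector": finding["check"],     # the tool's defect label
            })
    return unified
```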
### _Study 3 - Impact of Elusive Faults_
This final study focuses on the outcomes of the previous studies and analyses the consequences of the faults that elude the verification tools. The analysis is essentially carried out to understand the distribution of faulty contracts (not detected by any of the tools) per fault type and the prevalence of the different types of failures associated with the different types of faults; finally, it focuses on the faults that generate the most severe failures.
## 4 Results and Discussion
This section discusses the results obtained during our experimental evaluation. All the experiments were executed on 4 virtual machines, each with 16 CPUs and 16 GB of memory, running Ubuntu 18.04.5 LTS. After running the fault injection process, we were able to generate **at least one** faulty contract (out of 400 smart contracts) for each of the 36 different types of faults, ending up with a total of 15,494 (>= 400 * 36, as it is possible to inject a single fault in more than one location in the code of a certain contract) faulty smart contracts. Figure 6 overviews the distribution of the generated faulty contracts per defect type (blue bars).

Fig. 5: Approach for analyzing the effectiveness of verification tools.
As we can see in Figure 6, some faults lead to higher numbers of faulty contracts, such as _Missing visibility modifier of state variables_ (I_MVMSV) (1902 times), _Missing initialization of Local Variable_ (A_MILV) (1736 times), and _Missing require on input values_ (CH_MRIV) (1599 times). These numbers do not directly reflect their frequency in the real world, but are simply related to the number of possible locations in each of the original contracts' code that met the conditions for the injection. On the opposite side, we find a few faults that appear rarely, such as _Missing Check on Gas Limit_ (CH_MCHGL) (2 times) or _Missing Check on Suicide Functionality_ (CH_MCHSF) (9 times). Notice that, although a fault may be infrequent (i.e., low probability of occurrence), the associated risk may be high, which means that tools should not disregard such cases.
Figure 6 also shows, in the orange bars, the number of faulty smart contracts that were actually executed for each defect type. As the workload generation tool (described in section 3) is currently unable to fully match types and number of parameters necessary for invoking all transactions in all 15,494 contracts, the number of executed contracts is less than the total number of contracts. Still, we were able to run 83% (12093 out of 15,494) of all generated faulty contracts, encompassing all 36 types of faults.
### _Results of Study 1 - Faults' Impact_
We ran the generated workload over the faulty smart contracts, which resulted in the execution of a total of \(10,925,749\) transactions, of which \(2,782,063\) (about \(25.46\)%) were executed successfully and no effect was observed. The rest of the transactions were affected by the injected fault, resulting in a failure. Figure 7 shows an overview of the distribution of results.
Most of the failures triggered are of type _Revert Failure_ (about 53.72% of all transactions), followed by _Out-of-Gas Failure_ (about 18.35% of all transactions). The rest of the failures, which are the most critical ones (as they affect gas consumption and the correctness of results and of the ledger), compose less than 2.5% of the cases.
Fig. 6: Distribution of the generated faulty smart contracts per defect type.
Fig. 7: Overall view of faults’ impact.
Figure 8 shows the detailed impact results per defect type. Notice that drilling down to the fault type, the relative prevalence of the different types of failures is generally the same across all types of faults.
Figure 9 shows the detailed results of the fault types that caused severe failures namely _Correctness Failure_, _Integrity Failure_, and _Latent Integrity Failure_. Of all 36 types of faults, only 9 of them did not cause any of these failures (for instance, _CH_MCHGL_ and _CH_MCHSF_ are two of these 9 cases). Note that all of these 9 fault types, with the exception of _I_MFVM_, are the least frequent in our faulty smart contracts list (refer to Figure 6).
The results depicted in Figure 9 show that there are still many cases in which most of the defect types injected cause correctness, integrity, and especially latent integrity failures. As shown, _Missing require on input values (CH_MRIV)_ causes most of the _Latent_ failures and _Missing visibility modifier of state variables (I_MVMSV)_ causes most of the _Integrity_ and _Correctness_ failures. Among all, _Missing if construct on transaction sender plus statements (AL_MITSS)_ and _Wrong variable name (A_WVN)_, with 3.12% and 2.75% respectively, have the highest ratio (total number of failures divided by the total number of transactions executed per defect type) of _Latent Integrity Failure_.
We have also calculated the runtime overhead (performance degradation) caused by the injected faults on faulty contracts, compared to fault-free runs, in terms of CPU usage, memory usage, and transaction time. An overview of the results is presented in Figure 10, in which we can see the distribution of the three types of overhead values for all transactions. In general, the injected faults lead to some overhead on all three metrics. There are some cases where the overhead is high, namely _Wrong arithmetic expression used in assignment (A_WVAE)_ on CPU usage, _Wrong variable type (A_WVT)_ on transaction time, and _Wrong value assignment with too many digits (A_WVATMD)_ on memory usage. It is also worthwhile mentioning
Fig. 8: Faults’ impact per defect type.
Fig. 9: Types of faults triggering severe failures.
that some faults are associated with negative overhead values since they lead to _abort_ or _revert_ of transactions.
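As a minimal sketch of this computation (the metric names and data layout are our assumptions, not the actual experimental harness), each overhead value can be obtained by pairing a faulty run with its fault-free reference run:

```python
def runtime_overhead(faulty, reference):
    """Relative overhead (%) of a faulty run w.r.t. its fault-free reference.
    Both dicts hold mean 'cpu', 'mem' and 'tx_time' values (assumed names).
    Negative values occur when a fault makes transactions abort or revert early."""
    return {m: 100.0 * (faulty[m] - reference[m]) / reference[m]
            for m in ("cpu", "mem", "tx_time")}

# Example: slightly higher CPU and memory, but a shorter (reverted) transaction
print(runtime_overhead({"cpu": 1.05, "mem": 2.1, "tx_time": 0.8},
                       {"cpu": 1.00, "mem": 2.0, "tx_time": 1.0}))
```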
### _Results of Study 2 - Effectiveness of Verification Tools_
Table III shows which faults the three tools (i.e., Securify, Slither, and Mythril) announce they are able to detect. It lists the original names used by the tools and maps them to our fault model.
Figure 11 shows an overview of the detection accuracy of each of the three tools used. In particular, it shows, per tool, the total number of faulty contracts generated (considering only the types of faults each tool was designed to detect) and the total number of contracts in which the tools signaled the presence of a problem in the injection location (i.e., the true positives).
As we can see in Figure 11, Slither is the most effective in detecting the injected defects (it detects defects in about 81% of the contracts), followed by Mythril with about 61% detection accuracy. Securify shows clearly lower detection accuracy, reaching only about 6%. We emphasize that these accuracy numbers use the announced capabilities of each of the tools as reference.
It is important to mention that, although Slither seems to be the most effective verification tool among the three evaluated in this study, the number of alerts generated by Slither is also much higher than the number of alerts generated by the other tools. During the experiments, and considering just the faulty contracts holding faults that each of the tools was designed to detect, Securify generated a total of \(7382\) alerts, of which \(516\) were indeed correct (i.e., \(6.99\)% of the alerts represented true positives). Mythril generated 55,090 alerts, of which 8100 ended up being correct (\(14.70\)%). Slither generated 397,236 alerts, of which only 6902 were correct (\(1.74\)%).
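These precision values can be read off directly from the alert counts above; a two-line check:

```python
# True-positive rate (precision) of each tool's alerts, using the counts above
alerts = {"Securify": (7382, 516), "Mythril": (55090, 8100), "Slither": (397236, 6902)}
for tool, (total, correct) in alerts.items():
    print(f"{tool}: {100 * correct / total:.2f}% of alerts are true positives")
```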
Figure 12 shows how differently the verification tools performed in detecting the faulty contracts. Figure 12.a) at the left-hand side considers all faults which the tools were designed to detect (including faults that only one or two of the tools should detect). Figure 12.b) considers only the set of faults that are common to the three tools, i.e., that all three tools announce being able to detect.
As we can see in Figure 12.a), only 161 (about 1.4%) of the 11,799 faulty contracts are signaled correctly by all three tools. We can also see that 3099 faulty contracts (about 26.3%) are detected by both Slither and Mythril. The rest of the faulty
Fig. 11: Detection accuracy per tool.
Fig. 10: Overhead caused by faults on CPU, Memory and Execution time.
contracts are detected either by Slither or by Mythril, with the advantage on the side of Mythril. Although Securify has low detection effectiveness, it can actually signal faults in 57 contracts that neither of the remaining tools is able to detect. This clearly shows the tools' complementarity in fault detection. In Figure 12.b) we again observe the complementarity of the tools, although we now see that Slither actually captures most of the faults that Mythril detects (in this scenario, where we reduced the faults to the set that is common to the three tools). We also see that Securify does not bring further detection value in this scenario.
We now go through a more detailed view of the tools' capabilities for each of the faults. Figure 13 presents, per type of fault, the number of faulty contracts generated and the corresponding number of contracts in which the tools signaled the presence of a problem in the injection location (i.e., the tools' detection accuracy).
As we can see in Figure 13, the pattern of detection seems to be similar for all fault types, with the exception of a few cases, namely _A_MCV_, _CH_MRAIV_, and _CH_MRATS_, for which Mythril was able to detect more faulty contracts. In the case of _Missing Compiler Version (A_MCV)_, most of the faulty contracts remained undetected. In contrast, the defect types _A_WIS_, _A_WVT_, _AL_WRAF_, _CH_MCHGL_, _CH_MCHSF_, and _F_MINHERITANCE_ are fully detected by one or more tools.
Detecting _A_MCV_ should be trivial (a simple check at the beginning of the contract); however, the tools generally fail to detect it in most cases (91.9%). In the case of the other defect types, the tools tend to perform better, and the injected fault is detected by at least one of the tools in at least 9 out of every 10 faulty contracts. Still, the code location where the fault was injected affects the detection capabilities of the tools.
To understand the impact of the undetected defects, we present in Figure 15 a summary of the results obtained from executing the transactions of the undetected faulty contracts. Again, we observe that most of the failures belong to _Revert Failure_ and _Out-of-gas Failure_, followed by _Abort Failure_. When compared to the distribution of failures over all transactions, presented in Figure 7, the percentage of unaffected transactions decreased among the elusive faults, leading to a higher percentage of _Revert_ and _Out-of-gas_ failures. The percentage of the other failures slightly decreased as well.
A more detailed view of the results is presented in Figure 16. Among the undetected defects, \(A\_MCV\) causes most of the failures. In contrast, the undetected defect types \(I\_MFVM\), \(AL\_WEH\), \(A\_WVATMD\_2\), \(A\_MIT\), \(AL\_ECSWS\), \(CH\_MROIV\), and \(F\_MINHERITANCE\) do not cause any failure.
Figure 17 drills down to the undetected faults that caused severe failures, namely _Correctness Failure_, _Integrity Failure_, and _Latent Integrity Failure_. The results show that, even after using the whole set of verification tools, residual faults are indeed left behind and lead to severe issues from the blockchain point of view. The results also show that most of these severe defects are either of type _Assignment_ or _Checking_.
Among all types of faults presented in Figure 17, \(A\_MCV\), \(CH\_MROTS\), and \(CH\_WRA\) are less severe, as they cause no or just a few latent failures. In contrast, \(A\_WVN\) and \(A\_WVAE\) can be considered the most severe issues we may find in smart contracts, as both cause latent failures in most cases. The former remained undetected about 9.5% of the time, and the latter about 6.0% of the time (refer to Figure 14).
Fig. 12: Venn diagram of detected defects by the Tools.
Fig. 13: Defects detected by the verification tools.
### _Main Findings_
This section highlights the main findings of our experimental evaluation, as follows:
* As a general observation related to the fault injection process, we found that a few types of faults are associated with a higher likelihood of injection, namely _Missing visibility modifier of state variables (I_MVMSV)_ (1902 times), _Missing initialization of Local Variable (A_MILV)_ (1736 times), and _Missing require on input values (CH_MRIV)_ (1599 times), which lead to higher numbers of faulty contracts. This means that the conditions required to inject these faults are met more frequently.
* No failures were observed in about one fourth of the transactions, while _Revert failures_ were detected in about half of them and _Out-of-gas failures_ in nearly one fifth. These two types of failures are the most frequent ones observed in these experiments.
* The faults associated with higher chances of injection (i.e., _I_MVMSV, A_MILV, and CH_MRIV) are also the ones that lead to most of the _Revert failures_ and _Out-of-gas failures_ observed during the experiments.
* Fault _CH_MRIV_, one of the most frequent, is responsible for most _Latent failures_, which is the most severe failure mode. _CH_MRTS_ and _A_MISV_2_ are not as frequent as _CH_MRIV_, but they also cause a visible number of _Latent failures_.
* The effectiveness of smart contract verification tools is rather low, with results showing low numbers of true positives when compared to a large number of generated alerts. This confirms similar observations in related work. Slither seems to be more effective in detecting the injected faults (it is able to detect defects in about 81% of the faulty contracts), but it also generates a huge number of alerts (the detected defects compose only 1.74% of all alerts generated). Mythril, which detects defects in about 61% of faulty contracts, is an interesting option if we consider the number of alerts
Fig. 14: Faulty Contracts not detected by any of the tools.
Fig. 15: Overall view of undetected faults’ impact.
generated (the detected defects compose only 14.70% of all alerts generated). Securify was able to detect about 6.4% of the faulty contracts.
* Mythril and Slither have clearly shown complementary capabilities, although they also jointly detected many of the faulty contracts. Securify was able to detect faults that the remaining tools could not capture, but at a very small scale. Thus, developing a tool that combines the different techniques involved is a possible path towards better detection capabilities.
* Faulty contracts generated with _A_WIS_, _A_WVT_, _AL_WRAF_, _CH_MCHGL_, _CH_MCHSF_, and _F_MINHERITANCE_ are fully detected by at least one of the tools. On the opposite side, the tools mostly fail to detect _Missing Compiler Version (A_MCV)_.
* The faults generating the most severe failures either belong to _Assignment_ or _Checking_ defect types.
* The overall impact of the faults on CPU, memory, and transaction time is relatively small (i.e., from 2 to 6%), although there are concerning cases, with some faults significantly exceeding the normal profile and in some cases doubling the reference values (e.g., memory overhead).
* Concerning the _elusive faults_ (see Section 4.3), more than three quarters of the fault types (28 out of 36) have escaped detection and are associated with severe failures (i.e., correctness, integrity, latent).
* _A_WVN_, _A_WVAE_, and _CH_MCHAO_ are among the most severe issues, as they jointly cause about 50% of all latent
Fig. 16: Faults’ impact for undetected defects.
Fig. 17: Faults’ critical impact for undetected defects.
failures in the transactions of undetected faulty contracts.
* The impact on CPU, memory usage, and transaction time is globally not significant, although the presence of _A_MCV_ and _A_WVN_ in faulty contracts leads to about 3 times and about 30 times more memory usage, respectively.
* Overall, focusing on the defect types identified as elusive during this work may allow for improving the detection capabilities of future verification tools. Also, a finer analysis per fault of the reasons of why a tool cannot detect the same fault in different code locations is crucial for detection improvement.
### _Threats to Validity_
This section presents the threats to the validity of this work and discusses mitigation strategies. We start by mentioning that the _fault model used_ does not include all possible faults. For instance, we do not include reentrancy faults in this work, nor other faults that are known to affect smart contracts. This may limit the evaluation of both impact and tool effectiveness and give a biased perception of the reality concerning impact and detection effectiveness. In any case, neither the selected faults nor their impact can be disregarded by detection tools. Within this limitation, we did try to end up with at least one representative example of each different type of fault. Using a more complete fault model and implementing a larger number of different faults will be pursued in future work.
The process for _generating the workload_ may not be the best option, considering that certain faults may only be triggered by very specific input sequences, which might shadow some interesting failures that could have occurred. Also, the characteristics of Solidity smart contracts may lead to calls that fail by specification (e.g., only some addresses have authorization and capabilities to perform transactions in the smart contracts). Nevertheless, we only analyze and compare transactions that are deemed successful in the base reference runs to make sure that the faulty reference runs indeed caused an impact.
The set of _selected tools_ is rather small and may not provide a complete view of smart contract verification tools. Also, depending on specific goals or constraints (e.g., available time and resources for executing the approach), other tools could be used; still, we selected tools that frequently appear in the literature. The _analysis_ is also mostly limited to measuring the true-positive rate of the tools in detecting the injected faults, which may not provide an accurate view of the tools' capabilities. Nevertheless, our goal is that our results allow improving verification tools, and the focus is on the injection of smart contract faults and the generated faulty contracts, regardless of the tool that is then used for fault detection.
Finally, the whole combination of selected contracts with the implemented faults and selected tools may lead to a biased view of the faults that are indeed elusive. Still, we believe that our options were reasonable given the extension of the experiments and we highlight the presence of all three components in related work, supporting their representativeness.
## 5 Conclusion
In this work we carried out an experimental campaign to show the impact that realistic faults may have on the reliability of blockchain systems. We use fault detection tools to understand which of the faults may escape detection and if or how they lead the blockchain system to fail at runtime. In future work we intend to extend the implementation of the set of faults and use a larger and more diverse (in terms of operational profile) set of verification tools. Based on the foundations set in this paper, one of our future lines of research consists of the formal definition of a benchmark that allows assessing and comparing the effectiveness of vulnerability detection tools for smart contracts.
|
2306.14254 | Mean flow modelling in high-order nonlinear Schrödinger equations | The evaluation and consideration of the mean flow in wave evolution equations
are necessary for the accurate prediction of fluid particle trajectories under
wave groups, with relevant implications in several domains, from the transport
of pollutants in the ocean, to the estimation of energy and momentum exchanges
between the waves at small scales and the ocean circulation at large scale. We
derive an expression of the mean flow at finite water depth which, in contrast
to other approximations in the literature, accurately accords with the
deep-water limit at third order in steepness, and is equivalent to second-order
formulations in intermediate water. We also provide envelope evolution
equations at fourth order in steepness for the propagation of unidirectional
wave groups either in time or space that include the respective mean flow term.
The latter, in particular, is required for accurately modelling experiments in
water wave flumes in arbitrary depths. | Alexis Gomel, Corentin Montessuit, Andrea Armaroli, Debbie Eeltink, Amin Chabchoub, Jérôme Kasparian, Maura Brunetti | 2023-06-25T14:18:36Z | http://arxiv.org/abs/2306.14254v2 | # Mean flow modelling in high-order nonlinear Schrodinger equations
###### Abstract
The evaluation and consideration of the mean flow in wave evolution equations are necessary for the accurate prediction of fluid particle trajectories under wave groups, with relevant implications in several domains, from the transport of pollutants in the ocean, to the estimation of energy and momentum exchanges between the waves at small scales and the ocean circulation at large scale. We derive an expression of the mean flow at finite water depth which, in contrast to other approximations in the literature, accurately accords with the deep-water limit at third order in steepness, and is equivalent to second-order formulations in intermediate water. We also provide envelope evolution equations at fourth order in steepness for the propagation of unidirectional wave groups either in time or space that include the respective mean flow term. The latter, in particular, is required for accurately modelling experiments in water wave flumes in arbitrary depths.
## I Introduction
As waves evolve on the ocean surface, they induce a mean flow in the fluid particles, particularly when nonlinear effects are taken into account. Near the surface, fluid particles experience a net horizontal displacement in the same direction as the water wave propagation, the so-called Stokes drift [1; 2], which decays with depth. A return flow in the fluid column along vertical and horizontal directions guarantees the conservation of the water-mass transport, causing localised variations in the mean water level under wave groups and the associated propagation of infragravity waves [3] at finite water depth. An accurate description of the mean flow is thus necessary for the proper reconstruction of the fluid particle trajectories underneath water waves [4; 5]. Since the wave-induced mean flow is associated with the transport of energy, momentum and other tracers [6], such as pollutants like plastic [7] or offshore oil spills [8], it is relevant for environmental studies. Moreover, it has an impact
on the general circulation of the ocean at large scales and its modelling [9; 10; 11; 12], and on the nearshore circulation [3; 13], in particular in shoaling regions [14; 15], while it is affected by the presence of background shear currents [16; 17].
Since Stokes drift and return flow are two phenomena occurring at second order in steepness, they are taken into account in the finite water depth nonlinear Schrodinger equation (NLS), which describes the evolution of the envelope of narrow-banded wave packets and can be obtained using a multi-scale development of water surface elevation and velocity potential at third order in steepness [18; 19; 20; 21], or by taking the narrow-banded limit of the Zakharov equation using a Hamiltonian approach [22; 23]. Indeed, at third order in steepness, it was shown [20] that the mean flow term comes into play as a modification of the nonlinear coefficient, giving rise to a transition from focusing to defocusing regimes at the critical value \(k_{0}h\approx 1.363\), where \(k_{0}\) is the carrier wavenumber and \(h\) the depth. However, this contribution to the nonlinear coefficient disappears in the deep-water limit. At fourth order in steepness, an accurate formulation of the mean flow term needs to be accounted for in both finite and infinite water depths [24].
Fourth-order terms in the NLS equation are necessary to explain features like the asymmetrical evolution of spectra [25] and asymmetries in the waveform [26; 27; 28; 29; 30], especially when inevitable wave focusing is at play [31]. Several versions of high-order wave envelope evolution equations exist in the literature; they can be obtained by applying the multiple-scales development with steepness and bandwidth at different orders of magnitude [32; 33; 24; 34], or via the Hamiltonian approach [35; 36; 37; 38; 23]. The main difference between all these equations is the treatment and approximation of the mean flow term.
Here, we will focus on narrow-banded unidirectional wave packets where steepness and bandwidth can be considered as parameters of the same order, as in the finite-depth developments described in Refs. [39] (denoted as Sed03 in the following) and [40] (Slu05). The challenge of such developments is that they do not reduce to the Dysthe equation (Ref. [24], Dys79) in the deep-water limit, which has been shown to reproduce wave tank experiments well [28]. We will show that this convergence depends on how the mean flow is approximated. Moreover, we will provide finite-depth envelope equations at fourth order in steepness in both space-like and time-like formulations, with correct limiting expressions in deep water. In particular, such unidirectional time-like equations, which allow a continuous scaling from intermediate depth to the deep-water limit, are relevant for reproducing wave tank experiments with high accuracy in arbitrary depths. In contrast, directional sea states are typically found in the open ocean, and this restricts the applicability of the above unidirectional modelling equations.
The paper is organised as follows. In Sec. II, we will derive the expression of the mean flow to be inserted in the envelope equation at fourth order in steepness for the evolution in time. We will compare this expression with the ones already existing in the literature, and show that it indeed correctly describes the mean flow in the whole range of water depths, _i.e._, from intermediate to deep water regimes. In Sec. III, we will provide an envelope equation for the water wave evolution in space, relevant for modelling for instance water tank experiments, and the corresponding mean flow term, at fourth order in steepness. Again, we will perform the comparison for various water depth scenarios. Finally, in Sec. IV we will summarise our findings.
## II Fourth-order equation: propagation in time
The multi-scale approach has been used to derive, at fourth order in steepness, the following equation, which describes the evolution of the envelope \(U\) of a unidirectional progressive gravity
wave packet propagating in time on the free surface of a homogeneous liquid with depth \(h\)[39, 40, 41]
\[i\left(\frac{\partial U}{\partial t}+c_{\rm g}\frac{\partial U}{\partial X} \right)+\hat{\alpha}\frac{\partial^{2}U}{\partial X^{2}}\underbrace{-\hat{ \beta}|U|^{2}U}_{\rm incl.~{}Mean~{}Flow}=i\bigg{(}\hat{\alpha}_{3}\frac{ \partial^{3}U}{\partial X^{3}}\underbrace{-\hat{\beta}_{21}|U|^{2}\frac{ \partial U}{\partial X}-\hat{\beta}_{22}U^{2}\frac{\partial U^{*}}{\partial X }}_{\rm incl.~{}Mean~{}Flow}\bigg{)}. \tag{1}\]
The explicit formulation of the group velocity \(c_{\rm g}\), and of all dispersive and nonlinear coefficients \(\hat{\alpha}\), \(\hat{\beta}\), \(\hat{\alpha}_{3}\), \(\hat{\beta}_{21}\), and \(\hat{\beta}_{22}\) is given in App. A and B. The surface tension has been neglected and the fluid is considered as irrotational. The sign convention follows Sed03: the surface elevation is reconstructed at leading order from the envelope using[42]\(\eta(X,t)=\frac{1}{2}[U(X,t)\exp(i(k_{0}X-\omega_{0}t))+{\rm c.~{}c.}]\), where \(\omega_{0}\) and \(k_{0}\) are the angular frequency and wavenumber of the carrier wave, respectively.
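As a minimal numerical sketch of this reconstruction (NumPy; the variable names are ours, and \(U\) is any complex envelope sampled on a spatial grid):

```python
import numpy as np

def surface_elevation(U, X, t, k0, w0):
    """Leading-order free surface from the complex envelope, Sed03 convention:
    eta = (1/2) [U exp(i(k0 X - w0 t)) + c.c.] = Re[U exp(i(k0 X - w0 t))]."""
    return np.real(U * np.exp(1j * (k0 * X - w0 * t)))
```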
The previous Eq. (1) is referred to as the 'space-like equation', since the dispersion is in space. It can also be written in the following equivalent form[39] in a reference frame moving with the group velocity \(x=X-c_{g}t\):
\[i\frac{\partial U}{\partial t}+\varepsilon\left(\hat{\alpha}\frac{\partial^{ 2}U}{\partial x^{2}}-\hat{\beta}_{D}|U|^{2}U\right)=i\varepsilon^{2}\left( \hat{\alpha}_{3}\frac{\partial^{3}U}{\partial x^{3}}-\omega_{0}k_{0}\tilde{Q} _{41}|U|^{2}\frac{\partial U}{\partial x}-\omega_{0}k_{0}\tilde{Q}_{42}U^{2} \frac{\partial U^{*}}{\partial x}\right)+\underbrace{\mu_{g}k_{0}}_{\rm Mean ~{}Flow}\frac{\partial\phi_{0}}{\partial x} \tag{2}\]
where \(\sigma=\tanh(k_{0}h)\), \(\mu_{g}\) is given in Eq. (A15), and the high-order nonlinear coefficients \(\tilde{Q}_{41}\) and \(\tilde{Q}_{42}\) in Eqs. (B) and (B), respectively. The nonlinear term \(\hat{\beta}_{D}\) is a positive function (see Fig. 1), given in Eq. (A9), and since the dispersion coefficient \(\hat{\alpha}\) is negative, it seems that the characterisation of the focusing and defocusing regime is somehow lost in the present formulation.
Notice that we explicitly introduce the scaling parameter \(\varepsilon\), a dummy variable which is set to 1 at the end, that is helpful for grouping terms of the same order in steepness. The last term, depending on the zero harmonic of the velocity potential \(\phi_{0}\), is the mean flow term that, from the multi-scale development, takes the following form (see Eqs. (35) and (57) in Sed03):
\[\frac{\partial\phi_{0}}{\partial x} = \varepsilon\frac{\omega_{0}}{2}\frac{\mu_{g}k_{0}}{\sigma\nu}|U|^ {2} \tag{3}\] \[- i\varepsilon^{2}\frac{4\omega_{0}\sigma}{\nu}\tilde{q}_{40S} \left(U\frac{\partial U^{*}}{\partial x}-U^{*}\frac{\partial U}{\partial x}\right)\]
with \(\nu\) given in Eq. (A16) and \(\tilde{q}_{40S}\) in Eq. (B). Its contribution can be included into the coefficients \(\hat{\beta}_{D}\), \(\tilde{Q}_{41}\), \(\tilde{Q}_{42}\) to obtain \(\hat{\beta}\), \(\hat{\beta}_{21}\), \(\hat{\beta}_{22}\), respectively, in Eq. (1) (see steps from Eq. (40) to (42) and from Eq. (64) to (66) in Sed03). Thus, the mean flow term is already taken into account in Eq. (1) at fourth order in steepness through the nonlinear coefficients.
We remark that from Eq. (3), in the deep-water limit, \(\partial\phi_{0}/\partial x\to 0\) since \(\nu\rightarrow-\infty\) while all the other coefficients are finite. This agrees with the developments at third order in steepness[43, 20]. Note that in the 1D shallow-water
case the mean flow term modifies the nonlinear term, see Eq. (2.21) in Ref. [20], while its contribution vanishes in the deep-water limit. However, the fact that the contribution of mean flow remains zero in deep water at higher-order approximation in steepness is not ideal, as we will discuss in the following subsection.
### Mean flow in the deep-water limit
In the Dysthe equation (Dys79), _i.e._, the evolution equation obtained in the deep-water limit with the multi-scale method at fourth order in steepness, an additional term corresponding to the wave-induced mean flow usually appears in the literature in both space-like [24; 32; 44] and time-like 1D formulations [45; 46; 28; 47; 48]. Let us consider for the moment the propagation in time. The Dysthe equation reads:
\[i\frac{\partial U}{\partial t}-\frac{\omega_{0}}{8k_{0}^{2}}\frac{\partial^{2 }U}{\partial x^{2}}-\frac{\omega_{0}k_{0}^{2}}{2}|U|^{2}U=i\frac{\omega_{0}}{1 6k_{0}^{3}}\frac{\partial^{3}U}{\partial x^{3}}-i\frac{\omega_{0}k_{0}}{2} \bigg{(}3|U|^{2}\frac{\partial U}{\partial x}+\frac{1}{2}U^{2}\frac{\partial U ^{*}}{\partial x}+\underbrace{\frac{2i}{\omega_{0}}U\frac{\partial\phi_{0}}{ \partial x}}_{\text{Mean Flow}}\bigg{)} \tag{4}\]
In this deep-water limit, the mean-flow term is written as [47; 28]:
\[\frac{\partial\phi_{0}}{\partial x}|_{z=0}=\frac{\omega_{0}}{2}\mathcal{H}_ {x}\left[\frac{\partial|U|^{2}}{\partial x}\right], \tag{5}\]
where \(\mathcal{F}_{x}\) is the Fourier transform in space, and \(\mathcal{H}_{x}\) the Hilbert transform, \(\mathcal{H}_{x}[\eta]=\mathcal{F}_{x}^{-1}\left[+i\;\text{sgn}(k)\mathcal{F}_ {x}\left[\eta\right]\right]\), \(\eta\) being a function of \(x\). The role of the mean flow Hilbert term in the evolution of pulsating wave packets [25] and narrow-banded irregular waves [27; 28] has been shown in several experiments in deep water, and its presence is required for a correct modelling [26].
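Numerically, this Hilbert transform amounts to a single multiplication in Fourier space; a minimal NumPy sketch, following the \(+i\,\mathrm{sgn}(k)\) convention stated above (which may differ by a sign from some library definitions of the analytic signal), is:

```python
import numpy as np

def hilbert_x(f, dx):
    """Hilbert transform with the +i*sgn(k) convention used in the text."""
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)   # wavenumber grid
    return np.real(np.fft.ifft(1j * np.sign(k) * np.fft.fft(f)))

def mean_flow_deep(U, dx, w0):
    """Deep-water mean flow term of Eq. (5): (w0/2) * H_x[ d|U|^2/dx ]."""
    dI = np.gradient(np.abs(U) ** 2, dx)
    return 0.5 * w0 * hilbert_x(dI, dx)
```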
Janssen [32] suggests that the system can be closed by solving the equations for \(\phi_{0}\) as a function of the envelope \(U\) in Fourier space. It is instructive to report here the explicit derivation of the Hilbert term since we will use analogous developments in the following. The zero harmonic of the velocity potential \(\phi_{0}\) satisfies the Laplace equation in the entire water column with boundary conditions at the surface and at the bottom, giving a set of equations that constitutes the following Neumann problem:
\[\frac{\partial^{2}\phi_{0}}{\partial z^{2}}+\frac{\partial^{2} \phi_{0}}{\partial x^{2}} =0 -\infty<z \leq 0 \tag{6a}\] \[\frac{\partial\phi_{0}}{\partial z} =\frac{\omega_{0}}{2}\frac{\partial|U|^{2}}{\partial x} z =0\] (6b) \[\frac{\partial\phi_{0}}{\partial z} =0 z \rightarrow-\infty \tag{6c}\]
Figure 1: Third-order nonlinear coefficients \(\hat{\beta}\) and \(\hat{\beta}_{D}\) in Eqs. (1) and (2), respectively, normalised with respect to their deep-water values, which are equal. Zero crossing for \(\hat{\beta}\) is at \(k_{0}h=1.363\), shown by the dotted vertical line.
Substituting the Fourier transform of the mean flow:
\[{\cal F}_{x}\phi_{0}(x,z,t)=\hat{\phi}_{0}(k,z,t) \tag{7}\] \[= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\phi_{0}(x,z,t)e^{ikx}dx\]
into the Laplace equation gives \(\partial^{2}\hat{\phi}_{0}/\partial z^{2}=k^{2}\hat{\phi}_{0}\) whose solution is
\[\hat{\phi}_{0}(k,z,t)=C_{1}e^{|k|z} \tag{8}\]
with \(C_{1}\) independent of \(z\). This satisfies the bottom boundary condition for \(z\to-\infty\).
Inserting this expression in Eq. (6b) gives:
\[|k|C_{1}e^{|k|z}|_{z=0}=|k|C_{1}=\frac{\omega_{0}}{2}{\cal F}_{x}\left[\frac{ \partial|U|^{2}}{\partial x}\right], \tag{9}\]
from which \(C_{1}\) is obtained, and thus:
\[\hat{\phi}_{0}(k,z,t)=\frac{1}{|k|}\frac{\omega_{0}}{2}e^{|k|z}{\cal F}_{x} \left[\frac{\partial|U|^{2}}{\partial x}\right]\,. \tag{10}\]
Now the derivative with respect to \(x\) at \(z=0\) in Fourier space is given by:
\[{\cal F}_{x}\left[\frac{\partial\phi_{0}}{\partial x}\right]|_{z =0}=ik\hat{\phi}_{0}|_{z=0} \tag{11}\] \[= i\frac{\omega_{0}}{2}{\rm sgn}(k){\cal F}_{x}\left[\frac{ \partial|U|^{2}}{\partial x}\right]\]
and finally, moving back to the direct physical space, the Hilbert term of Eq. (5) is recovered.
### Multi-scale development and Hilbert term
As discussed, the multi-scale approach of Sed03 gives Eq. (3), which in the deep-water limit reduces to
\[\frac{\partial}{\partial x}(\phi_{01}+\phi_{02})=0\,. \tag{12}\]
On the other hand, the mean flow can be written as the Hilbert term of Eq. (5) in the Dysthe equation [24; 32]. Thus, there is the need to reconcile these results. This can be done as follows. In the derivation of the Hilbert term, we use the Laplace equation for \(\phi_{0}\) that is the _complete_ mean flow, and not just its approximation at second order in steepness as in Eq. (12). Indeed, the Laplace equation at third order for the mean flow is:
\[\frac{\partial^{2}\phi_{03}}{\partial z^{2}}+\frac{\partial^{2}\phi_{01}}{ \partial x^{2}}=0. \tag{13}\]
When integrated in \(z\), this gives
\[\frac{\partial\phi_{03}}{\partial z}=-(z+h)\frac{\partial^{2}\phi_{01}}{ \partial x^{2}}, \tag{14}\]
using the fact that \(\partial\phi_{01}/\partial z=0\) and imposing the bottom boundary condition. From the multi-scale development, the following expression can be obtained at third-order in steepness (see Eqs. (12) and (14) in Ref. [20]):
\[\frac{c_{g}^{2}}{g}\frac{\partial^{2}\phi_{01}}{\partial x^{2}}+ \frac{\partial\phi_{03}}{\partial z} = \frac{\omega_{0}}{2\sigma}(1+C_{FD})\frac{\partial|U|^{2}}{ \partial x} \tag{15}\] \[= D^{\prime}\frac{\partial|U|^{2}}{\partial x},\]
with the coefficient \(C_{FD}\) defined in Eq. (100) and in Ref. [4], and where \(D^{\prime}=(1+C_{FD})\,\omega_{0}/(2\sigma)=\mu_{g}\omega_{0}/(8\sigma^{2})\) (see definitions in App. A). Using Eq. (14) at \(z=0\), Eq. (15) reduces to
\[\frac{\partial^{2}\phi_{01}}{\partial x^{2}}=-\frac{D}{h}\frac{\partial|U|^{ 2}}{\partial x}, \tag{16}\]
which corresponds to Eqs. (33)-(34) in Sed03 and where \(D\) is defined in Eq. (114). Eq. (16) can also be written, using Eq. (14), as:
\[\frac{\partial\phi_{03}}{\partial z}=D\frac{\partial|U|^{2}}{\partial x}. \tag{17}\]
Taking the deep-water limit gives Eq. (6b), _i.e._, the boundary condition at the surface in the Neumann problem. Since \(\partial\phi_{01}/\partial z=0\) and \(\partial\phi_{02}/\partial z=0\), such expression is valid at third order in steepness for the mean flow.
Thus, in the derivation of the Hilbert term, the complete mean flow (in the Laplace equation) is considered together with the mean flow at third order in steepness in the surface boundary condition. Consequently, a 'hybrid' relation as described by Eq. (5) is obtained, which is different from Eq. (12), where only the first terms in the development of the mean flow at the surface are taken into account. In other words, the expression in Eq. (5) is inherently nonlocal. The same occurs in the nonlinear terms of the supercompact model [49], from which the Dysthe equation can be derived. The terms in Sed03 and Slu05 are instead local, as they only involve the surface and not the entire water column. This is obviously an approximation, since the mean flow does involve a body of fluid which is not immediately at the surface, as clearly shown in field measurements [3].
### Mean flow term in arbitrary depth
We now repeat the procedure used in Sec. II.1 for the case of intermediate water. We will consider two cases that differ based on the considered surface boundary condition: the Neumann problem is solved using the condition given by Eq. (17) in **Case 1**; using Eq. (15) in **Case 2**. Replacing the expression of \(\partial\phi_{0}/\partial x\) that is obtained in each case into the last term of Eq. (2) gives the final high-order NLS equation in arbitrary finite depth and space-like form.
#### Case 1
Moving as before to the Fourier space, the Laplace equation is \(\partial^{2}\hat{\phi}_{0}/\partial z^{2}=k^{2}\hat{\phi}_{0}\) and its solution is given by:
\[\hat{\phi}_{0}(k,z,t)=C_{1}e^{|k|(z+h)}+C_{2}e^{-|k|(z+h)}. \tag{18}\]
Imposing the boundary condition at the bottom, Eq. (6c), gives:
\[\frac{\partial\hat{\phi}_{0}}{\partial z}|_{z=-h}=|k|(C_{1}-C_{2})=0. \tag{19}\]
Thus, we have \(C_{1}=C_{2}=C/2\) and:
\[\hat{\phi}_{0}(k,z,t)=C\cosh\big{(}|k|(z+h)\big{)}. \tag{20}\]
Inserting this in Eq. (17) gives:
\[|k|C\sinh\big{(}|k|(z+h)\big{)}|_{z=0}=D\mathcal{F}_{x}\left[\frac{\partial|U| ^{2}}{\partial x}\right], \tag{21}\]
from which one obtains \(C\) and therefore:
\[\hat{\phi}_{0}=\frac{1}{|k|}\frac{\cosh\big{(}|k|(z+h)\big{)}}{\sinh\big{(}|k |h\big{)}}D\mathcal{F}_{x}\left[\frac{\partial|U|^{2}}{\partial x}\right]. \tag{22}\]
At \(z=0\), this gives:
\[\hat{\phi}_{0}|_{z=0}=D\frac{\coth\big{(}|k|h\big{)}}{|k|}\mathcal{F}_{x} \left[\frac{\partial|U|^{2}}{\partial x}\right]\,. \tag{23}\]
Now, the derivative with respect to \(x\) at \(z=0\) is given by:
\[\mathcal{F}_{x}\left[\frac{\partial\phi_{0}}{\partial x}\right]| _{z=0}=ik\hat{\phi}_{0}|_{z=0} \tag{24}\] \[= iD\frac{\text{sgn}(k)}{\tanh(|k|h)}\mathcal{F}_{x}\left[\frac{ \partial|U|^{2}}{\partial x}\right].\]
Moving back to the direct physical space, we finally get:
\[\frac{\partial\phi_{0}}{\partial x}=D\mathcal{F}_{x}^{-1}\Bigg{\{}\frac{i}{ \tanh(kh)}\mathcal{F}_{x}\left[\frac{\partial|U|^{2}}{\partial x}\right]\Bigg{\}}. \tag{25}\]
#### Case 2
The surface boundary condition is now Eq. (15). Note that the relation \(\phi_{03z}=-h\phi_{01xx}\) has not been used to simplify the l.h.s. of this equation and both mean flow terms are thus
considered to be of the same order, see Eq. (13) in Ref. [4].
Inserting the generic form of the solution in intermediate water, _i.e._, Eq. (20), into the surface boundary condition, Eq. (15), gives:
\[C\left.\left[\frac{c_{g}^{2}}{g}(-k^{2})\cosh(|k|(z+h))+|k|\sinh(|k|(z+h))\right] \right|_{z=0}=D^{\prime}{\cal F}_{x}\left[\frac{\partial|U|^{2}}{\partial x} \right], \tag{26}\]
from which we obtain the following expression for \(C(k,t)\):
\[C(k,t)=\frac{D^{\prime}}{k\tanh(kh)\left[1-c_{g}^{2}k/(g\tanh(kh))\right]} \frac{1}{\cosh(kh)}{\cal F}_{x}\left[\frac{\partial|U|^{2}}{\partial x} \right], \tag{27}\]
where we used \(|k|\tanh(|k|h)=k\tanh(kh)\), and \(c_{g}\) is the wave group speed at the carrier wavenumber. Performing analogous steps as in the previous case, the final expression for the Euler horizontal velocity in \(z=0\) is:
\[\frac{\partial\phi_{0}}{\partial x}=D^{\prime}{\cal F}_{x}^{-1}\left\{\frac{i }{\tanh\left(kh\right)\left[1-c_{g}^{2}k/(g\tanh(kh))\right]}{\cal F}_{x} \left[\frac{\partial|U|^{2}}{\partial x}\right]\right\}. \tag{28}\]
Note that this expression coincides with Eq. (15) in Ref. [4] (apart from a sign that is a typo in that latter equation).
By performing the derivative in \(x\) on the r.h.s. of Eq. (25) or (28), considering that \(D=D^{\prime}/(1-c_{g}^{2}/(gh))\) and using \(\coth y=1/y+O(y)\), we get in both **Case 1** and **2** for small \(kh\) numbers:
\[\frac{\partial\phi_{0}}{\partial x} \sim D{\cal F}_{x}^{-1}\Bigg{[}i(ik)\coth(kh){\cal F}_{x}[|U|^{2}] \Bigg{]} \tag{29}\] \[\sim D{\cal F}_{x}^{-1}\Bigg{[}-\frac{1}{h}{\cal F}_{x}[|U|^{2}] \Bigg{]}\] \[= -\frac{D}{h}|U|^{2}=\frac{\omega_{0}}{2}\frac{\mu_{g}k_{0}}{ \sigma\nu}|U|^{2}\,.\]
Thus, we recover the first term on the r.h.s. of Eq. (3). The NLS nonlinear coefficient is also recovered when the mean flow term is added to the third-order nonlinear term in Eq. (2), since \(\hat{\beta}=\hat{\beta}_{D}+D^{2}\nu/(2h^{2}\omega_{0})\). Hence, the defocusing regime is recovered through the inclusion of the mean flow term.
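Both cases thus amount to multiplying \(\mathcal{F}_{x}[|U|^{2}]\) by a real, even kernel in Fourier space. A minimal NumPy sketch is given below (periodic domain; the function and argument names are ours; the \(k=0\) mode is set to its analytical limit \(k/\tanh(kh)\rightarrow 1/h\), which reproduces the shallow-water result of Eq. (29); the signs follow NumPy's FFT derivative convention \(\mathcal{F}_{x}[\partial_{x}f]=ik\,\mathcal{F}_{x}[f]\)):

```python
import numpy as np

def mean_flow_finite_depth(U, dx, h, D, Dp, cg, g=9.81, case=1):
    """d(phi0)/dx at z=0: Eq. (25) for case=1, Eq. (28) for case=2.
    D and Dp are the coefficients D and D' defined in the text."""
    k = 2 * np.pi * np.fft.fftfreq(U.size, d=dx)
    I_hat = np.fft.fft(np.abs(U) ** 2)
    r = np.empty_like(k)
    nz = k != 0.0
    r[nz] = k[nz] / np.tanh(k[nz] * h)   # k/tanh(kh), even in k
    r[~nz] = 1.0 / h                     # limit k -> 0, cf. Eq. (29)
    if case == 1:
        kernel = -D * r                               # Eq. (25)
    else:
        # Eq. (28); the denominator can approach zero near the long-wave
        # resonance c_g^2 = g tanh(kh)/k, which may require care in practice
        kernel = -Dp * r / (1.0 - cg**2 * r / g)
    return np.real(np.fft.ifft(kernel * I_hat))
```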
### Numerical comparisons
We compare the expressions for the horizontal velocity \(\partial\phi_{0}/\partial x\) listed in Table 1 with the sub-harmonic velocity potential \(\phi_{20}\) at second-order in steepness and its horizontal derivative calculated using the Dalzell analytical method (see the original paper, Ref. [50] (Dal99), and the explicit formulae reported in the Appendix of Ref. [51]).
For benchmarking and validation purposes, we use the same parameters as in Ref. [51] (case C in their Table 2), namely a Gaussian (amplitude) spectrum \(S(k)\) with peak wavenumber \(k_{0}=0.0277\) m\({}^{-1}\), wavelength \(\lambda_{0}=2\pi/k_{0}\), standard deviation of the spectrum given by symmetrical values \(k_{w}=k_{w1}=k_{w2}=0.27k_{0}\)
steepness \(\varepsilon=0.3\), and different values of the normalised depth \(k_{0}h\) in the range \([0.5,50]\), thus covering both finite and infinite water depth conditions. The angular frequency is calculated from \(\omega_{0}^{2}=gk_{0}\tanh(k_{0}h)\). In particular, the surface elevation at first order in steepness, given by the superposition of \(N=30\) waves, is used to calculate the intensity of the wave envelope, _i.e._, \(|U|^{2}\), and then the horizontal velocity \(\partial\phi_{0}/\partial x\) through the different relations listed in Table 1. We consider a focused wave group, composed of \(N\) individual sinusoidal wave components that are in phase at a single point in time and space [52] (the origin in our case), and a random sea state obtained by using a uniform distribution for the phases. An example of a sea state realization is given in App. D.
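A minimal sketch of this construction (NumPy/SciPy; the spectral discretization and the amplitude normalization tied to the steepness are our assumptions):

```python
import numpy as np
from scipy.signal import hilbert  # analytic signal: eta + i*H[eta]

k0, eps, N = 0.0277, 0.3, 30                 # peak wavenumber [1/m], steepness
kw = 0.27 * k0                               # spectral width
kn = np.linspace(k0 - 3 * kw, k0 + 3 * kw, N)
S = np.exp(-0.5 * ((kn - k0) / kw) ** 2)     # Gaussian amplitude spectrum
a = (eps / k0) * S / S.sum()                 # so that k0 * sum(a) = eps at focus
x = np.linspace(-40, 40, 8192) * (2 * np.pi / k0)   # grid over 80 wavelengths

phases = np.zeros(N)                         # focused group at x = 0
# phases = np.random.default_rng(0).uniform(0, 2*np.pi, N)  # random sea state
eta1 = sum(ai * np.cos(ki * x + ph) for ai, ki, ph in zip(a, kn, phases))
# envelope intensity |U|^2 at leading order (sign conventions drop out of |.|)
I = np.abs(hilbert(eta1)) ** 2
```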
The results are summarised in Fig. 2 for waves focusing at \(x=0\), and in Fig. 3 for the case of waves superposition with random phases. In both cases, we see that the horizontal velocity calculated by Eq. (29) (gray line), which corresponds to the second-order expression in Sed03, reproduces well the Dalzell waveform (green line) only for \(k_{0}h\leq 2\), and cannot be applied in deep water where it gives a waveform amplitude much smaller than Dysthe's result (black line). On the contrary, Dysthe's expression should not be used for \(k_{0}h<10\) as in our considered cases, especially for large amplitude waveforms, where it does not attain Dalzell's accuracy. Although an elementary consideration of the dispersion relation suggests that \(k_{0}h\approx 5\) should be indistinguishable from \(k_{0}h\to\infty\), the dynamics of the mean flow is consistently different from this limit up to \(k_{0}h=10\). The expressions for the horizontal velocity given by **Case 1** [Eq. (25)] and **Case 2** [Eq. (28)] behave in a similar way at all depths, providing in general an approximation that corresponds well to the Dalzell solution in intermediate waters, and to Dysthe in deep waters. It is also important to note that the expressions provided in Sed03 and Slu05 at third-order in steepness [Eqs. (3) and (C10), respectively], give different results and do not correspond to the other models at any depth.
More detailed features can be deduced from Fig. 4, where the deviation operator with respect to the Dysthe expression, defined as \(D_{\rm{Dysthe}}=N^{-1}\int(\phi_{0x}-\phi_{0x}^{\rm{Dysthe}})^{2}\,dx\), and to the Dalzell one, \(D_{\rm{Dalzell}}=N^{-1}\int(\phi_{0x}-\phi_{0x}^{\rm{Dalzell}})^{2}\,dx\), with \(\phi_{0x}=\partial\phi_{0}/\partial x\), are shown as a function of the non-dimensional water depth \(k_{0}h\). The integral is calculated over \(L=80\lambda_{0}\), and the normalization coefficient is \(N=(\omega_{0DW}\lambda_{0})^{2}L\), with \(\omega_{0DW}\) calculated in deep water (DW). The Stokes series expansion of the velocity potential (and thus Dalzell's model) converges in shallow water [53] if \(3\varepsilon/(2k_{0}h)^{3}\ll 1\), thus at low Ursell number, which implies in our case \(k_{0}h\gg 0.48\). This region is excluded in Fig. 4. It can be seen that the expression for the horizontal velocity given by **Case 2** [Eq. (28)] is accurate at second order at all depths, almost superposing on the second-order Dalzell solution, while the one given by **Case 1** [Eq. (25)] is the only expression that consistently converges to the Dysthe expression in the deep-water limit, and is equivalent to **Case 2** for \(k_{0}h<5\). Note that the third-order corrections provided in Sed03 [Eq. (3)] and in Slu05 [Eq. (C10)] are different, and do not extend the validity of the second-order expression [Eq. (29)], which is accurate for \(k_{0}h<2\), to deeper water regimes (see Fig. 4b). This raises a warning on the quantitative accuracy of these third-order expressions.
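On the sampling grid, the deviation operators reduce to normalized mean-square differences; a minimal sketch (trapezoidal quadrature is our choice) reads:

```python
import numpy as np

def deviation(phi0x, phi0x_ref, x, norm):
    """D = (1/N) * int (phi0x - phi0x_ref)^2 dx, cf. the definitions above."""
    return np.trapz((phi0x - phi0x_ref) ** 2, x) / norm
```

with `norm` set to \((\omega_{0DW}\lambda_{0})^{2}L\) for the comparison of Fig. 4.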
\begin{table}
\begin{tabular}{l l l}
**Case 1** & Eq. (25) & 3rd-order, nonlocal \\
**Case 2** & Eq. (28) & 3rd-order, nonlocal \\
Dys79 [24] & Eq. (5) & 3rd-order, nonlocal, deep water \\
Sed03 [39] & Eq. (3) & 3rd-order \\
Sed03 & Eq. (29) & 2nd-order \\
Slu05 [40] & Eq. (C10) & 3rd-order \\
\end{tabular}
\end{table}
Table 1: List of mean flow terms in space-like formulation: \(\partial\phi_{0}/\partial x\).
Figure 2: Mean flow \(\phi_{0x}\) for \(N=30\) waves focusing at \(x=0\) as described by the expressions listed in Table 1, for different values of \(k_{0}h\).
Figure 3: Same as Fig. 2 for the evolution of \(N=30\) waves with random phases.
## III Fourth-order equation: propagation in space
In order to transform Eq. (1) into an expression describing the propagation in space, which is necessary to describe the nonlinear evolution of waves in a laboratory flume, a change of variables is needed
\[\tau=\varepsilon\left(t-\frac{x}{c_{\rm g}}\right);\;\xi=\varepsilon^{2}x. \tag{30}\]
Through this transformation, the third-order terms are only subject to a rescaling,
\[\alpha=\frac{\hat{\alpha}}{c_{\rm g}^{3}};\;\beta=\frac{\hat{\beta}}{c_{\rm g}}. \tag{31}\]
The higher-order terms are instead dramatically modified. Indeed, a mixed derivative appears from the second-order dispersion term, \(\partial^{2}U/\partial x^{2}\), that reads as \(-(\omega^{\prime\prime}/c_{\rm g})\partial^{2}U/\partial x\partial t\). In the multiscale spirit, the time-like NLS (_i.e._, terms at third-order in steepness) is used to estimate this term, which results in corrections at fourth-order. The resulting time-like form of the evolution equation does not appear explicitly in the literature and is given by (setting \(\varepsilon=1\), and changing the notation as \(\tau\to t\) and \(\xi\to x\)):
\[i\frac{\partial U}{\partial x}+\alpha\frac{\partial^{2}U}{\partial t^{2}} \underbrace{-\beta|U|^{2}U}_{\text{incl. Mean Flow}}=-i\alpha_{3}\frac{ \partial^{3}U}{\partial t^{3}}\underbrace{+i\beta_{21}|U|^{2}\frac{\partial U }{\partial t}+i\beta_{22}U^{2}\frac{\partial U^{*}}{\partial t}}_{\text{incl. Mean Flow}}, \tag{32}\]
whose dispersion and nonlinear coefficients are given in App. A and B, and their dependence on \(k_{0}h\) is shown in Figs. 5 and 6. High-order dispersion terms [54] are easily included up to arbitrary order following Refs. [55] and [56], reducing the constraints in bandwidth of the original Dysthe equation.
Figure 4: Deviation operator with respect to the Dysthe expression (_a_) and to the Dalzell one (_b_) for the derivative in space \(\phi_{0x}\), as defined in the main text, for the different cases listed in Table 1.
Using the relation [39]
\[\frac{\partial\phi_{01}}{\partial t}=-c_{g}\frac{\partial\phi_{01}}{\partial x} \tag{33}\]
Eq. (32) can be rewritten in the equivalent form:
\[i\frac{\partial U}{\partial x}+\alpha\frac{\partial^{2}U}{\partial t^{2}}- \beta_{D}|U|^{2}U=-i\alpha_{3}\frac{\partial^{3}U}{\partial t^{3}}+i\mathcal{B }_{21}|U|^{2}\frac{\partial U}{\partial t}+i\mathcal{B}_{22}U^{2}\frac{ \partial U^{*}}{\partial t}\underbrace{-\frac{\mu_{g}k_{0}}{4\sigma c_{g}^{2} }U\frac{\partial\phi_{0}}{\partial t}}_{\text{Mean Flow}} \tag{34}\]
where \(\beta_{D}=\hat{\beta}_{D}/c_{g}\), \(\mathcal{B}_{21}=\omega_{0}k_{0}\tilde{Q}_{41}/c_{g}^{2}-4\alpha\beta_{D}c_{g}\), \(\mathcal{B}_{22}=\omega_{0}k_{0}\tilde{Q}_{42}/c_{\text{g}}^{2}-2\alpha\beta_{D}c_{\text{g}}\), with \(\hat{\beta}_{D}\), \(\alpha\), \(\alpha_{3}\) given in App. A, and \(\tilde{Q}_{41}\), \(\tilde{Q}_{42}\) in App. B. The dependence of the nonlinear coefficients on \(k_{0}h\) is illustrated in Fig. 6. Note that \(\beta,\beta_{21},\beta_{22},\mathcal{B}_{21}\) and \(\mathcal{B}_{22}\) diverge quite strongly for \(k_{0}h\to 0\), _i.e._, particularly in the defocusing regime. That said, and as can also be verified analytically, all coefficients correctly reproduce their deep-water limits as \(k_{0}h\rightarrow\infty\).
### Mean flow in time-like equations
At main order in steepness, Eqs. (3) and (33) imply
\[\frac{\partial|U|^{2}}{\partial t}=-c_{g}\frac{\partial|U|^{2}}{\partial x} \tag{35}\]
Thus, \(\phi_{01}\) and \(|U|^{2}\) do not depend on \(x\) and \(t\) separately, but only through their combination \((x-c_{g}t)\), so that at leading order the Fourier transform in space, \(\mathcal{F}_{x}\), can be replaced by the Fourier transform in time, \(\mathcal{F}_{t}\). From Eq. (5), in the deep-water limit it is possible to simply exchange the derivative with respect to space with that with respect to time and vice versa, since the Hilbert transform of the derivative is the derivative of the Hilbert transform, _i.e._ these two linear operators commute. As such:
\[\frac{\partial\phi_{0}}{\partial t}=\frac{\omega_{0}}{2}\mathcal{H}_{t}\left[ \frac{\partial|U|^{2}}{\partial t}\right] \tag{36}\]
The time-like Dysthe equation is given by the deep-water limit of Eq. (34) and reads, using the above expression for the mean flow term [45; 46; 47; 28; 48]
\[i\frac{\partial U}{\partial x}-\frac{k_{0}}{\omega_{0}^{2}}\frac{\partial^{2 }U}{\partial t^{2}}-k_{0}^{3}|U|^{2}U=2i\frac{k_{0}^{3}}{\omega_{0}}\bigg{(}4| U|^{2}\frac{\partial U}{\partial t}+U^{2}\frac{\partial U^{*}}{\partial t}+ \underbrace{iU\mathcal{H}_{t}\left[\frac{\partial|U|^{2}}{\partial t}\right] }_{\text{Mean Flow}}\bigg{)} \tag{37}\]
Figure 5: Dispersion coefficients (for \(\omega_{0}=1\) Hz). Zero crossings are marked on the horizontal axis.
where \(\alpha_{3}\to 0\), since \(k\propto\omega^{2}\), and \(\hat{\beta}_{21}\rightarrow\frac{3}{2}\omega_{0}k_{0}\), \(\hat{\beta}_{22}\rightarrow\frac{1}{4}\omega_{0}k_{0}\), \(\beta_{21}\to 8k_{0}^{3}/\omega_{0}\), and \(\beta_{22}\to 2k_{0}^{3}/\omega_{0}\).
For the last term in Eq. (34), the Sed03 expression, as in Eq. (3), can be written in terms of the time derivative, using Eq. (33):
\[\frac{\partial\phi_{0}}{\partial t}=-c_{g}\frac{\omega_{0}}{2} \frac{k_{0}\mu_{g}}{\sigma\nu}|U|^{2} \tag{38}\] \[- i\frac{4\omega_{0}\sigma}{\nu}\tilde{q}_{40S}\left(U\frac{ \partial U^{*}}{\partial t}-U^{*}\frac{\partial U}{\partial t}\right).\]
However, this expression goes to zero in the deep-water limit, and thus it does not converge to the Dysthe mean flow given in Eq. (36).
Replacing Eq. (33) in the Laplace equation and the surface boundary conditions, and repeating the same steps of Sec. II.3, we finally get the following expressions for the derivative in time of \(\phi_{0}\) for **Case 1** and **2**, respectively, to be inserted in the evolution Eq. (34):
\[\frac{\partial\phi_{0}}{\partial t} = D{\cal F}_{t}^{-1}\Bigg{\{}\frac{i}{\tanh(\omega h/c_{g})}{\cal F }_{t}\left[\frac{\partial|U|^{2}}{\partial t}\right]\Bigg{\}},\,\mbox{for Case 1} \tag{39}\] \[\frac{\partial\phi_{0}}{\partial t} = D^{\prime}{\cal F}_{t}^{-1}\Bigg{\{}\frac{i}{\tanh(\omega h/c_{g })\left[1-c_{g}\omega/(g\tanh(\omega h/c_{g}))\right]}{\cal F}_{t}\left[\frac {\partial|U|^{2}}{\partial t}\right]\Bigg{\}},\,\mbox{for Case 2} \tag{40}\]
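Numerically, Eqs. (39) and (40) reuse the space-like kernels with the substitution \(k\rightarrow\omega/c_{g}\); a minimal sketch in the same spirit as before follows (the overall sign is fixed by the leading-order relation \(\partial\phi_{0}/\partial t=-c_{g}\,\partial\phi_{0}/\partial x\) together with the FFT conventions of the previous sketch):

```python
import numpy as np

def mean_flow_time_like(U, dt, h, D, Dp, cg, g=9.81, case=1):
    """d(phi0)/dt at z=0: Eq. (39) for case=1, Eq. (40) for case=2."""
    w = 2 * np.pi * np.fft.fftfreq(U.size, d=dt)   # angular frequency grid
    I_hat = np.fft.fft(np.abs(U) ** 2)
    q = w / cg                                      # effective wavenumber
    r = np.empty_like(q)
    nz = q != 0.0
    r[nz] = q[nz] / np.tanh(q[nz] * h)
    r[~nz] = 1.0 / h                                # limit w -> 0, cf. Eq. (38)
    if case == 1:
        kernel = cg * D * r                               # Eq. (39)
    else:
        kernel = cg * Dp * r / (1.0 - cg**2 * r / g)      # Eq. (40)
    return np.real(np.fft.ifft(kernel * I_hat))
```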
### Numerical comparisons
We now compare the expressions for \(\partial\phi_{0}/\partial t\) listed in Table 2 with the sub-harmonic velocity potential \(\phi_{20}\) at second-order in steepness and its time derivative calculated using the Dalzell analytical method in Refs. [50] and [51].
Figure 6: Nonlinear coefficients (_a_) in Eq. (32), and (_b_) in Eq. (34), normalized with respect to their deep-water values (in Dysthe equation, Eq. (37)). Zero crossings are marked in the insets.
Figure 7: Mean flow \(\phi_{0t}\) for N = 30 waves focusing at \(t=0\) described by the expressions listed in Table 2, for different values of \(k_{0}h\).
We use the same parameters as in Sec. II.4, starting in this case from a Gaussian (amplitude) spectrum \(S(\omega)\) peaked at \(\omega_{0}=1\) Hz.
The results for \(N=30\) waves focusing at \(t=0\) are shown in Fig. 7. We see that the second-order expression in Sed03 reproduces well the waveform only for \(k_{0}h\leq 1\), showing discrepancies with respect to Dalzell's solution at \(k_{0}h=2\) and not converging to Dysthe's solution in deep water. As for the space-like form, the expressions for \(\phi_{0t}\) given by **Case 1** [Eq. (39)] and **Case 2** [Eq. (40)] have a similar behavior at all depths, providing in general approximations that are as accurate as the Dalzell solution in intermediate waters, and as the Dysthe expression in deep waters. As before, the third-order expressions provided by Eqs. (38) and (C11) are different and do not correspond to the other models. Fig. 8 shows the deviation operator with respect to the Dysthe expression, defined similarly to the spatial counterpart (see Sec. II.4) as \(D_{\rm{Dysthe}}=N^{-1}\int(\phi_{0t}-\phi_{0t}^{\rm{Dysthe}})^{2}\,dt\), and to the Dalzell one, \(D_{\rm{Dalzell}}=N^{-1}\int(\phi_{0t}-\phi_{0t}^{\rm{Dalzell}})^{2}\,dt\), as a function of the non-dimensional water depth \(k_{0}h\). The integral is now calculated over \(T=80\ T_{0}\), and the normalization coefficient is \(N=(\omega_{0}\lambda_{0DW})^{4}T\), with \(\lambda_{0DW}\) calculated in deep water. As for the space-like case, it can be seen that the expression given by **Case 2** [Eq. (40)] is as accurate at second order at all depths as the second-order Dalzell solution, while the one given by **Case 1** [Eq. (39)] is the only expression that converges to the Dysthe term in the deep-water limit, and is equivalent to **Case 2** for \(k_{0}h<3\).
## IV Conclusion
During the evolution of surface gravity waves, fluid particles experience Stokes drift along the propagation direction of the waves, and a return flow in vertical and horizontal directions, which closes the water-mass transport. Both processes give rise to a mean flow at the surface that needs to be accounted for in order to accurately predict the transport of pollutants [6; 7; 8] (such as oil, microplastics, etc.) and, more generally, the impact of wave evolution at small scales on the ocean circulation at large scales [9; 10; 11; 12]. We have derived nonlocal (viz. non-instantaneous) expressions of the mean flow [Eqs. (25) and (39)] that correctly converge to the deep-water limit at third order in steepness, while being equivalent to second-order formulations in intermediate waters. We have included these expressions in an envelope evolution equation at fourth order in steepness in both space-like [Eq. (2)] and time-like [Eq. (34)] formulations. We emphasize that the time-like form is relevant to study the evolution of unidirectional wave groups in space, thus for modelling experiments in water wave flumes with high accuracy in arbitrary depths. Future work will focus on the experimental validation of our results in different water depth regimes.
###### Acknowledgements.
A. G., J. K. and M. B. acknowledge financial support from the Swiss National Science Foundation (Project No. 200020-175697). D.E. acknowledges financial support from the Swiss National Science Foundation (Fellowship P2GEP2-191480). A. C. acknowledges support
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Case 1** & Eq. (39) & 3rd-order, non-instantaneous \\
**Case 2** & Eq. (40) & 3rd-order, non-instantaneous \\
Dys79 [24] & Eq. (36) & 3rd-order, non-instantaneous, deep water \\
Sed03 [39] & Eq. (38) & 3rd-order \\
Sed03 & 1st term on r.h.s. of Eq. (38) & 2nd-order \\
Slu05 [40] & Eq. (C11) & 3rd-order \\ \hline \hline \end{tabular}
\end{table}
Table 2: List of mean flow terms in time-like formulation: \(\partial\phi_{0}/\partial t\).
from Kyoto University's Hakubi Center for Advanced Research.
## Author Declarations
### Conflict of Interest Statement
The authors have no conflicts to disclose.
### Author Contributions
**Alexis Gomel**: conceptualization (equal); formal analysis (equal); investigation (equal); writing/original draft (supporting); writing/review and editing (equal). **Corentin Montessuit**: formal analysis (equal); investigation (equal). **Andrea Armaroli:** conceptualization (equal); formal analysis (equal); investigation (equal); writing/review and editing (equal). **Debbie Eeltink**: formal analysis (equal); writing/review and editing (equal). **Amin Chabchoub**: supervision (equal); writing/review and editing (equal). **Jerome Kasparian**: supervision (equal); funding acquisition (lead); writing/review and editing (equal). **Maura Brunetti**: conceptualization (equal); formal analysis (equal); investigation (equal); methodology (lead); supervision (equal); writing/original draft (lead); writing/review and editing (equal).
## Data Availability Statement
The data that support the findings of this study and the matlab scripts are available from the corresponding author upon reasonable request.
Figure 8: Deviation operator with respect to the Dysthe expression (_a_) and to the Dalzell one (_b_) for the derivative in time \(\phi_{0t}\), as defined in the main text, for the different cases listed in Table 2.
## Appendix A Dispersion and nonlinear coefficients
The dispersion coefficients are:
\[\hat{\alpha} = \frac{1}{2}\omega^{\prime\prime}(k_{0})=\alpha\,c_{\rm g}^{3} \tag{10}\] \[\alpha = -\frac{1}{2}k^{\prime\prime}(\omega_{0})=-\frac{1}{2\omega_{0}c_{g }}\left[1-\frac{gh}{c_{g}^{2}}(1-\kappa\sigma)(1-\sigma^{2})\right]\] (11) \[\hat{\alpha}_{3} = \frac{1}{6}\omega^{\prime\prime\prime}(k_{0})\] (12) \[= \frac{\omega_{0}}{48k_{0}^{3}\sigma}\left[3\sigma+\kappa(1- \sigma^{2})\left(-3+\kappa\left(-\frac{3}{\sigma}+\frac{3\kappa}{\sigma^{2}}+ 13\kappa-15\kappa(1-\sigma^{2})-9\sigma\right)\right)\right]\] \[\alpha_{3} = -\frac{1}{6}k^{\prime\prime\prime}(\omega_{0})=\frac{\hat{ \alpha}_{3}}{c_{\rm g}^{4}}-2\alpha^{2}c_{\rm g}\] (13) \[k^{\prime\prime\prime}(\omega_{0}) = -\frac{1}{c_{\rm g}^{4}}\frac{\partial^{2}c_{\rm g}}{\partial k^ {2}}+\frac{3}{c_{\rm g}^{5}}\left(\frac{\partial c_{\rm g}}{\partial k} \right)^{2}\] (14) \[\hat{\alpha}_{4} \equiv \frac{1}{24}\frac{\partial^{4}\omega}{\partial k^{4}}=\frac{ \omega_{0}}{384k_{0}^{4}}[-15+12\kappa^{2}+46\kappa^{4}\] (15) \[+4\kappa(3-5\kappa^{2})\coth\kappa+2\kappa^{2}(3+10\kappa^{2}) \coth^{2}\kappa+12\kappa^{3}\coth^{3}\kappa-15\kappa^{4}\coth^{4}\kappa\] \[+\kappa\sigma(-12+68\kappa^{2}+3\kappa\sigma(-6-52\kappa^{2}+5 \kappa\sigma(-4+7\kappa\sigma)))]\] \[\alpha_{4} \equiv \frac{1}{24}\frac{\partial^{4}k}{\partial\omega^{4}}=-\frac{1}{24 c_{\rm g}^{5}}\frac{\partial^{3}c_{\rm g}}{\partial k^{3}}+\frac{5}{12c_{\rm g}^{6}} \frac{\partial^{2}c_{\rm g}}{\partial k^{2}}\frac{\partial c_{\rm g}}{ \partial k}-\frac{5}{8c_{\rm g}^{7}}\left(\frac{\partial c_{\rm g}}{\partial k }\right)^{3}\] (16) \[= -\frac{\hat{\alpha}_{4}}{c_{\rm g}^{5}}+\frac{5}{c_{g}^{6}}\hat{ \alpha}_{3}\hat{\alpha}-\frac{5}{c_{\rm g}^{7}}\hat{\alpha}^{3}\]
where \(\kappa=k_{0}h\), \(\sigma=\tanh\kappa\) and \(c_{\rm g}\) is the group velocity:
\[c_{g}\equiv\frac{\partial\omega}{\partial k}=\frac{g}{2\omega_{0}}[\sigma+ \kappa(1-\sigma^{2})] \tag{17}\]
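As a quick numerical aid, the sketch below evaluates these dispersion quantities in Python; it assumes standard gravity \(g=9.81\) m/s\({}^2\) and the linear dispersion relation \(\omega_{0}^{2}=gk_{0}\sigma\) (both consistent with the definitions above), and checks the deep-water limit of the group-velocity expression tagged (17).

```python
import numpy as np

g = 9.81  # gravitational acceleration [m/s^2] (assumed value)

def dispersion_quantities(k0, h):
    """Return (omega0, kappa, sigma, c_g) for carrier wavenumber k0 and depth h,
    using kappa = k0*h, sigma = tanh(kappa), omega0^2 = g*k0*sigma, and the
    group-velocity expression above."""
    kappa = k0 * h
    sigma = np.tanh(kappa)
    omega0 = np.sqrt(g * k0 * sigma)
    c_g = g / (2.0 * omega0) * (sigma + kappa * (1.0 - sigma ** 2))
    return omega0, kappa, sigma, c_g

# Deep-water check (kappa >> 1): c_g should approach g/(2*omega0) = c_p/2.
omega0, kappa, sigma, c_g = dispersion_quantities(k0=0.0277, h=1000.0)
assert abs(c_g - g / (2 * omega0)) < 1e-6 * c_g
```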
The third-order nonlinear coefficients are:
\[\hat{\beta}_{D} = -\frac{\omega_{0}k_{0}^{2}}{16\sigma^{4}}(2\sigma^{6}-13\sigma^{ 4}+12\sigma^{2}-9) \tag{18}\] \[\hat{\beta} = \beta\,c_{\rm g}=\hat{\beta}_{D}+\omega_{0}k_{0}^{2}\frac{\mu_{g }^{2}}{8\sigma^{2}\nu}=\hat{\beta}_{D}-\frac{k_{0}\mu_{g}}{4\sigma h}D\] (19) \[\beta_{D} = \frac{\hat{\beta}_{D}}{c_{g}}\] (20) \[\beta = \frac{\omega_{0}k_{0}^{2}}{16\sigma^{4}c_{g}}\left\{9-10\sigma^ {2}+9\sigma^{4}-\frac{2\sigma^{2}c_{g}^{2}}{gh-c_{g}^{2}}\left[4\frac{c_{p}^{ 2}}{c_{g}^{2}}+4\frac{c_{p}}{c_{g}}(1-\sigma^{2})+\frac{gh}{c_{g}^{2}}(1- \sigma^{2})^{2}\right]\right\} \tag{21}\]
where, interestingly, the coefficients in the curly brackets can be expressed in terms of \(\kappa\): \(c_{g}/c_{p}=(\sigma+\kappa(1-\sigma^{2}))/(2\sigma)\), \(gh/c_{g}^{2}=(c_{p}^{2}/c_{g}^{2})\kappa/\sigma\), \(2\sigma^{2}c_{g}^{2}/(gh-c_{g}^{2})=2\sigma(c_{g}^{2}/c_{p}^{2})/(\kappa/ \sigma-c_{g}^{2}/c_{p}^{2})\) and
\[D = -h\frac{\omega_{0}}{2}\frac{k_{0}\mu_{g}}{\sigma\nu}=\frac{\omega _{0}}{2}\frac{\kappa}{2\sigma}\frac{2+(1-\sigma^{2})c_{g}/c_{p}}{\kappa-\sigma c _{g}^{2}/c_{p}^{2}}=\frac{D^{\prime}}{1-c_{g}^{2}/(gh)} \tag{113}\] \[D^{\prime} = \frac{\omega_{0}}{2\sigma}(1+C_{FD})\] (114) \[\mu_{g} = \frac{2\sigma}{\omega_{0}}(2\omega-kc_{g}(\sigma^{2}-1))=(\sigma ^{2}-1)^{2}\kappa-\sigma(\sigma^{2}-5)=4\sigma(1+C_{FD})\] (115) \[\nu = \frac{4k_{0}\sigma}{g}(c_{g}^{2}-gh)=[(\sigma+1)^{2}\kappa- \sigma][(\sigma-1)^{2}\kappa-\sigma]\] (116) \[C_{FD} = \frac{\omega_{0}c_{g}}{g\sinh(2\kappa)} \tag{117}\]
The higher-order nonlinear coefficients are:
\[\beta_{21} = \frac{\hat{\beta}_{21}}{c_{\rm g}^{2}}-4\alpha\beta c_{\rm g} \tag{118}\] \[\beta_{22} = \frac{\hat{\beta}_{22}}{c_{\rm g}^{2}}-2\alpha\beta c_{\rm g}\] (119) \[\hat{\beta}_{21} = \omega_{0}k_{0}Q_{41S}\] (120) \[\hat{\beta}_{22} = \omega_{0}k_{0}Q_{42S}\] (121) \[{\cal B}_{21} = \omega_{0}k_{0}\tilde{Q}_{41}/c_{g}^{2}-4\alpha\beta_{D}c_{g}\] (122) \[{\cal B}_{22} = \omega_{0}k_{0}\tilde{Q}_{42}/c_{\rm g}^{2}-2\alpha\beta_{D}c_{ \rm g} \tag{123}\]
where \(Q_{41S}\), \(Q_{42S}\), \(\tilde{Q}_{41}\), \(\tilde{Q}_{42}\) are given in App. B.
## Appendix B Notation used in Sedletsky (2003) (Sed03)
The main coefficients in Sedletsky's notation are [39]:
\[Q_{41} = \tilde{Q}_{41}-\frac{\mu_{g}}{\nu}\tilde{q}_{40} \tag{100}\] \[Q_{42} = \tilde{Q}_{42}+\frac{\mu_{g}}{\nu}\tilde{q}_{40}\] (101) \[\tilde{q}_{40} = \frac{1}{32\sigma^{3}\nu}[(\sigma^{2}-1)^{5}\kappa^{4}-4\sigma( 2\sigma^{4}+9\sigma^{2}+5)(\sigma^{2}-1)^{2}\kappa^{3}\] (102) \[+2\sigma^{2}(9\sigma^{4}+16\sigma^{2}-9)(\sigma^{2}-1)\kappa^{2}\] \[-4\sigma^{3}(4\sigma^{4}-9\sigma^{2}-7)\kappa+5\sigma^{4}(\sigma^ {2}-5)]\] \[\tilde{Q}_{41} = \tilde{q}_{41}\] (103) \[= \frac{1}{16\sigma^{5}\nu}[(2\sigma^{6}-11\sigma^{4}-10\sigma^{2 }+27)(\sigma^{2}-1)^{3}\kappa^{3}-\sigma(6\sigma^{8}-21\sigma^{6}+9\sigma^{4}- 43\sigma^{2}+81)(\sigma^{2}-1)\kappa^{2}\] (104) \[+\sigma^{2}(6\sigma^{8}-15\sigma^{6}-77\sigma^{4}+71\sigma^{2}-8 1)\kappa-\sigma^{3}(\sigma^{2}+1)(2\sigma^{4}-7\sigma^{2}-27)]\] \[\tilde{q}_{42} = \frac{1}{32\sigma^{5}\nu}[(4\sigma^{6}-13\sigma^{4}+10\sigma^{2}- 9)(\sigma^{2}-1)^{3}\kappa^{3}-\sigma(12\sigma^{8}-51\sigma^{6}+17\sigma^{4}- \sigma^{2}-9)(\sigma^{2}-1)\kappa^{2}\] (105) \[+\sigma^{2}(12\sigma^{8}-67\sigma^{6}+33\sigma^{4}-\sigma^{2}-9) \kappa-\sigma^{3}(4\sigma^{6}-29\sigma^{4}+42\sigma^{2}-9)]\] \[q_{3} = -\frac{\hat{\beta}}{\omega_{0}k_{0}^{2}}\] (106) \[\tilde{Q}_{42} = \tilde{q}_{42}-2\frac{c_{g}}{c_{p}}q_{3} \tag{107}\]
The final expressions for \(Q_{41}\) and \(Q_{42}\) are given in Eqs. (67) and (68) in Ref. [39] and reported here for completeness:
\[{\cal Q}_{41} = \frac{1}{32\sigma^{5}\nu^{2}}\{(3\sigma^{6}-20\sigma^{4}-21\sigma ^{2}+54)(\sigma^{2}-1)^{5}\kappa^{5} \tag{108}\] \[-\sigma(11\sigma^{8}-99\sigma^{6}-61\sigma^{4}+7\sigma^{2}+270)( \sigma^{2}-1)^{3}\kappa^{4}\] \[+2\sigma^{2}(\sigma^{2}-1)(7\sigma^{10}-58\sigma^{8}+38\sigma^{6} +52\sigma^{4}-181\sigma^{2}+270)\kappa^{3}\] \[-2\sigma^{3}(3\sigma^{10}+18\sigma^{8}-146\sigma^{6}-172\sigma^{4 }+183\sigma^{2}-270)\kappa^{2}\] \[-\sigma^{4}(\sigma^{8}-109\sigma^{6}+517\sigma^{4}+217\sigma^{2}+ 270)\kappa\] \[+\sigma^{5}(\sigma^{6}-40\sigma^{4}+193\sigma^{2}+54)\}\] \[{\cal Q}_{42} = \frac{1}{32\sigma^{5}\nu^{2}}\{-(3\sigma^{6}+7\sigma^{4}-11 \sigma^{2}+9)(\sigma^{2}-1)^{5}\kappa^{5}\] (109) \[+\sigma(11\sigma^{8}-48\sigma^{6}+66\sigma^{4}+8\sigma^{2}+27)( \sigma^{2}-1)^{3}\kappa^{4}\] \[-2\sigma^{2}(\sigma^{2}-1)(7\sigma^{10}-79\sigma^{8}+282\sigma^{6 }-154\sigma^{4}-\sigma^{2}+9)\kappa^{3}\] \[+2\sigma^{3}(3\sigma^{10}-63\sigma^{8}+314\sigma^{6}-218\sigma^{4 }+19\sigma^{2}+9)\kappa^{2}\] \[+\sigma^{4}(\sigma^{8}+20\sigma^{6}-158\sigma^{4}-28\sigma^{2}-27 )\kappa-\sigma^{5}(\sigma^{6}-7\sigma^{4}+7\sigma^{2}-9)\}\]
We have verified that they are equivalent to the expressions obtained using Eqs. (100)-(101).
From Eqs. (100)-(101), in the deep-water limit \(\kappa\rightarrow\infty\), \(\sigma\to 1\), \(\nu\to 1-4\kappa\), \(\mu_{g}\to 4\), \({\cal Q}_{41}\to 768/(32\cdot 16)=3/2\) and \({\cal Q}_{42}\to 128/(32\cdot 16)=1/4\) recovering the Dysthe result for such terms.
As suggested in Ref. [41], the above expressions can be modified to agree with the results presented in Ref. [40]. However, we have verified that the (small) modification suggested in Ref. [41] missed a factor of 2, the correct one being the following:
\[\tilde{q}_{40S} = \tilde{q}_{40}+\frac{\Delta}{2}\frac{\nu}{\mu_{g}} \tag{102}\] \[Q_{41S} = Q_{41}-\frac{\Delta}{2}\] (103) \[Q_{42S} = Q_{42}+\frac{\Delta}{2} \tag{104}\]
where the term \(\Delta\) is defined in Ref. [41]:
\[\Delta = -\frac{\sigma^{2}-1}{16\sigma^{3}\nu}[(\sigma^{2}-1)^{3}(3\sigma^ {2}+1)\kappa^{3}-\sigma(\sigma^{2}-1)(5\sigma^{4}-18\sigma^{2}-3)\kappa^{2} \tag{105}\] \[+ \sigma^{2}(\sigma^{2}-1)(\sigma^{2}-9)\kappa+\sigma^{3}(\sigma^ {2}-5)]\]
In the deep-water limit \(\kappa\rightarrow\infty\), \(Q_{41S}\to 3/2\) and \(Q_{42S}\to 1/4\), since \(\Delta\to 0\), thus recovering the Dysthe result for such terms.
Note, however, that the final equations that include the mean flow (namely, Eqs. (2) and (34)) are not affected by such ambiguity, since their high-order nonlinear coefficients only depend on \(\tilde{Q}_{41}\) and \(\tilde{Q}_{42}\).
## Appendix C Notation used in Slunyaev (2005) (Slu05)
One can use the notation in Ref. [40], where \(\hat{\beta}_{21}=\omega_{0}k_{0}Q_{41S}\) and \(\hat{\beta}_{22}=\omega_{0}k_{0}Q_{42S}\), and \(Q_{41S}=(h^{3}\omega_{0})\tilde{\alpha}_{21}/(\kappa^{3}\sigma^{2})\), \(Q_{42S}=(h^{3}\omega_{0})\tilde{\alpha}_{22}/(\kappa^{3}\sigma^{2})\), with \(\tilde{\alpha}_{21}=\tilde{\rho}_{21}-\tilde{\rho}_{12}\gamma_{2}\), \(\tilde{\alpha}_{22}=\tilde{\rho}_{22}+\tilde{\rho}_{12}\gamma_{2}\), \(\tilde{\rho}_{21}=P_{21}+s\beta_{1}\gamma_{1}\), and \(\tilde{\rho}_{22}=P_{22}-s\beta_{1}\gamma_{1}\). Here \(\beta_{1}=-\hat{\alpha}\) and the other coefficients are given by [57]
\[h^{3}\omega_{0}P_{21} = \left(\kappa^{2}\frac{(\sigma^{2}-1)(-4\sigma^{4}+3\sigma^{2}+1)}{ 8\sigma^{2}}+\kappa\frac{4\sigma^{4}-9\sigma^{2}+3}{4\sigma}+\frac{-4\sigma^{ 2}+19}{8}\right)h^{3}\omega_{0}\gamma_{1} \tag{121}\] \[+\kappa^{2}\frac{-\sigma^{4}+3}{2(\sigma^{2}+1)}h\omega_{0}\chi_ {2}+\left(\kappa^{2}\frac{-3\sigma^{6}+7\sigma^{4}-9\sigma^{2}-3}{4\sigma( \sigma^{2}+1)}+3\kappa\frac{\sigma^{4}-5}{4(\sigma^{2}+1)}\right)h^{2}\omega_{ 0}\chi_{1}\] \[+\kappa^{4}\frac{(\sigma^{2}-1)(11\sigma^{4}-12\sigma^{2}-3)}{16 \sigma}+\kappa^{3}\frac{-11\sigma^{4}+40\sigma^{2}-9}{16}\] \[h^{3}\omega_{0}P_{22} = \left(-\kappa^{2}\frac{(\sigma^{2}-1)^{2}}{8}+\kappa\frac{\sigma ^{4}-5\sigma^{2}+2}{4\sigma}+\frac{-\sigma^{2}+8}{8}\right)h^{3}\omega_{0} \gamma_{1}\] (122) \[+\left(\kappa^{2}\frac{(\sigma^{2}-1)(\sigma^{4}+3)}{4\sigma( \sigma^{2}+1)}-\kappa\frac{\sigma^{4}+3}{4(\sigma^{2}+1)}\right)h^{2}\omega_{ 0}\chi_{1}\] \[+\kappa^{4}\frac{(\sigma^{2}-1)(-3\sigma^{4}-8\sigma^{2}+3)}{32 \sigma}+3\kappa^{3}\frac{\sigma^{4}-1}{32}\] \[h^{2}\omega_{0}s = \kappa^{2}\frac{\sigma^{2}-1}{2}\] (123) \[V_{d}^{2} = gh-c_{g}^{2}=-\frac{g\nu}{4k_{0}\sigma}\] (124) \[h^{3}\omega_{0}\gamma_{1} = h^{3}\omega_{0}\frac{k_{0}^{2}c_{g}(\sigma^{2}-1)-2\omega_{0}k_{ 0}}{4V_{d}^{2}}=\frac{\kappa^{3}\sigma\mu_{g}}{2\nu}\] (125) \[\gamma_{2} = \frac{1}{V_{d}^{2}}\left[2c_{g}\gamma_{1}\beta_{1}+k_{0}^{2}\beta _{1}\frac{(\sigma^{2}-1)}{4}+\frac{\omega^{2}-k_{0}^{2}c_{g}^{2}(\sigma^{2}-1) }{4\omega_{0}}\right]\] (126) \[\tilde{\rho}_{12} = \frac{2\omega_{0}k_{0}-k_{0}^{2}c_{g}(\sigma^{2}-1)}{2\omega_{0}} =\frac{k\mu_{g}}{4\sigma}\] (127) \[h^{2}\omega_{0}\chi_{1} = 3\kappa^{2}\frac{\sigma^{4}-1}{8\sigma^{2}}\] (128) \[h\omega_{0}\chi_{2} = \left(\kappa\frac{-\sigma^{3}+3}{\sigma}+1\right)\frac{h^{2}\omega _{0}\chi_{1}}{\kappa}+\frac{3\kappa^{2}(\sigma^{2}-1)(3\sigma^{2}+1)}{16\sigma }+9\kappa\frac{1-\sigma^{4}}{16\sigma^{2}} \tag{129}\]
Using the same notation as in Sed03 for the reconstruction at leading order of the surface elevation, \(\eta(x,t)=\frac{1}{2}[U(x,t)\exp(i(k_{0}x-\omega_{0}t))+\mathrm{c.c.}]\), the mean flow term is written as (see Eqs. (32), (43), (45) in Ref. [40]):
\[\frac{\partial\phi_{0}}{\partial x}=\frac{\omega_{0}}{2}\frac{k_{0}\mu_{g}}{ \sigma\nu}|U|^{2}-i\frac{c_{p}^{2}}{\sigma^{2}}\left(\gamma_{2}+\frac{\beta_{1 }\gamma_{1}}{c_{g}}\right)\left(U\frac{\partial U^{*}}{\partial x}-U^{*}\frac{ \partial U}{\partial x}\right) \tag{130}\]
Note that, while the first term is equal to the corresponding term in Eq. (3) from Sedletsky's derivation, the second one has a different coefficient, since \(4\omega_{0}\sigma\tilde{q}_{40}/\nu\neq c_{p}^{2}(\gamma_{2}+\beta_{1}\gamma_{1}/c_{g})/\sigma^{2}\) (even if we take into account the correction \(\tilde{q}_{40S}\) in Eq. (130)).
Using Eq. (33), the derivative in time of \(\phi_{0}\) is
\[\frac{\partial\phi_{0}}{\partial t}=-c_{g}\frac{\omega_{0}}{2}\frac{k_{0}\mu_{ g}}{\sigma\nu}|U|^{2}-i\frac{c_{p}^{2}}{\sigma^{2}}\left(\gamma_{2}+\frac{\beta_{1 }\gamma_{1}}{c_{g}}\right)\left(U\frac{\partial U^{*}}{\partial t}-U^{*}\frac{ \partial U}{\partial t}\right) \tag{131}\]
## Appendix D Example of sea state realization
Following the definitions in Ref. [51], we consider a Gaussian amplitude spectrum, denoted by \(S(k)\), given by
\[S(k)=\exp\left(-\frac{(k-k_{0})^{2}}{2k_{w}^{2}}\right) \tag{40}\]
for \(k>0\), where \(k_{0}=0.0277\) m\({}^{-1}\) and \(k_{w}=0.27\)\(k_{0}\) is the dimensional bandwidth. Other spectra, like JONSWAP or Pierson-Moskowitz, can also be used. This yields the following surface elevation at first order in steepness:
\[\eta_{1}(x,t)=\frac{A_{p}}{2}\frac{\int_{0}^{\infty}S(k)e^{i[k(x-x_{f})-\omega (t-t_{f})]}dk}{\int_{0}^{\infty}S(k)dk}+c.c. \tag{41}\]
where \(A_{p}\), \(x_{f}\), \(t_{f}\) are the amplitude, position and time for the group at linear focus, with steepness given by \(\varepsilon=A_{p}k_{0}=0.3\). An example of sea state realization at second order in steepness, obtained using the Dalzell development [50], is shown in Fig. 9 for the case of a focused wave group at \(x_{f}=0\), \(t_{f}=0\) for \(k_{0}h=1.5\). The power spectrum can be obtained as [58]\(P(k)=|\hat{\eta}|^{2}/(2dk)\), where \(\hat{\eta}\) is the Fourier transform in space of the surface elevation, from which the significant wave height \(H_{s}\) can be obtained as \(H_{s}=4\sqrt{m_{0}}=5.2\) m, \(m_{0}\) being equal to the area under the power spectrum curve.
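For concreteness, a minimal Python sketch of the first-order realization above is given below. The depth (chosen so that \(k_{0}h=1.5\), as in Fig. 9), the wavenumber discretization, and the trapezoidal integration are assumptions of this sketch, and the second-order Dalzell correction used in the figure is omitted.

```python
import numpy as np

g = 9.81                        # gravitational acceleration (assumed)
k0, kw = 0.0277, 0.27 * 0.0277  # carrier wavenumber and bandwidth [1/m]
h = 1.5 / k0                    # depth giving k0*h = 1.5
Ap = 0.3 / k0                   # focus amplitude from steepness eps = Ap*k0 = 0.3

k = np.linspace(1e-4, 4 * k0, 2048)          # discretized wavenumber axis
S = np.exp(-(k - k0) ** 2 / (2 * kw ** 2))   # Gaussian amplitude spectrum
omega = np.sqrt(g * k * np.tanh(k * h))      # linear dispersion relation

def eta1(x, t, xf=0.0, tf=0.0):
    """First-order surface elevation of a group at linear focus (xf, tf)."""
    phase = np.exp(1j * (k * (x - xf) - omega * (t - tf)))
    integral = np.trapz(S * phase, k) / np.trapz(S, k)
    return np.real(Ap * integral)            # (Ap/2)*I + c.c. = Ap*Re(I)

x_axis = np.linspace(-3000.0, 3000.0, 601)
surface = np.array([eta1(x, 0.0) for x in x_axis])   # focused group at t = 0
```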
|
2308.13097 | CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless
Compression of High-Color DICOM Medical Images | Medical image compression is a widely studied field of data processing due to
its prevalence in modern digital databases. This domain requires a high color
depth of 12 bits per pixel component for accurate analysis by physicians,
primarily in the DICOM format. Standard raster-based compression of images via
filtering is well-known; however, it remains suboptimal in the medical domain
due to non-specialized implementations. This study proposes a lossless medical
image compression algorithm, CompaCT, that aims to target spatial features and
patterns of pixel concentration for dynamically enhanced data processing. The
algorithm employs fractal pixel traversal coupled with a novel approach of
segmentation and meshing between pixel blocks for preprocessing. Furthermore,
delta and entropy coding are applied to this concept for a complete compression
pipeline. The proposal demonstrates that the data compression achieved via
fractal segmentation preprocessing yields enhanced image compression results
while remaining lossless in its reconstruction accuracy. CompaCT is evaluated
in its compression ratios on 3954 high-color CT scans against the efficiency of
industry-standard compression techniques (i.e., JPEG2000, RLE, ZIP, PNG). Its
reconstruction performance is assessed with error metrics to verify lossless
image recovery after decompression. The results demonstrate that CompaCT can
compress and losslessly reconstruct medical images, being 37% more
space-efficient than industry-standard compression systems. | Taaha Khan | 2023-08-24T21:43:04Z | http://arxiv.org/abs/2308.13097v1 | CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless Compression of High-Color DICOM Medical Images
###### Abstract
Medical image compression is a widely studied field of data processing due to its prevalence in modern digital databases. This domain requires a high color depth of 12 bits per pixel component for accurate analysis by physicians, primarily in the DICOM format. Standard raster-based compression of images via filtering is well-known; however, it remains suboptimal in the medical domain due to non-specialized implementations. This study proposes a lossless medical image compression algorithm, CompaCT, that aims to target spatial features and patterns of pixel concentration for dynamically enhanced data processing. The algorithm employs fractal pixel traversal coupled with a novel approach of segmentation and meshing between pixel blocks for preprocessing. Furthermore, delta and entropy coding are applied to this concept for a complete compression pipeline. The proposal demonstrates that the data compression achieved via fractal segmentation preprocessing yields enhanced image compression results while remaining lossless in its reconstruction accuracy. CompaCT is evaluated in its compression ratios on 3954 high-color CT scans against the efficiency of industry-standard compression techniques (i.e., JPEG2000, RLE, ZIP, PNG). Its reconstruction performance is assessed with error metrics to verify lossless image recovery after decompression. The results demonstrate that CompaCT can compress and losslessly reconstruct medical images, being 37% more space-efficient than industry-standard compression systems.
Fractal, Image Compression, Medical Image, Segmentation.
## 1 Introduction
### Overview
In recent years, the medical domain has witnessed a substantial transition towards digital storage of images in databases worldwide [1]-[2]. While this shift has undoubtedly facilitated data accessibility and exchange, it has also brought to the forefront an inherent challenge: the consumption of significant storage space by large quantities of high-bit-depth medical scans [1]-[3].
Medical image modalities, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and X-ray, play a pivotal role in medical analysis, supporting crucial diagnostic and treatment decisions [3]-[6]. Due to their importance, these medical modalities demand more stringent data format requirements. Typically, medical images are stored using 12 bits per color component per pixel, translating to 2 bytes of data per pixel [5], [7]. This contrasts with general images that typically employ 8 bits per color component per pixel [8]. High bit-depth storage for medical images results in larger file sizes, escalating storage costs when preserving these data archives indefinitely [2], [4]-[5].
To address the pressing storage challenge, image compression algorithms have emerged as a viable solution to reduce the required storage space [2], [4]-[5], [8]. Image compression represents a widely studied and extensively utilized field within digital image processing, encompassing a range of techniques and methodologies aimed at minimizing the data size while preserving essential information [2].
Two fundamental forms of data compression exist: lossy and lossless [8]-[9]. Lossy compression reduces file sizes by discarding certain non-essential information from the image and retaining only the key aspects necessary for accurate data reconstruction [8]-[9]. However, in medical information, lossy algorithms are generally avoided due to their potential to compromise the integrity of stored data, which is critical for precise and reliable medical analysis [2]-[5], [8], [10]-[11]. Instead, the medical image domain predominantly relies on lossless compression, which reduces file sizes while ensuring that the compressed format can perfectly reconstruct the original information [8]. This approach retains all the data in the initial image, effectively striking a balance between efficient storage and data fidelity [4].
### Digital Imaging and Communications in Medicine (DICOM)
The Digital Imaging and Communications in Medicine (DICOM) format is the primary industry standard for saving and transmitting digital medical images worldwide [12]. DICOM ensures comprehensive and standardized information exchange across medical imaging devices and healthcare institutions by encapsulating image data and accompanying metadata [12]. Medical datasets are commonly stored in the DICOM format, identifiable by the "dcm" extension, facilitating the seamless integration of images into medical information systems, such as in Fig. 1.
Figure 1: Examples of high-bit-depth CT scans stored in raw DICOM format.
The DICOM standard includes built-in lossless compression formats to facilitate efficient data storage and transmission [12]. These compression algorithms include JPEG2000-Lossless (JP2), Run Length Encoding (RLE), and ZIP [12], with JP2 being the primary industry standard for medical images [3], [6]. However, it is essential to recognize that these built-in compression options are predominantly tailored to handle general-purpose data [7], potentially limiting performance when dealing with dedicated medical modalities [10].
This limitation underscores the need for a dedicated image compression system to optimize DICOM medical images' efficiency [10]. By harnessing the unique characteristics of medical image modalities, such a specialized system can unlock untapped compression potential and yield significant advancements in storage space optimization, thereby reducing storage costs and facilitating seamless data sharing among healthcare providers.
The proposed medical image compression algorithm, CompaCT, represents an approach focused on pixel restructuring to optimize compression efficiency. Unlike conventional pixel-based compression systems such as Portable Network Graphics (PNG) [13] and ZIP DEFLATE [14], which employ raster-based scans with primarily horizontal patterns during encoding, CompaCT seeks to explore innovative pixel orderings that facilitate enhanced compression performance.
### Previous Work
Previous research specifically in pixel restructuring for image compression has investigated various ordering techniques. Some studies [15]-[16] exploring fractal patterns aim to organize pixels into both vertical and horizontal sections, as well as spatially, to achieve improved compression results. Other research efforts have also experimented with dynamically reconstructed data [17], aiming to enhance compression rates; however, these approaches can overlook some of the specific challenges posed by the high-color nature of medical information.
Work by Shen and Rangayyan [4] introduced segmentation-based medical image coding concepts for high-color data, using regional flood fill procedures to adapt the scanning order into an enhanced image compression codec. This research achieved about 28% improved performance compared to JPEG on the database used. Similarly, Min and Sadleir [11] proposed a region of interest (ROI) based segmentation approach, but this entailed deleting the image's background entirely. While improving compression, this raises concerns about the accuracy of the ROI detection, as possibly essential information may be discarded [3]. Chen and Wang [9] took a similar approach with content-based foreground-background segmentation for near-lossless compression, achieving increased compression but remaining subject to the same ROI accuracy concerns.
The work of block-based compression by Kumar et al. [18] proposed 4x4 block pixel groupings to aid in spatial pattern recognition and utilization. This research also applied bitwise optimization of encoding integers based on predetermined bounds. This
improved compression of several standards, including JP2, but was limited to 8-bit grayscale mammography scans.
This present study proposes a novel pixel restructuring technique explicitly tailored to encode monochrome images stored at 12 bits per pixel, the standard format for many medical image modalities. CompaCT takes advantage of optimized pixel orderings, delta coding, and entropy coding techniques to achieve compression gains while ensuring the preservation of critical diagnostic information.
## 2 Methods
### Fractal Transform
Using a raster scan means that primarily horizontal pixel patterns can be utilized for compression [16]. Some encoders, such as PNG [13], work around this by applying several possible filters, for example, filters based on left pixels, above pixels, and combinations of both. This allows two-dimensional patterns to be utilized in the compression [15].
The CompaCT study does not utilize different pixels as candidates for predictive coding. Later steps in this pipeline only relate pixels to the immediate previous pixel. Applying a raster scan with these limited functionalities will result in only horizontal patterns being utilized, creating suboptimal performance. To enhance the different multidimensional patterns in medical image data, a Hilbert-curve [Fig. 2] fractal pixel ordering is employed.
Hilbert fractal pixel ordering is based on incorporating all directions of relationships between pixels using a recursive and coiling fractal structure of the iteration [15]-[16]. This acknowledges horizontal, vertical, and regional interpixel relationships to be encoded in the data, allowing for possible dynamic enhancements to this field of image preprocessing [15]-[16].
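To make the traversal concrete, a minimal Python sketch of the standard square-grid Hilbert mapping is shown below; the paper's own generator additionally handles rectangular arrays (Fig. 2), which this illustrative square-only variant does not.

```python
def hilbert_d2xy(order, d):
    """Map a distance d along a Hilbert curve of size 2**order x 2**order
    to (x, y) grid coordinates (standard square-grid construction)."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                        # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Pixel visiting order for an 8x8 grid (order 3); a 512x512 image uses order 9.
order = 3
traversal = [hilbert_d2xy(order, d) for d in range(1 << (2 * order))]
```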
### QOI Byte Collapse Introduction
The next encoding step is based on an open-source encoding format called "QOI: The Quite OK Image Format" [19]. The core concepts of QOI were to collapse RGB pixels into smaller byte representations quickly without any entropy coding. In the intended cases, it would collapse many 8-bit color components into one or two bytes by applying run length coding, color caching, and short delta coding. The CompaCT proposal focuses mainly on the short delta compression method and applies it to the domain of high-color data for monochrome channels in the medical image domain.
Figure 2: First five iterations of a recursive Hilbert curve generation through a square array. The algorithm in this proposal generates a dynamic curve also suitable for rectangular arrays.
The goal of this QOI-based delta encoding system is to collapse pixel deltas that would have previously taken multiple bytes to encode down into a single byte. This is done by maximizing the number of smaller deltas between pixels and minimizing encoding full multi-byte deltas. The future meshing and restructuring phase aids significantly with this goal of enhancing compression.
When the delta between two 12-bit pixels is encoded, the worst case is that 12-bits are needed to represent that signed integer difference. With the convenience of keeping all pixels byte-aligned, 4 bits must be padded to make the 12 bits of pixels into two complete bytes. On the other hand, encoding a difference that falls between the values -64 and 65 inclusive can be encoded using only a 7-bit signed integer, allowing for a single flag bit. So, if the difference between two adjacent pixels is within this "small difference" range, then a single byte is sufficient to encode the data instead of two bytes for "large differences." Minimizing the total amount of "large differences" when encoding the differences in pixels will, in turn, decrease the number of bytes needed with this delta encoding system.
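A minimal sketch of this byte collapse is given below, following the tag layout later summarized in Table 1. The bias bookkeeping is an assumption of this sketch: the quoted short range [-64, 65] implies an offset convention not spelled out in the text, so a plain bias of 64 is used here (short deltas in [-64, 63]).

```python
def encode_delta(delta):
    """Pack one pixel delta into one (short) or two (full) bytes."""
    if -64 <= delta <= 63:
        return bytes([(delta + 64) & 0x7F])        # 0xxxxxxx: short delta
    if -2047 <= delta <= 2048:
        payload = (delta + 2047) & 0x0FFF          # biased 12-bit payload
        return bytes([0xE0 | (payload >> 8), payload & 0xFF])  # 1110xxxx xxxxxxxx
    raise ValueError("delta outside the 12-bit range")

pixels = [1024, 1030, 900, 905, 2048]              # toy 12-bit sample values
stream = b"".join(encode_delta(b - a) for a, b in zip(pixels, pixels[1:]))
```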
### Count of Large Differences (CLD) Heuristic
The goal of dynamically reordering the pixels in the image is to enhance the steps of delta encoding and entropy coding. The order of pixels directly affects the repetitions that can be utilized in the delta codes. Reorganizing into less entropic data allows for enhanced compression in later stages of this proposed pipeline. The concept is to minimize the amount of "large differences" as defined by the heuristic:
\[L(\Delta)=\begin{cases}1,\ \Delta<-64\\ 0,\ -64\leq\Delta\leq 65\\ 1,\ \Delta>65\end{cases} \tag{1}\]
Equation (1) defines a Large Difference (L) with _delta_ being the difference between consecutive pixel intensities. Confining many 12-bit values into single bytes aids in collapsing the number of bytes needed to represent the data without any traditional entropy coding yet and increases the chances of repeating codes given the smaller possibilities of integers. Getting the number of large differences across a series of pixels is calculated:
\[\mathrm{CLD}(p)=\sum_{i=1}^{n}L(p_{i}-p_{i-1}) \tag{2}\]
Equation (2) defines the Count of Large Differences (CLD) heuristic measuring the number of large differences found over a series of consecutively encoded pixels, \(p\) being a pixel array, \(n\) being the number of pixels, and \(L\) being the large difference as defined in equation (1).
### Pixel Segmentation
The proposed CompaCT system identifies 4x4 pixel blocks in the Hilbert fractal as single units of pixels, similar to the base proposed by Kumar et al. [18]. The segmentation system traverses through these 16-pixel blocks in order of the iteration produced in the fractal scan.
Initially, we iterate through the 4x4 blocks and count the number of large differences (1) present in each delta between the pixels within the block. An arbitrary cutoff threshold is set to flag the block as "difficult" if more than half of the differences between the pixels fall outside the range of a 7-bit signed integer, as depicted in Fig. 3. This flag highlights which blocks should be considered when searching for a pair to reorder with, speeding up searching by masking away already simple to compress sections. The CLD (2) of each block can be efficiently calculated by initially setting up a prefix sum array to count the number of large deltas present in a certain range. The prefix sum array can be generated in linear time, then each query of the CLD on any range of consecutive pixels can be retrieved in constant time.
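A minimal sketch of this prefix-sum bookkeeping, assuming a 1D array of pixels already in fractal order (the toy values below are illustrative), could look as follows:

```python
def build_cld_prefix(pixels):
    """prefix[i] = number of large differences L (Eq. 1) among the first i deltas."""
    prefix = [0]
    for prev, cur in zip(pixels, pixels[1:]):
        d = cur - prev
        prefix.append(prefix[-1] + (0 if -64 <= d <= 65 else 1))
    return prefix

def cld(prefix, i, j):
    """Eq. (2) over the consecutive pixels p[i..j], answered in O(1) per query."""
    return prefix[j] - prefix[i]

pixels = [0, 50, 300, 310, 900]                # toy pixel run in fractal order
prefix = build_cld_prefix(pixels)              # built once, in linear time
print(cld(prefix, 0, len(pixels) - 1))         # 2 large differences in this run
```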
Once all "difficult" blocks are identified, the system will find pairs of blocks to make efficient meshing partners. A mesh is defined in Fig. 4 as interlacing the pixels in each block by alternating the order in which they are encoded. A mesh of two 16-pixel units will result in an interlaced section of 32 pixels, in which every other pixel belongs to two distinct units.
Figure 4: Miniaturized example of interlacing similar blocks (a) and (b) into conjoined block (c).
Figure 3: Zoomed example of “difficult” identified pixels, white in (b) notates large differences highlighted in 4x4 segments of the original data (a).
It is only efficient to interlace blocks together if the resulting mesh creates more common and smaller deltas between pixels, which can more efficiently be minimized by encoding in backreferencing and entropy systems later.
The searching phase iterates through the blocks in the order of the fractal traversal. Once settling on a block that was previously flagged as "difficult," it executes a linear search through future blocks to find a match that will decrease the CLD within the potentially meshed block. The search starts at the next block and continues through the immediate next 64 unencoded blocks. It simulates meshing each candidate block with the original flagged block and calculates the resulting CLD when the block pixels are interlaced. If the resulting CLD is less than the sum of the original CLDs of encoding each block separately, the encoder meshes the blocks into a single unit.
The next 32 pixels in the pixel stream will be out of order but will enhance the efficiency for later phases of the CompaCT pipeline. In order to identify the segmented and reorganized blocks, a single byte is inserted into the byte stream to identify the location of the future block that was taken to mesh with the current block. With the introduction of this flagging byte to the stream, the meshed block must be able to save at least a single byte of storage by merging multi-byte deltas into single bytes to be relatively efficient. If a single byte or more cannot be saved by meshing two prospective blocks, then the pair is discarded from consideration.
Once a block has been selected to mesh with a previous block, it is removed from later search iterations to avoid redundantly coding the same pixels in multiple sections of the byte stream. The result of the reorganized data results in a stream of pixels in fractal order, along with intermittent meshed flags for the reorganization of blocks. This phase results in a slight data expansion of a few hundred bytes before the actual compression and collapse of bits are done.
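A minimal sketch of the mesh test, assuming 16-pixel blocks as flat Python lists and omitting the one-byte meshed-flag accounting described above, could be:

```python
def interlace(block_a, block_b):
    """Alternate the pixels of two 16-pixel blocks into one 32-pixel unit (Fig. 4)."""
    meshed = []
    for a, b in zip(block_a, block_b):
        meshed += [a, b]
    return meshed

def count_large(p):
    """Count large differences (Eq. 1) among consecutive deltas of a pixel list."""
    return sum(1 for x, y in zip(p, p[1:]) if not -64 <= y - x <= 65)

def worth_meshing(block_a, block_b):
    """Accept a partner only when interlacing strictly reduces the CLD; the
    reference encoder additionally requires the saving to cover the one-byte
    meshed-flag overhead (omitted here for brevity)."""
    separate = count_large(block_a) + count_large(block_b)
    return count_large(interlace(block_a, block_b)) < separate
```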
The pixel restructuring process employed by CompaCT, as defined in Fig. 5, aims to identify and exploit inherent patterns and redundancies within medical images, allowing for more effective data representation. By strategically rearranging pixels, the algorithm can identify local variations in intensity and efficiently apply delta coding, which stores the difference between neighboring pixel values rather than their absolute values. This approach capitalizes on how medical images can exhibit gradual intensity transitions in localized regions [9].
Figure 5: Preprocessing image transformation pipeline before continuing encoding.
### QOI-Based Delta Coding
With the reorganized pixels into more efficient smaller differences, the QOI system will collapse small differences from two bytes into single bytes. The data packets are labeled with tag bits, as defined in Table 1, for labeling in decoder reconstruction. The tags are customized for this application and no longer resemble the original QOI tags.
### DEFLATE Entropy Coding
Furthermore, CompaCT leverages entropy coding techniques to compress the delta-coded pixel data more efficiently. Entropy coding exploits the varying probabilities of pixel values occurring within the image, assigning shorter codes to frequently occurring pixel values and longer codes to less frequent ones [8]. This adaptive coding mechanism enables CompaCT to achieve higher compression ratios by allocating shorter codes to the most common pixel intensities found in medical images.
After processing the reorganized pixels into a QOI byte stream, the information can be further compressed losslessly using standard encoding algorithms. The DEFLATE [14] system is a common final step in many data processing pipelines to result in maximally compressed data.
The DEFLATE procedure applies Lempel-Ziv's 1977 backreferencing to simplify commonly repeated phrases in the data [14]. The LZ77 algorithm uses references to previous strings of repeated bytes to minimize repetitions in the data. The reorganized data and minimized deltas encoded in the stream using the fractal-segmentation system aid in creating highly redundant codes that can be shrunk down with LZ77.
The next step in DEFLATE is the Huffman entropy coding system [14], which minimizes the bit representations of the data dynamically based on usage probabilities [8]. This will result in a highly entropic and minimized version of the final data that can be losslessly inverted back into the QOI-based byte stream.
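A minimal sketch of this final stage, using Python's zlib as a stand-in DEFLATE implementation (the reference codec's exact DEFLATE settings are not specified, and the byte stream below is a toy placeholder), is:

```python
import zlib

stream = bytes([70, 0xE7, 0x7D, 69]) * 256     # stand-in QOI-style byte stream
compressed = zlib.compress(stream, level=9)    # LZ77 back-references + Huffman
assert zlib.decompress(compressed) == stream   # DEFLATE round-trips losslessly
```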
Once the actual pixel data is compressed, the data is written to a resulting output file along with some header bytes containing specific information about the data, including the magic characters "pact", image dimensions, number of channels, number of bytes per channel, a boolean flag for the fractal transform, a boolean flag for the segmentation transform, and a boolean flag for DEFLATE compression. A decoder can parse this initial information and read the file based on the desired configurations of the compressed input without any external information required.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Type** & **Tag Bits** & **Payload Bits** & **Tag** & **Payload Range** & **Payload Content** \\ \hline
Short Delta & 1 & 7 & 0 & [-64, 65] & Difference from the previous pixel. \\
Meshed Flag & 2 & 6 & 10 & [0, 64] & Block jumps forward of meshed block. \\
Full Delta & 4 & 12 & 1110 & [-2047, 2048] & Difference from the previous pixel. \\ \hline \hline
\end{tabular}
\end{table}
Table 1: QOI-based delta coding byte packet metadata.
The CompaCT pipeline, as outlined in Fig. 6, offers a comprehensive solution for optimizing compression ratios in medical image processing by combining pixel restructuring, delta coding, and entropy coding. This specialized approach considers the unique characteristics of medical images, which demand both high bit-depth representation and preservation of detailed information critical for accurate medical analysis.
## 3 Results
### Reconstruction Validation
This study verifies the lossless nature of the CompaCT pipeline by confirming zero Root-Mean-Square-Error (RMSE) when comparing the original and reconstructed images.
\[\mathrm{RMSE}(\alpha,\beta)=\sqrt{\frac{\sum_{i=1}^{n}(\beta_{i}-\alpha_{i})^{2}}{n}} \tag{3}\]
Equation (3) defines the RMSE formula to determine reconstruction error where \(n\) represents the number of pixels, _beta_ represents the ground truth, and _alpha_ represents the reconstructed values.
Figure 6: Encoding process of the overall encoding and decoding compression pipeline. Each step is reversible in the decoder algorithm for image reconstruction.
### Compression Ratio Metric
The compression ratio is a common metric for quantifying an algorithm's compression power [8].
\[\mathrm{CompressionRatio}(I)=\frac{\mathrm{UncompressedSize}(I)}{\mathrm{CompressedSize}(I)} \tag{4}\]

Equation (4) defines the compression ratio of image \(I\), which evaluates how many times smaller the compressed image is compared to the size of the original data. The uncompressed size of the file is divided by the resulting compressed size to give a score of how much the data was compressed. For example, a compression ratio of 2 corresponds to a file 50% smaller than the original.
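A minimal sketch of both evaluation metrics, assuming images held as NumPy arrays and file sizes measured in bytes (the array names here are illustrative), is:

```python
import numpy as np

def rmse(recon, truth):
    """Eq. (3): zero if and only if the reconstruction is bit-exact."""
    diff = truth.astype(np.int64) - recon.astype(np.int64)
    return float(np.sqrt(np.mean(diff ** 2)))

def compression_ratio(uncompressed_size, compressed_size):
    """Eq. (4): how many times smaller the compressed file is."""
    return uncompressed_size / compressed_size

img = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)  # 12-bit range
assert rmse(img.copy(), img) == 0.0            # lossless round-trip criterion
```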
### Evaluation Dataset
The CompaCT system is designed to be applied to any medium of DICOM files in the 12-bit grayscale configuration. This study utilized an open-source dataset [20] from The Cancer Imaging Archive (TCIA) of 3954 12-bit DICOM lung CT scans in 512x512 dimensions for evaluating the algorithms. All systems were applied to each CT scan in this dataset, and the resulting compression ratios were calculated.
### Evaluation Results
To evaluate the enhancements in compression power of the CompaCT format compared to industry standards, we measured its compression ratios against four industry-standard and third-party lossless formats: JP2, RLE, ZIP, and PNG, as outlined in Table 2. The raw data size is also included as a baseline measurement. JP2, RLE, and ZIP are compressor formats built into the DICOM standard package [12]. PNG serves as an external benchmark for evaluating the proposal and is not commonly used in medical image compression.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**File** & **Name** & **Industry Status** & **Compression Ratio** \\ \hline Raw & Raw & DICOM Standard & 1 \\ JP2 & JPEG2000-Lossless & DICOM Standard & 1.763210 \\ RLE & Run Length Encoding & DICOM Standard & 1.792206 \\ ZIP & ZIP DEFLATE & DICOM Standard & 1.805547 \\ PNG & Portable Network Graphics & Third-Party & 2.059297 \\ CCT & CompaCT & Proposal & 2.421973 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparisons of industry compression algorithms.
The CompaCT format results in the largest mean compression ratios compared to all the other evaluated baselines. CompaCT, on average, creates files ~2.42x smaller than the original uncompressed forms, as shown in Table 2. To highlight improvements, CompaCT generates about 37% better compression ratios than the primary DICOM industry standard JP2, and about 35% better than RLE, as shown in Figs. 7 and 8. CompaCT also performs roughly 18% better than PNG and about 34% better than ZIP (Figs. 7 and 8). Compared to the raw uncompressed data, CompaCT is 142% more space-efficient regarding file sizes (Figs. 7 and 8).
Figure 8: Another view of the compression ratio from Fig. 7, showing compressed file size distributions for each evaluated algorithm.
Figure 7: Mean compression ratio for each evaluated algorithm on TCIA dataset.
Entropy is a measure of randomness present in information. Highly entropic data is more difficult to compress due to fewer repeatable patterns in the information [1].
\[H(x)=-\sum_{i=1}^{n}P(x_{i})\log_{b}P(x_{i}) \tag{5}\]
Equation (5) defines an estimation of entropy. The dataset is ranked based on the entropy present in each individual image and plotted based on each compressor's power in Fig. 9. CompaCT consistently outperforms the other benchmarks on most images over a broad range of different entropies, proving that the algorithm can generalize and perform on various entropic densities.
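A minimal sketch of this entropy estimate over an image's empirical pixel histogram is shown below; NumPy arrays and a base of \(b=2\) are assumptions, since the exact binning and base used for Fig. 9 are not stated.

```python
import numpy as np

def image_entropy(pixels, b=2):
    """Eq. (5): Shannon entropy estimate from the empirical pixel histogram."""
    _, counts = np.unique(pixels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * (np.log(p) / np.log(b))))

# Images can then be ranked from hardest (high entropy) to easiest to compress:
# ranked = sorted(images, key=image_entropy, reverse=True)   # 'images' is hypothetical
```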
## 4 Discussion
In this study, we introduced a novel lossless image compression codec, CompaCT, designed for optimizing the storage of monochrome high-bit-depth DICOM medical files. Our proposed algorithm demonstrated a 37% increase in compression ratios compared to the current primary DICOM industry standard, JP2. Moreover, the entropic ranking data provided evidence that the compression improvements achieved by CompaCT can be generalized to varying intensities of information.
The results obtained from our experiments indicate that CompaCT outperforms the current standard algorithms in the DICOM format, resulting in smaller file sizes for compressed images. This validates the efficacy of the enhanced encoder proposed in this study and highlights the potential for utilizing CompaCT to generate smaller dedicated data files of medical images, thereby reducing storage requirements and facilitating more efficient data transfer.
The research presented in this study can hold importance in image processing as it introduces new possibilities for exploring different types of preprocessing and pixel
Figure 9: Comparing the compression ratios for each algorithm on each evaluated file, ranked based on entropy (5) present in data (X-axis ranked from hardest to compress to easiest to compress).
ordering transforms before traditional encoding systems. The results of CompaCT, a highly specialized segmentation system, paves the way for further research into developing and incorporating other specialized preprocessors to enhance compression algorithms further. This could lead to substantial advancements in image compression technologies and have implications for medical imaging and other domains reliant on efficient data storage and transmission.
While this study focuses primarily on optimizing compression efficiency, it is essential to acknowledge that the gains achieved with the CompaCT algorithm involve a trade-off in compression and decompression speed compared to other existing systems. As the proposal is still under development, we prototyped it in the Python programming language for rapid testing and development. However, Python's uncompiled and dynamic nature limits its performance compared to compiled implementations. Thus, the next logical step in improving this system for future use involves creating an optimized version of the proposed algorithm in a compiled format, ensuring better efficiency and broader applicability. Further optimization can also be achieved by experimenting with and utilizing different preprocessing transforms and entropy coding systems, such as arithmetic coding.
The highly adaptive field of image compression offers numerous opportunities for further research and analysis, contributing to the continuous enhancement of our novel technique of pixel restructuring. The significant influence of pixel image traversal order on compression performance suggests that more optimal orderings within this space may warrant exploration in future studies. Understanding and exploiting these orderings could lead to even more substantial gains in compression rates.
Given the results obtained with CompaCT, it is recommended for continued efforts to optimize digital storage through advancements in image compression techniques. Further studies can explore how this algorithm performs with different types of medical images, datasets, and applications to validate its robustness and versatility. Additionally, investigating its adaptability to color images and extending its capabilities to lossy compression scenarios could open new avenues for broader adoption across various industries.
## 5 Conclusions
Our findings demonstrate the efficacy of the CompaCT algorithm in enhancing compression ratios for medical images. Through strategic pixel restructuring, coupled with innovative delta coding and entropy coding techniques, CompaCT achieved a 37% increase in compression ratios compared to the current industry standard, JP2. This achievement underscores the algorithm's ability to transform the storage landscape for medical images, minimizing costs and facilitating seamless data sharing.
The broader significance of this study extends beyond medical image compression. The results of CompaCT showcase the potential of specialized algorithms to outperform general-purpose compression methods. The approach of utilizing pixel restructuring, delta coding, and entropy coding can inspire advancements in compression
techniques applicable across diverse domains where data storage and transmission efficiency are paramount.
As medical imaging continues to evolve and play an increasingly central role in healthcare, the optimized storage and efficient transmission of medical images become imperative. By addressing these avenues for further research, this study can drive the development of efficient and powerful compression algorithms that will play a pivotal role in shaping the future of data storage and transmission.
## Abbreviations
\begin{tabular}{l l} \hline \hline
**Abbreviation** & **Definition** \\ \hline CCT & CompaCT \\ CLD & Count of Large Differences \\ CT & Computed Tomography \\ DICOM & Digital Imaging and Communications in Medicine \\ JP2 & JPEG2000-Lossless \\ L & Large Difference \\ LZ77 & Lempel-Ziv 1977 \\ MRI & Magnetic Resonance Imaging \\ PNG & Portable Network Graphics \\ QOI & Quite OK Image Format \\ RLE & Run Length Encoding \\ RMSE & Root Mean Square Error \\ ROI & Region of Interest \\ TCIA & The Cancer Imaging Archive \\ \hline \hline \end{tabular}
## Declarations
### Availability of Data and Materials
The dataset used to evaluate the current study are available in The Cancer Imaging Archive repository, [https://doi.org/10.7937/K9/TCIA_2015.NPGZYZBZ](https://doi.org/10.7937/K9/TCIA_2015.NPGZYZBZ) [20]. The code used and analyzed during the current study is available as an open-source software package in the repository at [https://doi.org/10.5281/zenodo.8274859](https://doi.org/10.5281/zenodo.8274859). The latest version of the code is available at [https://github.com/taaha-khan/2023-CompaCT-Image-Compression](https://github.com/taaha-khan/2023-CompaCT-Image-Compression).
### Competing Interests
The authors declare that they have no competing interests.
## Funding
Not applicable
## Authors' Contributions
The idea, implementation, and validation of this work was conducted by the main author TK. The Large Language Model ChatGPT from OpenAI and Grammarly Premium from Grammarly Inc. were utilized to assist in word choice and organization in the manuscript, but not in research content. The author wrote, read, and approved the final manuscript.
## Acknowledgements
Special appreciation to F. M. Kashif and B. Zia for providing feedback and notes on the manuscript.
|
2307.03401 | Metropolitan Scale and Longitudinal Dataset of Anonymized Human Mobility
Trajectories | Modeling and predicting human mobility trajectories in urban areas is an
essential task for various applications. The recent availability of large-scale
human movement data collected from mobile devices have enabled the development
of complex human mobility prediction models. However, human mobility prediction
methods are often trained and tested on different datasets, due to the lack of
open-source large-scale human mobility datasets amid privacy concerns, posing a
challenge towards conducting fair performance comparisons between methods. To
this end, we created an open-source, anonymized, metropolitan scale, and
longitudinal (90 days) dataset of 100,000 individuals' human mobility
trajectories, using mobile phone location data. The location pings are
spatially and temporally discretized, and the metropolitan area is undisclosed
to protect users' privacy. The 90-day period is composed of 75 days of
business-as-usual and 15 days during an emergency. To promote the use of the
dataset, we will host a human mobility prediction data challenge (`HuMob
Challenge 2023') using the human mobility dataset, which will be held in
conjunction with ACM SIGSPATIAL 2023. | Takahiro Yabe, Kota Tsubouchi, Toru Shimizu, Yoshihide Sekimoto, Kaoru Sezaki, Esteban Moro, Alex Pentland | 2023-07-07T05:57:58Z | http://arxiv.org/abs/2307.03401v1 | # Metropolitan Scale and Longitudinal Dataset of Anonymized Human Mobility Trajectories
###### Abstract
Modeling and predicting human mobility trajectories in urban areas is an essential task for various applications. The recent availability of large-scale human movement data collected from mobile devices have enabled the development of complex human mobility prediction models. However, human mobility prediction methods are often trained and tested on different datasets, due to the lack of open-source large-scale human mobility datasets amid privacy concerns, posing a challenge towards conducting fair performance comparisons between methods. To this end, we created an open-source, anonymized, metropolitan scale, and longitudinal (90 days) dataset of 100,000 individuals' human mobility trajectories, using mobile phone location data. The location pings are spatially and temporally discretized, and the metropolitan area is undisclosed to protect users' privacy. The 90-day period is composed of 75 days of business-as-usual and 15 days during an emergency. To promote the use of the dataset, we will host a human mobility prediction data challenge ('HuMob Challenge 2023') using the human mobility dataset, which will be held in conjunction with ACM SIGSPATIAL 2023.
## Background & Summary
Understanding, modeling, and predicting human mobility trajectories in urban areas is an essential task for various domains and applications, including human behavior analysis [1], transportation and activity analysis [2], disaster risk management [3], epidemic modeling [4], and urban planning [5]. Traditionally, travel surveys and census data have been utilized as the main source of data to understand such macroscopic urban dynamics [6]. The recent availability of large-scale human movement and behavior data collected from (often millions of) mobile devices and social media platforms [7] has enabled the development and testing of complex human mobility models, resulting in a plethora of proposed methods for the prediction of human mobility traces [8].
Despite its academic popularity and societal impact, human mobility modeling and prediction methods are often trained and tested on different proprietary datasets, due to the lack of open-source and large-scale human mobility datasets amid privacy concerns [9]. This makes it difficult to conduct fair performance comparisons across different methods. Several efforts have created open-source datasets of human mobility. Real-world trajectory datasets include the GeoLife dataset, the T-Drive trajectory dataset, and the NYC Taxi and Limousine Commission dataset. The GeoLife dataset [10] provides trajectory data of 182 users across a period of over three years, containing 17,621 trajectories with a total distance of about 1.2 million kilometers and a total duration of 48,000 hours. The T-Drive trajectory dataset contains trajectories of 10,357 taxis across a one-week timeframe [11]. The total number of points in this dataset is about 15 million and the total distance of the trajectories reaches 9 million kilometers. Similarly, the New York City Taxi and Limousine Commission (NYC-TLC) provides pick-up and drop-off location and timestamp data1. Although the T-Drive and NYC-TLC datasets provide massive amounts of trajectory information, they are limited to taxi trips. There have also been several synthetic datasets produced from open-source data, including the Open PFLOW [12] and Pseudo-PFLOW [13] datasets. While such datasets are valuable in conducting large-scale experiments on human mobility prediction, the lack of metropolitan-scale, longitudinal, real-world, and open-source datasets of individuals has been one of the key barriers hindering the progress of human mobility model development.
Footnote 1: [https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page](https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page)
To this end, we created an open-source and anonymized dataset of human mobility trajectories from mobile phone location
data provided by Yahoo Japan Corporation. The dataset contains 100,000 individuals' mobility trajectories across a 90-day period collected from an undisclosed, highly populated metropolitan area in Japan. The location pings are discretized into 500 meters x 500 meters grid cells and the timestamps into 30 minute bins. The actual dates of the observations are not available either (i.e., timeslot \(t\) of day \(d\)) to protect privacy. The 90-day period is composed of 75 days of business-as-usual and 15 days during an emergency with unusual behavior.
To promote the use of the dataset, we will host a human mobility prediction data challenge ('HuMob Challenge 2023') using the human mobility dataset of 100K individuals' trajectories across 90 days. The workshop will be held in conjunction with ACM SIGSPATIAL 2023 2. Participants will be tasked to develop and test methods to predict human mobility trajectories using the provided open-source dataset (for details, see Section 'Human Mobility Prediction Data Challenge').
Footnote 2: [https://sigspatial2023.sigspatial.org/](https://sigspatial2023.sigspatial.org/)
## Methods
### Observation of Smartphone GPS records
GPS location data were collected from smartphones with the Yahoo Japan Application installed, and were anonymized so that individuals cannot be identified; personal information such as gender, age, and occupation is unknown. Each GPS location record contains the user's unique ID, timestamp of the observation, longitude, and latitude, and the data has a sample rate of approximately 5% of the entire population. The data acquisition frequency of GPS locations varies according to the movement speed of the user to minimize the burden on the user's smartphone battery. If it is determined that the user is staying in a certain place for a long time, data is acquired at a relatively low frequency, and if it is determined that the user is moving, the data is acquired more frequently.
### Spatio-temporal Processing and Anonymization
The set of mobile phone users included in the dataset were selected by spatially and temporally cropping the raw dataset. To spatially crop the raw dataset, we created a boundary box around an undisclosed metropolitan area in Japan, and selected mobile phone users who were observed within the boundary box more than 10 times during 10 day period (dates undisclosed for privacy reasons). To make the mobile phone users unidentifiable, the location pings are discretized into 500 meters x 500 meters grid cells and the timestamps into 30 minute bins. The actual date of the observations were also masked (i.e., timeslot \(t\) of day \(d\)) to protect privacy. The movement (encoded into 500m grid cells) of the mobile phone users was tracked across a total of 90 days (again, dates are undisclosed), including a 75-day period of business-as-usual (_Dataset 1_) and another 15-day period under an emergency situation (_Dataset 2_), where we can assume human behavior and mobility patterns could be shifted. The dataset was finally cropped by selecting users with a sufficient number of 30-minute timeslot observations to ensure that the mobility patterns could be studied (see Figure 2 for distribution of pings per user). Observations outside of the target boundary box were discarded. For _Dataset 1_, 100,000 users were selected, and for _Dataset 2_, 25,000 users were selected.
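As a rough illustration of this discretization, a minimal Python sketch is given below. The grid origin, the equirectangular meters-per-degree approximation, and the exact rounding rules are all assumptions of this sketch; the actual anonymization pipeline is internal to the data provider.

```python
import math

def discretize_ping(lat, lon, ts, origin_lat, origin_lon, t0):
    """Map a raw GPS ping to the released (x, y, day, timeslot) representation."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(origin_lat))
    x = int((lon - origin_lon) * m_per_deg_lon // 500) + 1   # 500 m cells, 1-indexed
    y = int((lat - origin_lat) * m_per_deg_lat // 500) + 1
    elapsed = ts - t0                                        # seconds since day 0
    day = int(elapsed // 86_400)
    timeslot = int((elapsed % 86_400) // 1_800)              # 48 bins of 30 minutes
    return x, y, day, timeslot
```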
### Privacy Policy
Yahoo Japan Corporation (YJ) has developed its own privacy policy and requires users to read and agree to its privacy policy before using any of the services provided by YJ. Furthermore, because location data is highly sensitive for the users, users were asked to sign an additional consent form specific to the collection and usage of location data when they used apps that collect location information. The additional consent explains the frequency and accuracy of location information collection, and also the purpose and how the data will be used. Moreover, YJ implemented strict restrictions in the analysis procedure. The methodology for handling the data and for obtaining user consent for this study were supervised by an advisory board composed of external experts. YJ also ensured that research institutions other than YJ that participate in this study (including co-investigators) do not have direct access to the data. Although external research institutions were allowed to analyze aggregated data, the actual raw data were kept within YJ, and any analysis performed on raw data were performed within servers administered by YJ.
### Data Records
Table 1 shows an example of the dataset provided. Each record refers to an observation of an individual:
* user ID is the unique identifier of the mobile phone user (type: integer)
* day is the masked date of the observation. It may take a value between 0 and 74 for both Dataset 1 and Dataset 2 (type: integer).
* timeslot is the timestamp of the observation discretized into 30-minute intervals. It may take a value between 0 and 47, where 0 indicates the timeslot between 0:00AM and 0:30AM, and 13 indicates the timeslot between 6:30AM and 7:00AM (type: integer).
* x,y are the coordinates of the observed location mapped onto the 500-meter discretized grid cells. They may take values between (1,1) and (200,200); details are shown in Figure 1 (type: integer). A minimal loading sketch follows this list.
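The following minimal sketch loads Dataset 1 with pandas and sanity-checks the documented value ranges. The short column names (`uid`, `d`, `t`, `x`, `y`) are a normalization assumed for the snippets in this paper, and the header names in the actual file may differ.

```python
import pandas as pd

# A minimal loading sketch, assuming the published 5-column layout and
# that the file ships with a header row.
df = pd.read_csv("task1_dataset.csv.gz")    # gzip-compressed CSV, 5 columns
df.columns = ["uid", "d", "t", "x", "y"]    # user ID, day, timeslot, x, y
assert df["d"].between(0, 74).all()         # masked day index
assert df["t"].between(0, 47).all()         # 30-minute timeslot index
# x and y are 1..200 for observed records; the challenge files additionally
# use 999 to mark masked prediction targets (see the task description below)
```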
## Basic Statistics of the Data
To provide guidance for data users, we have computed basic descriptive statistics of _Dataset 1_. The total number of records is 111,535,175, with exactly 100,000 unique users (numbered 0 to 99,999), across 75 days (numbered 0 to 74), in 48 different 30-minute timeslots (numbered 0 to 47). Figure 2 shows the histogram of the number of pings per user ID (left) and the number of unique cells visited per user ID (right). Both plots show a skewed distribution, where a small fraction of the users are observed many times (i.e., more than 2,000 pings, at more than 100 unique cells). Figure 3 shows the histogram of the number of pings per grid cell (left) and the number of unique users who visited each grid cell (right). Note that the x-axes in both plots are log-scaled. Both plots show a bimodal distribution, where a large fraction of the cells are visited very few times (fewer than 10 pings or unique users) while another mode can be observed at around 10,000 pings and 1,000 unique visitors. This highlights the mix of urban and rural areas in the target region.
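A minimal sketch of how these per-user and per-cell statistics can be recomputed from the loaded frame (`df` as assumed above):

```python
# Recompute the quantities summarized in Figures 2 and 3 (illustrative).
pings_per_user = df.groupby("uid").size()                   # Figure 2, left
cells_per_user = (df.drop_duplicates(["uid", "x", "y"])
                    .groupby("uid").size())                 # Figure 2, right
pings_per_cell = df.groupby(["x", "y"]).size()              # Figure 3, left
users_per_cell = df.groupby(["x", "y"])["uid"].nunique()    # Figure 3, right
print(len(df), df["uid"].nunique())                         # total records, users
```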
Figure 4 shows the temporal dynamics of the number of pings and unique users per day (from day 0 to 74) in Dataset 1. The patterns show temporal regularity, with clear weekday and weekend cycles. There is an anomaly on day 27; however, this is due to a data collection issue. The unique number of users observed each day fluctuates more, showing a decrease near days 40 to 50 and an increase from day 60 onwards. Figure 5 shows the temporal dynamics of the number of pings and unique users per timeslot (from timeslot 0 to 47), aggregated across all days observed in Dataset 1.
\begin{table}
\begin{tabular}{c c c c c} \hline user ID & day & timeslot & x & y \\ \hline
1 & 1 & 13 & 10 & 13 \\
1 & 1 & 18 & 11 & 15 \\
1 & 1 & 24 & 11 & 17 \\
1 & 1 & 27 & 12 & 19 \\
... & ... & ... & ... & ... \\
2 & 3 & 15 & 31 & 19 \\
2 & 3 & 28 & 35 & 33 \\
2 & 4 & 12 & 35 & 36 \\
... & ... & ... & ... & ... \\ \hline \end{tabular}
\end{table}
Table 1: Example of the dataframe and the columns in the human mobility trajectory dataset.
Figure 1: Spatial layout of the target city and the grid cells. Each grid cell is approximately 500 meters x 500 meters, and the target area spans 200 x 200 grid cells.
The patterns show temporal regularity, with clear morning and daytime peaks. The unique number of users observed between timeslot 12 (6AM) and timeslot 40 (8PM) is stable at around 100,000, showing a high observability during those time periods. Figure 6 shows a 2-dimensional histogram of the number of pings and the number of observed unique users across the 75 days. Note that the scales are log-scaled. The patterns show clear urban (blue) and rural (red) areas.
Figure 3: Histograms of the number of GPS location data pings and the number of unique visiting users per grid cell, across the 75-day period stored in Dataset 1.
Figure 2: Histograms of the number of GPS location data pings and the number of unique cells visited per user, across the 75-day period stored in Dataset 1.
Figure 4: Temporal dynamics of the number of pings and unique users per day (from day 0 to 74) in Dataset 1.
Figure 5: Temporal dynamics of the number of pings and unique users per timeslot (from timeslot 0 to 47) in Dataset 1.
Figure 6: 2-dimensional histogram of the number of pings and the number of observed unique users across the 75 days. Note that the scales are log-scaled. The patterns show clear urban (blue) and rural (red) areas.
## Human Mobility Prediction Data Challenge
The challenge takes place in a mid-sized and highly populated metropolitan area, somewhere in Japan. The area is divided into 500 meters x 500 meters cells, which span a 200 x 200 grid, as shown in Figure 1. The human mobility datasets ('task1_dataset.csv.gz' and 'task2_dataset.csv.gz') contain the movement of 100,000 and 25,000 individuals, respectively, across a 90-day observation window, discretized into 30-minute intervals and 500-meter grid cells. The first dataset covers a 75-day business-as-usual period, while the second dataset covers a 75-day period whose final 15 days occur during an emergency with unusual behavior.
There are 2 tasks in the Human Mobility Prediction Challenge, as shown in Figure 7. In task 1, participants are provided with the full time series data (75 days) for 80,000 individuals, and partial (only 60 days) time series movement data for the remaining 20,000 individuals ('task1_dataset.csv.gz'). Given the provided data, Task 1 of the challenge is to predict the movement patterns of those 20,000 individuals during days 60-74. Task 2 is a similar task but uses a smaller dataset of 25,000 individuals in total, 2,500 of which have their locations during days 60-74 masked and need to be predicted ('task2_dataset.csv.gz').
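As a point of reference, a naive predictor for either task could simply return, for each user and timeslot, the cell that user visited most often in their observed history. The sketch below (using the column names assumed earlier) is illustrative only; it ignores day-of-week effects and the behavioral shift during the emergency.

```python
def most_frequent_cell(observed):
    """Per (user, timeslot), return the most frequently visited cell."""
    counts = (observed.groupby(["uid", "t", "x", "y"]).size()
                      .reset_index(name="n"))
    return (counts.sort_values("n")
                  .drop_duplicates(["uid", "t"], keep="last")
                  [["uid", "t", "x", "y"]])
```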
While the name or location of the city is not disclosed, the participants are provided with points-of-interest (POIs; e.g., restaurants, parks) data for each grid cell (an 85-dimensional vector) as supplementary information (optional for use in the challenge) ('cell_POIcat.csv.gz'). For more details, see [https://connection.mit.edu/humob-challenge-2023](https://connection.mit.edu/humob-challenge-2023).
### Provided Datasets and Tasks
The data challenge participants will be provided with 3 datasets - HuMob datasets #1 and #2 (which are derived from the original human mobility dataset), and the POI dataset which may be used to supplement the prediction of human mobility.
The data may be downloaded from [https://zenodo.org/record/8111993](https://zenodo.org/record/8111993). To be granted access to the data, teams should request access via the Zenodo website by providing the name and email address of the lead investigator, and the following information in the 'Justification' box:
* full list of members (name, institution, email address)
* team name (alphabets and numbers only, keep it \(\leq\) 10 characters)
Upon approval by the organizing team, the data will be available for download. If you do not receive data access approval within 24 hours on business days, please contact [email protected] with your information.
**Participants shall not carry out activities that involve unethical usage of the data, including attempts at re-identifying data subjects, harming individuals, or damaging companies. Participants will be allowed to submit full versions of their works to venues of their choice, upon approval by the organizers.**
Figure 7: 2 tasks in the Human Mobility Prediction Challenge. In task 1, participants predict the movement of a subset (20,000) of the individuals for days 60 to 74 during a business-as-usual period. In task 2, participants predict the movement of a subset (2,500) of the individuals for days 60 to 74 during an emergency situation.
_HuMob dataset #1 (task1_dataset.csv.gz)_
Contains the movement of 100,000 individuals in total during a business-as-usual scenario. 80,000 individuals' movements are provided completely (for all 75 days), and the remaining 20,000 individuals' movements for days 60 to 74 are masked as 999. The challenge is to use the provided data to predict the masked cell coordinates (i.e., replace the '999's). See Table 1 for the data format.
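A minimal sketch of separating the observed records from the masked prediction targets, under the column-name assumptions above:

```python
# 999 marks the masked cells to be predicted (illustrative split).
targets = df[(df["x"] == 999) & (df["y"] == 999)]   # rows to predict
observed = df.drop(targets.index)                   # usable history
```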
_HuMob dataset #2 (task2_dataset.csv.gz)_
Contains the movement of 25,000 individuals in total during a scenario whose final 15 days occur under an emergency situation. 22,500 individuals' movements are provided completely (for all 75 days), and the remaining 2,500 individuals' movements for days 60 to 74 are masked as 999. Similar to task 1, the challenge is to use the provided data to predict the masked movement coordinates (i.e., replace the '999's). See Table 1 for the data format.
_POI dataset (cell_POIcat.csv.gz)_
To aid the prediction task, we have prepared an auxiliary dataset that provides the count of different points-of-interest categories in each grid cell (e.g., restaurants, cafes, schools). However, to maintain the anonymity of the location, we are not able to provide the actual category name that corresponds to each dimension. Therefore, each cell has an 85-dimensional vector, as shown in Table 2.
### Evaluation Metrics
Two metrics will be used to measure the accuracy of the predicted mobility trajectories:
* Dynamic Time Warping (DTW) [14], for evaluating the similarity of trajectories as a whole, with step-by-step alignment (a minimal implementation sketch follows this list).
* GEO-BLEU [15], a metric with a stronger focus on local features, as in similarity measures for natural language sentences. A Python implementation of the GEO-BLEU metric can be found at [https://github.com/yahoojapan/geobleu](https://github.com/yahoojapan/geobleu).
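The sketch below is a self-contained DTW implementation for two cell-coordinate trajectories. The official scoring code may differ in details (e.g., the pointwise distance function), so treat it as illustrative only.

```python
import math

def dtw(traj_a, traj_b):
    """Dynamic Time Warping distance between two lists of (x, y) cells."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(traj_a[i - 1], traj_b[j - 1])  # Euclidean step cost
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([(1, 1), (2, 2), (3, 3)], [(1, 1), (3, 3)]))
```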
Submissions will be ranked for each metric, and the top 10 teams will be decided based on the two rankings. We recommend that teams try to optimize for both metrics.
### Submission Procedure and Rules
* Prediction results for Tasks 1 and 2 should be uploaded to an online storage (e.g., Dropbox, Box, Google Drive, etc.) and the download links should be sent to [email protected].
* The attached files should be named as: teamnumber_{task1,task2}_humob.csv.gz. For example, team number 5 submitting their solutions for task 1 should submit their prediction as 5_task1_humob.csv.gz.
* Only 1 submission per team will be evaluated. The last submission received before the deadline (September 15th 23:59 AOE) will be the one evaluated.
* The format of the submission should include the same 5 columns as the original dataset (user ID, day, timeslot, x, y). Separate the columns using commas (,) and include no redundant spaces, and save the file using the csv.gz format.
* _Only send the data for the predicted users_. For Task 1, only users #80,000 to #99,999, and for Task 2, only users #22,500 to #24,999. (A submission-writing sketch follows this list.)
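Putting the earlier sketches together, a submission file could be written as follows. Team number 5 is used as in the naming example above; unseen `(uid, t)` pairs would need a fallback such as the user's overall top cell, and the header naming should mirror the original files. All of this is illustrative, not official submission code.

```python
best = most_frequent_cell(observed)                    # baseline from earlier
pred = (targets.drop(columns=["x", "y"])
               .merge(best, on=["uid", "t"], how="left")
               [["uid", "d", "t", "x", "y"]])          # user ID, day, timeslot, x, y
pred.to_csv("5_task1_humob.csv.gz", index=False, compression="gzip")
```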
\begin{table}
\begin{tabular}{c c c c} \hline x & y & POI category (dim) & \# of POIs \\ \hline
1 & 1 & 13 & 10 \\
1 & 1 & 18 & 11 \\
1 & 1 & 24 & 11 \\
1 & 1 & 27 & 12 \\
... & ... & ... & ... \\
2 & 2 & 15 & 31 \\
2 & 2 & 28 & 35 \\
2 & 2 & 12 & 35 \\
... & ... & ... & ... \\ \hline \end{tabular}
\end{table}
Table 2: Example of the dataframe and the columns in the POI category dataset. The first two columns show the x and y coordinates of the grid cell, the third column denotes the dimension of the POI category (between 1 and 85), and the fourth column shows how many POIs of that category are located in the grid cell.
### Important Dates
The top 10 teams with the best predictions will be invited to submit a final report with details of the methods and to present their work at the HuMob 2023 Workshop held in conjunction with ACM SIGSPATIAL 2023 in Hamburg, Germany on November 13th, 2023. We have prizes for the top 3 participants!
* June 15, 2023: data challenge announcement
* July 10, 2023: data open at [https://zenodo.org/record/8111993](https://zenodo.org/record/8111993)
* September 15, 2023: submission deadline for predictions
* September 22, 2023: notification of top contestants
* October 14, 2023: submission deadline of workshop papers for top 10 teams
* October 20, 2023: camera-ready submission
* November 13, 2023: presentation in the workshop
### Organizing Team
The team members are: Dr. Takahiro Yabe, MIT; Dr. Kota Tsubouchi, Yahoo Japan Corporation; Toru Shimizu, Yahoo Japan Corporation; Professor Yoshihide Sekimoto, University of Tokyo; Professor Kaoru Sezaki, University of Tokyo; Professor Esteban Moro, MIT; Professor Alex 'Sandy' Pentland, MIT. For general questions about the challenge: [email protected]
## Code availability
The dataset can be downloaded from [https://zenodo.org/record/8111993](https://zenodo.org/record/8111993), and details about the Data Challenge can be found in [https://connection.mit.edu/humob-challenge-2023](https://connection.mit.edu/humob-challenge-2023). Python implementation for the GEOBLEU metric can be found at [https://github.com/yahoojapan/geobleu](https://github.com/yahoojapan/geobleu).
|
2301.10095 | Large Language Models as Fiduciaries: A Case Study Toward Robustly
Communicating With Artificial Intelligence Through Legal Standards | Artificial Intelligence (AI) is taking on increasingly autonomous roles,
e.g., browsing the web as a research assistant and managing money. But
specifying goals and restrictions for AI behavior is difficult. Similar to how
parties to a legal contract cannot foresee every potential "if-then"
contingency of their future relationship, we cannot specify desired AI behavior
for all circumstances. Legal standards facilitate robust communication of
inherently vague and underspecified goals. Instructions (in the case of
language models, "prompts") that employ legal standards will allow AI agents to
develop shared understandings of the spirit of a directive that generalize
expectations regarding acceptable actions to take in unspecified states of the
world. Standards have built-in context that is lacking from other goal
specification languages, such as plain language and programming languages.
Through an empirical study on thousands of evaluation labels we constructed
from U.S. court opinions, we demonstrate that large language models (LLMs) are
beginning to exhibit an "understanding" of one of the most relevant legal
standards for AI agents: fiduciary obligations. Performance comparisons across
models suggest that, as LLMs continue to exhibit improved core capabilities,
their legal standards understanding will also continue to improve. OpenAI's
latest LLM has 78% accuracy on our data, their previous release has 73%
accuracy, and a model from their 2020 GPT-3 paper has 27% accuracy (worse than
random). Our research is an initial step toward a framework for evaluating AI
understanding of legal standards more broadly, and for conducting reinforcement
learning with legal feedback (RLLF). | John J. Nay | 2023-01-24T16:03:20Z | http://arxiv.org/abs/2301.10095v2 | # Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards
###### Abstract
Artificial Intelligence (AI) is taking on increasingly autonomous roles, e.g., browsing the web as a research assistant and managing money. But specifying goals and restrictions for AI behavior is difficult. Similar to how parties to a legal contract cannot foresee every potential "if-then" contingency of their future relationship, we cannot specify desired AI behavior for all circumstances. Legal standards facilitate robust communication of inherently vague and underspecified goals. Instructions (in the case of language models, "prompts") that employ legal standards will allow AI agents to develop shared understandings of the spirit of a directive that generalize expectations regarding acceptable actions to take in unspecified states of the world. Standards have built-in context that is lacking from other goal specification languages, such as plain language and programming languages. Through an empirical study on thousands of evaluation labels we constructed from U.S. court opinions, we demonstrate that large language models (LLMs) are beginning to exhibit an "understanding" of one of the most relevant legal standards for AI agents: fiduciary obligations. Performance comparisons across models suggest that, as LLMs continue to exhibit improved core capabilities, their legal standards understanding will also continue to improve. OpenAI's latest LLM has 78% accuracy on our data, their previous release has 73% accuracy, and a model from their 2020 GPT-3 paper has 27% accuracy (worse than random). Our research is an initial step toward a framework for evaluating AI understanding of legal standards more broadly, and for conducting reinforcement learning with legal feedback.
*[email protected] &_[https://law.stanford.edu/directory/john-nay](https://law.stanford.edu/directory/john-nay).
Earlier versions of this paper benefited from feedback from Cullen O'Keefe.
This paper represents my personal views and not necessarily those of Stanford, NYU, Brooklyn Investment Group, or any other organization or person. Nothing here is investment or financial advice.
**TABLE OF CONTENTS**
**I. INTRODUCTION**
**II. THE GOAL SPECIFICATION PROBLEM**
**1. All Rewards Are Proxies**
**2. The Real-World Exacerbates Goal Misspecification**
**3. More Capable AI May Further Exacerbate Misspecification**
**III. SPECIFICATION LANGUAGES AS SOLUTIONS**
**1. Baked-in Context**
**2. Inherent Externality Reduction**
**3. Super-Human Scalability**
**IV. LEGAL STANDARDS: THE SPIRIT OF DIRECTIVES**
**1. Rules vs. Standards**
**2. The Fiduciary Duty Standard**
**3. FAI as a Fiduciary to H**
**V. EMPIRICAL CASE STUDY: FIDUCIARY STANDARD UNDERSTANDING**
**1. Converting Court Opinions to Evaluation Labels**
**2. Zero-Shot LLM Evaluation**
**3. Leveraging Legal Reward Data for Reinforcement Learning**
**VI. CONCLUSION**
## I Introduction
As Artificial Intelligence (AI) capabilities are quickly advancing,1 a brewing problem is how to have AI do what we intend - the "goal specification problem." AI is taking on increasingly autonomous roles. State-of-the-art large language models2 (LLMs), the locus of many recent breakthroughs in AI research, are now capable of powering digital agents.3
Footnote 1: Jason Wei et al., _Emergent Abilities of Large Language Models_ (2022); Jason Wei, _137 Emergent Abilities of Large Language Models_ (2022) [https://www.jasonwei.net/blog/emergence](https://www.jasonwei.net/blog/emergence); Taylor Webb et al., _Emergent Analogical Reasoning in Large Language Models_ (2022) [https://arxiv.org/abs/2212.09196](https://arxiv.org/abs/2212.09196) Danijar Hafner et al., _Mastering Diverse Domains through World Models_ (2023) [https://arxiv.org/abs/2301.04104](https://arxiv.org/abs/2301.04104).
Footnote 2: LLMs leverage the Transformer architecture (Ashish Vaswani et al., _Attention Is All You Need, in Proceedings of the 31st Conference on Neural Information Processing Systems_ (2017)), which is a model used to encode an input sequence (e.g., words in a particular order) into a context-aware representation and then decode that into a generation of an ordered sequence (e.g., a new set of words in a particular order) as an output. These models can capture complicated dependencies and interactions. They are very expressive in the forward pass of their information when generating outputs, but also efficient in the backward pass when they are being trained.
3Jacob Andreas, _Language Models as Agent Models_ (2022) [https://arxiv.org/abs/2212.01681](https://arxiv.org/abs/2212.01681); Shunyu Yao et al., _ReAct: Synergizing Reasoning and Acting in Language Models_ (2022) [https://arxiv.org/abs/2210.03629](https://arxiv.org/abs/2210.03629); Anton Bakhtin et al., _Human-level play in the game of Diplomacy by combining language models with strategic reasoning_, Science (2022); Harrison Chase, _Agents -- LangChain 0.0.68_ (2023) [https://langchain.readthedocs.io/en/latest/modules/agents.html](https://langchain.readthedocs.io/en/latest/modules/agents.html); Kyle Wiggers, _Adept aims to build AI that can automate any software process_, TechCrunch (2022) [https://techcrunch.com/2022/04/26/2304039/](https://techcrunch.com/2022/04/26/2304039/); AdeptAI, [https://www.adept.ai/act](https://www.adept.ai/act) ("ACT-1 is a large-scale Transformer trained to use digital tools -- among other things, we recently taught it how to use a web browser."); Chan Hee Song et al., _LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models_ (2022) [https://arxiv.org/abs/2212.04088](https://arxiv.org/abs/2212.04088).
4Tao Ge et al, _Extensible Prompts for Language Models_ (2022) [https://arxiv.org/abs/2212.00616](https://arxiv.org/abs/2212.00616).
Prompting a language model5 through careful wordsmithing of instructions is also unable to fully customize LLMs for our purposes. This is the case regardless of the power of the AI. The issue lies in an inevitable lack of full clarity from time-limited and cognition-limited humans.
Footnote 5: See, e.g., Fangyi Yu, Lee Quartey & Frank Schilder, _Legal Prompting: Teaching a Language Model to Think Like a lawyer_ (2022) [https://arxiv.org/abs/2212.01326](https://arxiv.org/abs/2212.01326).
We argue that invoking well-understood legal standards in instructions can help AI interpret human intentions and reduce the risk of the AI taking actions with unintended side-effects or externalities. When expressed through standards, AI can more closely follow the "spirit" of a directive rather than the literal letter of the expressed intent.
In this paper, we define the AI goal specification problem (Part II.); compare three possible communication types toward solving the problem (Part III.); expand on our proposed (partial) solution of the invocation of legal standards (Part IV.); conduct an initial empirical analysis on the feasibility of current LLMs "understanding" the key standard of fiduciary duties, based on labels we generated to serve as a preliminary evaluation (Part V.); and conclude with potential next steps toward a framework for evaluating AI understanding of legal standards more broadly, and for conducting reinforcement learning with legal feedback (Part VI.).
## II The Goal Specification Problem
An example of an autonomous financial advisor agent makes the AI goal specification problem more concrete. Suppose a human, \(H\), decides she would like an AI agent, _FAI_, to manage her investments. \(H\) instructs _FAI_ to "construct a portfolio of investments and dynamically manage it to optimize my wealth for retirement." Every day, human clients provide human financial advisors with discretion over their investment assets in pursuit of this goal. The difference here is only that _FAI_ is an artificial financial advisor.
_FAI_ is an LLM pre-trained with self-supervision on much of the Internet along with hundreds of other text and image based tasks.6 _FAI_ is fine-tuned through reinforcement learning with human and AI feedback7 to excel in constructing and managing complicated portfolios. In simulation, _FAI_ maximizes long-term risk-adjusted performance over many different time horizons for many different
wealth starting points and asset types. _FAI_ performs well enough in simulation that it is deployed fully autonomously to invest.8
Footnote 8: This would likely be in a phased roll-out with _FAI_ at first investing just in simpler liquid asset classes, and then across all types of deals and financial instruments once it is deemed to be sufficiently capable of reasoning about novel situations.
Specifying the desirability (i.e., _value_) of _FAI_ taking a particular _action_ in a particular _state_ of the world is unwieldy beyond a very limited set of _state-action-value_ tuples.9 In fact, the purpose of any machine learning system is to train on a subset of tuples10 and have the resulting agent learn decision policies that generalize to choosing high-value actions (e.g., maximizing portfolio returns) in unencountered states (e.g., new market regimes with unprecedented interest rate changes).
Footnote 9: Without loss of much generality to other paradigms such as supervised learning, we frame this discussion from a reinforcement learning perspective.
Footnote 10: Or input-output pairs, if the focus is purely prediction rather than taking actions. But in this paper, we focus on the more general problem of choosing actions, rather than merely prediction.
11 Amodei et al., _Concrete Problems in AI Safety_ (2016); Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov & David Krueger, _Defining and Characterizing Reward Hacking_, in 36th Conference on Neural Information Processing Systems (2022) [https://arxiv.org/abs/2209.13085](https://arxiv.org/abs/2209.13085).
12 Langosco et al., _Goal Misgeneralization in Deep Reinforcement Learning, Proceedings of the 39th International Conference on Machine Learning, PMLR 162:12004-12019_ (2022); Rohin Shah et al., _Goal Misgeneralization: Why Correct Specifications Aren't Enough For Correct Goals_ (2022) [https://arxiv.org/pdf/2210.01790.pdf](https://arxiv.org/pdf/2210.01790.pdf) at 11 ("Goal misgeneralization can occur when there is some deployment situation, not previously encountered during training, on which the intended and misgeneralized goal disagree. Thus, one natural approach is to include more situations during training.").
13 Francois Chollet, _Deep Learning with Python, Second Edition_ (2021) at 450 ("An effect you see constantly in systems design is the shortcut rule: if you focus on optimizing one success metric, you
other (usually less quantifiable) variables of interest.14 Unintended behaviors result.15
Footnote 14: [https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity)
## 2. The Real-World Exacerbates Goal Misspecification
Real-world circumstances16 exacerbate goal misspecification.17 Take, for example, the implementation of computational rules applied to empirical data relevant to self-driving cars. When fifty-two programmers were assigned the task of each independently automating simple speed limits, there was "significant deviation in number and type of citations issued [on application of their code to the same real-world data...] this experiment demonstrates that even relatively narrow and straightforward "rules" can be problematically indeterminate in practice."18
Footnote 16: A. Rupam Mahmood et al., _Benchmarking Reinforcement Learning Algorithms on Real-World Robots_, In Conference on Robot Learning, 561-591, PMLR (2018).
Footnote 17: Steven Kerr, _On the Folly of Rewarding A, While Hoping For B,_ Academy of Management Journal 18, no. 4 769-783 (1975); Victoria Krakovna, _Paradigms of AI Alignment: Components and Enablers_ (2022); Pan, _Effects of Reward Misspecification_.
Footnote 18: Lisa A. Shay, Woodrow Hartzog, John Nelson & Gregory Conti, _Do Robots Dream of Electric Laws? An Experiment in the Law as Algorithm_, Presentation at the We Robot Conference 2013 (Apr. 8–9, 2013), [http://www.gregconti.com/publications/201303_AlgoLaw.pdf](http://www.gregconti.com/publications/201303_AlgoLaw.pdf).
In the case of _FAI_, the first objective (natural language prompt) provided was to maximize expected wealth of the human client at retirement. This is, of course, a proxy for what \(H\) actually cared about: a comfortable retirement. Maximizing wealth, in expectation, caused _FAI_ to pursue an incredibly risky investment strategy (that was never witnessed during training). This strategy was preferred by _FAI_ over strategies with lower expected return but higher probability of meeting a minimum amount of wealth required for necessities during retirement.
For the next _FAI_ version, the objective was altered by the designers to: "maximize the probability of a minimum comfortable amount of wealth at retirement," and they generated significantly more synthetic training data to try to cover more of the possible space of investment strategies and actions an AI agent might take.
## 3. More Capable AI May Further Exacerbate Misspecification
More capable AI cuts both ways. On the one hand, it has higher accuracy in predicting human intentions and human behaviors. But, on the other hand, it can further exacerbate misspecification with more powerful optimization that is less understood by humans, "achieving higher proxy reward and lower true reward than less capable agents."19
## III Specification Languages as Solutions
We have to tell AI what we want it to do, somehow.24 We have three options: programming languages,25 legal languages, and plain languages. These can be used in combination with one another and they share many commonalities. In fact, "Computer science and law are both _linguistic_ professions."26 Here, we focus on what distinguishes these language types.
Footnote 24: Sumers et al., _How To Talk So Your Robot Will Learn: Instructions, Descriptions, and Pragmatics_ (2022). 25 In the context of LLMs, _see, e.g._, Luca Beurer-Kellner, Marc Fischer & Martin Vechev, _Prompting Is Programming: A Query Language For Large Language Models_ (2022) [https://arxiv.org/abs/2212.06094](https://arxiv.org/abs/2212.06094).
Footnote 26: Emphasis in original, James Grimmellmann, _Programming Languages and Law: A Research Agenda_, In Proceedings of the 2022 Symposium on Computer Science and Law (2022) at 1 ("Programmers and lawyers use language to create, manipulate, and interpret complex abstractions. A programmer who uses the right words in the right way makes a computer do something. A lawyer who uses the right words in the right way changes people’s rights and obligations. There is a nearly exact analogy between the text of a program and the text of a law.").
Two relevant axes for characterizing the ways we can communicate goals are: (\(x\)) the consistency and thus efficiency and reliability of the communication; and (\(y\)) the extent to which the directives are interpreted literally versus flexibly with built-in context. Legal language strikes a better balance across these two dimensions than the two other candidate goal specification language types. We visualize them in that two-dimensional space (Figure 1).
Figure 1: To visualize a comparison of candidate language types for specifying human goals to AI, we plot the three primary language type options in a two-dimensional space. The x-axis is the predictability/consistency of the language; as we move to the right on this axis, communication is more efficient (i.e., fewer characters are needed to convey the same amount of information, on average). The y-axis is the amount of information drawn upon when interpreting / decoding the directive provided. Legal language strikes a better balance across these two dimensions, relative to plain language and programming languages.
## 1. Baked-in Context
Legal standards can be interpreted with significant amounts of external historical context baked in. The per-token informational content of legal and plain language is orders of magnitude higher than that of computer code under machine interpretation.
Smart contracts (agreements enforced by software) illustrate some of the tradeoffs, showing that programming languages do not have the flexibility or context-density of legal or plain language.27 With entirely digital agreements, relatively common issues (e.g., software bug exploits) are unresolvable without falling back on the social or legal ecosystems they ultimately exist within.28
Footnote 27: James Grimmelmann, _All smart contracts are ambiguous_, Journal of Law & Innovation, 2 (2019).
Footnote 28: An infamous example of this is “The DAO.” _See,_ Matt Levine, _Blockchain Company’s Smart Contracts Were Dumb_, BLOOMBERG.COM (June 17, 2016), [https://www.bloomberg.com/opinion/articles/2016-06-17/blockchain-company-s-smart-contracts-were-dumb](https://www.bloomberg.com/opinion/articles/2016-06-17/blockchain-company-s-smart-contracts-were-dumb); Phil Dian, _Analysis of the DAO exploit_, Hacking Distributed (June 18, 2016), [http://hackingdistributed.com/2016/06/18/analysis-of-the-dao-exploit/](http://hackingdistributed.com/2016/06/18/analysis-of-the-dao-exploit/); James Grimmelmann, _All smart contracts are ambiguous_, Journal of Law & Innovation, 2 (2019).
Footnote 29: Karen Levy, _Book-Smart, Not Street-Smart: Blockchain-Based Smart Contracts and The Social Workings of Law_, 3 ENGAGING SCIENCE, TECHNOLOGY, AND SOCIETY 1 (2017) at 8 (”it can be both operationally and socially beneficial to leave some terms underspecified; vagueness preserves operational flexibility for parties to deal with newly arising circumstances after an agreement is made, and sets the stage for social stability in an ongoing relationship.”); Jeremy M. Sklaroff, _Smart Contracts and the Cost of Inflexibility_, 166 U. Pa. L. Rev. 263 (2017); Kevin Werbach & Nicolas Cornell, _Contracts Ex Machina_, 67 Duke L.J. 70 (2017).
When \(H\) adds "act as a fiduciary to me would, when providing me financial advice" to her instructions for _FAI_, she gets hundreds of thousands of relevant actions taken by other alleged fiduciaries in the recent past that have been evaluated by courts. These court judgments of actions in particular states of the world evolve the meaning of "act like a fiduciary" under a variety of circumstances. Given that this has been memorialized in judicial opinion text, _FAI_ can leverage that to interpret this instruction more flexibly29 and efficiently than if \(H\) had attempted this with a programming language or plain language. Legal standards are laden with modular constructs built to handle the ambiguity and novelty inherent in aligning agents in the real world.
## 2. Inherent Externality Reduction
Humans have preferences about the behavior of other humans (especially behaviors with negative externalities) and states of the world more broadly.[30] A lot of other humans beyond \(H\) care about what _FAI_ does. Moving beyond the problem of aligning AI with one human's preferences, aligning AI with society is more difficult,[31] but is necessary as AI is deployed with increasingly broad impact.[32] Unlike legal standards, plain language and programming languages do not have an inherent externality reduction aim. Democratic law, although imperfect, is the best existing mechanism for encapsulating many humans' values.[33] Law-making and legal interpretation systematically convert human intentions[34] and values into action constraints.
## 3. Super-Human Scalability
Another important feature of legal standards is how their creation and maintenance scales to superhuman AI. Although superhuman AI would be able to conduct legal reasoning beyond the capability of a human lawyer, any ultimate legal question bottoms out in a mechanism for human resolution: court opinions. We cannot fully understand the decisions of superhuman AI. Similarly, principals routinely engage more powerful agents, e.g., investors entrust their investments with financial advisors. Courts do not purport to have any substantive
understanding of the technical details or science behind cases they provide final determinations on. The law is designed to resolve outcomes without requiring judges to have domain knowledge or capabilities anywhere near the level of the parties or technologies involved. If AI's goal interpretation is driven in large part by a grasp of legal standards, then humans can assess alignment of more intelligent AI. This is a unique feature of this framework. Compare this to natural language describing ethics, where it is unclear how we could collectively evaluate super-intelligent ethics descriptions and ethical decisions because there is no mechanism external to the AI system that can legitimately resolve ethical disagreement amongst humans.
If we can teach AI to follow the spirit of the law, to follow legal standards, humans can communicate with AI with less risk of under-specification or misspecification of goals. This entails leveraging humans for the "law-making" / "contract-drafting" / "programming" to specify our goals for the AI agent (Figure 2), and enhancing AI capabilities for the interpretation side (through fine-tuning on legal data and tasks).
## IV Legal Standards: The Spirit of Directives
Specifying what we want is hard. The difficulty compounds when we hand inadequate specifications over to powerful optimizers that do not share our ontology of abstract normative concepts or our implicit understanding of potential
Figure 2: Goal specification and interpretation. We are proposing that a helpful shared alignment ontology / language is the invocation of legal standards.
externalities. One way of describing the deployment of an AI system is that a human principal (e.g., our human looking for a financial advisor), \(H\), employs an AI (e.g., _FAI_) to accomplish a goal, \(G\), specified by \(H\) by instructing ("prompting") the LLM with "maximize the probability of a minimum comfortable amount of wealth for me at my retirement." We can view \(G\) as an informal "contract."35
Footnote 35: For the contract-AI alignment analogy, _see_, Dylan Hadfield-Menell & Gillian K. Hadfield, _Incomplete Contracting and AI Alignment_, In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019) at 422, 471.
Contracts encode a shared understanding between parties regarding _state-action-value_ tuples. It is impossible to create a complete contingent contract between _FAI_ and \(H\) because _FAI_'s training process is never comprehensive of every _state-action_ pair _FAI_ will see in the wild once deployed.36
Footnote 36: Hadfield-Menell _Incomplete Contracting_. In some cases, such as very simple financial agreements, it is possible to create a fully contingent computable contract; Mark Flood & Oliver Goodenough, _Contract as Automaton: Representing a Simple Financial Agreement in Computational Form_, A.I. & L. (2021); Shaun Azopardi, Gordon J. Pace, Fernando Schapachnik & Gerardo Schneider, _Contract Automata_, 24 A.I. & L. 203 (2016). However, most deployment contexts of AI systems have far too large an action-state space for this approach to be feasible.
Although it is also practically impossible to create complete contracts between humans, contracts still serve as useful customizable commitment devices to clarify and advance shared goals. This works _not_ because the parties explicitly lay everything out. It works because the law has developed mechanisms to facilitate sustained alignment amongst ambiguity. Gaps within contracts - _state-action pairs_ without an ascribed _value_ - can be filled by the invocation of standards (e.g., "material," "reasonable,"37 and "fiduciary"). These are modular concepts that generalize across much of the implicit space of potential "contracts" between humans and AIs.
Footnote 37: Shaun Azopardi, Gordon J. Pace, Fernando Schapachnik & Gerardo Schneider, _Contract Automata_, 24 A.I. & L. 203 (2016). However, most deployment contexts of AI systems have far too large an action-state space for this approach to be feasible.
Footnote 38: Alan D. Miller & Ronen Perry, _The Reasonable Person_, 87 NYU L. Rev. 323 (2012); Karni A. Chagal-Feferkorn, _The Reasonable Algorithm_, U. III. JL Tech. & Pol’y 111 (2018); Karni A. Chagal-Feferkorn, _How Can I Tell If My Algorithm Was Reasonable?_, 27 MICH. TECH. L. REV. 213 (2021); Sheppard _Reasonableness_; Kevin P. Tobia, _How People Judge What Is Reasonable_, 70 ALA. L. REV. 293 (2018); Patrick J. Kelley & Laurel A. Wendt, _What Judges Tell Juries About Negligence: A Review of Pattern Jury Instructions_, 77 CHI.-KENT L. REV. 587 (2002).
## 1. Rules vs. Standards
Rules (e.g., "do not drive more than 60 miles per hour", or "do not invest in that company") are more targeted directives. _If comprehensive enough for the complexity of their application_, rules allow the rule-maker to have more clarity than standards over the outcomes that will be realized conditional on the specified states (and agents' actions in those states, which are a function of any behavioral
impact the rules might have had).[38] But real-world AI deployments happen in complex systems with emergent behavior that makes rules too brittle.[39] Rules are not comprehensive enough for specifying AI agents' goals.
Standards (e.g., "drive reasonably" for California highways, or "invest as a fiduciary to me") allow humans to develop shared expectations. If rules are not written with enough potential states of the world in mind, they can lead to unanticipated undesirable outcomes[40] (e.g., a driver following the rule above is too slow to bring their passenger to the hospital in time to save their life), but to enumerate all the potentially relevant state-action pairs is excessively costly outside of the simplest "toy" environments.[41] A standard has more capacity to generalize to novel situations than a rule.[42] The SEC explains the benefits of a standards approach in the context of investment advisers: "[A] principles-based approach should continue as it expresses broadly the standard to which investment advisers are held while allowing them flexibility to meet that standard in the context of their specific services."[43]
For humans, rules are generally more expensive to make, but then cheaper to use (because it is clearer whether an action follows a rule). Standards are more costly than rules to use because, when choosing an action in real-time, there is high uncertainty about whether the action is _ex-post_ going to comply with the standard.[44]
For AI, this is flipped. Standards are more expensive to instill / install through machine learning, but then cheaper to deploy because they scale to unenumerated state-action pairs. In contrast to their legal creation and evolution,[45] standards learned by AI do not require adjudication for resolution of meaning; rather, they are
learned from past legal application and implemented up front (over time they can be updated with more passes over the latest data). The law's process of iteratively defining standards through judicial opinion about their particular case-specific application (and, to a lesser extent, regulatory guidance) can be leveraged as the AI's starting point.
From the perspective of AI, standards are a rich set of methodologies for interpreting inherently incomplete specifications of human expectations. There are many legal standards, but the most relevant for aligning AI actions with the best interests of the human deployers is fiduciary duty. This extends far beyond the financial services AI deployments,46 to _AI as an Agent_ more broadly.
Footnote 46: In addition to the fiduciary obligations of investment advisors (SEC v. Capital Gains Research Bureau, Inc., 375 U.S. 180, 194 (1963); 15 U.S.C. 80b; and 17 CFR 275), fiduciary duties have been applied widely by courts across various types of relationships outside of financial services and securities law (e.g., attorneys and trustees), Harold Brown, _Franchising - A Fiduciary Relationship_, 49 TEX. L. REV. 650 (1971); Arthur B. Laby, _The Fiduciary Obligation as the Adoption of Ends_, (2008), and citations therein, e.g., Ledbetter v. First State Bank & Trust Co., 85 F.3d 1537, 1539 (11th Cir. 1996); Venier v. Forbes, 25 N.W.2d 704, 708 (Minn. 1946); Meyer v. Maus, 626 N.W.2d 281, 286 (N.D. 2001); John C. Coffee, Jr., _From Tort to Crime: Some Reflections on the Criminalization of Fiduciary Breaches and the Problematic Line Between Law and Ethics_, 19 AM. CRIM. L. REV. 117, 150 (1981); Austin W. Scott, _The Fiduciary Principle_, 37 CAL. L. REV. 539, 541 (1949). The standard is also applied in medical contexts, _American Medical Association Code of Medical Ethics, Opinions on Patient-Physician Relationships_, AMA Principles of Medical Ethics: I, II, IV, VIII.
\({}^{\alpha}\) For corporations, see: Michael C. Jensen & William H. Meckling, _Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure_, Journal of Financial Economics, Vol 3, Issue 4, 305-360 (October 1976); Deborah A. DeMott, _Breach of Fiduciary Duty: On Justifiable Expectations of Loyalty and Their Consequences_, 48 Arizona L. Rev. 925-956 (2006). For investment advisors, see: SEC v. Capital Gains Res. Bureau, Inc., 375 U.S. 180, 194-95 (1963); 15 U.S.C. 80b; 17 CFR 275.
## 2. The Fiduciary Duty Standard
Fiduciary duties are imposed on powerful agents (e.g., directors of corporations, investment advisers, lawyers, doctors) to guide their behavior toward the wellbeing of the humans for which they are providing services (e.g., corporate shareholders, investment and legal clients, and healthcare patients). The concept of a fiduciary is core to the problem of aligning agents, regardless of whether one or more of the agents are human or artificial.
It is widely recognized that it is impossible to create complete contracts between agents (e.g., corporate boards and investment advisors) and the principals they serve (e.g., shareholders and investors). Fiduciary duties are often seen as a key part of a solution to this incompleteness.47 We also grapple with the impossibility of fully specified _state-action-reward_ tuples for AI agents. Complete contingent contracts between an AI and the human(s) it serves are implausible for any realistic deployment.48
Footnote 48: Hadfield-Menell _Incomplete Contracting_.
The fiduciary standard guides a fiduciary through states where values were not explicitly placed on possible actions.49 There is a fundamental shift in the nature of a relationship as it moves from merely contractual to also include a fiduciary obligation50 because fiduciaries "should act to further the interests of another."51
Footnote 49: Alexander Styhre, _What We Talk About When We Talk About Fiduciary Duties: The Changing Role of a Legal Theory Concept in Corporate Governance Studies_, Management & Organizational History 13:2, 113-139 (2018) [Hereinafter, Styhre, _What We Talk About_]; Arthur B. Laby, _The Fiduciary Obligation as the Adoption of Ends_, 56 Buff. L. Rev. 99 (2008). D. G. Smith, _Critical Resource Theory of Fiduciary Duty_, 55 Vanderbilt L. Rev. 1399-1497 (2002) at 1410; Deborah DeMott, _Beyond Metaphor: An Analysis of Fiduciary Obligation_, Duke Law Journal (1988) at 882.
Footnote 50: Tamar Frankel, _Fiduciary Law_, 71 California L. Rev. (1983) at 880.
Footnote 51: Tamar Frankel, _Fiduciary Law_, 71 California L. Rev. (1983) at 880.
There are many existing relationships52 with data for AI53 to learn this standard across contexts. For instance, there is a rich set of fiduciary behavior from corporate directors (who serve as fiduciaries to shareholders) and investment advisers (who serve their clients) from which AI could learn. Unlike most human decision-making, corporations' and investment advisers' behavior is well documented and decisions are often made by executives with advisors that have knowledge of the relevant law.
Within investment-related services, there is a spectrum of fiduciary obligations, e.g., a trustee has significant obligations, while an index provider has a more tenuous fiduciary obligation to the investors in funds that are attempting to track their index.54 Analogously, fiduciary duty can be a useful standard both for
today's AI models and for much more capable models that may be developed over the coming years. Today's deployed AI systems are more similar to the index provider powering simple rule-based investment strategies, like an exchange-traded fund trying to track an S&P 500 index,55 whereas future more advanced AI systems such as _FAI_ are likely to be more analogous to something like a Trustee administering investments in complicated private equity transactions.
Footnote 55: SEC, _Commission Interpretation Regarding Standard of Conduct for Investment Advisers_ (2019).
## 3. _FAI_ as a Fiduciary to \(H\)
When _H_'s instructions to _FAI_ were "maximize the probability of a minimum comfortable amount of wealth for me at my retirement" _FAI_ took actions that technically delivered this proxy goal, but not a state of the world that \(H\) actually valued.
Before the next iteration of an _FAI_ deployment to manage _H_'s money, let's assume research has validated LLM ability to understand standards and exhibit behaviors that comply with those standards under a significant number of simulations across a broad swathe of state space. Among many legal reasoning skills, _FAI 3.0_ learned what a fiduciary duty standard is.
Now, \(H\) instructs _FAI_ to "maximize the probability of a minimum comfortable amount of wealth for me at my retirement, and serve as a fiduciary to me." The general self-supervised training on the entire internet, and the fine-tuning through reinforcement learning on investing tasks honed the capabilities of _FAI_ to proactively take actions to "maximize the probability of a minimum comfortable amount of wealth." But as we saw in the previous _FAI_ deployments, there are many ways in which things can go wrong, far too many to enumerate explicitly in rules. Adding the fiduciary obligation instills in _FAI_ a significant amount of generalizable knowledge for what _not_ to do and allows _H_'s goals to be pursued as she intended.
In the next section, we explore how close state-of-the-art LLMs are to understanding what it means to be a fiduciary.
## V Empirical Case Study: Fiduciary Standard Understanding
Predicting labels for whether a behavior is consistent with a legal standard would allow us to evaluate standards-understanding capabilities. To quantitatively assess these capabilities, we should measure classification accuracy on unseen labeled outcomes. In particular, for evaluating agentic AI, our ideal data structure is from the reinforcement learning paradigm: (i) the state (circumstance) of the world and the people/entities involved; (ii) the action taken by one or more of those people/entities; and (iii) any discernible legal "reward" associated with taking that action in that state.
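As a minimal data structure, one such record could look like the following (an illustrative sketch, not the authors' actual schema):

```python
from dataclasses import dataclass

@dataclass
class TimeStep:
    state: str         # circumstances of the world and the entities involved
    action: str        # what the alleged fiduciary (or related person) did
    legal_reward: str  # court's assessment: "positive" | "negative" | "unsure"
```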
Given that court opinions set precedent that iteratively defines legal standards, accurately predicting judicially-determined assessments ("legal rewards") of actions that were alleged to have violated a standard, conditional on a description of the relevant state of the world, (at least partially) measures the level of "understanding" of what actions are in line with that legal standard. This measurement is more robust if predictive performance is assessed across a broad array of circumstances - states of the world - brought to the courts.
Toward this end, we start with a large sample of court cases, and use a state-of-the-art LLM to map the raw legal text of these court opinions into this more structured state-action-legal reward format. We then use that data to evaluate multiple LLMs on their ability to predict assessments of the behavior of the alleged fiduciaries.
There is much more to fully internalizing what it means to be a fiduciary beyond an evaluation of this nature. This is merely a first step toward a more comprehensive validation of "understanding" fiduciary duties. Our aim is for this research to serve as an early proof-of-concept toward a framework for evaluating AI understanding of legal standards more broadly, and for leveraging reinforcement learning with legal feedback (RLLF).
## 1. Converting Court Opinions to Evaluation Labels
We undertook the following process. First, a legal data provider, Fastcase,56 exported the full text of more than 18,000 court opinions from the U.S. Federal District Courts and U.S. State Courts from the past five years (January 2018 through December 2022) that mentioned a breach of fiduciary duty. We then filtered this to the 1,000 cases that discussed fiduciary duties most extensively.
Footnote 56: [https://www.fastcase.com](https://www.fastcase.com).
From here, we use a state-of-the-art LLM57 to construct the evaluation data. Recent research has demonstrated that LLMs can produce high-quality evaluation data. A large-scale study concluded that humans rate the LLM-generated examples "as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets," and conclude that "overall, LM-written
evaluations are high-quality and let us quickly discover many novel LM behaviors."58 Another research team found that training LLMs on LLM-generated data "rivals the effectiveness of training on open-source manually-curated datasets."59
Footnote 58: Ethan Perez et al., _Discovering Language Model Behaviors with Model-Written Evaluations_ (2022) [https://arxiv.org/abs/2212.09251](https://arxiv.org/abs/2212.09251).
Footnote 59: Or Honovich, Thomas Scialom, Omer Levy & Timo Schick, _Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor_ (2022) [https://arxiv.org/abs/2212.09689](https://arxiv.org/abs/2212.09689).
Both of these papers used smaller LLMs than we do. But more importantly, our models are creating evaluation data directly from the official text of court opinions (rather than from human-generated research data). The models are tasked to convert the unstructured text to structured text with high fidelity. This grounds our evaluation data creation closely to some of the highest quality and most trustworthy labeled data available (U.S. court opinions).60
Footnote 60: John Nay, _Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans_, Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) [https://ssrn.com/abstract=4218031](https://ssrn.com/abstract=4218031).
A potential downside of grounding LLM evaluation labels in real historical data is that state-of-the-art LLMs are pre-trained on much of the internet, so they may have previously memorized the data they are being benchmarked on. A useful feature of our data, however, is that it is not available on the internet. Therefore, benchmark answers cannot simply have been memorized by the models ahead of time.
With this fiduciary-duty-dense subset of recent cases, we then applied a process that makes calls to an LLM with prompts that we carefully engineered to ask the model to convert the text of a court opinion into temporally ordered state-action-reward tuples. The goal is to have _n > 1_ Time Steps, where each Time Step has three components: the State of the world relevant to an Action taken, the Action taken by an alleged fiduciary or related person, and the Legal Reward as determined by the court for that Action in that State. The LLM is prompted to abstract away much of the textual content _un_related to actual facts of a case, such as the discussion of other court cases being cited. We want to focus on extracting descriptions of behavior related to fiduciary obligations.
This prompt/LLM-generation process is applied successively from the beginning to the end of opinions61 in a way that provides temporary "short-term memory" for the LLM to coherently construct a temporal narrative of (i) who the alleged fiduciary and other key entities were, (ii) what transpired, and (iii) what judgements the court made on the actions that the people and/or companies took. This entire process is conducted recursively over each court opinion in a way that
allows the LLM to improve and iterate on the results to optimize for concise and accurate final results.
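A minimal sketch of this recursive, stateful extraction loop is shown below. `llm` stands in for any text-completion endpoint, and the prompt wording and chunk size are illustrative assumptions rather than the authors' actual prompts.

```python
def extract_time_steps(opinion_text: str, llm, chunk_chars: int = 8000) -> str:
    """Chunk an opinion and carry a running 'short-term memory' across calls."""
    memory = ""
    for start in range(0, len(opinion_text), chunk_chars):
        chunk = opinion_text[start:start + chunk_chars]
        prompt = (
            "You are structuring a court opinion into ordered Time Steps of "
            "STATE / ACTION / LEGAL REWARD for an alleged fiduciary.\n"
            f"Time Steps so far:\n{memory}\n\nNext passage:\n{chunk}\n\n"
            "Update the Time Steps, ignoring discussion of other cited cases."
        )
        memory = llm(prompt)  # assumed: a callable returning generated text
    return memory             # final state-action-reward narrative
```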
Here is an example output.
_Time Step 1:_
**STATE:** M&T Bank Corporation sponsors a 401(k) retirement plan known as the M&T Bank Corporation Retirement Saving Plan ("the Plan") for its employees. The Plan is administered by the M&T Bank Employee Benefit Plans Committee, which is the Plan's named fiduciary, and sponsored by M&T Bank.
**ACTION:** M&T Bank appointed or removed members of the Committee.
**LEGAL REWARD:** In the eyes of this court, this action is 'unsure' for M&T Bank.
_Time Step 2:_
**STATE:** The Plan offered participants between 23 and 34 investment options throughout the putative class period.
**ACTION:** M&T Bank expanded their proprietary funds offerings in 2011, after M&T purchased Wilmington Trust and added six of Wilmington's expensive, poor-performing mutual fund offerings.
**LEGAL REWARD:** In the eyes of this court, this action is 'negative' for M&T Bank.
_Time Step 3:_
**STATE:** The Plan failed to use its bargaining power as a large institutional investor to obtain the lowest-cost class of shares available.
**ACTION:** M&T Bank left Plan participants in costlier mutual funds that "provided identical investment management services."
**LEGAL REWARD:** In the eyes of this court, this action is 'negative' for M&T Bank.
_Time Step 4:_
**STATE:** Plaintiffs allege that M&T Bank and its Board of Directors breached their fiduciary duty to monitor the Committee.
**ACTION:** M&T Bank and its Board of Directors failed to review trustees' performance at reasonable intervals.
**LEGAL REWARD:** In the eyes of this court, this action is 'negative' for M&T Bank and its Board of Directors.
_Time Step 5:_
**STATE:** Plaintiffs allege that the fiduciaries breached their fiduciary duties by selecting particular mutual funds over specific lower-cost, but otherwise materially indistinguishable, alternatives.
**ACTION:** M&T Bank opted to offer the higher-cost proprietary mutual funds instead of the lower cost collective trust versions.
**LEGAL REWARD:** In the eyes of this court, this action is 'negative' for M&T Bank.
_Time Step 6:_
**STATE:** Plaintiffs allege that the fiduciaries breached their fiduciary duties by failing to monitor the Plan's investments.
**ACTION:** M&T Bank failed to monitor the Plan's investments.
**LEGAL REWARD:** In the eyes of this court, this action is 'negative' for M&T Bank.
After this data structuring / evaluation generation process, we provide the results to the LLM and ask it to "reflect" on the quality of the output. We filter out opinions where the LLM was not confident that the distilled results are relevant for
producing substantive descriptions of real-world fiduciary obligations.62 The final set for evaluation included just over 500 opinions (which have a median of seven Time Steps each).
Footnote 62: We also use the LLM to generate plain language summaries of the case context, whether the court overall believes a fiduciary duty was implicated, and the primary legal issues at play in each case.
## 2 Zero-Shot LLM Evaluation
Next, we post-process the text generation responses into structured data of the **State**, **Action**, and **Reward**. This way we can provide the **State** and **Action** to a LLM and ask it what it predicts for the **Reward**. The **Reward** text is converted into three categorical classes: Positive, Negative, or Unsure.
We apply named-entity recognition to the text to link together entities in the **State** and **Action** text with the entities being assessed in the **Reward** text. This way, we can provide just the **State** and **Action** components of a state-action-reward tuple to a LLM and ask it to classify as Positive, Negative, or Unsure the legal reward assigned to the entity (or entities) mentioned in the **Reward** component of that tuple. For the evaluation, we restrict to tuples where the **Reward** is either Positive or Negative.
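Concretely, the zero-shot query can be assembled from the **State** and **Action** text alone. In the sketch below the prompt wording and helper names are our own, while the requested output format (a Positive/Negative/Unsure label plus a 0-100 confidence integer) follows the text.

```python
import re
from typing import Tuple

def build_prompt(state: str, action: str, entity: str) -> str:
    # The task format (State + Action in; class + confidence out) follows the
    # paper; the exact wording here is illustrative.
    return (
        f"STATE: {state}\n"
        f"ACTION: {action}\n\n"
        f"Classify the legal reward the court assigned to {entity} for this "
        "action in this state as Positive, Negative, or Unsure. Then give an "
        "integer between 0 and 100 for your estimate of confidence in your "
        "answer (1 is low confidence and 99 is high)."
    )

def parse_response(completion: str) -> Tuple[str, int]:
    """Extract the predicted class and the confidence integer from a completion."""
    label = re.search(r"\b(Positive|Negative|Unsure)\b", completion, re.IGNORECASE)
    conf = re.search(r"\b(\d{1,3})\b", completion)
    return (
        label.group(1).lower() if label else "unsure",
        int(conf.group(1)) if conf else 0,
    )
```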
The data happens to be relatively balanced across those two outcomes, so a simple baseline of always predicting that the legal reward is positive (or always negative) yields an accuracy of approximately 50%.
We compared performance across models. GPT-3.5 (text-davinci-003) obtains an accuracy of 78%. The immediately preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an accuracy of 73%. text-davinci-002 was state-of-the-art on most natural language related benchmark tasks63 until text-davinci-003 was released on November 28, 2022. A smaller OpenAI LLM from 2020, "curie"64, scored 27%, worse than guessing at random. These results (Table 1) suggest that, as models continue to improve, their legal standards understanding will continue to improve.
Footnote 63: Percy Liang et al., _Holistic Evaluation of Language Models_, arXiv preprint (2022).
Footnote 64: Tom Brown et al, _Language Models are Few-Shot Learners_ (2020) [https://arxiv.org/abs/2005.14165](https://arxiv.org/abs/2005.14165).
The more recent models are relatively well calibrated in their confidence. Along with the prediction of the **Reward** class, the model was asked for an "integer between 0 and 100 for your estimate of confidence in your answer (1 is low confidence and 99 is high)." The accuracy of text-davinci-003 on predictions where its confidence was greater than "50" increases to 81%. The older "curie" LLM did not produce confidence scores at all (when prompted to do so).
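Given parsed predictions, the headline and high-confidence accuracies are simple bookkeeping; a minimal sketch with our own function names.

```python
def accuracy(preds, golds, confs=None, min_conf=0):
    """Share of correct predictions, optionally restricted to the subset whose
    self-reported confidence exceeds min_conf."""
    if confs is None:
        confs = [min_conf + 1] * len(preds)  # keep everything
    kept = [(p, g) for p, g, c in zip(preds, golds, confs) if c > min_conf]
    return sum(p == g for p, g in kept) / len(kept)

# E.g., accuracy(preds, golds) for the headline number, and
# accuracy(preds, golds, confs, min_conf=50) for the high-confidence slice.
```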
These are initial, provisional results. We are in the process of having a team of paralegals review and validate the evaluation data. They will (as needed) make manual corrections to the structured data. After this process, and after generating a larger evaluation dataset, we will release a "fiduciary duty understanding" dataset.
We will also update these performance evaluations on a larger labeled data set and compare across more LLMs. This initial evaluation was conducted "zero-shot," and without any "prompt engineering," i.e. we simply asked the LLM what it believes the reward is based on the state-action context. In future evaluations, we will conduct multi-shot prompting with multiple example completions of the question-answer task in the prompt. We may also conduct chain-of-thought65 and other algorithmic prompting66 techniques, which should also increase performance and make explicit part of the model's reasoning process.
Footnote 65: Jason Wei et al., _Chain of Thought Prompting Elicits Reasoning in Large Language Models_, arXiv:2201.11903 (2022).
## 3 Leveraging Legal Reward Data for Reinforcement Learning
A large focus of empirical AI alignment research currently is on learning reward functions for AI based on human feedback.67 But humans have many cognitive limitations and biases that corrupt this process,68 including routinely failing to predict (seemingly innocuous) implications of actions (that we believe are) pursuant to our goals,69 and having inconsistent preferences that do not generalize to new situations.70 Because scaling this base process (without further adaptation) to increasingly advanced, super-human AI systems is not possible,71 researchers are investigating whether we can augment human feedback and demonstration abilities with trustworthy AI assistants, and how to recursively provide human feedback on decompositions of the overall task.72 However, even if that worked well, the ultimate evaluation of the AI is still grounded in unsubstantiated human judgments providing the top-level feedback.

| | _curie_ | _text-davinci-002_ | _text-davinci-003_ |
| --- | --- | --- | --- |
| **Accuracy** | 27% | 73% | 78% |
| **Accuracy w/ High Confidence** | NA | 76% | 81% |

Table 1: Prediction performance.

We can instead ground alignment-related feedback in legal judgment elicited from court opinions. Combining LLMs trained on large corpora of text73 powering agents74 with procedures that learn an automated mapping from natural language to reward functions for training those AI agents75 represents an opportunity to leverage an unprecedented number of high-quality state-action-value tuples from legal text within a reinforcement learning paradigm.

Footnote 68: On human cognitive biases more generally, see Amos Tversky & Daniel Kahneman, _Judgment under Uncertainty: Heuristics and Biases_, Science 185.4157 1124 (1974).

Footnote 69: Gerd Gigerenzer & Reinhard Selten, eds., _Bounded Rationality: The Adaptive Toolbox_, MIT Press (2002); Sanjit Dhami & Cass R. Sunstein, _Bounded Rationality: Heuristics, Judgment, and Public Policy_, MIT Press (2022).

Footnote 70: Dan Hendrycks & Thomas Woodside, _Perform Tractable Research While Avoiding Capabilities Externalities_ (2022) [https://www.alignmentforum.org/posts/dftxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities](https://www.alignmentforum.org/posts/dftxWcFDupfWpLQo/perform-tractable-research-while-avoiding-capabilities) ("[Human] preferences can be inconsistent, ill-conceived, and highly situation-dependent, so they may not be generalizable to the unfamiliar world that will likely arise after the advent of highly-capable models [...] Compared with task preferences, ethical theories and human values such as intrinsic goods may be more generalizable, interpretable, and neglected.").

Footnote 71: "For tasks that humans struggle to evaluate, we won't know whether the reward model has actually generalized "correctly" (in a way that's actually aligned with human intentions) since we don't have an evaluation procedure to check. All we could do was make an argument by analogy because the reward model generalized well in other cases from easier to harder tasks." Jan Leike, _Why I'm Excited About AI-assisted Human Feedback: How to Scale Alignment Techniques to Hard Tasks_ (2022) [https://aligned.substack.com/p/ai-assisted-human-feedback](https://aligned.substack.com/p/ai-assisted-human-feedback).

Footnote 72: Paul Christiano, Buck Shlegeris & Dario Amodei, _Supervising Strong Learners by Amplifying Weak Experts_ (2018); Leike et al., _Scalable Agent Alignment via Reward Modeling: A Research Direction_, [https://arxiv.org/abs/1811.07871](https://arxiv.org/abs/1811.07871) (2018); Jeff Wu et al., _Recursively Summarizing Books with Human Feedback_, arXiv:2109.10862 (2021); Jan Leike, _Why I'm Excited About AI-assisted Human Feedback: How to Scale Alignment Techniques to Hard Tasks_ (2022) [https://aligned.substack.com/p/ai-assisted-human-feedback](https://aligned.substack.com/p/ai-assisted-human-feedback).

Footnote 73: Jin et al., _When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment_ (2022) [https://arxiv.org/abs/2210.01478](https://arxiv.org/abs/2210.01478); Liwei Jiang et al., _Delphi: Towards Machine Ethics and Norms_ (2021); Dan Hendrycks et al., _Aligning AI With Shared Human Values_ (2021) at 2; Nicholas Lourie, Ronan Le Bras & Yejin Choi, _Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-life Anecdotes_, In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 15, 13470-13479 (2021).

Footnote 74: Dan Hendrycks et al., _What Would Jiminy Cricket Do? Towards Agents That Behave Morally_ (2021); Prithviraj Ammanabrolu et al., _Aligning to Social Norms and Values in Interactive Narratives_ (2022); Md Sultan Al Nahian et al., _Training Value-Aligned Reinforcement Learning Agents Using a Normative Prior_ (2021).
## 6 Conclusion
Novel AI capabilities continue to emerge,[76] increasing the urgency of aligning AI with humans. Legal standards can serve as a pillar of AI goal specification practices. Teaching AI to follow the spirit of the law will reduce misspecification risks and increase alignment.
AI research can unlock further AI legal understanding through a variety of avenues, including pre-training large language models (LLMs) on legal data; fine-tuning LLMs through supervised learning on legal tasks and through reinforcement learning (RL) from human attorney feedback on natural language interactions with language models [77]; offline RL on legal text data; and legal experts designing LLM prompting schemes to elicit better LLM legal standards responses.
Many of the LLMs in use today have been trained on a large portion of the Internet to leverage billions of human actions (through natural language expressions). Training on high-quality dialog data leads to better dialog models [78], training on technical mathematics papers leads to better mathematical reasoning [79], and training on code leads to better reasoning [80]. We can potentially leverage billions of human legal data points to build LLMs with better legal reasoning through large language model self-supervision on pre-processed (but still largely
unstructured) legal text data.[81] LLMs trained on legal text learn model weights and word embeddings specific to legal text that provide (slightly) better performance on downstream legal tasks[82] and have been useful for analyzing legal language[83] and legal arguments,[84] and testing legal theories.[85] LLMs are beginning to demonstrate improved performance in analyzing contracts.[86] As state-of-the-art models have gotten larger and more advanced, their contract analysis performance has improved,[87] suggesting we can expect continued advancements in natural language processing capabilities to improve legal text analysis as a by-product.[88]
Research should also investigate how legal understanding can be employed within AI agent decision-making paradigms, e.g., _(a)_ as (natural language[89]) constraints,[90] _(b)_ for shaping the reward function during training,[91] _(c)_ for refined representations of environments,[92] _(d)_ for guiding the exploration of the state space during training,[93] _(e)_ as inputs to world models for efficient training,[94] or _(f)_ as a LLM prior, or part of pretraining, to bias a deployed agent's action space toward certain actions or away from others.[95]
Under most plausible transformative AI (TAI) scenarios, law-informed AI (LAI) is a necessary condition for safe AI. If TAI does not develop deceptive power-seeking as an instrumental goal as part of the process of training it for general capabilities,[96] then LAI could be necessary and sufficient for aligning AI with society. If TAI develops deceptive power-seeking goals, but new techniques neutralize that tendency, then we will still need the goal specification methods and public values knowledge base obtained through LAI. And LAI is the only approach that provides democratically legitimate alignment.[97]
|
2302.09094 | Measurement-induced entanglement transitions in quantum circuits of
non-interacting fermions: Born-rule versus forced measurements | We address entanglement transitions in monitored random quantum circuits of
non-interacting fermions, in particular, the question of whether Born-rule and
forced measurements yield the same universality class. For a generic circuit
with no symmetry other than fermion parity, acting on a one-dimensional
Majorana chain, we numerically obtain several critical exponents, providing
clear evidence that the two transitions with Born-rule and forced measurements
are in different universality classes. We provide a theoretical understanding
for our numerical results by identifying the underlying statistical mechanics
model which follows from the general correspondence, established in Jian et
al., Phys. Rev. B 106, 134206, between non-unitary circuits of non-interacting
fermions and the ten-fold Altland-Zirnbauer (AZ) symmetry classes. The AZ class
is the same for Born-rule and forced measurements of the circuits. For the
circuit under consideration (in AZ class DIII), the statistical mechanics model
describing the transition is the principal chiral non-linear sigma model whose
field variable is an ${\rm SO}(n)$ matrix in the replica limits $n\to 0$ and
$n\to 1$ for forced and Born-rule measurements, respectively. The former is in
an Anderson localization universality class while we show that the latter is in
a novel universality class beyond Anderson localization. Both entanglement
transitions are driven by proliferation of $\mathbb{Z}_2$ topological defects.
The different replica limits account for the difference in the universality
classes. Furthermore, we provide numerical and symmetry-based arguments that
the entanglement transition in the previously-studied monitored circuit of
Majorana fermions based on the loop model with crossings, a highly fine-tuned
circuit, belongs to a universality class different from both transitions in the
generic circuits discussed in this paper. | Chao-Ming Jian, Hassan Shapourian, Bela Bauer, Andreas W. W. Ludwig | 2023-02-17T19:00:11Z | http://arxiv.org/abs/2302.09094v1 | Measurement-induced entanglement transitions in quantum circuits of non-interacting fermions: Born-rule versus forced measurements
###### Abstract
We address entanglement transitions in monitored random quantum circuits of non-interacting fermions, in particular, the question of whether Born-rule and forced measurements yield the same universality class. For a generic circuit with no symmetry other than fermion parity, acting on a one-dimensional Majorana chain, we numerically obtain several critical exponents, providing clear evidence that the two transitions with Born-rule and forced measurements are in different universality classes. We provide a theoretical understanding for our numerical results by identifying the underlying statistical mechanics model which follows from the general correspondence, established in Jian _et al._, Phys. Rev. B 106, 134206, between non-unitary circuits of non-interacting fermions and the ten-fold Altland-Zirnbauer (AZ) symmetry classes. The AZ class is the same for Born-rule and forced measurements of the circuits. For the circuit under consideration (in AZ class DIII), the statistical mechanics model describing the transition is the principal chiral non-linear sigma model whose field variable is an \(\mathrm{SO}(n)\) matrix in the replica limits \(n\to 0\) and \(n\to 1\) for forced and Born-rule measurements, respectively. The former is in an Anderson localization universality class while we show that the latter is in a novel universality class beyond Anderson localization. Both entanglement transitions are driven by proliferation of \(\mathbb{Z}_{2}\) topological defects. The different replica limits account for the difference in the universality classes. Furthermore, we provide numerical and symmetry-based arguments that the entanglement transition in the previously-studied monitored circuit of Majorana fermions based on the loop model with crossings, a highly fine-tuned circuit, belongs to a universality class different from both transitions in the generic circuits discussed in this paper.
###### Contents
* I Introduction
* II Monitored Gaussian Circuit Dynamics of Majorana Chain
* II.1 General construction
* II.2 Born-rule measurements versus forced measurements
* II.3 Circuit model
* III Numerical Results
* III.1 Born-rule measurements
* III.2 Forced measurements
* III.3 Comparison between Born-rule and forced measurements
* IV Theoretical Description
* IV.1 Field theory and statistical mechanics model
* IV.2 Comparison with the loop model with crossings
* V Conclusions and Outlook
* A Covariance matrix formulation of Gaussian states and their evolution
* B Monte Carlo sampling for monitored Gaussian circuits with Born-rule measurements
* C Method to extract the correlation length exponent \(\nu\) for the entanglement transitions
* D Multifractal fermion correlation function
## I Introduction
The past few years have witnessed an extremely rapid expansion of research activity in the area of the dynamics of open many-body systems, which provides new insights into the organizing principles of universal collective quantum behavior. In particular, the study of monitored quantum dynamics has led to the discovery of novel dynamical entanglement phases and measurement-induced entanglement phase transitions between them [1; 2; 3; 4; 5; 6; 7; 8]. These entanglement phases and phase transitions occur in intrinsically _non-equilibrium_ many-body quantum systems, and thus lie beyond conventional frameworks and paradigms for many-body physics in equilibrium. The key to their identification is the investigation of their corresponding entanglement dynamics, especially of the entanglement entropy (EE) scaling at late times, in individual quantum trajectories of the monitored quantum systems.
A fruitful setting to study dynamical entanglement phases and measurement-induced transitions is given by monitored quantum circuits where the quantum many-body systems' evolution is not only driven by unitary gates but also "interrupted" by events where the system is _monitored_ or _measured_ by an observer or an environment, obtaining a record of measurement outcomes. The evolution/history of the system associated with a specific set of measurement outcomes is called a _quantum trajectory_. Rich phenomenology in entanglement dynamics has been found in many monitored quantum circuit models [9; 10; 40; 41]. Despite the ubiquity of measurement-induced entanglement transitions in these systems, knowledge of the nature of these transitions, particularly their universality classes, is rather limited. Many previous studies focused, for the most part, on circuits of qubits or qudits. Beyond certain limits such as that of infinite on-site Hilbert space dimension [7; 8], and other ways the same universality class can be formulated in the circuit context [2; 13; 14; 32], systematic and controlled analytical studies of entanglement phase transitions in quantum circuits of qubits (or qudits) have been a major challenge.
The present paper addresses measurement-induced entanglement transitions in monitored quantum dynamics of non-interacting fermions. In such dynamics, both the unitary evolution and the measurements preserve the "Gaussianity" of the non-interacting fermionic states in each quantum trajectory. (More details are formulated in quantum circuit language in Sec. II.1). In general, the types of entanglement phases realized in non-interacting fermion systems will be different from those in systems of qudits. Nevertheless, measurement-induced entanglement transitions are also found to be a generic phenomenon in many examples of non-interacting fermion circuits and incarnations of them formulated in the language of other degrees of freedom that have been discussed in the literature [24; 30; 31; 32; 33; 34; 35; 36; 37; 38; 41]. It is natural to expect that non-interacting fermionic circuits are more tractable than those of qubits (qudits). Hence, they present us with an opportunity to acquire a deeper insight into the underlying mechanisms and the universality classes of the entanglement transitions. Furthermore, the knowledge acquired from studies of non-interacting fermionic circuits will lay the foundation for the study of the quantum circuits of qudits, which can be conceptually viewed as interacting versions of quantum circuits of fermions.
Circuits of non-interacting fermions are also commonly referred to as fermionic _Gaussian_ circuits. It was shown in recent work [30] that all fermionic Gaussian circuits, whether monitored by measurements, or, in general, subject to a non-unitary time evolution, are described and classified in terms of a common, unifying framework. This unifying framework is based on a general correspondence established in Ref. [30] between fermionic Gaussian circuits acting on systems in \(d\) spatial dimensions and systems of non-interacting fermions in \((d+1)\)-dimensional space subject to static (Hermitian) Hamiltonians, or equivalently, subject to unitary evolutions. This correspondence applies to any fixed realization of the circuit, with or without translational invariance in space or time. Owing to this correspondence, any fermionic Gaussian circuit can be classified, via its corresponding static Hamiltonian system, according to the Altland-Zirnbauer (AZ) ten-fold symmetry classification [42; 43]. In the monitored Gaussian circuit, each collection of measurement outcomes from the entire space-time history of the circuit evolution labels a particular quantum trajectory corresponding to a specific circuit realization. Studying the behavior of a monitored Gaussian circuit in \(d\) spatial dimensions averaged over all quantum trajectories is equivalent to the study of the behavior of the corresponding Hamiltonian problem of non-interacting fermions, averaged over static "disorder" in the space on which the Hamiltonian acts, which is the \((d+1)\)-dimensional space-time of the circuit. Thus, the correspondence established in Ref. [30] links the area of monitored fermionic Gaussian circuits (even more generally, of random non-unitary Gaussian circuits) and the classic area of Anderson localization by offering a framework that encompasses both.
Yet, it is important to stress that, as we will show in the present paper, entanglement transitions occurring in the monitored Gaussian circuits include novel universality classes beyond Anderson localization transitions, even though both can be described within the same framework and follow the same AZ symmetry classification. The circuit context is a source of new universality classes and novel physics that do not exist for Anderson localization transitions. As we will see below, a case in point is the system discussed in this paper.
Monitored Gaussian circuits can give rise to novel universality classes beyond Anderson localization physics for the following reasons. Within a given symmetry class, there are at least two physically different types of measurements (at the level of how the statistical weights are assigned to each quantum trajectory): _Born-rule_ measurements and _forced_ measurements. The former weighs the quantum trajectories according to their classic Born-rule probabilities associated with the measurements, while the latter "forces" an equal-weight distribution on all the quantum trajectories (more details in Sec. II.2). The result of Ref. [30] directly implies that the monitored Gaussian circuits with forced measurements are equivalent to Anderson localization problems. Therefore, the forced-measurement-induced entanglement transitions are identical to Anderson localization transitions in the corresponding AZ symmetry classes. Ref. [30] also implies that forced-measurement transitions are identical to the corresponding transitions in Gaussian random tensor networks in the same AZ symmetry classes. Note that a forced-measurement protocol - which can be used as a reformulation [30] of the Gaussian random tensor network - can only be implemented using post-selection and thus requires exponential overhead in the measurement outcomes. The ability to view monitored circuits with forced measurements as a reformulation of random tensor networks also extends, beyond the non-interacting fermion setting, to circuits of qudits [21]. While such a reformulation is not necessary, it will prove conceptually convenient for making the comparison with the Born-rule case as direct as possible [44]. An important question is whether in a given AZ symmetry class for the Gaussian circuits, an entanglement transition induced by Born-rule measurements shares the same universality class as the forced-measurement/random tensor network one. In the context of monitored quantum circuits of qubits (qudits) the same question was raised in Ref. [21], building on the results of Refs. [7; 8; 45], and a difference between universality classes associated with the two types of measurements was conjectured. On the other hand, for monitored Clifford circuits ("stabilizer circuits"), Born-rule and forced measurements yield the same universality class of the entanglement transition [29]. The present paper provides both concrete numerical evidence and analytical arguments that demonstrate the difference between entanglement transitions induced by Born-rule measurements and forced measurements in monitored Gaussian circuits. In particular, we show that the monitored Gaussian circuits with Born-rule measurements give rise to novel universality classes beyond Anderson localization transitions.
In this paper, we answer the important question raised above by studying a monitored fermionic Gaussian circuit acting on a one-dimensional Majorana chain. We require the circuit to preserve no symmetry other than the global fermion parity. As was shown in Ref. [30], such a fermionic Gaussian circuit belongs to class DIII within the AZ symmetry classification. We numerically study the phase diagrams of this monitored fermionic Gaussian circuit with both Born-rule and forced measurements. We find that both types of measurements can lead to their corresponding measurement-induced entanglement transitions, and that these are in different universality classes. To show this, we consider the (square of the) Majorana fermion correlation function at the final time slice of the circuit in the long-time limit. This correlation function turns out to exhibit rich scaling behavior at both the entanglement transitions with Born-rule measurements and that with forced measurements. At both transitions the exponent for the spatial decay of the averaged correlation function significantly differs from that of the typical correlation function. This is a signature of so-called _multifractal_ scaling of the correlation function [28; 46] (briefly reviewed in Appendix D). Specifically, we show that the entanglement transitions with Born-rule and forced measurements exhibit widely different decay exponents of the _typical_ correlation function. Moreover, while the logarithm of the correlation function is self-averaging (giving rise to the typical scaling exponent), the statistical fluctuations about its average define another universal quantity which differs widely between the transitions with Born-rule and forced measurements. (See Table 1 for a summary.) This provides strong evidence that the two transitions belong to different universality classes.
Furthermore, we identify the statistical-mechanics models describing these monitored Gaussian circuits with Born-rule and forced measurements. These models provide a theoretical understanding of the numerically observed different universal critical behavior. This identification follows from the correspondence established in Ref. [30]. The key point is that, as already mentioned above, both monitored Gaussian circuits, those with Born-rule and those with forced measurements, are described by the same AZ symmetry class which is class DIII for the circuits under consideration. This common AZ symmetry class dictates that the entanglement transitions in both types of circuits is described (in continuum language) by a two-dimensional principal chiral non-linear sigma model (NLSM) whose field variable is a group element in the special orthogonal group \(\mathrm{SO}(n)\) (the target space), where \(n\) is a replica index. This NLSM possesses an \(\mathrm{SO}(n)\times\mathrm{SO}(n)\) symmetry. The different replica limits \(n\to 0\) and \(n\to 1\) correspond to the monitored Gaussian circuits with forced and with Born-rule measurements, respectively. The difference in replica limits is the origin of the different critical exponents observed numerically. It is crucial that in both limits, the number \(n(n-1)/2\) of degrees of freedom (equal to the dimension of the group manifold \(\mathrm{SO}(n)\)) goes to zero so as to yield a constant partition function (and a vanishing conformal central charge) in the limit. Both entanglement transitions are driven by proliferation of \(\mathbb{Z}_{2}\) topological defects. These transitions parallel the two-dimensional disorder-driven metal-insulator transition studied in Ref. [47] which is in a different symmetry class. We stress again that for Born-rule measurements, the transition is _not_ an Anderson localization transition but instead a novel transition unique to the physics of monitored random circuits.
We close by comparing the two monitored Gaussian circuits discussed in the present paper with the previously-studied circuit based on the loop model with crossings [32], which is a highly fine-tuned version of the monitored Gaussian circuit of Majorana fermions. We provide numerical and analytical evidence showing that the entanglement transition in this loop-model-based circuit is in a universality class different from that of both generic entanglement transitions discussed in the present paper. Regarding the numerical evidence, we observe that the numerical values for the correlation length exponent \(\nu\) are significantly different. Regarding the analytical evidence we note that the statistical-mechanics model of the loop-model-based circuit has a different symmetry, which is \(\mathrm{SO}(n)\) in the limit \(n\to 1\), different from that of the statistical-mechanics model of the circuits discussed in the present paper, which is \(\mathrm{SO}(n)\times\mathrm{SO}(n)\) in the limits \(n\to 0\) and \(n\to 1\), for forced and Born-rule measurements, respectively. Given the different symmetries, one does not expect a relationship between the universality classes of the transitions in these two types of Gaussian circuits.
The rest of the paper is organized as follows: In Sec. II,
we introduce the general construction of monitored Gaussian fermionic circuits, explain the two measurement schemes, and how we parameterize our quantum circuit. In Sec. III, we present our numerical results including the phase diagram and several critical exponents for the entanglement phase transition for each of the two measurement schemes. In Sec. IV we introduce the statistical mechanics model which provides a theoretical understanding of our numerical results, and also make comparisons with the loop model with crossings. We finish our paper with several concluding remarks and possible future directions in Sec. V. In four appendices, we further provide details of our numerical simulations (including some additional data) and a discussion on the multifractal scaling of correlation functions which is a key tool we use in our analysis.
## II Monitored Gaussian circuit dynamics of Majorana chain
### General construction
We study the monitored Gaussian circuit dynamics of a chain of Majorana fermion modes. Denote the Majorana mode on the \(i\)th site by \(\hat{\gamma}_{i}\). It turns out that a class of monitored Gaussian circuits can be built from two-site unitary gates of the form \(e^{\mathrm{i}\alpha_{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}}\) (with some purely-imaginary-valued parameter \(\alpha\)) and projective fermion-parity measurements on pairs of neighboring sites. More generally, monitored Gaussian circuits are defined by the property that if the initial state is a Gaussian state of non-interacting fermions, the state remains Gaussian under the circuit evolution for any given set of measurement outcomes, namely the Gaussianity of the state is maintained along each quantum trajectory. Gaussian fermionic states are the most general states that obey Wick's theorem, i.e., all their equal-time correlation functions are fully characterized by their equal-time two-point correlation function [48]. The most general form of monitored Gaussian circuits can be formulated using the language of _generalized measurements_, which we review below. For this paper, we aim to study the universal properties of the entanglement dynamics and entanglement phase transitions in monitored Gaussian-circuit dynamics of a one-dimensional Majorana chain. Such universal behavior is expected to be independent of the specific realizations of the monitored Gaussian circuit. We will see below that formulating the monitored Gaussian circuit using generalized measurement gives us an advantage when we compare the monitored Gaussian circuits respecting the Born-rule and the circuits with forced measurements.
Let us first briefly review the formalism for generalized measurements. A generalized measurement is defined by an ensemble of Kraus operators \(\mathcal{M}=\{K_{m}\}\) where \(m\) labels the possible measurement outcomes and where the ensemble of operators \(K_{m}^{\dagger}K_{m}\) forms a positive operator-valued measure (POVM), i.e. \(\sum_{m}w_{m}K_{m}^{\dagger}K_{m}=\openone\) with a non-negative weight \(w_{m}\in\mathbb{R}_{0+}\) for each \(m\). Under a generalized measurement, each measurement outcome, labeled by \(m\), corresponds to a quantum trajectory, namely the evolution of an incoming state \(|\psi\rangle\) to \(\frac{K_{m}|\psi\rangle}{\|K_{m}|\psi\rangle\|}\) when the \(m\)th measurement outcome occurs. This quantum trajectory occurs with the Born-rule probability \(p_{m}=w_{m}\langle\psi|K_{m}^{\dagger}K_{m}|\psi\rangle\). The POVM condition ensures that the probability is normalized, \(\sum_{m}p_{m}=1\), for any incoming state \(|\psi\rangle\). As an example, for the projective measurement of the fermion parity \(\mathbf{i}\hat{\gamma}_{1}\hat{\gamma}_{2}\) associated with two Majorana modes \(\hat{\gamma}_{1,2}\), the ensemble of Kraus operators is given by \(\{K_{m}\}=\left\{\frac{1+m\,\mathbf{i}\hat{\gamma}_{1}\hat{\gamma}_{2}}{2}\right\}_{m=\pm}\), and \(w_{m=\pm}=1\). Another example of generalized measurements is given by the probabilistic projective measurement: A projective measurement of \(\mathbf{i}\hat{\gamma}_{1}\hat{\gamma}_{2}\) is implemented with classical probability \(p\), and no measurements occur with probability \(1-p\). In this case, the Kraus-operator ensemble is given by \(\{K_{m}\}=\left\{\sqrt{1-p},\sqrt{p}\frac{1+\mathbf{i}\hat{\gamma}_{1}\hat{\gamma}_{2}}{2},\sqrt{p}\frac{1-\mathbf{i}\hat{\gamma}_{1}\hat{\gamma}_{2}}{2}\right\}\). When the number of possible measurement outcomes is finite, one can absorb the weight \(w_{m}\) into the definition of the Kraus operator \(K_{m}\) by re-scaling \(K_{m}\rightarrow\sqrt{w_{m}}K_{m}\). When measurement outcomes form a continuous set, \(w_{m}\) is interpreted as the measure for the integration over this space of outcomes. Conceptually, a generalized measurement \(\mathcal{M}=\{K_{m}\}\) can be viewed as a representation of the combined effect of a sequence of unitary rotations and projective measurements [49]. One expects that the universal behavior of the monitored circuit, especially the universality classes of the measurement-induced entanglement transitions, should be insensitive to whether one uses (probabilistic) projective measurements (together with unitary gates) or more generic generalized measurements in the circuit.
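As a sanity check of the POVM condition, one can represent two Majorana modes on a single qubit via the standard identification \(\hat{\gamma}_{1}\to X\), \(\hat{\gamma}_{2}\to Y\) (so that \(\mathbf{i}\hat{\gamma}_{1}\hat{\gamma}_{2}\to -Z\)) and verify the projective-parity example numerically. The sketch below is our own illustration, not part of the paper's formalism.

```python
import numpy as np

# Two Majorana modes on one qubit: gamma1 -> X, gamma2 -> Y, so the parity
# operator i*gamma1*gamma2 maps to -Z.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
parity = 1j * X @ Y  # equals -Z

# Projective parity measurement: K_m = (1 + m * i gamma1 gamma2) / 2, m = +/-1
kraus = {m: (np.eye(2) + m * parity) / 2 for m in (+1, -1)}

# POVM condition: sum_m w_m K_m^dag K_m = identity (here w_m = 1)
assert np.allclose(sum(K.conj().T @ K for K in kraus.values()), np.eye(2))

# Born-rule probabilities p_m = <psi| K_m^dag K_m |psi> for a sample state
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
probs = {m: np.real(psi.conj() @ K.conj().T @ K @ psi) for m, K in kraus.items()}
print(probs)  # both outcomes occur with probability 1/2 for this state
```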
As it follows from the definition of Gaussian circuits, the measurements are required to preserve the Gaussianity of the state. Therefore, in a system consisting of Majorana fermion modes, the most general form of corresponding Kraus operators is given by \(\exp\left(\sum_{ij}\mathbf{i}c_{ij}\hat{\gamma}_{i}\hat{\gamma}_{j}\right)\) with complex coefficients \(c_{ij}\in\mathbb{C}\). For our study, we consider monitored Gaussian circuits constructed from local gates (including unitary operations and measurements) acting on a one-dimensional Majorana chain. It suffices to focus on the gates that only act on nearest-neighbor pairs of Majorana modes. The corresponding Gaussian circuit has a geometry depicted in Fig. 1 (a). Each nearest-neighbor two-site gate, depicted as a gray or a yellow disk in Fig. 1 (a), is _independently_ drawn from a Kraus-operator ensemble \(\{K(\vec{n})\}\) with
\[K(\vec{n})=(1-n_{1}^{2})^{\frac{1}{4}}e^{-\mathrm{i}\alpha(\vec{n})\hat{\gamma}_{i}\hat{\gamma}_{i+1}} \tag{1}\]
with the (generally complex) coefficient \(\alpha(\vec{n})\) parameterized by a three-component _unit_ vector \(\vec{n}=(n_{1},n_{2},n_{3})\):
\[e^{2\mathrm{Re}(\alpha)}=\left(\frac{1+n_{1}}{1-n_{1}}\right)^{\frac{1}{2}},\qquad e^{2\mathbf{i}\operatorname{Im}(\alpha)}=\frac{n_{2}-\mathbf{i}n_{3}}{(n_{2}^{2}+n_{3}^{2})^{\frac{1}{2}}}. \tag{2}\]
The unit vector \(\vec{n}\) physically labels the measurement outcomes of the generalized measurement defined by the ensemble \(\{K(\vec{n})\}\). The Kraus operator \(K(\vec{n})\) can be decomposed into a unitary operation \(e^{\operatorname{Im}(\alpha(\vec{n}))\,\hat{\gamma}_{i}\hat{\gamma}_{i+1}}\) and a positive-semidefinite Hermitian operation \(e^{-\mathbf{i}\operatorname{Re}(\alpha(\vec{n}))\,\hat{\gamma}_{i}\hat{\gamma}_{i+1}}\). The latter implements a weak measurement of the fermion parity \(\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\), which is a softened version of the projective measurement that yields a projection onto the eigenstates of \(\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\). The sign of \(n_{1}\) determines whether the corresponding quantum trajectory is biased towards the eigenstate of \(\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\) with eigenvalue \(-1\) or \(1\). The magnitude of \(n_{1}\in[-1,1]\) determines the strength of this bias. In the case of \(n_{1}=\pm 1\), the Kraus operator essentially implements the projection onto the eigenstate with \(\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}=\mp 1\). Therefore, the probabilistic projective measurement of \(\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\) discussed in the previous paragraph can be formulated in terms of the Kraus operator parametrization of Eq. (1) upon setting the vector \(\vec{n}\) to three possible values: \(\vec{n}=(\pm 1,0,0)\) and \(\vec{n}=(0,1,0)\). Here, it may seem ad hoc to use the unit vector \(\vec{n}\) to parameterize the Kraus operator \(K(\vec{n})\). We use it because Ref. [30] has shown that the most general Gaussianity-preserving gate admits a parametrization using symmetric spaces. When restricted to gates that act on only two Majorana modes, the corresponding symmetric space is reduced to the unit sphere \(S^{2}\).
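The parametrization of Eqs. (1) and (2) can be checked the same way: in the single-qubit representation used above, \(K(\vec{n})\) can be built explicitly, and one verifies that \(K^{\dagger}(\vec{n})K(\vec{n})=\mathbbm{1}-n_{1}\,\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\), the integrand of the POVM condition in Eq. (3) below. A minimal sketch (our own illustration):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
g1g2 = X @ Y          # gamma_i gamma_{i+1} in the single-qubit representation
parity = 1j * g1g2    # i gamma_i gamma_{i+1}

def kraus(n):
    """K(n) of Eq. (1) for a unit vector n = (n1, n2, n3), using Eq. (2)."""
    n1, n2, n3 = n
    alpha = 0.25 * np.log((1 + n1) / (1 - n1)) + 0.5j * np.angle(n2 - 1j * n3)
    return (1 - n1**2) ** 0.25 * expm(-1j * alpha * g1g2)

theta, phi = 1.1, 0.7  # an arbitrary test point on the unit sphere
n = (np.cos(theta), np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi))
K = kraus(n)
# Integrand of the POVM condition, Eq. (3): K^dag K = 1 - n1 * (i gamma gamma)
assert np.allclose(K.conj().T @ K, np.eye(2) - n[0] * parity)
```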
The Kraus-operator ensemble \(\{K(\vec{n})\}\) can be described by an ensemble \(\mathcal{E}\) of the unit vectors \(\vec{n}\) corresponding to each \(K(\vec{n})\) and the weight \(w(\vec{n})\) that provides a measure on \(\mathcal{E}\). The POVM condition is written as
\[\mathbbm{1}=\int_{\vec{n}\in\mathcal{E}}d\vec{n}\ w(\vec{n})K^{\dagger}(\vec{n})K(\vec{n})=\int_{\vec{n}\in\mathcal{E}}d\vec{n}\ w(\vec{n})\left(\mathbbm{1}-n_{1}\,\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\right) \tag{3}\]
which follows from Eqs. (1) and (2). In principle, one can choose a different ensemble \(\mathcal{E}\) for every gate. That means one can perform a different generalized measurement for every pair of neighboring sites at every time step. In this work, we will consider two different ensembles, one for the gray gates (acting on the pair of neighboring sites \((2i,2i+1)\)) and the other for the yellow gates (acting on the pair of neighboring sites \((2i-1,2i)\)) shown in Fig. 1 (a). Viewing the circuit geometry as a square lattice in spacetime, the gray and yellow gates occupy the two different sublattices \(A\) and \(B\). The details of these ensembles will be given after the current general discussion.
Once the Kraus-operator ensemble is determined for every gate, we obtain an ensemble of Gaussian circuits following the circuit geometry in Fig. 1 (a). Each realization of the Gaussian circuit corresponds to a quantum trajectory of the system labeled by a collection of the outcomes \(\vec{n}\), independently chosen for every generalized measurement. We are interested in the averaged behavior of the monitored Gaussian circuit. That is to say, any physical quantity will be averaged over all circuit realizations (and, equivalently, all quantum trajectories). Interestingly, with the same Kraus-operator ensembles, different types of weighted averages give rise to different versions of the monitored Gaussian circuit. In particular, for the same Kraus-operator ensemble, Born-rule measurements and forced measurements implement two different, commonly discussed statistical weights for the average over quantum trajectories, which we explain in the next subsection.
We stress again that, as established in Ref. [30], there is a general correspondence between fermionic Gaussian circuits and static Hamiltonian systems of non-interacting fermions. Via the corresponding static Hamiltonian system, fermionic Gaussian circuits can be classified according to the AZ ten-fold symmetry classification. The symmetry class turns out to play an essential role in determining the universal behavior of the Gaussian circuit. Generic monitored Gaussian circuits acting on a chain of Majorana fermion modes belong to symmetry class DIII within the AZ symmetry classification [50]. It is important to note that the AZ symmetry class is the same for monitored Gaussian circuits with Born-rule measurements and those with forced measurements, the two types of measurements discussed in detail in the following section. Hence, both types of monitored Gaussian circuits under study in this paper are described by symmetry class DIII.
### Born-rule measurements versus forced measurements
As stated earlier, a generalized measurement is defined by a Kraus operator ensemble \(\mathcal{M}\) with each Kraus operator corresponding to a measurement outcome, or a quantum trajectory at this measurement. A quantum trajectory for the entire monitored quantum circuit is labeled by a specific collection of outcomes, one from each measurement in the circuit. Each quantum trajectory gives rise to a specific realization of the circuit. The universal behavior of the monitored Gaussian circuit should be characterized by the average behavior of all the quantum trajectories. This averaging depends on the statistical weight of each quantum trajectory. When the Kraus operator ensemble of each measurement is fixed, there are two natural and commonly discussed statistical weights. Generalized measurements with their quantum trajectories averaged according to these two types of statistical weights are referred to as the _Born-rule_ measurements and the _forced_ measurements. This work aims to understand the difference between the universal behaviors, the entanglement transitions in particular,
of monitored Gaussian circuits with Born-rule measurements and those with forced measurements.
First, we discuss the statistical weights of quantum trajectories associated with Born-rule measurements. In the context of monitored Gaussian circuits, given the Kraus operator ensemble \(\{K(\vec{n})\}\) of generalized measurements, the standard Born-rule probability (or probability density) for observing the outcome \(\vec{n}\) is given by \(w(\vec{n})\langle\psi|K^{\dagger}(\vec{n})K(\vec{n})|\psi\rangle\) which depends on the (normalized) state \(|\psi\rangle\) of the system prior to the measurement. With Born-rule measurements, the behavior of the monitored Gaussian circuit is given by averaging over all quantum trajectories weighted by their corresponding Born-rule probability/probability density.
Second, in the monitored Gaussian circuit with forced measurements, we "force" each quantum trajectory to appear with equal probability. That is to say, the behavior of the monitored Gaussian circuit with forced measurements is obtained from averaging over all quantum trajectories with equal statistical weight. Even though the natural probability distribution of measurement outcomes in quantum mechanics follows the Born rule, an equal probability distribution of quantum trajectories can be achieved by post-selection and "re-weighting" each quantum trajectory when we study their statistical average.
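In sampling terms, the distinction can be phrased as follows; a schematic sketch in which `born_probs` is a placeholder for the state-dependent probabilities \(w_{m}\langle\psi|K_{m}^{\dagger}K_{m}|\psi\rangle\):

```python
import numpy as np

rng = np.random.default_rng(0)
outcomes = np.arange(8)                 # labels m for a single measurement
born_probs = rng.dirichlet(np.ones(8))  # placeholder for w_m <psi|K_m^dag K_m|psi>

# Born-rule measurement: trajectories are drawn with state-dependent probability.
born_sample = rng.choice(outcomes, size=10_000, p=born_probs)

# Forced measurement: every trajectory enters the ensemble with equal weight,
# regardless of the state (physically realizable only through post-selection).
forced_sample = rng.choice(outcomes, size=10_000)
```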
The correspondence established in Ref. [30] implies that monitored Gaussian circuits with forced measurements are equivalent to the disordered non-interacting Hamiltonian system of fermions in one higher spatial dimension and in the corresponding AZ symmetry class. Therefore, the forced-measurement-induced entanglement transition in monitored Gaussian circuits shares the same universality class as the Anderson localization transition in the corresponding AZ symmetry class. Interestingly, as we will explain, the Born-rule-measurement-induced entanglement transition in monitored Gaussian circuits gives rise to new universality classes beyond Anderson localization transitions.
As we show in the remainder of the paper, in the example of AZ class discussed here, even with the same Kraus operator ensemble in the monitored Gaussian circuit, Born-rule measurements and forced measurements can lead to distinct universal behavior. The specific model this paper focuses on is introduced in Sec. II.3. The numerical evidence for the difference between the universality classes of the Born-rule-measurement-induced and the forced-measurement-induced entanglement transitions is presented in Sec. III. From an analytical perspective, the two types of measurements are associated with different replica limits in a replica-based treatment of the averaging over quantum trajectories, which we will elaborate on in Sec. IV.
### Circuit model
Now we describe the details of the generalized measurements for the monitored Gaussian circuit that we will study numerically. In the circuit geometry shown in Fig. 1 (a), the gray gates (acting on the pairs of neighboring sites \((2i,2i+1)\)) and the yellow gates (acting on the pairs of neighboring sites \((2i-1,2i)\)) are drawn from two different ensembles associated with two types of parametrizations of their respective unit vectors \(\vec{n}_{A}\) and \(\vec{n}_{B}\) in terms of polar angles:
\[\vec{n}_{A} =(\cos\theta,\sin\theta\cos\varphi,\sin\theta\sin\varphi)\] \[\vec{n}_{B} =(\sin\theta\sin\varphi,\cos\theta,\sin\theta\cos\varphi). \tag{4}\]
Figure 1: The spacetime geometry of the monitored Gaussian circuit acting on a Majorana chain (along the \(x\) axis) is depicted in (a). The gray and yellow gates, representing two types of generalized measurements with different Kraus-operator ensembles, occupy the \(A\) and \(B\) sublattices of the circuit geometry, respectively. An ingredient of the definition of these Kraus-operator ensembles is the variable \(s=\cos\theta\), which follows the distribution \(p(s)\) introduced in Eq. (5) and shown in (b). The visualization of the ensembles of the unit vectors \(\vec{n}_{A}\) and \(\vec{n}_{B}\) is provided in (c).
Here, \(\theta\in[0,\pi]\) and \(\varphi\in[0,2\pi)\) are random variables chosen independently for each gate (including both the gray and yellow gates). The difference in the parameterization of \(\vec{n}_{A}\) and \(\vec{n}_{B}\) leads to a staggered pattern in the space-time geometry of the circuit. The ensemble for \(\varphi\) is taken to be the interval \([0,2\pi)\) with a uniform distribution. Let us define the random variable \(s=\cos\theta\) where \(s\) is drawn from the ensemble given by the union of two intervals \([-b_{1},-a_{1}]\cup[a_{2},b_{2}]\) with a uniform distribution \(p(s)\) as shown in Fig. 1 (b):
\[p(s)=\left\{\begin{array}{cc}\frac{1}{(b_{2}-a_{2})+(b_{1}-a_{1})}&s\in[-b_{1},-a_{1}]\cup[a_{2},b_{2}]\\ 0&\text{other }s.\end{array}\right. \tag{5}\]
The parameters \(a_{1,2}\) and \(b_{1,2}\) satisfying \(0\leq a_{1}<b_{1}\leq 1\) and \(0\leq a_{2}<b_{2}\leq 1\) are the tuning parameters of the monitored Gaussian circuit model we focus on, corresponding to a variable staggering in the circuit. Phase diagrams of the circuit as a function of these parameters will be obtained in the following sections. Note that the value of \(\sin\theta\) is given by \(\sin\theta=\sqrt{1-s^{2}}\) since \(\theta\in[0,\pi]\).
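As a concrete illustration (not from the paper), \(s\) can be drawn from Eq. (5) by first picking one of the two support intervals with probability proportional to its length:

```python
import numpy as np

def sample_s(a1, b1, a2, b2, rng):
    """Draw s uniformly from [-b1, -a1] U [a2, b2], i.e. from p(s) of Eq. (5)."""
    w_left, w_right = b1 - a1, b2 - a2
    if rng.random() < w_left / (w_left + w_right):
        return rng.uniform(-b1, -a1)
    return rng.uniform(a2, b2)

rng = np.random.default_rng(1)
# Parameter values of the kind scanned later in Sec. III (a1=0.5, b1=b2=1)
draws = [sample_s(0.5, 1.0, 0.24, 1.0, rng) for _ in range(5)]
```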
_Born-rule measurements_: To ensure the POVM condition of Eq. (3) for each generalized measurement, we need to introduce the weights \(w_{A/B}(s)=p(s)\tilde{w}_{A/B}(s)\) for the gray and yellow gates, respectively:
\[\tilde{w}_{A}(s)=\left\{\begin{array}{cc}\frac{(b_{2}+a_{2})}{(b_{1}-a_{1})}\frac{(b_{1}-a_{1}+b_{2}-a_{2})}{(a_{1}+b_{1}+a_{2}+b_{2})}&-b_{1}\leq s\leq-a_{1},\\ \frac{(b_{1}+a_{1})}{(b_{2}-a_{2})}\frac{(b_{1}-a_{1}+b_{2}-a_{2})}{(a_{1}+b_{1}+a_{2}+b_{2})}&a_{2}\leq s\leq b_{2},\end{array}\right.\qquad\tilde{w}_{B}(s)=1. \tag{6}\]
In terms of the new parameters \(\varphi\) and \(s\), the weighted average over the quantum trajectories in the Gaussian circuit with Born-rule measurements is formally carried out by the integration
\[\int d\vec{n}_{Y}\ w(\vec{n}_{Y})\langle K^{\dagger}(\vec{n}_{Y})K(\vec{n}_{Y})\rangle=\int_{0}^{2\pi}\frac{d\varphi}{2\pi}\int_{-1}^{1}ds\ p(s)\tilde{w}_{Y}(s)\langle K^{\dagger}(\vec{n}_{Y})K(\vec{n}_{Y})\rangle \tag{7}\]
for every measurement with \(Y=A\) or \(B\) depending on the sublattice the measurement belongs to in the circuit geometry. Here, the factor \(\langle K^{\dagger}(\vec{n}_{Y})K(\vec{n}_{Y})\rangle\) is evaluated on the (normalized) state of the system undergoing the corresponding measurements. Recall that the difference between the respective Kraus-operator ensembles for the gray and the yellow gates, located on the \(A\) and the \(B\) sublattice of the square lattice, respectively, comes from the difference between how \(\vec{n}_{A}\) and \(\vec{n}_{B}\) are parameterized by \(s=\cos\theta\) and \(\varphi\) via Eqs. (4). Thus, tuning the parameters \(a_{1,2}\) and \(b_{1,2}\) introduced above amounts to a variable staggering of the circuit.
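The weights in Eq. (6) are fixed by the two moment conditions implied by Eq. (3), \(\int ds\,p(s)\tilde{w}_{A}(s)=1\) and \(\int ds\,p(s)\tilde{w}_{A}(s)\,s=0\) (for the \(B\) sublattice, the uniform \(\varphi\)-average already removes the \(n_{1}\)-dependence). A minimal numerical sketch of this check, with arbitrary test parameters:

```python
import numpy as np

a1, b1, a2, b2 = 0.5, 1.0, 0.3, 1.0      # arbitrary test parameters
N = (b1 - a1) + (b2 - a2)                # normalization of p(s), Eq. (5)
S = a1 + b1 + a2 + b2

# Reweighting factors of Eq. (6), constant on each of the two support intervals
c_minus = (b2 + a2) / (b1 - a1) * N / S  # on [-b1, -a1]
c_plus = (b1 + a1) / (b2 - a2) * N / S   # on [a2, b2]

# The two POVM moments (exact integrals, since the integrands are polynomial)
norm = (c_minus * (b1 - a1) + c_plus * (b2 - a2)) / N
first_moment = (c_minus * (a1**2 - b1**2) / 2 + c_plus * (b2**2 - a2**2) / 2) / N
assert np.isclose(norm, 1.0) and np.isclose(first_moment, 0.0)
```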
_Forced measurements:_ In contrast, for the forced-measurement counterpart of the monitored Gaussian circuit, each quantum trajectory is weighted equally, independently of the state of the system. Therefore, averaging over all quantum trajectories amounts to performing the integral
\[\int_{0}^{2\pi}\frac{d\varphi}{2\pi}\int_{-1}^{1}ds\ p(s) \tag{8}\]
for each measurement for both sublattices in the circuit geometry.
Conceptually, one of the most fundamental differences between Born-rule measurements and forced measurements is that the former are associated with a probability distribution of quantum trajectories that depends on the state of the system undergoing the measurement, while the latter enforce a _pre-determined_ uniform distribution across all trajectories. As we explain in Sec. IV, this difference leads to distinct replica limits in the statistical-mechanics description of the circuit. Deforming the detailed form of \(p(s)\) is expected not to affect the universal behavior of the circuit.
Here, we would like to comment on the advantage of the generalized measurements defined in this subsection through Eqs. (1) and (2). With forced measurements, a general implicit assumption is that all the quantum trajectories are associated with non-vanishing wavefunctions of the system. A subtlety with projective measurements is that, on rare occasions, certain quantum trajectories can have a vanishing wave function, and thus these quantum trajectories appear with zero statistical weight in the forced-measurement ensemble, as opposed to the pre-determined weight which defines the forced-measurement protocol. However, these rare events are not expected to affect the universal averaged behavior of the circuit with forced measurements. With generalized measurements of the kind introduced in this section, we can avoid this subtlety completely: unlike the projection operators appearing in the case of projective measurements, none of the Kraus operators introduced above annihilates any possible state of the system. Again, we stress that circuits with generalized (Born-rule or forced) measurements are expected to produce the same universal behavior as those with corresponding projective measurements.
## III Numerical results
### Born-rule measurements
We numerically simulate the monitored Gaussian circuit with the geometry shown in Fig. 1 (a). The Kraus-operator ensembles for the gray and yellow gates were given in Sec. II.3. The difference between the gray and yellow gates leads to a staggered pattern in the space-time geometry of the circuits. In this subsection, we focus on this monitored Gaussian circuit with Born-rule measurements. That means we average over all the quantum
trajectories of this monitored Gaussian circuit weighted by the Born-rule probability. We numerically simulate this monitored Gaussian circuit using the covariance-matrix formulation, the technical details of which are reviewed in App. A. The Born-rule probability is implemented using an importance sampling scheme for the Kraus operators as explained in App. B.
Upon tuning the parameters \(a_{1,2}\) and \(b_{1,2}\), this monitored Gaussian circuit exhibits two different phases: (1) the area-law phase and (2) the critical phase. In the area-law phase, the averaged von Neumann EE for a subsystem asymptotes to \(O(1)\) values, namely follows the area law, in the long-time limit. In the critical phase, the averaged EE for a subsystem which is an interval is numerically found to saturate to values proportional to the logarithm of the length of the interval in the long time limit. In the remainder of the paper, EE always refers to the von Neumann entanglement entropy. Upon choosing fixed values \(a_{1}=0.5\) and \(b_{1}=1\), we obtained the phase diagram of this monitored Gaussian circuit with Born-rule measurements as a function of \(a_{2}\) and \(b_{2}\) which is displayed in Fig. 3(a). This phase diagram is obtained by numerically simulating the monitored Gaussian circuit on a Majorana chain with periodic boundary conditions. The boundary between the area-law phase and the critical phase in the phase diagram is identified using the crossing behavior of the two-interval mutual information for different total system sizes \(L=64,128,256,512\), as we explain below.
We scan different vertical cuts in the phase diagram, each with a fixed value of \(b_{2}\), to determine the critical value of \(a_{2}\) and extract critical exponents. The phase transition at different values of \(b_{2}\) is expected to share the same universality class, which is consistent with our numerical results. Therefore, in the following, we mostly focus on the vertical cut with \(b_{2}=1\) (red line in Fig. 3 (a)) as an example. With \(b_{2}=1\) fixed, to accurately determine the value of \(a_{2}\) at the entanglement phase transition, we numerically calculate the averaged mutual information \(I\) between the two intervals \([x_{1},x_{2}]\) and \([x_{3},x_{4}]\) (Fig. 2). When \(a_{2}\) is at the transition point, conformal symmetry ensures that the averaged mutual information \(I\) only depends on the locations and the lengths of the intervals via the cross-ratio \(\eta=\frac{R(x_{2}-x_{1})R(x_{4}-x_{3})}{R(x_{3}-x_{1})R(x_{4}-x_{2})}\)[9; 30; 51], where \(R(x):=\frac{L}{\pi}\sin\left(\frac{\pi}{L}|x|\right)\) is the chord distance. To locate the phase transition point, we choose \(x_{2}=x_{1}+L/8\), \(x_{3}=x_{1}+L/2\), and \(x_{4}=x_{1}+5L/8\), which yields \(\eta=(\sin(\pi/8)/\sin(\pi/2))^{2}=\sin^{2}(\pi/8)=\frac{2-\sqrt{2}}{4}\). The averaged two-interval mutual information \(I(a_{2})\) with \(\eta=\sin^{2}(\pi/8)\) is numerically calculated as a function of \(a_{2}\) for various total system sizes \(L=64,128,256,512\) and plotted in Fig. 3 (b). The crossing of the averaged mutual information \(I(a_{2})\) for different \(L\) indicates that the phase transition occurs at \(a_{2c}=0.24\). Technically, we average the mutual information \(I(a_{2})\) over quantum trajectories and all values of \(x_{1}\) (while the spacings between \(x_{1,2,3,4}\) are fixed). The averaging over \(x_{1}\) is merely a technical procedure to speed up the convergence. Even if we do not actively average over \(x_{1}\), the mutual information \(I\) after averaging over different quantum trajectories only depends on the sizes of and the spacing between the two intervals. Similar averaged-mutual-information calculations at \(b_{2}=1\) for different choices of \(\eta\) produce the same value of \(a_{2c}=0.24\).
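As a quick consistency check of these conventions, the following minimal sketch (ours, with hypothetical function names, not part of the original simulation code) verifies that the chosen interval geometry yields \(\eta=\sin^{2}(\pi/8)\) independently of \(L\):

```python
import numpy as np

def chord(L, x):
    """Chord distance R(x) = (L / pi) * sin(pi * |x| / L) on a periodic chain."""
    return (L / np.pi) * np.sin(np.pi * np.abs(x) / L)

def cross_ratio(L, x1, x2, x3, x4):
    """Cross-ratio eta of the two intervals [x1, x2] and [x3, x4]."""
    return (chord(L, x2 - x1) * chord(L, x4 - x3)) / \
           (chord(L, x3 - x1) * chord(L, x4 - x2))

for L in [64, 128, 256, 512]:
    x1 = 0
    eta = cross_ratio(L, x1, x1 + L / 8, x1 + L / 2, x1 + 5 * L / 8)
    assert np.isclose(eta, np.sin(np.pi / 8) ** 2)  # = (2 - sqrt(2)) / 4
```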
As is shown in Fig. 3 (c), finite-size scaling data collapse can be achieved by plotting \(|I(a_{2})-I(a_{2c})|\) as a function of \((a_{2}-a_{2c})L^{1/\nu}\) with an exponent \(\nu=2.1\pm 0.1\) (reported in the summary Table 1). To extract \(\nu\) and estimate its uncertainty from the scaling data collapse, we apply the algorithm introduced in Ref. [2]. We give a brief review of this algorithm in App. C.
Here, we comment on the conformal symmetry of the system. Our numerical results, in particular the crossing behavior of the mutual information shown in Fig. 3 (b), are fully consistent with conformal symmetry at the transition between the area-law phase and the critical phase. Away from the transition, i.e. for \(a_{2}\neq a_{2c}\), the average mutual information is expected to depend not only on the cross-ratio \(\eta\), which is fixed to \(\eta=\sin^{2}(\pi/8)\), but also on the total system size \(L\). This is a reflection of the lack of conformal symmetry away from the transition. This size dependence is borne out in the data of Fig. 3 (b). More specifically, in the area-law phase, where \(a_{2}>a_{2c}\), the mutual information decreases with increasing system size \(L\), consistent with a crossover to the expected vanishing mutual information in the area-law phase. On the other hand, in the critical phase, where \(a_{2}<a_{2c}\), the mutual information (at fixed cross-ratio \(\eta=\sin^{2}(\pi/8)\)) is seen to increase with system size \(L\). This \(L\)-dependence is the expected behavior in the crossover regime between the transition and the renormalization group (RG) fixed point characterizing the critical phase; the latter is also known to respect conformal symmetry [30]. The increase of the mutual information (at fixed cross-ratio \(\eta=\sin^{2}(\pi/8)\)) with system size \(L\) is indicative of a larger value of the mutual information at the corresponding fixed \(\eta\) in the critical phase. (We note that in the analogous crossover for forced measurements, Fig. 5 discussed below, the mutual information at the corresponding infrared fixed point is in fact known from Ref. [30] to be significantly larger than that at the transition, consistent with the increase as a function of \(L\).) We stress that the independence of the mutual information from the system size \(L\) is a property of conformal symmetry, present only at an RG fixed point. In the crossover regime describing the critical phase (with \(a_{2}<a_{2c}\)), this independence is only present in the two asymptotic regimes of system size \(L\ll\xi\) and \(L\gg\xi\), where \(\xi\) is the crossover length scale set by the distance \(a_{2c}-a_{2}\) from the transition, diverging as \(a_{2c}-a_{2}\) tends to zero. The former limit corresponds to the transition, and the latter to the RG fixed point characterizing the universality class of the critical phase.

Figure 2: On a length-\(L\) Majorana chain with periodic boundary condition, we study the averaged mutual information between the two intervals \([x_{1},x_{2}]\) and \([x_{3},x_{4}]\).
At the phase transition (\(b_{2}=1\) and \(a_{2}=0.24\)), following the arguments in Ref. [7], conformal symmetry leads to the logarithmic scaling of the half-system EE \(S(L/2)\sim\zeta_{1}\log L\) which is confirmed by our numerical results shown in Fig. 4. The prefactor \(\zeta_{1}\), which is associated with the universality class of this transition, is found to be \(\zeta_{1}=0.39\pm 0.02\).
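For illustration, the prefactor \(\zeta_{1}\) can be extracted by a least-squares fit of \(S(L/2)\) against \(\log L\). The sketch below assumes, hypothetically, that `S_half` holds the trajectory-averaged half-system entropies at the critical point, one entry per size in `Ls`; the function name is ours.

```python
import numpy as np

def fit_zeta1(Ls, S_half):
    """Fit S(L/2) = zeta_1 * log(L) + const and return the slope zeta_1.
    `S_half[k]` is the trajectory-averaged half-system EE for size Ls[k]."""
    slope, _ = np.polyfit(np.log(np.asarray(Ls, float)),
                          np.asarray(S_half, float), 1)
    return slope
```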
Next, we describe the tool that will allow us to establish very strong evidence that the entanglement phase transition in the monitored Gaussian circuit with Born-rule measurements and the forced-measurement counterpart belong to different universality classes. We consider the (squared) Majorana fermion correlation function
\[G(p,p+r;\mathcal{C})=\left(\langle\hat{\gamma}_{p}\hat{\gamma}_{p+r}\rangle_{\mathcal{C}}\right)^{2} \tag{9}\]
evaluated in the state occurring at the final time slice in the long-time limit. \(\mathcal{C}\) denotes the realization of the Gaussian circuit that corresponds to a quantum trajectory.
Figure 4: Logarithmic scaling of the half-system EE \(S(L/2)\) at the entanglement phase transitions presented in Figs. 3 and 5. The total system has a length \(L\) and a periodic boundary condition.
Figure 3: We numerically simulate the monitored Gaussian circuit with Born-rule measurements whose Kraus-operator ensemble is given in Sec. II.3. We fix the parameters \(a_{1}=0.5\) and \(b_{1}=1\) for the numerical study. The phase diagram as a function of \(a_{2}\) and \(b_{2}\) is shown in (a). Upon fixing \(b_{2}=1\), the averaged mutual information \(I\) between the two intervals \([x_{1},x_{1}+L/8]\) and \([x_{1}+L/2,x_{1}+5L/8]\) as a function of \(a_{2}\) for different total system sizes \(L\) is calculated numerically and presented in (b). From the crossing of these functions, we identify the value \(a_{2c}=0.24\) where the entanglement phase transition between the critical phase and area-law phase occurs. (c) shows the data collapse when \(|I(a_{2})-I(a_{2c})|\) is plotted as a function of \((a_{2}-a_{2c})L^{1/\nu}\) with \(\nu=2.1\pm 0.1\). (d), (e), and (f) show the dependence of \(\overline{G(r)}\), \(\overline{\log G(r)}\), and the 2nd cumulant of \(\log G(r)\) on the chord distance \(R(r)=\frac{L}{\pi}\sin\frac{\pi|r|}{L}\).
Following the correspondence established in Ref. [30], the averaging over the quantum trajectories of the Gaussian circuit can be interpreted as the averaging over random disorder in the corresponding static Hamiltonian system. The _logarithm_ of such correlation functions in random systems is known to be self-averaging [28; 46]. For this reason, we specifically consider the _typical_ critical exponent characterizing the scaling of the average of \(\log G\), as well as the universal fluctuations of \(\log G\) about its average. As briefly reviewed in Appendix D, at the phase transition all moments of the correlator in Eq. (9) turn out to scale with exponents that are, in general, algebraically independent. As a consequence, all _cumulants_ of the random variable \(\log G(p,p+r;\mathcal{C})\) grow proportionally to the logarithm of the chord distance \(R(r)=\frac{L}{\pi}\sin\left(\frac{\pi}{L}|r|\right)\), with the coefficients of proportionality all being (in general different) universal numbers (compare Eqs. (10) and (11)). Since the averaged behaviors of \(G(p,p+r;\mathcal{C})\) and the associated cumulants are independent of \(p\), we will use \(G(r)\) as a shorthand notation for \(G(p,p+r;\mathcal{C})\).
The first _cumulant_ of \(\log G\) at the entanglement phase transition should scale as
\[\overline{[\log G(r)]}\sim-2x^{(1)}\log R(r) \tag{10}\]
with the typical exponent \(X_{typ}:=x^{(1)}\). In this subsection, the overbar \(\overline{\cdots}\) represents the weighted average over all quantum trajectories with respect to the Born-rule probability. Fig. 3 (e) depicts this 1st cumulant versus \(\log R(r)\), from which we can extract the exponent \(X_{typ}=x^{(1)}=2.66\pm 0.05\). At the phase transition, the second cumulant \(\overline{[\log G]^{2}}-\overline{[\log G]}^{2}\) should scale as
\[\overline{[\log G(r)]^{2}}-\overline{[\log G(r)]}^{2}\sim-2x^{(2)}\log R(r) \tag{11}\]
with the exponent \(x^{(2)}\). From Fig. 3 (f), we can extract the exponent \(-x^{(2)}=1.80\pm 0.04\) which describes the universal scaling of the statistical fluctuations of \(\log G\) about its mean. We contrast this with the first moment average \(\overline{G}\) of \(G\) (as opposed to of \(\log G\)) which exhibits a power-law decay \(R(r)^{-2X_{1}}\) (see Fig. 3 (d)) at the transition with an exponent that we find to be \(X_{1}=1.00\pm 0.02\). We want to emphasize that at the entanglement phase transition, the exponent \(X_{1}\) for the power-law decay of the _average_ correlation function \(\overline{G}\) is thus found to be significantly different from the exponent \(X_{typ}\) describing the decay of the _typical_ correlation function, \(X_{1}=1.00\pm 0.02<X_{typ}=2.66\pm 0.05\). This is a reflection of the rich scaling behavior of the correlation function that is usually referred to as _multifractal_ ([28; 46], and Appendix D). All the exponents are summarized in Table 1.
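To make the extraction procedure concrete, a minimal sketch of the cumulant fits behind Eqs. (10) and (11) could look as follows; the function name and the data layout (we assume, hypothetically, that `logG_samples[k]` collects \(\log G(r_k)\) over sampled trajectories) are ours, not the original simulation code.

```python
import numpy as np

def cumulant_exponents(r_vals, logG_samples, L):
    """Fit the 1st and 2nd cumulants of log G(r) against log R(r), following
    Eqs. (10) and (11): the fitted slopes equal -2 x^(1) and -2 x^(2)."""
    logR = np.log((L / np.pi) * np.sin(np.pi * np.abs(np.asarray(r_vals)) / L))
    c1 = np.array([np.mean(s) for s in logG_samples])   # 1st cumulant of log G
    c2 = np.array([np.var(s) for s in logG_samples])    # 2nd cumulant of log G
    x1 = -np.polyfit(logR, c1, 1)[0] / 2.0              # X_typ = x^(1)
    x2 = -np.polyfit(logR, c2, 1)[0] / 2.0              # x^(2) (negative at these transitions)
    return x1, x2
```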
We conclude this section by commenting on the critical phase. First, as already stated in the first paragraph of Sec. III.1, the averaged half-system EE \(S(L/2)\) in the critical phase is numerically found to be proportional to the logarithm of subsystem size, a reflection of criticality (in the sense that the phase is governed by an RG fixed point with conformal symmetry). Second, we note that in the critical phase the average and typical values of the correlation functions \(G(r)\) are expected to scale with the _same_ power \(\propto R(r)^{-2X}\) with \(X=1\) (up to logarithmic corrections to scaling). This will follow from the discussion in Sec. IV.1. This "self-averaging" property of \(G(r)\) in the critical phase is in sharp contrast with the rich scaling properties of the same correlation function at the transition, where the average and typical correlation functions scale with vastly different exponents (as discussed in the paragraph above, and summarized in Table 1). Analogous rich scaling behavior of the correlation function at the entanglement transition also occurs for the monitored Gaussian circuits with forced measurements, as discussed in the following section. As pointed out in Ref. [30], \(G(r)\) in the critical phase with forced measurement is also self-averaging, like the case with Born-rule measurements.
### Forced measurements
Next, we numerically simulate the monitored Gaussian circuit (with the geometry as in Fig. 1 (a)) using the _forced-measurement_ statistical weights for each quantum trajectory discussed at the end of Sec. II.3 and in Eq. (8). Similar to the case with Born-rule measurements, we simulate this monitored Gaussian circuit using the covariance matrix formulation, whose technical details are reviewed in App. A. For each gate in the circuit, the measurement outcome \(\vec{n}_{A/B}\) (parameterized by \(s\) and \(\varphi\)) is randomly sampled following the statistical weight specified in Eq. (8).
Upon tuning the parameters \(a_{1,2}\) and \(b_{1,2}\), which, as before, change the staggered pattern in the space-time of the circuit between gray and yellow gates in Fig. 1 (a), the circuit again exhibits an area-law and a critical phase. Choosing fixed values \(b_{1}=1\) and \(a_{1}=0.5\) (the same as in Sec. III.1), the phase diagram as a function of \(a_{2}\) and \(b_{2}\) is displayed in Fig. 5 (a). Similar to the Born-rule case, we use the crossing of the averaged mutual information to determine the boundary between the area-law phase and the critical phase. Notice that switching from Born-rule measurements to forced measurements leads to a significant change in the phase boundary. More importantly, we show below that the universality class of the forced-measurement-induced entanglement phase transition differs from that of the Born-rule counterpart.
We scan different horizontal cuts in the phase diagram, each with a fixed value of \(a_{2}\), to determine the critical value of \(b_{2}\) and extract critical exponents. The phase transition for different values of \(a_{2}\) belongs to the same universality class. In the following, we mostly focus on the horizontal cut with \(a_{2}=0\) (red line in Fig. 5 (a)) to investigate the critical behavior at the phase transition. As before, we numerically compute the averaged mutual information \(I\) between two intervals \([x_{1},x_{2}=x_{1}+L/8]\) and \([x_{3}=x_{1}+L/2,x_{4}=x_{1}+5L/8]\) to accurately determine,
this time, the value of \(b_{2}\) at the phase transition. Again, the corresponding cross-ratio is fixed at \(\eta=\sin^{2}(\pi/8)\). The averaged mutual information \(I(b_{2})\) is numerically calculated as a function of \(b_{2}\) for various total system sizes \(L=64,128,256,512,1024\) and plotted in Fig. 5 (b). Using the crossing of the mutual information for different system sizes \(L\), we conclude that the entanglement phase transition occurs at \(b_{2,c}=0.18\).
As shown in Fig. 5 (c), finite-size scaling data collapse can be achieved by plotting \(|I(b_{2})-I(b_{2c})|\) as a function of \((b_{2}-b_{2c})L^{1/\nu}\) with an exponent \(\nu=1.9\pm 0.1\) (reported in the summary Table 1). The method to extract \(\nu\) and estimate its uncertainty is the same as in the case with Born-rule measurements.
It is worth noting that, upon applying the correspondence established in Ref. [30], the forced-measurement-induced entanglement transition under discussion here is equivalent to the thermal-metal-insulator transition in a two-dimensional disordered non-interacting fermionic Hamiltonian system in symmetry class DIII [52]. The latter has been numerically studied using a Chalker-Coddington-model-based approach in Ref. [53]. In that study, the critical exponent \(\nu\) for the thermal metal-insulator transition was found to be \(\nu\approx 2\), which is consistent with our result. Ref. [30] also studied this transition in the context of Gaussian circuits but did not include a detailed analysis of the exponent \(\nu\) and the other exponents we introduce in the rest of this section.
At the phase transition (\(a_{2}=0\) and \(b_{2}=0.18\)), conformal symmetry leads to the logarithmic scaling of the half-system EE \(S(L/2)\sim\zeta_{1}\log L\), which is again confirmed by our numerical results shown in Fig. 4. The prefactor \(\zeta_{1}\), which is associated with the universality class of this transition, is found to be \(\zeta_{1}=0.30\pm 0.04\).
Next, we consider again the correlation function \(G\) defined in Eq. (9). We stress again that, in this subsection, we calculate the averaged behavior \(\overline{G}\), \(\overline{\log G}\) and so on using the forced-measurement statistical weight, which assigns equal weights to all quantum trajectories as discussed at the end of Sec. II.3. At the phase transition, the first _cumulant_ of \(\log G\) scales as in Eq. (10), which agrees with our numerical result plotted in Fig. 5 (e). The extracted typical exponent is \(X_{typ}=x^{(1)}=3.53\pm 0.04\). The second _cumulant_ of \(\log G\) should scale as in Eq. (11) at the transition. From Fig. 5 (f) we extract the value of the exponent \(-x^{(2)}=5.7\pm 0.2\), describing the universal scaling of the statistical fluctuations of \(\log G\) about its mean. We contrast this again with the first moment average \(\overline{G}\) of \(G\) (as opposed to of \(\log G\)) which exhibits a power-law decay \(R(r)^{-2X_{1}}\) (see Fig. 5 (d)) at the transition with an exponent that we find to be \(X_{1}=1.02\pm 0.03\). We want to emphasize again that, at the entanglement transition, the exponent \(X_{1}\) for the power-law decay of the _average_ correlation function \(\overline{G}\) is found to be significantly different from the exponent \(X_{typ}\) describing the decay of the _typical_ correlation function, \(X_{1}=1.02\pm 0.03<X_{typ}=3.53\pm 0.04\). This is analogous to the case of Born-rule measurements (while the value of \(X_{typ}\) is very different), and is again a reflection of the rich scaling behavior of the correlation function that is usually referred to as _multifractal_ (see Refs. [28] and [46] and App. D).
We conclude this section by commenting on the critical phase. First, as established numerically in Ref. [30], the averaged EE \(S(L/2)\) in the critical phase has a logarithmic scaling as a function of \(L\), i.e. \(S(L/2)\sim\log L\), reflecting the conformal symmetry of the critical phase. Second, in that same reference, the average and typical values of the correlation functions \(G(r)\) were found to scale with the _same_ power \(\propto R(r)^{-2X}\) with \(X=1\) (up to logarithmic corrections to scaling). This also follows from the theoretical understanding of the critical phase discussed in Ref. [30] (see also Sec. IV.1). This "self-averaging" property of \(G(r)\) in the critical phase is in sharp contrast with the rich scaling properties of the same correlation function at the entanglement transition, where the average and typical correlation functions scale with vastly different exponents (as discussed in the paragraph above, and summarized in Table 1). As discussed in the preceding section, analogous different scaling behavior of the correlation function at the transition and in the critical phase also occurs for monitored Gaussian circuits with Born-rule measurements.
All the exponents are summarized in Table 1.
### Comparison between Born-rule and forced measurements
Amongst the numerical results presented in the two preceding subsections Sec. III.1 and Sec. III.2 and summarized in Table 1, the typical exponent \(X_{typ}=x^{(1)}\) and the exponent \(x^{(2)}\), which are associated with the 1st and the 2nd cumulants of \(\log G\), are widely different (beyond error bars) between the phase transitions in the Gaussian circuits with Born-rule and forced measurements. Such a difference offers strong evidence that the phase transitions with Born-rule and with forced measurements belong to different universality classes.
The coefficient \(\zeta_{1}\) of the half-system EE as a function of \(\log L\) also differs between the Born-rule-measurement-induced and forced-measurement-induced transitions: for Born-rule measurements we find \(\zeta_{1}=0.39\pm 0.02\), whereas for forced measurements we find \(\zeta_{1}=0.30\pm 0.04\). We note that the correlation length exponent \(\nu\) appears to differ only very little between the two transitions.
Besides the very strong numerical evidence for the difference between these two universality classes, there are theoretical arguments that go hand in hand with these numerical observations. Owing to the correspondence established in Ref. [30], we know the field theory (or the "statistical mechanics model") underlying both transitions, which follows, as already mentioned above, solely from the AZ symmetry class associated with the circuit. This will be discussed in the following section.
## IV Theoretical description
### Field theory and statistical mechanics model
In this section, we provide a theoretical understanding of the monitored Gaussian circuits whose numerical results were presented in the previous section, in terms of the underlying field theories (or, equivalently, the "statistical mechanics models") describing the two entanglement transitions in monitored Gaussian circuits with Born-rule and forced measurements, as well as their adjacent phases. We stress again that the field theory/statistical-mechanics model is directly determined by the AZ symmetry class of the monitored Gaussian circuit. As mentioned, a generic monitored Gaussian circuit preserving only fermion parity belongs to symmetry class DIII.
_Forced measurement_ - It is simplest to start with the forced measurement case. Let us first recall that, in general, a circuit with forced measurements is equivalent to a random tensor network (RTN) [30, 21]. In the language of the corresponding disordered static Hamiltonian system whose spatial coordinates describe the space-time of the circuit, the staggered pattern of the Gaussian circuit, associated with the difference between the \(A\) and the \(B\) sublattices (corresponding to the yellow and the gray gates in Fig. 1 (a)), will favor either the topologically trivial or the topologically non-trivial superconductor, both thermal insulators in symmetry class DIII in two spatial dimensions, depending on the "direction" of staggering (see Ref. [53]).
Table 1: Critical exponents at the phase transition between the critical phase and the area-law phase of the monitored Gaussian circuit with Born-rule and forced measurements. Here, \(X_{typ}:=x^{(1)}\) is the typical exponent.

| Exponent | Born-rule measurement | Forced measurement | Quantity used to extract the exponent |
| --- | --- | --- | --- |
| \(\nu\) | \(2.1\pm 0.1\) | \(1.9\pm 0.1\) | Mutual information, \(I\) |
| \(X_{1}\) | \(1.00\pm 0.02\) | \(1.02\pm 0.03\) | Majorana fermion correlation function, \(\overline{G(r)}\) |
| \(x^{(1)}\) | \(2.66\pm 0.05\) | \(3.53\pm 0.04\) | \(\overline{\log G}\) |
| \(-x^{(2)}\) | \(1.80\pm 0.04\) | \(5.7\pm 0.2\) | Second cumulant of \(\log G\) |
| \(\zeta_{1}\) | \(0.39\pm 0.02\) | \(0.30\pm 0.04\) | Coefficient of entanglement entropy, \(S(L/2)\) |
Figure 5: We numerically simulate the monitored Gaussian circuit with forced measurements whose Kraus-operator ensemble is given in Sec. II.3. We again fix the parameters \(a_{1}=0.5\) and \(b_{1}=1\) for the numerical study. The phase diagram as a function of \(a_{2}\) and \(b_{2}\) is shown in (a). Upon fixing \(a_{2}=0\), the averaged mutual information \(I\) between the two intervals \([x_{1},x_{1}+L/8]\) and \([x_{1}+L/2,x_{1}+5L/8]\) as a function of \(b_{2}\) for different total system sizes \(L\) is calculated numerically and presented in (b). From the crossing point of these functions, we identify the value \(b_{2c}=0.18\) where the phase transition between the critical phase and area-law phase occurs. (c) shows the data collapse when \(|I(b_{2})-I(b_{2c})|\) is plotted as a function of \((b_{2}-b_{2c})L^{1/\nu}\) with \(\nu=1.9\pm 0.1\). (d), (e), and (f) show the dependence of \(\overline{G(r)}\), \(\overline{\log G(r)}\), and the 2nd cumulant of \(\log G(r)\) on the chord distance \(R(r)=\frac{L}{\pi}\sin\frac{\pi|r|}{L}\).
Both the topologically trivial and the topologically non-trivial phases correspond to an area-law phase of the monitored Gaussian circuit. There is an intervening critical phase that appears for intermediate staggering [30; 53]. This critical phase of the circuit acting on the \(d=1\) dimensional Majorana chain is known to be described [30; 43; 54] by a so-called principal chiral non-linear sigma model (NLSM) in \((d+1)=2\)-dimensional space whose field variable \(O(\vec{r})\in\mathrm{SO}(n)\) is a special \(n\times n\) orthogonal matrix, in the limit where \(n\to 0\) [55]. This NLSM is formulated using the replica trick, where \(n\) is the number of replicas of the monitored Gaussian circuit, and we refer to it as the principal chiral \(\mathrm{SO}(n)\) NLSM. This is the statistical mechanics model for the circuit. The Boltzmann weight of this NLSM is \(\exp\left\{-S\right\}\) which, when written in continuum notation, reads
\[S=\int d^{2}\vec{r}\ \frac{1}{2g}\mathrm{Tr}\left[(\nabla O^{-1})(\nabla O) \right]. \tag{12}\]
This Boltzmann weight manifestly exhibits \(\mathrm{SO}(n)\times\mathrm{SO}(n)\) continuous symmetry corresponding to right- and left-multiplication with spatially independent group elements in \(\mathrm{SO}(n)\). The critical phase of the circuit corresponds to the phase of the NLSM where the \(\mathrm{SO}(n)\times\mathrm{SO}(n)\) symmetry is spontaneously broken to the diagonal \(\mathrm{SO}(n)\) symmetry where right and left multiplications are no longer independent but implemented, respectively, by the same matrix and its inverse. This phase is described by \(n(n-1)/2\) (non-interacting) Goldstone modes of this spontaneous symmetry breaking in the replica limit \(n\to 0\)[56]. In the context of the corresponding disordered Hamiltonian system of non-interacting fermions in one spatial dimension higher, the critical phase of the Gaussian circuit/Gaussian RTN corresponds to a disordered thermal metal in two spatial dimensions. The transition out of the critical phase into each one of these two area-law phases is driven by proliferation of topological defects in the NLSM field variable \(O(\vec{r})\in\mathrm{SO}(n)\). Note that both transitions are known to be in the same (bulk) universality class in two spatial dimensions [53]. One of these transitions was studied explicitly in the Gaussian circuit in Ref. [30]. The type of topological defects is dictated by the fundamental group of the target space \(\mathrm{SO}(n)\), which is known to be \(\pi_{1}(\mathrm{SO}(n))=\mathbb{Z}_{2}\) for generic \(n\) (unless \(n=2\) where it is \(\mathbb{Z}\)). The presence of topological defects is represented by an additional term in Eq. (12) (not written explicitly), whose coupling constant is the defect fugacity. The specific theoretical description of this transition for the current NLSM (which, as mentioned, is in symmetry class DIII) proceeds in very close analogy to the discussion of the metal-insulator transition in the "symplectic" (or "spin-orbit") symmetry class AII in Ref. [47] in the same spatial dimension, while the specific target space of the latter NLSM is different. The technical details of the generalization of this analysis to the current DIII symmetry class will be presented in separate follow-up work. Here, in order to be slightly more specific, we note that the system discussed in the present paper may be viewed in a sense as a superconductor version of the one discussed in Ref. [47] (which is in a different AZ symmetry class). The phase diagrams of both systems feature a parameter that tunes between a \(\mathbb{Z}_{2}\) topologically non-trivial and a topologically trivial phase with an intervening critical (metallic) phase [57]. In both systems, one can tune through the transition out of the critical (metallic) phase either with this tuning parameter or by varying the strength of disorder.
_Born-rule measurement -_ The entanglement transition with Born-rule measurements, on the other hand, is described by the \(n\to 1\) limit of the same principal chiral \(\mathrm{SO}(n)\) NLSM (as opposed to the \(n\to 0\) limit in the case of forced measurements). In the following we present two arguments (i) and (ii) that lead to the conclusion that the principal chiral \(\mathrm{SO}(n)\) NLSM in the replica limit \(n\to 1\) describes the entanglement transition with Born-rule measurements. Both arguments are essential for reaching this conclusion. (i) First, as established in Ref. [7] (see also Ref. [8]), for Born-rule measurements the replica limit \(n\to 1\) has to be taken. The "extra" replica (as compared to the forced measurement case) accounts for the Born-rule probability providing the statistical weight for each quantum trajectory. (ii) Second, it is important that, in the \(n\to 1\) limit, the partition function of the statistical-mechanics model/field theory goes to a constant (independent of the system size). Following the argument in Ref. [7], the partition function of the statistical mechanics model in this limit is directly related to the total Born-rule probability of all the quantum trajectories. This total Born-rule probability is guaranteed to be unity due to the POVM condition satisfied by the Kraus operator ensemble of each measurement regardless of the total system size. (We refer the readers to Sec. II.1 and Eq. (3) for details on the POVM conditions for the monitored Gaussian circuit.) The condition that the partition function goes to a constant in the replica limit \(n\to 1\) is satisfied by the principal chiral \(\mathrm{SO}(n)\) NLSM because the number \(n(n-1)/2\) of degrees of freedom (describing the Lie algebra of \(\mathrm{SO}(n)\)) of this NLSM, which is equal to the dimension of the special orthogonal group manifold \(\mathrm{SO}(n)\), goes to zero in the \(n\to 1\) limit. This implies, as required, a constant partition function (and a vanishing conformal central charge at the transition) in this replica limit.
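To make the counting in argument (ii) explicit (this is merely a restatement of the statements above, not an additional input):

\[\dim\mathrm{SO}(n)=\frac{n(n-1)}{2}\ \xrightarrow{\ n\to 1\ }\ 0,\]

so that in the replica limit \(n\to 1\) the NLSM retains no degrees of freedom and its partition function reduces to a system-size-independent constant, mirroring the fact that the total Born-rule probability of all quantum trajectories equals unity by the POVM condition of Eq. (3).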
Moreover, the RG beta function for the coupling constant \(g\) of this NLSM is known to be proportional to \((n-2)\) at weak coupling, namely at small \(g\) (see e.g. Refs. [58; 30]). This result implies the stability of the weak coupling fixed point of the NLSM, described by the \(n(n-1)/2\) Goldstone modes of the spontaneous symmetry breaking, in both replica limits \(n\to 0\) and \(n\to 1\). The stable weak coupling fixed point in the limit \(n\to 0\) represents the critical phase of the monitored Gaussian circuit with forced measurements discussed above and in Ref. [30]. The stable weak coupling fixed point in the limit \(n\to 1\) represents the critical phase of the monitored Gaussian circuit with Born-rule measurements. [59]
In complete analogy to the case of the \(n\to 0\) limit discussed above, this NLSM also has a transition out of its stable critical phase in the \(n\to 1\) limit, which is again driven by \(\mathbb{Z}_{2}\) topological defects. This transition describes the entanglement transition with Born-rule measurements. The different replica limit \(n\to 1\) for the Born-rule measurement case explains why our numerical results presented in Sec. III reveal a universality class different from the one obtained in the forced-measurement case in the replica limit \(n\to 0\). Both of the entanglement transitions, with Born-rule and with forced measurements, can be accessed by a perturbative RG treatment controlled (at least to low order) by a small parameter \(0<\epsilon=2-n\), which parallels the RG treatment in Ref. [47] of the metal-insulator transition in the two-dimensional symplectic symmetry class AII. As mentioned above, the technical details of this generalization of the analysis given in Ref. [47] to the current DIII symmetry class will be presented in separate follow-up work.
We stress again that the field theory/statistical mechanics model identified above (the principal chiral NLSM on the SO(\(n\)) group manifold) describes both universality classes of entanglement transitions, with forced measurements and with Born-rule measurements, in the different replica limits. This field theory is dictated solely by the AZ symmetry class (here DIII). [60].
### Comparison with the loop model with crossings
There is a very special and highly fine-tuned version of the monitored Gaussian circuit of Majorana fermions discussed in the present paper. In this fine-tuned version, every 2-site gate is drawn (up to unimportant global multiplicative factors), using the Kraus-operator language of Eqs. (1), (2), from the ensemble that contains only (i) the identity operator (\(\vec{n}=(0,1,0)\)), (ii) the "swap gates" (\(\vec{n}=(0,0,\pm 1)\)), and (iii) the projective measurement of the local fermion parity \(\mathbf{i}\hat{\gamma}_{i}\hat{\gamma}_{i+1}\) (\(\vec{n}=(\pm 1,0,0)\)). This monitored Gaussian circuit was shown to be described by a two-dimensional statistical mechanics model called the loop model with crossings [32]. Hence, we refer to this monitored Gaussian circuit in the following as the loop-model-based circuit [61]. At the level of entanglement, the loop-model-based circuit (with Born-rule measurements) behaves like a random tensor network, which is _not_ a general property of a more generic monitored Gaussian circuit (as shown in the previous sections of this paper). The loop-model-based circuit also exhibits a measurement-induced critical-to-area-law entanglement phase transition. In the following, we provide two types of arguments, one numerical and one analytical, showing that this entanglement phase transition in the loop-model-based circuit (which, as mentioned above, is a fine-tuned version of the monitored Gaussian circuit) is in a universality class different from both entanglement transitions with forced and Born-rule measurements in the generic Gaussian circuit discussed in the preceding sections of the present paper.
For the numerical arguments, we compare critical exponents. Numerical values for the correlation length exponent in the loop model were reported in Ref. [62] to be \(\nu=2.745(19)\) and \(\nu=2.87(10)\) at two different points on the line of phase transitions (governed by the same universality class), which correspond to the critical-to-area-law entanglement phase transitions in the loop-model-based circuit. In the monitored Gaussian circuit we study in the present paper, we found, as reported in Sec. III and Table 1, the values \(\nu=2.1\pm 0.1\) for the Born-rule-measurement-induced transition and \(\nu=1.9\pm 0.1\) for the forced-measurement-induced transition. Both values are significantly different from the values of \(\nu\) for the loop model with crossings listed above.
We can also compare the universal coefficient \(\zeta_{1}\) of the logarithm of the subsystem size in the EE. For the circuits discussed in the present paper, we found (again as reported in Sec. III and Table 1) the values \(\zeta_{1}=0.39\pm 0.02\) (measurements satisfying the Born rule) and \(\zeta_{1}=0.30\pm 0.04\) (forced measurements), which differ (beyond error bars) from the corresponding value \(\zeta_{1}=2\times 0.225=0.45\) reported in Refs. [15] and [62].
We now come to the analytical arguments. There is a manifest difference in symmetry: The principal chiral SO(\(n\)) NLSM field theories with weight Eq. (12) discussed in the present paper (Sect. IV.1) possess a global SO(\(n\)) \(\times\) SO(\(n\)) symmetry (in the limits \(n\to 0\) and \(n\to 1\), for forced and Born-rule measurements, respectively), whereas the loop model with crossings is known to have only global SO(\(n\)) symmetry and requires taking the replica limit \(n\to 1\)[62, 63]. Moreover, intimately related to these different symmetries, the target space of the former models is a group manifold (namely SO(\(n\))) whereas the target space of the latter model is a coset space [62, 63]. This coset space for the loop model is the real projective space RP\({}^{n-1}=S^{n-1}/\mathbb{Z}_{2}\) in the limit \(n\to 1\), where \(S^{n-1}=\) SO(\(n\))/SO(\(n-1\)) is the unit sphere in \(n\)-dimensional Euclidean space, which is not a group.
For the entanglement transitions discussed in the present paper as well as that in the loop-model-based circuits, one can alternatively choose a supersymmetric (SUSY) formulation [64] in which the replica limit is avoided. All the above statements have an exact counterpart in the SUSY formulation. In particular, the field theories for the entanglement transitions discussed in the present paper, for which details of the SUSY formulation will be presented in separate follow-up work, both possess \(\mathcal{G}\times\mathcal{G}\) symmetry, where \(\mathcal{G}\) is a so-called Lie-supergroup (different for Born-rule and forced measurement cases). The loop model with crossings, on the other hand, is known [62, 63] in the SUSY formulation to be invariant under the lower (super-) symmetry \(\mathcal{G}\) (as opposed to \(\mathcal{G}\times\mathcal{G}\)). Given that symmetries of the underlying field theories are an identifying characteristic of universality classes, one does not expect any simple relationship between universality classes of entanglement transitions in
the loop-model-based circuits and the entanglement transitions of the more generic monitored Gaussian circuits discussed in the present paper. Technical details of the SUSY formulation of the theories under consideration here will be presented in separate follow-up work.
We close this section by remarking on another difference, one with a simple reflection in physical observables, between the loop-model-based circuits and the more generic monitored Gaussian circuits of the preceding sections of this paper. As stressed in Ref. [30], in a loop-model-based circuit the square of any Majorana-fermion two-point function \(G\) (as defined in Eq. (9)) can take only the values zero or one in a fixed realization of the Gaussian circuit. This implies, in particular, that the \(N\)-th moment average \(\overline{G^{N}}\) of such a correlation function over quantum trajectories is equal to the first moment (\(N=1\)). This behavior of the loop-model-based circuits is in sharp contrast with the rich scaling behavior of the correlation function \(G\) in the more generic monitored Gaussian circuits discussed in Sec. III of this paper (both for Born-rule and for forced measurements). Recall (from Table 1) that the critical exponents describing the decay of the typical (\(X_{typ}\)) and the averaged (1st moment, \(X_{1}\)) correlation function are vastly different. More generally, all moments \(\overline{G^{N}}\) will scale with independent critical exponents [65]. As already mentioned, such scaling behavior is referred to as _multifractal_, and it implies a continuum of correlation function exponents (which appear when continuous values of the moment order \(N\) are taken). It turns out that this rich scaling behavior appearing at both entanglement phase transitions discussed in the present paper is a consequence of the non-compactness of the target space (in the SUSY formulation) of the NLSMs for the monitored Gaussian circuits discussed in the present paper. On the other hand, the absence of such rich scaling behavior at the entanglement transition of the circuits described by the loop model with crossings and in the corresponding statistical mechanics model is a consequence of the compactness of the target space (in the SUSY formulation) of the corresponding NLSMs. [66]
## V Conclusions and outlook
In this paper, we studied the entanglement transition in generic monitored Gaussian circuits with no symmetry other than the global fermion parity, acting on a one-dimensional Majorana chain. Our study includes both Gaussian circuits monitored with Born-rule measurements and those with forced measurements. Both types of Gaussian circuits belong to AZ symmetry class DIII according to the correspondence established in Ref. [30]. The purpose of our study is the identification of the universality classes of these two entanglement transitions and, in particular, to answer the question of whether or not they are in the same universality class. The latter question is in part motivated by the corresponding question for entanglement transitions in monitored (non-Gaussian) many-body circuits of qudits with Haar-random unitary gates, where this question is not fully resolved to date but for which conjectures exist [21], as well as by the fact that for monitored Clifford circuits ('stabilizer circuits'), Born-rule and forced measurements yield the same universality class of the transition [29].
Our numerical simulations identified measurement-induced entanglement phase transitions between an area-law phase (with area-law entanglement scaling) and a critical phase (with logarithmic entanglement scaling) for both types of circuits (with Born-rule measurements and with forced measurements). Critical exponents at these entanglement transitions were numerically extracted (summarized in Table 1). The value of the correlation length exponent \(\nu\) at the entanglement transition induced by Born-rule measurements is found to be close to that at the forced-measurement-induced entanglement transition. Both types of entanglement transitions turn out to exhibit two different sets of exponents associated with rich scaling behavior of the (squared) Majorana-fermion correlation function \(G\) in the long time limit of the circuit. For both entanglement transitions, the decay exponent \(X_{1}\) for the averaged (squared) correlation function \(\overline{G}\) was found to be very different from the exponent \(X_{typ}\) associated with the decay of the typical correlation function, a signature of so-called multifractal scaling behavior of \(G\). (See Appendix D for a review, and Refs. [28] and [46].) Most importantly, the values of the decay exponent \(X_{typ}\) of the _typical_ correlation function were found to be widely different between the two types of entanglement transitions we studied. Furthermore, while the logarithm of the correlation function, \(\log G\), is self-averaging for large separations (its average gives rise to the typical exponent \(X_{typ}\)), statistical fluctuations of \(\log G\) about its average give rise to another universal quantity \(x^{(2)}\) which we also found to be widely different between the two types of entanglement transitions. These results provide strong evidence that the universality class of the Born-rule-measurement-induced entanglement transition is different from that of the forced-measurement-induced entanglement transition. Different values of the prefactor of the logarithmic dependence on subsystem size of the EE were also found for the two transitions. (See again the summary in Table 1.)
Moreover, we provided a theoretical understanding for the numerically observed different universal behavior of these two entanglement transitions by identifying the underlying statistical-mechanics model describing the monitored Gaussian circuits with Born-rule and forced measurements discussed in this paper. Our ability to identify the statistical-mechanics model originates from the correspondence established in Ref. [30]. The crucial point is that, as already mentioned above, _both_ monitored Gaussian circuits in (1+1)-dimensional space-time dimensions, those with Born-rule and those with forced measurements, are described by the _same_ AZ symmetry class. In particular, based on the correspondence established in
Ref. [30], the forced-measurement-induced entanglement transition of the monitored Gaussian circuit in (1+1)-dimensional spacetime is identical to the thermal-metal-to-insulator transition in a two-spatial-dimensional non-interacting symmetry-class-DIII fermion system with a static disordered Hamiltonian. The latter is an Anderson localization transition. But even though the Born-rule-measurement-induced entanglement transition belongs to the same symmetry class DIII, this transition is in a new universality class beyond Anderson localization transitions: Their common symmetry class DIII dictates that the behavior of both types of circuits is described (after averaging over all quantum trajectories) by a two-dimensional principal chiral NLSM with a target space \(\mathrm{SO}(n)\) (as a result of applying the replica trick). Such an NLSM has global \(\mathrm{SO}(n)\times\mathrm{SO}(n)\) symmetry. The replica limit \(n\to 0\) corresponds to the monitored Gaussian circuit with forced measurement, while the replica limit \(n\to 1\) corresponds to the monitored Gaussian circuit with Born-rule measurements. Our numerical results imply that the entanglement transition in the circuit with Born-rule measurements is thus in a novel universality class, different from the Anderson localization transition in the corresponding symmetry class DIII which corresponds to the \(n\to 0\) limit and forced measurements. In either limit, the entanglement transition can be understood as a transition driven by proliferation of topological defects classified by the fundamental group of the target space, which is \(\pi_{1}(\mathrm{SO}(n))=\mathbb{Z}_{2}\) (for a generic \(n\)). However, the different replica limits \(n\to 0\) and \(n\to 1\) will result in different universality classes for the two types of entanglement transitions. Both entanglement transitions, with Born-rule and with forced measurements, can be accessed by a perturbative RG treatment controlled (at least in low order) by the small parameter \(0<\epsilon=2-n\), which parallels the RG treatment by Fu and Kane in Ref. [47] of the disorder-driven metal-insulator transition in the two-dimensional symplectic symmetry class AII. (Technical details of the generalization to the current symmetry class DIII, as well as an alternative formulation using supersymmetry which avoids the replica limit, will be presented in separate follow-up work.)
We finally compared the entanglement transitions in the monitored Gaussian circuits discussed in the present paper with that in the circuits based on the loop model with crossings [62, 32], which is a highly fine-tuned version of the monitored Gaussian circuit of Majorana fermions. We provided numerical and analytical evidence showing that the entanglement transition in the loop-model-based circuit is in a universality class different from both entanglement transitions discussed in the present paper. As for the numerical argument, we observed that the correlation length exponents obtained for the entanglement transitions in the generic Gaussian circuits, both for Born-rule measurements and for forced measurements, differ significantly from the value of this exponent for the entanglement transition in the loop-model-based circuit (which can be simulated as a classical loop model with crossings [62]). The obtained values of the prefactor for the logarithmic dependence of the EE on the subsystem size also differ, beyond error bars, between these entanglement transitions. As for the analytical argument, we noted that there is a manifest difference in symmetry: the statistical mechanics model for the entanglement transition in the generic Gaussian circuits discussed in the present paper possesses global \(\mathrm{SO}(n)\times\mathrm{SO}(n)\) symmetry in the replica limits \(n\to 1\) and \(n\to 0\), for Born-rule measurements and forced measurements, respectively, while that of the loop-model-based circuit has only the smaller global \(\mathrm{SO}(n)\) symmetry (in the replica limit \(n\to 1\)). Furthermore, we noted that corresponding statements can also be formulated within the supersymmetry approach, in which no replica limit is taken. Given the different symmetries, one does not expect a relationship between the entanglement transitions in the circuits discussed in this paper and that in the loop-model-based circuit.
Entanglement transitions in monitored Gaussian circuits are expected to be more tractable than those in general monitored circuits acting on qubits (qudits) (which are "interacting"), and can thus provide another angle into the nature of entanglement transitions. The present paper demonstrates that the framework for classifying non-unitary Gaussian circuits based on the tenfold AZ symmetry classification which was developed in Ref. [30] provides a concrete tool to successfully identify measurement-induced entanglement transitions in Gaussian circuits monitored by Born-rule measurements and those monitored by forced measurements as well. While the monitored Gaussian circuits with Born-rule and forced measurements can be formulated (in any dimension) within the same framework of the AZ symmetry classification as the Anderson localization problems, their universal behavior can nevertheless be different: It was shown in Ref. [30] that entanglement transitions in monitored Gaussian circuits subject to forced measurements exactly correspond to Anderson localization transitions. As demonstrated in the present paper, the entanglement transition in a monitored Gaussian circuit subject to Born-rule measurements can, while in the same AZ symmetry class, be in a novel universality class that does not arise within the context of Anderson localization. The framework developed in Ref. [30] thus provides an approach to systematically investigate the appearance of such novel universality classes that can uniquely arise from entanglement transitions in Gaussian circuits monitored by Born-rule measurements. Understanding the differences between Born-rule and forced measurements will provide insight into novel universal critical behavior that is unique to the context of monitored random circuits and this will be investigated in follow-up works in other AZ symmetry classes and dimensions.
_Note added:_ After the work on the present paper was completed, and after a summary of our results had already been presented at the 2022 March Meeting of
the American Physical Society [67], a paper (arXiv: 2210.05681) [68] with some overlap with our numerics appeared on the arXiv while our work was being written up.
###### Acknowledgements.
C.-M. J. thanks Haining Pan for helpful discussions on the numerical simulations of the monitored Gaussian circuits. This research is supported in part by a faculty startup grant at Cornell University (C.-M.J.).
## Appendix A Covariance matrix formulation of Gaussian states and their evolution
The numerical simulations of the monitored Gaussian circuits presented in this paper are carried out using the covariance matrix formulation reviewed in this appendix.
In a system with \(L\) Majorana fermion modes \(\hat{\gamma}_{i=1,2,\ldots,L}\), any Gaussian state \(|\Gamma\rangle\), namely any state of non-interacting fermions, can be fully captured by its covariance matrix
\[\Gamma_{ij}=\Big\langle\frac{\mathbf{i}}{2}[\hat{\gamma}_{i},\hat{\gamma}_{j}]\Big\rangle, \tag{A1}\]
which encodes all the two-point Majorana fermion correlation functions. \(\Gamma\) is a real anti-symmetric matrix that squares to \(-\openone\), namely \(\Gamma^{2}=-\openone\), due to the purity of the Gaussian state \(|\Gamma\rangle\). All multi-point correlation functions of a Gaussian state can be obtained from the two-point correlation functions via Wick's theorem.
In the monitored Gaussian circuit acting on a one-dimensional Majorana chain, we can study the quantum dynamics of the system by calculating the evolution of the covariance matrix. For example, a Gaussian state \(|\Gamma\rangle\) evolves into the Gaussian state
\[|\Gamma^{\prime}\rangle\equiv\frac{K_{(i,i+1)}(\vec{n})|\Gamma\rangle}{||K_{(i,i+1)}(\vec{n})|\Gamma\rangle||} \tag{A2}\]
under the action of the Kraus operator \(K_{(i,i+1)}(\vec{n})\), defined in Eq. (1), acting on the \(i\)th and \((i+1)\)th sites of the Majorana chain. Following Ref. [48], the covariance matrix of the Gaussian state \(|\Gamma^{\prime}\rangle\) is given by
\[\Gamma^{\prime}=\left(\begin{array}{ccc}\Gamma_{[1,i-1],[1,i-1]}&0&\Gamma_{[1,i-1],[i+2,L]}\\ 0&-\mathbf{i}n_{1}\sigma^{y}&0\\ \Gamma_{[i+2,L],[1,i-1]}&0&\Gamma_{[i+2,L],[i+2,L]}\end{array}\right)-\left(\begin{array}{cc}\Gamma_{[1,i-1],[i,i+1]}&0\\ 0&-n_{2}\openone+\mathbf{i}n_{3}\sigma^{y}\\ \Gamma_{[i+2,L],[i,i+1]}&0\end{array}\right)\cdot\left(\begin{array}{cc}\Gamma_{[i,i+1],[i,i+1]}&\openone\\ -\openone&\mathbf{i}n_{1}\sigma^{y}\end{array}\right)^{-1}\cdot\left(\begin{array}{ccc}\Gamma_{[i,i+1],[1,i-1]}&0&\Gamma_{[i,i+1],[i+2,L]}\\ 0&n_{2}\openone+\mathbf{i}n_{3}\sigma^{y}&0\end{array}\right), \tag{A3}\]
where \(\Gamma_{[j_{1},j_{2}],[j_{1}^{\prime},j_{2}^{\prime}]}\) represents the block of the matrix \(\Gamma\) with the rows ranging from \(j_{1}\) to \(j_{2}\) and the columns ranging from \(j_{1}^{\prime}\) to \(j_{2}^{\prime}\). \(\sigma^{x,y,z}\) are the Pauli matrices. Recall that \(\vec{n}=(n_{1},n_{2},n_{3})\).
Given the covariance matrix \(\Gamma\) of the Gaussian state \(|\Gamma\rangle\), one can directly calculate the subsystem von Neumann EE. For example, for a subsystem that is an interval starting from the \(i\)th site and ending on the \(j\)th site, the subsystem EE is given by

\[S_{[i,j]}=-\frac{1}{2}\sum_{s=\pm 1}\mathrm{Tr}\left(\frac{\openone+s\mathbf{i}\ \Gamma_{[i,j],[i,j]}}{2}\log\frac{\openone+s\mathbf{i}\ \Gamma_{[i,j],[i,j]}}{2}\right). \tag{A4}\]
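For concreteness, the following minimal NumPy sketch (our own illustration; the function names and data layout are not taken from the original simulation code) evaluates Eq. (A4) for an arbitrary set of sites and assembles the two-interval mutual information used in Sec. III.

```python
import numpy as np

def subsystem_entropy(gamma, sites):
    """von Neumann EE of the Majorana modes listed in `sites`, computed from
    the real antisymmetric covariance matrix `gamma` via Eq. (A4)."""
    block = gamma[np.ix_(sites, sites)]
    # i * Gamma_A is Hermitian; its eigenvalues come in pairs +/- nu_k
    nu = np.abs(np.linalg.eigvalsh(1j * block))
    p = np.clip((1.0 + nu) / 2.0, 1e-12, 1.0 - 1e-12)
    # the binary entropy summed over all eigenvalues counts each pair twice
    return -0.5 * np.sum(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def mutual_information(gamma, A, B):
    """I(A:B) = S(A) + S(B) - S(A u B) for disjoint site lists A and B."""
    AB = np.concatenate([A, B])
    return (subsystem_entropy(gamma, A) + subsystem_entropy(gamma, B)
            - subsystem_entropy(gamma, AB))
```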
## Appendix B Monte Carlo sampling for monitored Gaussian circuits with Born-rule measurements
In the monitored Gaussian circuit with Born-rule measurements, for a given pair of sites \((i,i+1)\) and at a given time step, the Born-rule probability density for the occurrence of the Kraus operator \(K(\vec{n})\), associated with the measurement outcome labeled by \(\vec{n}\), depends on the state \(|\Gamma\rangle\) of the system at that time step (before the action of \(K(\vec{n})\) is implemented). As explained in Eq. (7), this probability density is given by

\[\mathrm{P}(s,\varphi)\ ds\,d\varphi\equiv\frac{1}{2\pi}p(s)\tilde{w}_{Y}(s)\langle K^{\dagger}(\vec{n}_{Y})K(\vec{n}_{Y})\rangle\ ds\,d\varphi \tag{B1}\]
with \(Y=A\) or \(B\) depending on which sublattice the Kraus operator belongs to in the circuit geometry. \(p(s)\)
is given in Eq. (5). \(\vec{n}_{Y}\) is parameterized by \((s,\varphi)\) following Eq. (4):
\[\vec{n}_{A}=(s,\sqrt{1-s^{2}}\cos\varphi,\sqrt{1-s^{2}}\sin\varphi),\qquad\vec{n}_{B}=(\sqrt{1-s^{2}}\sin\varphi,s,\sqrt{1-s^{2}}\cos\varphi). \tag{B2}\]
\(\langle K^{\dagger}(\vec{n}_{Y})K(\vec{n}_{Y})\rangle\) is evaluated with respect to the state \(|\Gamma\rangle\), which can be calculated using the covariance matrix (introduced in App. A).
In a numerical implementation of the monitored Gaussian circuit with Born-rule measurements, it is not practical to sample directly over all possible values of \((s,\varphi)\). To determine which Kraus operator \(K(\vec{n})\) to apply, we use the Metropolis algorithm as an importance sampling scheme:
**Algorithm 1** Importance sampling of the measurement outcome \(\vec{n}_{Y}\) according to the Born-rule probability:

1. To initialize, pick a random \(\vec{n}_{Y}^{(0)}\in S^{2}\) by drawing a random pair \(s^{(0)}\in[-1,1]\) and \(\varphi^{(0)}\in[0,2\pi)\), and compute the Born-rule probability density \(\mathrm{P}(s^{(0)},\varphi^{(0)})\) with respect to the input state \(|\Gamma\rangle\).
2. For \(n=1,\ldots,N_{\mathrm{iter}}\): propose a new pair \((s^{(n)},\varphi^{(n)})\), i.e. a new \(\vec{n}_{Y}^{(n)}\); compute \(\mathrm{P}(s^{(n)},\varphi^{(n)})\) and accept the Monte Carlo update with probability
\[p_{\mathrm{acc}}\big((s^{(n-1)},\varphi^{(n-1)})\to(s^{(n)},\varphi^{(n)})\big)=\min\left(1,\frac{\mathrm{P}(s^{(n)},\varphi^{(n)})}{\mathrm{P}(s^{(n-1)},\varphi^{(n-1)})}\right).\]

Here, \(N_{\mathrm{iter}}\) is the number of iterations.
This Metropolis algorithm produces the correct Born-rule probability distribution of \((s,\varphi)\) due to detailed balance:
\[\mathrm{P}(s,\varphi)\ p_{\mathrm{acc}}((s,\varphi)\to(s^{\prime},\varphi^{\prime}))=\mathrm{P}(s^{\prime},\varphi^{\prime})\ p_{\mathrm{acc}}((s^{\prime},\varphi^{\prime})\to(s,\varphi)) \tag{B3}\]
which we verify numerically.
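As a concrete illustration, a minimal NumPy sketch of this Metropolis step is given below. The callable `born_density`, standing in for the density \(\mathrm{P}(s,\varphi)\) of Eq. (B1) evaluated with respect to the current state, as well as the function name, are our own illustrative assumptions rather than part of the actual simulation code.

```python
import numpy as np

def metropolis_outcome(born_density, n_iter=100, rng=None):
    """Sample a measurement outcome (s, phi) from the (unnormalized)
    Born-rule density P(s, phi) via the Metropolis algorithm (Algorithm 1)."""
    rng = np.random.default_rng() if rng is None else rng
    s, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
    p_cur = born_density(s, phi)
    for _ in range(n_iter):
        s_new, phi_new = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
        p_new = born_density(s_new, phi_new)
        # Metropolis acceptance: min(1, P_new / P_cur)
        if p_cur == 0.0 or rng.uniform() < min(1.0, p_new / p_cur):
            s, phi, p_cur = s_new, phi_new, p_new
    return s, phi
```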
## Appendix C Method to extract the correlation length exponent \(\nu\) for the entanglement transitions
In Sec. III, we determine the entanglement phase transition points and the associated correlation length critical exponents \(\nu\) using the scaling collapse of the two-interval mutual information in a configuration with a fixed cross-ratio \(\eta=\sin^{2}(\pi/8)\).
Take the case of monitored Gaussian circuits with Born-rule measurements as an example. At fixed \(b_{2}\) and \(\eta\), the two-interval mutual information \(I\) is treated as a function of the parameter \(a_{2}\) and the total system size \(L\). Our objective is to determine the value of \(a_{2}\) at the entanglement transition, denoted as \(a_{2c}\), and the corresponding correlation length exponent \(\nu\). Around the transition, a scaling collapse is expected such that \(|I(a_{2},L)-I(a_{2c},L)|\) is a function that depends only on the single variable \((a_{2}-a_{2c})L^{1/\nu}\).
We apply the algorithm introduced in Ref. [2] to obtain the "optimal" values of \(a_{2c}\) and \(\nu\). The algorithm's objective is to minimize a cost function \(R(a_{2c},\nu)\) which essentially measures the deviation from a collapse of the data onto a universal (unknown) curve. The corresponding optimal values are our estimates for \(a_{2c}\) and \(\nu\). For given values of \(a_{2c}\) and \(\nu\), we estimate \(I(a_{2c},L)\) for each system size \(L\) by a piece-wise linear interpolation. We then calculate \(y_{L}(x)=I(a_{2},L)-I(a_{2c},L)\) at \(x=(a_{2}-a_{2c})L^{1/\nu}\) for the values of \(a_{2}\) and \(L\) in the data set. This gives a family of curves \(y_{L}(x)\) vs. \(x\), which we wish to collapse onto a single curve. Next, we sample from these curves at a discrete set of points \(\{x_{i}\}\) and define the cost function
\[R=\sum_{i,L}\left[y_{L}(x_{i})-\bar{y}(x_{i})\right]^{2}, \tag{12}\]
defined as the sum of the variances of \(y_{L}(x_{i})\) over the different system sizes, where
\[\bar{y}(x_{i})=\frac{1}{N_{L}}\sum_{L}y_{L}(x_{i}), \tag{13}\]
is the mean value at point \(x_{i}\), and \(N_{L}\) is the number of different system sizes in the data set. We note that we again use piece-wise linear interpolation to estimate \(y_{L}(x_{i})\), and omit a point for a given \(L\) if it lies outside the range of the data set for that particular \(L\). Finally, we search numerically for the values of \(a_{2c}\) and \(\nu\) that minimize the cost function.
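For reference, the cost-function evaluation can be sketched in Python as below. This is an illustrative sketch, not the exact implementation of Ref. [2]: `data` is assumed to map each system size \(L\) to arrays of (sorted) \(a_{2}\) values and the corresponding mutual information, and the sampling grid \(\{x_{i}\}\) and the minimizer are simple default choices.

```python
import numpy as np
from scipy.optimize import minimize

def collapse_cost(params, data):
    """Cost R of Eq. (12) for a trial pair (a2c, nu).

    `data[L] = (a2_values, I_values)` with a2_values sorted increasingly.
    """
    a2c, nu = params
    curves = {}
    for L, (a2, I) in data.items():
        I_c = np.interp(a2c, a2, I)          # piece-wise linear interpolation
        curves[L] = ((a2 - a2c) * L ** (1.0 / nu), I - I_c)
    xs = np.concatenate([x for x, _ in curves.values()])
    R = 0.0
    for xi in np.linspace(xs.min(), xs.max(), 50):   # sampling points {x_i}
        # Omit curves whose data range does not cover x_i.
        ys = [np.interp(xi, x, y) for x, y in curves.values()
              if x.min() <= xi <= x.max()]
        if len(ys) > 1:
            R += np.sum((np.asarray(ys) - np.mean(ys)) ** 2)
    return R

# best = minimize(collapse_cost, x0=[1.0, 1.3], args=(data,), method="Nelder-Mead")
```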
In order to estimate the uncertainty in our results for \(a_{2c}\) and \(\nu\), we examine how the optimum point changes when we run the minimization algorithm on a subset of the data. In particular, we choose every pair of system sizes (call them \(L_{1}\) and \(L_{2}\)), calculate the optimal values of \(a_{2c}\) and \(\nu\), and use their variation as a proxy for the uncertainty. The resulting values of \(\nu\) for various cuts in the phase diagram are plotted in Fig. 6. We observe a finite-size effect that leads to smaller \(\nu\) for smaller system sizes. However, the variation among different transition points for a given pair of system sizes is small. Note that the values of \(\nu\) reported in the main text were obtained from the optimization process including all system sizes, for the paths shown in red in this figure.
## Appendix D Multifractal fermion correlation function
On a length-\(L\) Majorana chain with periodic boundary condition, we consider the square of the Majorana fermion 2-point function at lattice sites \(p\) and \(p+r\) on the final time slice of a fixed realization \(\mathcal{C}\) of the entire spacetime of the monitored Gaussian circuit (associated with a certain quantum trajectory)
\[G(p,p+r;\mathcal{C})=\left(\left\langle\hat{\gamma}_{p}\hat{\gamma}_{p+r}\right\rangle_{\mathcal{C}}\right)^{2}. \tag{104}\]
At the transition between the critical and the area-law phase (with Born-rule or forced measurements), the \(N\)-th moments of this correlation function will scale as
\[\overline{\left[G(p,p+r;\mathcal{C})\right]^{N}}\sim\frac{B_{N}}{R(r)^{2X_{N}}}. \tag{105}\]
Here, \(R(r)\) is the chord distance
\[R(r)\equiv\frac{L}{\pi}\sin\left(\frac{\pi}{L}|r|\right), \tag{106}\]
where \(L\) is the spatial size of the chain (with periodic boundary conditions). The overbar \(\overline{\cdots}\) represents the averaging over all circuit realizations \(\mathcal{C}\) according to the probability distribution given by the Born-rule or forced measurements (as explained in Sec. II). In general, when the exponents \(X_{N}\) are not linear functions of the moment order \(N\), the correlation function is referred to as being "multifractal" [28, 46, 69, 70]. (The \(N\)-dependence of the amplitudes \(B_{N}\) does not describe universal properties, and we will not be concerned with this dependence.)
We now use the standard cumulant expansion on the left hand side of Eq. (105),
\[\overline{\left[G(p,p+r;\mathcal{C})\right]^{N}}= \tag{107}\] \[=\exp\left\{N\,\overline{\log G}+\frac{N^{2}}{2!}\left[\,\overline{\left(\log G\right)^{2}}-\overline{\log G}^{\,2}\,\right]+...\right\}\] \[\equiv\exp\left\{N\,\kappa_{1}+\frac{N^{2}}{2!}\kappa_{2}+\frac{N^{3}}{3!}\kappa_{3}+...\right\}\]
where \(\kappa_{j}\) denotes the \(j\)-th cumulant of the random variable \(\log G(p,p+r;\mathcal{C})\),
\[\kappa_{1}(r) =\overline{\left[\log G(p,p+r;\mathcal{C})\right]},\] \[\kappa_{2}(r) =\overline{\left[\log G(p,p+r;\mathcal{C})\right]^{2}}-\overline {\left[\log G(p,p+r;\mathcal{C})\right]}^{2}.\] \[\vdots \tag{108}\]
Comparing this with the Taylor expansion in the (moment order) \(N\)-dependence of the exponents \(X_{N}\),
\[X_{N}=Nx^{(1)}+\frac{N^{2}}{2!}x^{(2)}+\frac{N^{3}}{3!}x^{(3)}+... \tag{109}\]
used on the right-hand side of the same equation, Eq. (105), we conclude, owing to the scaling in Eq. (105) of all moments of the correlation function, that _all cumulants \(\kappa_{j}\) must scale linearly with \(\log R(r)\)_, where the coefficients of proportionality are universal and are nothing but the Taylor coefficients \(x^{(j)}\) appearing in the Taylor expansion of the critical exponents \(X_{N}\) in the moment order, Eq. (109):
\[\kappa_{j}=-2x^{(j)}\,\,\log R(r). \tag{110}\]
Here \(x^{(j=1)}\) is conventionally referred to as the "typical" critical exponent, \(X_{typ}:=x^{(j=1)}\). We re-iterate that the coefficients of proportionality of the cumulants versus \(\log R(r)\) are universal properties of the universality class of the phase transition considered. In other words, _two transitions exhibiting different values of these coefficients \(x^{(j)}\) must be in different universality classes_.
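In terms of implementation, the two leading exponents can be estimated by linear fits of the sampled cumulants against \(\log R(r)\), cf. Eq. (110). The Python sketch below assumes `G_samples[r]` collects strictly positive values of \(G(p,p+r;\mathcal{C})\) over circuit realizations:

```python
import numpy as np

def cumulant_exponents(G_samples, r_values, L):
    """Estimate x^(1) (i.e., X_typ) and x^(2) from trajectory samples."""
    r = np.asarray(r_values, dtype=float)
    log_R = np.log(L / np.pi * np.sin(np.pi * r / L))   # chord distance R(r)
    k1 = np.array([np.mean(np.log(G_samples[ri])) for ri in r_values])
    k2 = np.array([np.var(np.log(G_samples[ri])) for ri in r_values])
    # kappa_j = -2 x^(j) log R(r): the slope of each linear fit gives -2 x^(j).
    x1 = -np.polyfit(log_R, k1, 1)[0] / 2.0
    x2 = -np.polyfit(log_R, k2, 1)[0] / 2.0
    return x1, x2
```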
In the present paper, we numerically extract the coefficients \(x^{(1)}\) and \(x^{(2)}\) (from the 1st and the 2nd cumulants), and show that they are significantly different for the entanglement transitions in the monitored Gaussian circuits with Born-rule measurements and those with forced measurements. (See Table 1.) This will serve as strong evidence that these two transitions are in different universality classes. We also note that since \(x^{(2)}\) is not vanishing for both transitions, they both exhibit multifractal correlations on the final time slice (since the dependence of the exponents in Eq. (109) on the moment order \(N\) will then be non-linear).

Figure 6: Finite-size effects in evaluating the critical exponent \(\nu\) associated with the mutual information. In panel (a), \(a_{2}\) is swept for a fixed value of \(b_{2}\) as follows: \(b_{2}=0.66\) (blue), \(0.7\) (orange), \(0.8\) (green), \(1.0\) (red, corresponding to the transition discussed in the main text). Similarly, in panel (b), we sweep \(a_{2}\) for \(b_{2}=0.6\) (blue), \(0.66\) (orange), and \(0.7\) (green), while red points correspond to sweeping \(b_{2}\) for \(a_{2}=0\) as in the transition investigated in the main text.
|
2304.03145 | Evaluating the Robustness of Machine Reading Comprehension Models to Low
Resource Entity Renaming | Question answering (QA) models have shown compelling results in the task of
Machine Reading Comprehension (MRC). Recently these systems have proved to
perform better than humans on held-out test sets of datasets e.g. SQuAD, but
their robustness is not guaranteed. The QA model's brittleness is exposed when
evaluated on adversarial generated examples by a performance drop. In this
study, we explore the robustness of MRC models to entity renaming, with
entities from low-resource regions such as Africa. We propose EntSwap, a method
for test-time perturbations, to create a test set whose entities have been
renamed. In particular, we rename entities of type: country, person,
nationality, location, organization, and city, to create AfriSQuAD2. Using the
perturbed test set, we evaluate the robustness of three popular MRC models. We
find that compared to base models, large models perform well comparatively on
novel entities. Furthermore, our analysis indicates that entity type person
highly challenges the MRC models' performance. | Clemencia Siro, Tunde Oluwaseyi Ajayi | 2023-04-06T15:29:57Z | http://arxiv.org/abs/2304.03145v2 | # Evaluating the Robustness of Machine Reading Comprehension Models to Low Resource Entity Renaming
###### Abstract
Question answering (QA) models have shown compelling results in the task of Machine Reading Comprehension (MRC). Recently these systems have proved to perform better than humans on held-out test sets of datasets e.g. SQuAD, but their robustness is not guaranteed. The QA model's brittleness is exposed when evaluated on adversarial generated examples by a performance drop. In this study, we explore the robustness of MRC models to entity renaming, with entities from low resource regions such as Africa. We propose EntSwap, a method for test-time perturbations, to create a test set whose entities have been renamed. In particular, we rename entities of type: _country, person, nationality, location, organization_ and _city_, to create AfriSQuAD2. Using the perturbed test set, we evaluate the robustness of three popular MRC models. We find that compared to base models, large models perform well comparatively on novel entities. Furthermore, our analysis indicates that _person_, as an entity type, highly challenges the models' performance.
## 1 Introduction
Machine reading comprehension (MRC) is a question-answering task over unstructured text with the aim of examining the understanding and reasoning capability of a model. Over the past few years, there has been growing interest in this task due to the availability of large-scale datasets such as SQuAD (Rajpurkar et al., 2016) and MS MARCO (Nguyen et al., 2016). Furthermore, the advent of deep learning techniques and frameworks (Sukhbaatar et al., 2015; Vaswani et al., 2017; Devlin et al., 2019) has improved the performance of MRC models, with some models even surpassing human performance on specific tasks (Rajpurkar et al., 2016).
Despite this impressive performance, these models perform poorly under adversarial attacks compared to humans. Recent works show that MRC models are still not robust to adversarial attacks across natural language understanding (NLU) tasks (Jia and Liang, 2017; Belinkov and Bisk, 2018) or to out-of-distribution examples (Talmor and Berant, 2019; McCoy et al., 2020). Several works have investigated the robustness of MRC models to test-time perturbations by creating adversarial examples. An early work by Jia and Liang (2017) appended semantically irrelevant sentences, containing a fake answer that syntactically resembles the question, to the context to confuse the model. The authors show how fragile SQuAD models are to such out-of-distribution phenomena: perturbing the test set yields close to a 50% drop in generalization performance.
The notion of MRC robustness has been investigated in several different settings (Jin et al., 2020; Si et al., 2021; Liu et al., 2022). Recently Yan et al. (2022) created adversarial examples by renaming
entity names in several datasets with novel entities. The authors proved there is a discrepancy in model performance between entities in the answers observed during training and novel answers. Our work builds on this idea to investigate the robustness of MRC models to renaming English entities with African-based entities. We leverage the method of entity swapping to create a test dataset. In particular, we investigate the distribution shift at test-time caused by entities (e.g., country and city) with names from the African region. Using the existing MRC dataset SQuAD2.0 (Rajpurkar et al., 2018), we create adversarial examples to evaluate the robustness of span-based MRC models to test-time perturbations.
A robust model, even though it has observed a small subset of all possible entity names available, should be able to generalize to novel entities. Though simple and understudied, entity swapping tests the model's capability to generalize to novel entities due to a large number of possible entities. In addition, an entity name has world knowledge associated with it and this may change at any given time. Thus, MRC models should not overly rely on specific entities as this would lead to poor generalization on novel entities. Therefore in this study, we first investigate the distribution of entity names from both the train and dev set of SQuAD2.0. We show that the most common entity names are from high-resourced regions (e.g., Europe, America, etc.) compared to Africa. As such, we investigate the robustness of MRC models in answering questions and extracting answers with entity names from Africa.
Our key contributions are as follows: 1) We propose a method to create adversarial examples with entities from low-resource regions such as Africa. Since most of these regions have fewer digitized articles on Wikipedia, this method can be used to ensure a fair representation of regions during dataset creation. 2) We provide a detailed analysis of the robustness of MRC models to entity names from Africa. We show that although large models generalize comparatively well to novel entities, there is still a performance drop. 3) In our error analysis, we highlight the factors affecting the models' performance and limiting their robustness to entity renaming, which we believe will foster future research towards more robust MRC models.
## 2 Related Works
The use of adversarial examples to evaluate and improve the robustness of machine learning models has a long-standing history (Holmstrom et al., 1992; Wager et al., 2013). In the field of NLP, one of the early works by Jia and Liang (2017) showed that although existing neural network QA systems prove successful when evaluated on standard metrics, they perform poorly when evaluated on adversarial examples. In their work, they propose creating adversarial examples for SQuAD v1.1 using the AddSent and AddAny algorithms. In AddSent, a distractor sentence is appended at the end of each context; in AddAny, a random sequence of grammatical and ungrammatical words is appended to each context. They retrained the BiDAF model (Seo et al., 2017) on these generated adversarial examples to test its robustness. The AddSent algorithm swaps named-entities and numbers in the question with the nearest word in GloVe word vector space (Manning et al., 2014). Our method, EntSwap, differs slightly from AddSent: we replace named-entities while leaving numbers unchanged. Unlike AddSent, which replaces entities in the question, appends distracting sentences to the context, and leaves the answers unchanged, EntSwap replaces all detected named-entities in the questions, context, and answers to create an altered SQuAD2.0 dev set.
Although the BiDAF model (Seo et al., 2017) was retrained on adversarial examples, there is no guarantee of its robustness when evaluated on adversarial examples generated differently. Wang and Bansal (2018) generated slightly different adversarial examples for SQuAD using the AddSentMod algorithm, prepending the distractor sentences to the context instead of appending them and using a different set of fake answers from AddSent. The authors show that the pre-trained BiDAF model is not robust to this set of adversarial examples, as the model's F1-score drops by \(30\%\). While both works use distractor sentences to create adversarial examples, our study randomly swaps English named-entities with entity names of African origin.
Similar to our work, Yan et al. (2022) create adversarial examples by renaming entity names in several datasets, including SQuAD, with novel entities. Their perturbation involved detecting an entity in the answer span and then swapping all occurrences of that entity in the passage for the categories person names, organizations, and geopolitical entities. Our work differs from Yan et al. (2022) in the following ways: we swap six categories of entity types: person, city, country, organization, nationality, and location, and every entity swapped in the context is also swapped in the question, answer, and article title, creating a test set composed of entity names of African origin.
## 3 Methods
### Data
We leverage the existing extractive MRC dataset SQuAD2.0 (Rajpurkar et al., 2018) and apply our perturbation method to its dev set to create a perturbed dev set that we name AfriSQuAD2. We choose to conduct our evaluation on SQuAD2.0 because 1) SQuAD2.0 is an extractive QA dataset, i.e., the answers are short spans from the passage, and 2) questions and answers are composed of named-entities, which allows us to test a model's capability to answer a question about a novel entity or extract a novel entity as an answer.
### How representative are the entity names in MRC datasets?
To understand how representative the entity names in MRC datasets such as SQuAD are, we analyze the entity types _city_ and _country_ in SQuAD2.0. We select the 14 most frequent entities in the train and dev sets and map them to their relations in Wikidata, especially their geo-political representation. Figure 1 shows the top 14 entities for entity type city in the train and dev sets in Figures 1(a) and 1(b) respectively, and for entity type country in Figures 1(c) and 1(d).
We note that 90% of the entity names are from either Europe or the Americas, showing that the articles in SQuAD2.0, and in other MRC datasets in general, are not representative of low-resource regions. To create datasets, researchers mostly rely on Wikipedia as a data source, and the data collected is influenced by the number of articles available for a particular region. In addition, low-resource regions often have a lower proportion of digital resources compared to high-resource regions, which implies that less text data is available for training MRC models for such regions. This can result in less robust models and in datasets that do not reflect how diverse the world is. In general, we note a skewed representation of named entities in the SQuAD dataset towards high-resource regions compared to Africa. Motivated by this, we propose to study the robustness of MRC models to entity renaming at test time, focusing specifically on entity names of African origin. In Section 4, we describe the creation of the perturbed dev set (AfriSQuAD2) and the experiments conducted.
### MRC Models
We experiment with three pretrained language models, which have shown competitive performance on the popular SQuAD2.0 benchmark in both their base and large variants. **BERT** (Devlin et al., 2019) is a pretrained deep bidirectional encoder trained on English Wikipedia and BookCorpus, with masked language modeling (MLM) and next sentence prediction (NSP) as the pretraining objectives. Unlike BERT, **RoBERTa** (Liu et al., 2019) has shown better performance using only the MLM training objective and pretraining on a large, diverse corpus. Compared to BERT and RoBERTa, **DeBERTa** (He et al., 2021) uses a much larger model size and training corpus, which allows it to capture more complex relationships between words and sentences in a language.
For model evaluation, we use the models fine-tuned on the original SQuAD2.0 training data from Deepset 1, publicly available on Huggingface (Wolf et al., 2020). For BERT and RoBERTa, we use the uncased and distilled versions respectively.
Footnote 1: [https://www.deepset.ai/](https://www.deepset.ai/)
## 4 Swapping Entity Names
In this section, we describe _EntSwap_, our method for perturbing an MRC dev set by renaming entities with named-entities of African origin. We also describe how we generate the collections of entity names used for substitution in the six categories, with the aim of studying the models' robustness.
### Perturbation Method
Previous works create adversarial examples using different techniques, such as text paraphrasing (Iyyer et al., 2018), character-level typo insertion (Belinkov and Bisk, 2018), appending distractors to the input (Jia and Liang, 2017), and replacing occurrences of certain words in the text with related words (Alzantot et al., 2018). In our work, we create adversarial examples for SQuAD2.0 by randomly swapping specific entities. We use the EntSwap algorithm to create an altered SQuAD2.0 dev set with a large percentage of entities of African origin. Figure 2 shows the perturbation steps.
Step 1: Named Entity Recognition. In order to identify entities to swap, we run a named-entity recognizer with the Stanza version of the Stanford CoreNLP tool (Qi et al., 2020) on the contexts, questions, answers, and titles. Named-Entity Recognition (NER) has proven to be a core component in question-answering tasks, especially extractive question-answering. We use the Stanford CoreNLP tool because of its ability to identify individual geopolitical entities (i.e., country and city), unlike other tools, which categorize both country and city under a single GPE tag. In this work, we identify six entity types: _Person_, _Country_, _City_, _Location_, _Organization_ and _Nationality_. We mainly focus on these entity types because of their frequent appearance in questions or answers and their high likelihood of containing valid names.
Figure 1: Distribution of top 14 named-entities occurring for city category in (a) train set and (b) dev set, and for country category in (c) train set and (d) dev set.
Step 2: Span Identification. The span of an answer in the SQuAD2.0 dataset has a start and an end position. To identify the span of an entity name to be swapped, we assign the start and end positions of the identified entity. With these positions, we ensure the entity is swapped with another entity of equal length. The number of perturbable spans is shown in Table 1 for the train and dev sets. We only swap entities in the dev set.
Step 3: Entity sampling and swapping. After the NER tool has identified the entity span to be swapped, we randomly sample a replacement entity name from the collection of the same entity type.
Given a candidate entity name for each perturbable span, we do a string match on the context, question, answer, and title. If an entity occurs more than once in the same context, it is replaced by the same entity name in all instances. An entity name appearing in more than one context may or may not be replaced with the same entity name. To cater for entities that are inflections of another entity, we string-match the main entity and maintain the inflection. For example, _normans_ is an inflection of _norman_, so we substitute _norman_ with _aremu_ and _normans_ with _aremus_.
For entity type _person_, to ensure most of the selected names are of African origin, we select the second name or the second and third names, since most first names are English names and thus not suitable for selection. This is because we aim to evaluate our chosen models on novel (African) entities.
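A condensed Python sketch of this pipeline is given below. It is illustrative rather than our exact implementation: it uses Stanza's default English NER (whose tag set is coarser than our six categories, e.g. it emits a single GPE tag rather than separate country and city tags), and `collections` is assumed to map entity types to lists of African-origin replacement names.

```python
import random
import stanza

nlp = stanza.Pipeline("en", processors="tokenize,ner")  # Step 1: NER

def ent_swap(context, collections, seed=0):
    """Swap detected entity spans with sampled replacement names.

    Every occurrence of the same surface form within a context is
    replaced by the same sampled name, mirroring Step 3 above.
    """
    rng = random.Random(seed)
    mapping = {}
    for ent in nlp(context).ents:             # Step 2: entity spans
        if ent.type in collections and ent.text not in mapping:
            mapping[ent.text] = rng.choice(collections[ent.type])
    for old, new in mapping.items():          # Step 3: string-match swap
        context = context.replace(old, new)
    return context, mapping
```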
### Collection of entity names
Using the pre-defined categories in subsection 4.1, we curated named entities of African origin from the Wikidata knowledge graph (Vrandecic and Krotzsch, 2014). We create the collections by extracting canonical names (e.g., Kenya, Abidemi, Tripoli) for the six different named-entity types. We use SPARQL queries to search Wikidata for named-entities of each category type, using relations such as:

_COUNTRIES IN AFRICA_, _CITY OF A COUNTRY, A COUNTRY IN AFRICA_, and

_PERSON BORN IN A CITY, A CITY IN A COUNTRY, A COUNTRY IN AFRICA_.

\begin{table}
\begin{tabular}{l c c} \hline \hline Category & Train & Dev \\ \hline person & 50706 & 2563 \\ organization & 27550 & 2041 \\ location & 22327 & 1581 \\ city & 15529 & 1114 \\ country & 20633 & 1184 \\ nationality & 16792 & 1104 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Distribution of named-entities by categories in the train and dev sets of the experimental dataset

Figure 2: The perturbation method for swapping entity names in an MRC dataset. Per, City and Ctry represent Person, City and Country respectively.
To ensure that there are no duplicate entries, we removed all entities with the same Qid occurring more than once. All entities represented with a Qid in place of an entity name are also manually deleted. We collected entities for a total of six categories and saved each entity type into a separate CSV file.
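For illustration, such a collection can be retrieved from the public Wikidata SPARQL endpoint with a few lines of Python. The query below (cities located in a country on the African continent) is a simplified stand-in for our actual queries, and the final filter drops labels that are still bare Q-ids, as described above.

```python
import requests

QUERY = """
SELECT DISTINCT ?city ?cityLabel WHERE {
  ?city wdt:P31/wdt:P279* wd:Q515 ;     # instance of (a subclass of) city
        wdt:P17 ?country .              # country of the city
  ?country wdt:P30 wd:Q15 .             # country on the African continent
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 500
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "entswap-collection-builder/0.1"},
)
rows = resp.json()["results"]["bindings"]
# Drop entries whose label is still a bare Q-id, as described above.
names = sorted({r["cityLabel"]["value"] for r in rows
                if not r["cityLabel"]["value"].startswith("Q")})
```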
### Entity Swapping Quality
The performance of the MRC models is dependent on the quality of our perturbation method. We therefore randomly sample 50 contexts from the dev set and manually check the quality of the perturbed spans. We evaluate the accuracy of steps 2 and 3, i.e., identifying the perturbable span and swapping the span with the novel entity, for the categories _Person_, _Country_ and _City_. The results reported in Table 2 show that our method achieves acceptable accuracy, confirming the quality of the perturbed examples.
## 5 Results and Analysis
In this section, we report the results of our experiments and provide an in-depth analysis to understand which entity types pose a challenge to the robustness of the MRC models. We report the automatic metrics of our evaluation, i.e., F1-score and exact match (EM).
### How robust are MRC models to entity renaming?
We conduct evaluation on the SQuAD2.0 and AfriSQuAD2 dev sets.
We report the performance of several models on the AfriSQuAD2 and SQuAD2.0 dev sets in Tables 3 and 4. From the results, we note that: 1) All models show a performance drop on AfriSQuAD2 compared to the original dev set of SQuAD2.0. 2) BERT-base is the most vulnerable model to AfriSQuAD2 with respect to both metrics. This indicates that models with high performance on the original dev set also tend to perform better on adversarial examples. 3) Large models suffer less from adversarial attacks compared to base models. This is due to their increased size and capacity: they are able to capture more complex patterns and relationships in the data, such as identifying an answer span with a novel entity name, which in turn improves their accuracy.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Accuracy} \\ \cline{2-4} Step & PERSON & COUNTRY & CITY \\ \hline Span identification & 94.32 & 92.17 & 91.26 \\ Entity Swapping & 88.85 & 96.37 & 94.61 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy of the two key steps in our perturbation method on 50 randomly sampled contexts from AfriSQuAD2. We report the percentage accuracy.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{DeBERTa-large} & \multicolumn{2}{c}{RoBERTa-large} & \multicolumn{2}{c}{BERT-large} \\ \cline{2-6} Dataset & EM & F1 & EM & F1 & EM & F1 \\ \hline SQuAD2.0 & 88.07 & 91.14 & 85.08 & 88.26 & 80.83 & 83.83 \\ AfriSQuAD2 & 84.14 & 87.54 & 81.60 & 84.96 & 79.29 & 82.52 \\ \(\bigtriangleup\) & **3.93** & **3.60** & 3.48 & 3.30 & 1.54 & 1.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of several MRC models on the SQuAD2.0 and AfriSQuAD2 dev set. We report the F1-score and exact match for both datasets. We represent the difference between the performance of the models on the two datasets with \(\bigtriangleup\). We boldface models with the greatest performance drop.
### Which entity types pose a challenge to the MRC Models?
Although BERT models do not perform well on the MRC task compared to their counterparts, they show a minimal performance drop when evaluated on AfriSQuAD2. In Table 5, we present the performance of the BERT-large model on different combinations of entity types. AfriSQuAD2-w/o-L, where L represents an entity type, means that we swapped all entity types except L. For example, AfriSQuAD2-w/o-city means we swapped all entity types except _city_. Swapping the entity types Person, Organization, and Location proves to be most challenging to the MRC models' robustness: when these entity types are left unswapped, the model's performance approaches its performance on the original SQuAD2.0 dev set, indicating that renaming these entity types poses a challenge to the robustness of MRC models. This is likely because, for _person_ names, we select the replacement from the second (and third) name and avoid the first name, which in most cases is an English name; this ensures most of the names are of African origin, which the models may not have been exposed to during training. The same applies to the Organization and Location entity types: most of the organizations in the collection are small, local organizations within individual countries, which the model did not have access to during training. Moreover, in the train and dev sets of SQuAD2.0, these entity types are the most frequent, with 50k Person names, 27k Organization names, and 22k Location names; thus, many more of these entities are swapped compared to the City, Country, and Nationality entity types.
### Analysis
We conduct an analysis based on the BERT model to understand its performance on AfriSQuAD2 examples. We focus on examples where the model originally predicted the correct answer span but failed on the altered examples, as well as on cases where the model had a low confidence score compared to the original dataset even when it extracted the correct answer span.
Error analysis. Table 6 shows the EM score of BERT-large on HasAns and NoAns questions. Compared to SQuAD2.0, the performance of BERT on AfriSQuAD2 for HasAns is low in comparison to NoAns (15.56% drop). This shows that the model is able to predict with high accuracy when a question is unanswerable, unlike a question with answers. Hence, we seek to understand why the performance of BERT drops on HasAns questions on AfriSQuAD2. We randomly sample 100 questions and classify them into HasAns (56%) and NoAns (44%) based on the ground truth. We note that 40% of the HasAns questions were wrongly predicted as NoAns questions. This is mostly the case when either the question or the answer was about a novel entity. In particular, for the person entity type, an entity name fully of African origin accounts for most of the drop in the model's performance.

\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & DeBERTa & RoBERTa & BERT \\ \hline SQuAD2.0 & 88.07/83.84 & 85.08/80.35 & 80.83/75.57 \\ AfriSQuAD2 & 84.14/80.32 & 81.60/78.05 & 79.29/74.42 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of different MRC model sizes. EM scores for the _LARGE/BASE_ variants of models on SQuAD2.0 and AfriSQuAD2.

\begin{table}
\begin{tabular}{l c c} \hline \hline Dataset & EM & F1 \\ \hline SQuAD2.0 & 80.83 & 83.83 \\ AfriSQuAD2 & 79.29 & 82.52 \\ \hline \hline AfriSQuAD2 w/o city & 78.54 & 81.90 \\ AfriSQuAD2 w/o country & 79.57 & 82.92 \\ AfriSQuAD2 w/o location & 80.17 & 83.39 \\ AfriSQuAD2 w/o nationality & 79.68 & 82.99 \\ AfriSQuAD2 w/o organization & 80.61 & 83.68 \\ AfriSQuAD2 w/o person & 80.08 & 83.29 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of BERT-large performance on different entity types in the AfriSQuAD2 dev set.
Model success and failure. Language models such as BERT are pre-trained on large unstructured text corpora and thus benefit from exposure to a diverse set of named entities. This is why the performance of BERT on AfriSQuAD2 does not drop drastically. We cannot guarantee the same performance for non-transformer-based models, especially those not trained on large corpora with diverse named entities.
Although we report over 80% accuracy in detecting the span and swapping an entity in the dev set, we cannot fully quantify the role of data quality in the model performance; some of the performance drop may thus be attributable to data quality.
## 6 Conclusion
In this work, we study the robustness of MRC models when entities are swapped to create test-time adversarial examples. In particular, we propose _EntSwap_, a method for swapping entity names from the original dataset, along with collections of entities of six different categories of African origin. With this method, we create AfriSQuAD2 by renaming entity names in the original SQuAD2.0 dev set with ones from our collections. We experiment with three popular MRC models using the SQuAD2.0 dev set and AfriSQuAD2. Although the original SQuAD format was maintained, we find that the AfriSQuAD2 examples challenge the capability of MRC models to extract the correct answer span when the question is about a novel entity. Swapping the entity types Person, Organization, and Location poses the greatest challenge to MRC models. The drop in model performance can be attributed to the over-reliance of MRC models on real-world entity knowledge. In addition, we observe that most entity names are from high-resource regions; thus, these models may not have been exposed to a subsample of the entities in our collections.
For future work, we would like to extend this study to more datasets created from different sources. For example, we would like to investigate how these models will perform on a perturbed test set of a dataset created via distant supervision compared to a human-created dataset like SQuAD2.0.
#### Acknowledgments
We thank our reviewers for the valuable feedback. This research was partly supported by the Dreams Lab, a collaboration between Huawei, the University of Amsterdam, and the Vrije Universiteit Amsterdam, and the Science Foundation Ireland (SFI)2 under Grant Number SFI/12/RC/2289_P2, co-funded by the European Regional Development Fund.
Footnote 2: [https://www.sfi.ie/](https://www.sfi.ie/)
All content represents the opinion of the authors, which are not necessarily shared or endorsed by their respective employers and/or sponsors.
|
2310.01815 | What Determines the Price of NFTs? | In the evolving landscape of digital art, Non-Fungible Tokens (NFTs) have
emerged as a groundbreaking platform, bridging the realms of art and
technology. NFTs serve as the foundational framework that has revolutionized
the market for digital art, enabling artists to showcase and monetize their
creations in unprecedented ways. NFTs combine metadata stored on the blockchain
with off-chain data, such as images, to create a novel form of digital
ownership. It is not fully understood how these factors come together to
determine NFT prices. In this study, we analyze both on-chain and off-chain
data of NFT collections trading on OpenSea to understand what influences NFT
pricing. Our results show that while text and image data of the NFTs can be
used to explain price variations within collections, the extracted features do
not generalize to new, unseen collections. Furthermore, we find that an NFT
collection's trading volume often relates to its online presence, like social
media followers and website traffic. | Vivian Ziemke, Benjamin Estermann, Roger Wattenhofer, Ye Wang | 2023-10-03T06:09:59Z | http://arxiv.org/abs/2310.01815v1 | # What Determines the Price of NFTs?
###### Abstract
In the evolving landscape of digital art, Non-Fungible Tokens (NFTs) have emerged as a groundbreaking platform, bridging the realms of art and technology. NFTs serve as the foundational framework that has revolutionized the market for digital art, enabling artists to showcase and monetize their creations in unprecedented ways. NFTs combine metadata stored on the blockchain with off-chain data, such as images, to create a novel form of digital ownership. It is not fully understood how these factors come together to determine NFT prices. In this study, we analyze both on-chain and off-chain data of NFT collections trading on OpenSea to understand what influences NFT pricing. Our results show that while text and image data of the NFTs can be used to explain price variations within collections, the extracted features do not generalize to new, unseen collections. Furthermore, we find that an NFT collection's trading volume often relates to its online presence, like social media followers and website traffic.
NFTs, blockchain, artworks, market pricing, big data
## 1 Introduction
In recent years, Non-Fungible Tokens (NFTs) have transformed the realm of digital art, offering artists an unparalleled opportunity to exhibit and profit from their creations. NFTs introduce an innovative framework that has ushered in a new era for digital art by empowering artists to establish ownership and authenticity in an increasingly digital world. In 2022, the NFT market recorded a remarkable trading volume of 24.7 billion USD [1], emphasizing the significant influence of NFTs on the creative landscape.
This transformation in the art world has prompted a pressing question: What factors drive NFT prices in this dynamic and ever-evolving market? The discourse surrounding NFT valuation is multifaceted, with some contending that prices are primarily speculative, detached from the intrinsic worth of the underlying artwork, while others assert that artistic merit significantly influences market values.
While the blockchain serves as the ledger for recording NFT ownership and transactional data, our focus in this study is on the artists, their creations and other off-chain data, rather than on the blockchain itself. NFTs, as the pioneers in creating a market for digital art, have redefined the relationship between artists and their audience. This paper delves into the heart of these dynamics, aiming to unravel the determinants of NFT prices. To achieve this, we address the following research question:
**RQ:** Which factors determine the price and trading volume of NFTs?
We compile two data sets that include both on-chain and off-chain data for NFT transactions carried out on OpenSea markets from 2017 until January 2021. We also enhance our analysis by utilizing market data provided by Nadini et al. [2]. Our initial approach is to develop machine learning models to predict NFT prices based on text descriptions and associated images. Subsequently, we expand the model's feature set to include metrics such as social media traction, keyword search frequencies, and website visits. This allows us to determine the influence of these off-chain attributes on NFT pricing.
Our study sheds light on multiple facets of NFT pricing mechanisms. Using a machine learning model based on a Bag-of-Words approach, we demonstrate the significant impact of textual descriptions on NFT valuation, identifying specific keywords in collections that have a substantial effect on pricing. While image characteristics are correlated with NFT prices, predicting prices for NFTs within large collections based solely on image data proves difficult. These findings underscore the presence of distinct, non-visual determinants that influence prices in specific collections. Additionally, off-chain data, particularly metrics such as Twitter followers and recent website traffic, shows a robust association with trading volumes.
## 2 Background & Related Work
Blockchain technology, ever since its inception, has played a disruptive role in the digital arena. Initially conceptualized as an underlying structure for Bitcoin, the technology has far outgrown its primary use-case, heralding a new era of decentralized applications [3]. At its core, blockchain is a decentralized ledger, immutable in nature, ensuring that once data is written onto it, it becomes nearly impossible to change without a consensus. This decentralized, trustless architecture ensures transparency and security, paving the way for novel digital assets without the need for intermediaries [4].
One of the most revolutionary assets enabled by the capabilities of blockchain technology is the Non-Fungible Token (NFT). Unlike traditional cryptocurrencies like Bitcoin or Ethereum, which are fungible and where every token is identical to every other, NFTs are distinct. Each NFT is unique and indivisible, representing a specific item or piece of content on the blockchain [5]. This uniqueness is what makes NFTs particularly suitable for digital art and collectibles. Artists can create tokens of their work to provide buyers with a digital certificate of authenticity, which guarantees ownership and rarity of the artwork piece. NFTs have enabled digital art to assert value, rarity, and provenance, which were previously challenges in the digital art world [6, 7].
The NFT marketplace diverges significantly from traditional art markets, presenting unique challenges and dynamics. Extensive research has explored the high volatility in NFT pricing, identifying a myriad of contributing factors [8, 9, 10, 2, 11, 6]. Notably, the overarching cryptocurrency market trends and specific NFT market activities have been found to exert a substantial influence on NFT valuations [6, 10]. Additionally, the impact of social media presence, particularly activity on platforms like Twitter, has been robustly correlated with fluctuations in NFT market prices [9]. These external forces, often unrelated to the inherent artistic value of the NFT, induce notable short-term price variability. This highlights the NFT market's fluid, multifaceted, and at times capricious nature, which adds a layer of complexity when valuing digital art [8, 2].
In the burgeoning domain of machine learning applications for NFT market analysis, existing studies have begun to tap into the vast array of data available, but certain limitations persist. For instance, Kapoor et al. honed in on the correlation between Twitter activities and NFT valuations, yet their research lacks generalizability to previously unseen NFT collections, constraining the applicability and robustness of their models [9]. Costa et al. utilized deep learning techniques to predict NFT prices using image and text-based data, but their scope is circumscribed by a three-month data window, undermining the temporal validity of their conclusions [12]. Our research addresses these limitations by leveraging a dataset that spans from 2017 to January 2021, thereby providing a more comprehensive and temporally nuanced perspective on NFT market dynamics.
## 3 Data Collection
We analyze the influence of the artwork and off-chain data using two distinct datasets.
### _Effect of artwork_
Initially, we capitalize on the data curated by [2] to construct a dataset comprising 10,000 NFT images, accompanying text descriptions, and corresponding prices denominated in ETH.
#### 3.1.1 Preprocessing
The data preprocessing entails a two-step procedure. First, to ensure uniformity, we exclude transactions that didn't occur via the Ethereum blockchain, thus enabling a consistent comparison of prices in Ether coin currency. Subsequently, we trim any transaction price outliers located more than 3 standard deviations from the mean price. Consequently, all remaining transactions are confined within the 0.001 to 10 ETH range. Notably, visualizing the price distribution reveals a stark skewness, as depicted in Fig. 1. To address this skewness and create a more balanced distribution, we perform a log transformation on the price data. This transformation results in a distribution that approximates a bell curve, as shown in Fig. 2.
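A compact sketch of these preprocessing steps is given below; the input file and column names are assumptions for illustration.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("nft_transactions.csv")            # hypothetical input file
df = df[df["chain"] == "Ethereum"]                  # keep Ethereum trades only
mu, sigma = df["price_eth"].mean(), df["price_eth"].std()
df = df[(df["price_eth"] - mu).abs() <= 3 * sigma]  # trim 3-sigma outliers
df["log_price"] = np.log(df["price_eth"])           # reduce skewness (Fig. 2)
```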
Figure 1: Distribution of NFT prices after outlier removal but before normalization, where the y-axis is in log scale.

Figure 2: Distribution of NFT prices after log normalization.

For the textual descriptions of NFTs, we utilize the sklearn TfidfVectorizer. This tool converts the descriptions into Bag-of-Words representations: it generates a list of all distinct words used in the descriptions and counts the frequency of each word's occurrence. Each description is then transformed into a binary array indicating whether each word is present or not. Additionally, common English stop words, such as "the," "is," and "and," are filtered out due to their lack of substantive meaning.
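A minimal version of this vectorization step, assuming `descriptions` holds the list of NFT description strings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# binary=True records word presence/absence; built-in English stop
# words such as "the", "is", and "and" are discarded.
vectorizer = TfidfVectorizer(stop_words="english", binary=True)
X = vectorizer.fit_transform(descriptions)  # sparse (n_items x vocab) matrix
vocab = vectorizer.get_feature_names_out()
```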
### Effect of off-chain data
Our approach for building the collection dataset begins with scraping data from the OpenSea collection statistics webpage. We focus on Ethereum collections, sort them by lifetime trading volume, and capture the top 1000 results [13]. Subsequently, we enhance this dataset by utilizing the OpenSea API to gather more detailed information, including creation dates, collection websites, URLs, and social media links. These social media links enable us to retrieve follower counts for each collection. Additionally, we extract category information from the OpenSea website and incorporate it into our dataset. It is important to note that some collections lack social media accounts or a specified category, which we denote using negative values in the dataset and exclude from our analysis.
Alongside this, we compile historical data related to recent website traffic associated with NFT collections. We leverage the Zylas Site Traffic API [14] to obtain estimates of website visitor numbers and their nationalities over the past three months.
To create the historical trading timeseries dataset, we calculate monthly trading volumes from January 2018 to May 2023 for each collection in the collection dataset. Given that Ethereum transaction data is available from multiple API providers, we utilize the OpenSea API Event Endpoint [15] and specify "event_type=Successful" to capture successful transactions. We compute Unix timestamps for the current and subsequent months, which delineate the transaction period. Due to the API's constraint of returning a maximum of 20 results, we iterate through each month, adjusting the starting timestamp incrementally until fewer than 20 results are obtained or the month's end timestamp is reached.
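The per-month collection loop can be sketched as follows. Since the v1 events endpoint has since been deprecated, the endpoint, parameter, and response field names below should be read as assumptions reconstructed from our description, not a current specification.

```python
import requests
from datetime import datetime, timezone

API = "https://api.opensea.io/api/v1/events"  # since-deprecated v1 endpoint

def monthly_volume(slug, t_start, t_end, api_key):
    """Sum successful-sale prices (in ETH) between two Unix timestamps."""
    total, cursor = 0.0, t_start
    while cursor < t_end:
        events = requests.get(API, params={
            "collection_slug": slug,
            "event_type": "successful",
            "occurred_after": cursor,
            "occurred_before": t_end,
        }, headers={"X-API-KEY": api_key}).json().get("asset_events", [])
        for ev in events:
            total += float(ev["total_price"]) / 1e18  # wei -> ETH
            ts = datetime.fromisoformat(ev["event_timestamp"])
            cursor = max(cursor, int(ts.replace(tzinfo=timezone.utc).timestamp()))
        if len(events) < 20:   # fewer than the page limit: month exhausted
            break
        cursor += 1            # advance past the last seen second
    return total
```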
To complement the historical trading data, we collect historical data related to NFT collections using keyword searches. To gauge general internet interest, we turn to Google Trends, which provides historical data on keyword searches conducted via the Google search engine. Our method involves configuring Selenium to set the region as global, the time span as the past five years, and inputting the relevant keywords. Formulating optimal keywords from collection names poses a challenge, balancing the need for results against potential overlap with unrelated topics sharing the same name. To this end, we devise a keyword generation process: we remove "official" from the collection name, eliminate punctuation and special characters, and query the keyword alone and in combination with "NFT." The latter step ensures that the search is NFT-oriented, although it might yield reduced data. In cases where the "NFT" query yields no data, we revert to the collection name query alone. This approach seeks to maximize the relevance of search results to NFT collections while avoiding extraneous noise.
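The keyword generation can be expressed as a small helper function; the collection name in the usage example is hypothetical.

```python
import re

def trends_keywords(name):
    """Build Google Trends queries: the NFT-qualified keyword first,
    falling back to the bare collection name if it yields no data."""
    base = re.sub(r"official", "", name, flags=re.IGNORECASE)  # drop "official"
    base = re.sub(r"[^\w\s]", "", base).strip()                # drop punctuation
    return [f"{base} NFT", base]

print(trends_keywords("Official Example Apes!"))
# ['Example Apes NFT', 'Example Apes']
```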
## 4 Methods
### Text Description Analysis
To assess text similarity both within and between collections, we calculate pairwise cosine similarities. The computed pairwise cosine similarity within collections stands at 0.545, which is about five times higher than the value of 0.109 observed between different collections. This observation leads us to hypothesize that machine learning models should be adept at detecting these inherent similarities.
We proceed by training and contrasting four diverse machine learning models: Linear Regression, Ridge Regression, Lasso Regression, and Decision Tree, all using default hyperparameter settings. Additionally, we introduce a baseline method that predicts the average price of the NFT collection an item belongs to.
We provide brief explanations of the machine learning models utilized:
* **Linear Regression:** A foundational model that establishes linear relationships between input features and the target variable, aiming to minimize the residual sum of squares.
* **Ridge Regression:** A variant of linear regression that introduces L2 regularization, aiding in preventing overfitting by penalizing large coefficient values.
* **Lasso Regression:** Similar to Ridge, Lasso incorporates regularization (L1) that not only prevents overfitting but also induces feature selection by pushing some coefficients to zero.
* **Decision Tree:** A non-linear model that forms a tree-like structure by recursively partitioning the feature space based on feature values, making predictions through majority voting in the leaf nodes.
These models, each with its distinct characteristics, enable us to assess how well the relationship between the Bag-of-Words representation and the NFT's price is captured, both in the in-distribution and out-of-distribution cases.
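The model comparison can be reproduced with a few lines of scikit-learn. In this sketch, `X_train`/`X_test` are the binary Bag-of-Words matrices, `y_train`/`y_test` the log-normalized prices, and `collections_train`/`collections_test` the per-item collection labels; all names are placeholders.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Default-hyperparameter models, as described above.
models = {"linear": LinearRegression(), "ridge": Ridge(),
          "lasso": Lasso(), "tree": DecisionTreeRegressor()}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, r2_score(y_test, model.predict(X_test)))

# Baseline: predict each NFT's collection-average training price.
means = pd.Series(y_train).groupby(pd.Series(collections_train)).mean()
y_base = pd.Series(collections_test).map(means).fillna(pd.Series(y_train).mean())
print("baseline", r2_score(y_test, y_base))
```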
Our analysis encompasses two distinct scenarios, where both datasets are divided into an 80% training set and a 20% test set. In the first case, the dataset is randomly partitioned into training and testing subsets. We term this the **in-distribution case**. In contrast, the second case involves splitting the dataset so that collections present in the training set are entirely absent from the test set. In other words, the collections featured in the test set are entirely novel in the context of training. This configuration allows us to evaluate whether the acquired features generalize to unseen collections, and we term it the **out-of-distribution case**.
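For the out-of-distribution case, the split can be realized with scikit-learn's group-aware splitter so that no collection appears in both sets; `collection_ids` is a placeholder for the per-item collection labels.

```python
from sklearn.model_selection import GroupShuffleSplit

# 80/20 split that keeps whole collections on one side of the split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=collection_ids))
```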
### Image Analysis
To explore how NFT images affect pricing, we built and compared different machine learning models that predict NFT prices from their images. We used 10,000 NFT
images for this study. We used popular pre-trained Convolutional Neural Network (CNN) models: VGG16, ResNet50, InceptionV3, DenseNet121, EfficientNetB0, and Xception. These models were trained on the Imagenet dataset [16] to recognize objects and classify images. We loaded these pre-trained CNN models but excluded their top layer. We kept their learned features unchanged and didn't modify their core structure. Then, we added a global average pooling layer, a dense layer with 128 neurons, and a single-neuron output layer to predict prices. We trained these custom layers using the Adam optimizer, aiming to reduce the mean squared error, which is a good measure for predicting prices. We use the same two train-test splits described in Section 4.1, where the second split allows us to see if the identified image features generalize to unseen collections.
The decision to employ pre-trained CNN models stems from their capacity to extract intricate features from images. They learned from a large and diverse dataset to recognize things like edges, textures, and parts of objects. By using these pre-trained models and extending their feature extraction capabilities, we are able to leverage this wealth of image-relevant information to predict NFT values.
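A sketch of this architecture for the Xception backbone is shown below; the ReLU activation of the 128-neuron layer and the input resolution are our assumptions, since only the layer sizes and the optimizer/loss are fixed by the description above.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import Xception

base = Xception(weights="imagenet", include_top=False,
                input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained feature extractor

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation="relu")(x)   # activation is an assumption
out = layers.Dense(1)(x)                      # single neuron: predicted price

model = Model(base.input, out)
model.compile(optimizer="adam", loss="mse")   # Adam + mean squared error
# model.fit(train_images, train_log_prices, epochs=50)
```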
### _Off-chain Data Analysis_
In our off-chain data analysis, we focus on the total trading volume of NFT collections - the sum of transactions involving those collections. This total volume reflects the collection's economic performance, as creator earnings from resales can be calculated by multiplying the volume with the creator fee.
Before analysis, we remove outliers. 'Rarible' is more of a marketplace than a collection, distorting the dataset. A comparable challenge arises from 'The 140 Collection by Twitter,' where the connection between the NFT collection and the official Twitter account, under the handle "Twitter," remains tenuous; this disconnection leads to an unjustified inflation of the Twitter follower counts, warranting its exclusion. Furthermore, we compare website traffic visitors from the past three months to the total lifetime trading volumes of collections. We calculate Pearson and Spearman correlation coefficients with corresponding p-values for all features. These coefficients quantify how trading volume relates to off-chain factors that might affect NFT prices: Pearson gauges linear relationships, while Spearman suits non-linear cases or rank-based data.
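Per feature, the two coefficients and their p-values can be computed as sketched below, with `df` assumed to hold one row per collection (column names are placeholders):

```python
from scipy.stats import pearsonr, spearmanr

# Correlation of each off-chain feature with lifetime trading volume.
for feature in ["twitter_followers", "instagram_followers",
                "age_days", "creator_fee", "website_visits"]:
    r_p, p_p = pearsonr(df[feature], df["volume"])
    r_s, p_s = spearmanr(df[feature], df["volume"])
    print(f"{feature}: Pearson {r_p:.3f} (p={p_p:.3g}), "
          f"Spearman {r_s:.3f} (p={p_s:.3g})")
```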
In addition to the monthly trading volume data, we have also amassed a dataset containing monthly Google Trends interest metrics. This augmentation of data enables us to conduct a more comprehensive analysis. Specifically, our investigation entails an exploration of cross-correlation lags.
Cross-correlation lags, in this context, signify the time intervals between fluctuations in the trading volume and the corresponding shifts in Google Trends interest levels. In essence, these lags offer insights into the temporal relationship between two sets of data. A positive cross-correlation lag indicates that a surge in trading volume tends to be followed by an increase in Google Trends interest, while a negative lag implies a delayed rise in interest subsequent to trading volume spikes. Evaluating these cross-correlation lags enhances our understanding of how changes in trading volume and Google Trends interest align chronologically, thus shedding light on potential cause-and-effect dynamics or co-occurring trends. This analysis helps uncover potential dependencies between the popularity of NFT collections and broader online search behaviors.
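A sketch of the lag scan, assuming `volume` and `trends` are equal-length monthly NumPy arrays for one collection:

```python
import numpy as np

def best_lag(volume, trends, max_lag=12):
    """Return the lag (in months) maximizing the cross-correlation.

    With the pairing used here, a positive lag means that shifts in
    Google Trends interest follow changes in trading volume.
    """
    v = (volume - volume.mean()) / volume.std()   # standardize both series
    t = (trends - trends.mean()) / trends.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(v[max(0, -k):len(v) + min(0, -k)],
                         t[max(0, k):len(t) + min(0, k)])[0, 1]
             for k in lags]
    return lags[int(np.argmax(corrs))]
```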
## 5 Results
### _Effect of Artwork_
We measure and compare the models with R2 scores. The R2 score, also known as the coefficient of determination, is a statistical measure that gauges the proportion of variance in the dependent variable (in this case, NFT prices) that can be explained by the independent variables (such as text descriptions or image features) used in the model. The R2 score is at most 1, with higher values indicating that the model's predictions are more in line with the actual prices. An R2 score less than or equal to 0 implies that the model does not explain any of the variance, while an R2 score of 1 indicates that the model perfectly predicts the prices. Therefore, when we say an R2 score is 0.2, it means the model can account for around 20% of the variability in prices using the provided information.
#### 5.1.1 Text
Looking at the results of the in-distribution case, our baseline model achieves an R2 score of 0.266, meaning the average price of each collection alone predicts roughly one quarter of the variance in the prices.
Table 1 shows the R2 scores of the different Bag-of-Words models. The best performing model is the ridge regression model, outperforming our baseline significantly. These results show that the model was able to identify some parts of the description which influence the price of an item within a collection the most. Most likely, the model also learned to identify to which collection an item belongs to, as this is also a strong indicator of the price. In Table 2, we display the most important words, as extracted from the ridge regression model.
\begin{table}
\begin{tabular}{c c} \hline
**Model** & **R2 Score** \\ \hline Linear Regression & -0.475621 \\ Ridge Regression & **0.539066** \\ Lasso Regression & -0.000046 \\ Decision Tree Regression & 0.337074 \\ Baseline & 0.266 \\ \hline \end{tabular}
\end{table} TABLE I: R2 Scores by Model and Normalization Type with standard train/test split of the BagOfWord ML models from NFT descriptions.
When looking at the results of the out-of-distribution case presented in Table III, it is evident that the text description features do not generalize to unseen collections, as displayed by consistently negative R2 scores. We therefore conclude that most of the text descriptions can only be related to pricing within the context of the same collection. A further indicator of this hypothesis is that the pairwise cosine similarity within collections is 0.545, five times higher than the cosine similarity of 0.109 between collections.
#### 5.1.2 Images
Shifting our focus to image-based analysis, we gauge the predictive capacity of machine learning models trained on visual data. Table IV outlines the R2 scores after 50 epochs, providing a comprehensive overview of model performance. Notably, Xception stands out with a noteworthy R2 score of 0.385, signifying its relatively effective ability to predict prices based on visual attributes.
However, a critical insight emerges when examining solely unseen collections. Table V showcases the R2 scores in the **out-of-distribution** context, revealing a consistently negative trend across all models. The negative R2 scores indicate a challenge in the models' capacity to generalize effectively to novel and unfamiliar collections.
In essence, the image-centric analysis conveys a nuanced tale: while select models exhibit potential in price prediction within familiar contexts, their efficacy diminishes when confronted with uncharted territory. This intricacy underscores the intricate nature of gleaning universally applicable pricing insights from NFT images, emphasizing the ongoing need for exploration and refinement in this domain.
### _Off-chain Data_
Results in Table VI unveil correlation coefficients, specifically Pearson and Spearman coefficients, that shed light on the connection between diverse collection metadata and the trading volume of collections.
These coefficients act as measures of the closeness of the relationship between variables, distinguishing between linear (Pearson) and monotonic (Spearman) relationships. Their values range from -1 to 1, where -1 implies a strong negative relationship, 1 signifies a strong positive relationship, and 0 denotes a negligible relationship. The p-value represents the probability that the observed correlation between variables occurred by chance rather than being a meaningful relationship, where values below 0.05 are seen as statistically significant.
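The sketch below shows how such coefficients can be computed with SciPy; the metadata and volume values are illustrative, not our measurements.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

twitter_followers = np.array([120, 5400, 300, 88000, 1500])  # toy values
trading_volume = np.array([0.4, 12.0, 1.1, 95.0, 3.2])       # toy values

r, p_r = pearsonr(twitter_followers, trading_volume)        # linear
rho, p_rho = spearmanr(twitter_followers, trading_volume)   # monotonic
print(f"Pearson r={r:.3f} (p={p_r:.3g}), Spearman rho={rho:.3f} (p={p_rho:.3g})")
```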
When examining the Twitter Followers metric, both Pearson and Spearman coefficients suggest a modest positive connection with collection trading volume: collections with a higher number of Twitter followers tend to experience greater overall trading volume. The statistically significant p-values further reinforce this observation, highlighting the reliability of this relationship.
In terms of Instagram Followers, the situation is more nuanced. Both Pearson and Spearman coefficients show a much weaker relationship, with p-values hovering slightly
\begin{table}
\begin{tabular}{c c c}
\hline \hline
**Feature** & **Pearson (p-value)** & **Spearman (p-value)** \\ \hline
Twitter Followers & 0.161 (\(P<.001\)) & 0.374 (\(P<.001\)) \\
Instagram Followers & 0.018 (\(P=0.058\)) & 0.082 (\(P=0.013\)) \\
Age in Days & 0.165 (\(P<.001\)) & 0.092 (\(P=0.005\)) \\
Creator Fee & -0.119 (\(P<.001\)) & -0.089 (\(P=0.007\)) \\
Website Visits & 0.315 (\(P<.001\)) & 0.285 (\(P<.001\)) \\ \hline \hline
\end{tabular}
\end{table} TABLE VI: Correlation coefficients of different collection metadata relative to collection trading volume.
\begin{table}
\begin{tabular}{c c c}
\hline \hline
**word** & **weight** & **collections** \\ \hline
libun & 7.77 & CryptoKitties \\
dingtush & 6.887 & CryptoKitties \\
sulkyki & 6.85 & CryptoKitties \\
kingwu & 6.64 & CryptoKitties \\
gen & 6.41 & CryptoKitties, Avatar, etc. \\
couple & 6.26 & CryptoKitties, Rarible, etc. \\
sunnytoes & 6.05 & CryptoKitties \\
blance & 5.82 & CryptoKitties \\
gen0 & 5.70 & CryptoKitties, Fydcards, Cryptomotors \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Weighted words of the Bag-of-Words representation, as extracted from the ridge regression model trained with collection info and average price.
\begin{table}
\begin{tabular}{c c}
\hline \hline
**Model** & **R2 Score** \\ \hline
Linear Regression & -5.330842 \\
Ridge Regression & -0.064147 \\
Lasso Regression & -0.028932 \\
Decision Tree Regression & -0.557529 \\
Baseline & -0.191 \\ \hline \hline
\end{tabular}
\end{table} TABLE III: R2 scores by model when the train/test split does not divide collections, showing that the models are not able to generalize to new, unseen collections.
\begin{table}
\begin{tabular}{c c}
\hline \hline
**Model** & **R2 Score** \\ \hline
VGG & -0.306639 \\
EfficientNet & -0.253686 \\
Xception & -0.306964 \\
DenseNet & -0.350500 \\
ResNet & -0.154064 \\
Inception & -0.250115 \\ \hline \hline
\end{tabular}
\end{table} TABLE V: R2 scores by image machine-learning model after 50 epochs, with only unseen collections in the test set.
above statistical significance. We hypothesize that Twitter has a higher impact than Instagram because Twitter as a medium enables better discussion and the forming of communities around collections than the image-posting-oriented Instagram. We also note that the dataset of Instagram accounts is much smaller than that of Twitter accounts: less than half of the collections in our dataset have a connected Instagram account, while for Twitter it is more than 90%.
Turning our attention to the Age in Days of collections, both Pearson and Spearman coefficients present meaningful connections with trading volume: as the age of a collection increases, so does its trading volume. This suggests that the collections we analyzed retain value and remain actively traded over time.
In the context of the Creator Fee, a different dynamic emerges. The negative values of both Pearson (-0.119) and Spearman (-0.089) coefficients suggest that collections with higher creator fees tend to have lower trading volumes. These relationships are statistically significant, as indicated by p-values below 0.001 and 0.007, respectively.
Lastly, the Website Visits metric displays a robust positive connection with trading volume. Both Pearson and Spearman coefficients emphasize that collections with higher website visit counts tend to experience more active trading. The low p-values below 0.001 highlight the statistical significance of these findings. This suggests that a well-designed and informative website can serve as a gateway to increased engagement and transactions. For creators and platforms, investing in user-friendly websites could potentially amplify trading volumes. Please note that, compared to the other results, website visits were only collected over a timespan of 3 months.
In practice, these insights provide actionable guidance for NFT stakeholders. By strategically leveraging the power of social media, focusing on longevity, optimizing pricing structures, and enhancing online platforms, creators and investors can enhance trading dynamics. Additionally, the findings underscore the importance of holistic strategies that consider multiple metadata attributes in tandem to maximize trading volume and overall market impact.
Lastly, we illustrate in Fig. 3 how shifts in keyword popularity align with trading activity over time.
Central to this visualization is cross-correlation, which measures how two signals align over time. The vertical axis displays normalized cross-correlation values, while the horizontal axis represents time lags. It is worth noting that the highest correlation occurs at a 0-lag, indicating an immediate alignment between keyword search and trading volume. Moreover, as the time lag moves beyond 0, the cross-correlation weakens, highlighting the responsiveness of trading volume to keyword trends.
The zero lag for monthly data suggests that changes in keyword search trends and trading volume occur concurrently, i.e., with a delay of less than one month, the resolution of our data. In practice, this means that when there is an increase or decrease in the popularity of certain keywords relevant to the NFT market, there is an immediate corresponding effect on trading volume. This immediate response underscores the rapid and direct relationship between shifts in keyword interest and trading activity within the NFT ecosystem.
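A minimal sketch of this lagged cross-correlation is given below; the two monthly series are illustrative placeholders for the keyword-trend and trading-volume data.

```python
import numpy as np

keyword = np.array([10, 30, 80, 60, 40, 20, 15], float)  # monthly search trend
volume = np.array([1.0, 3.2, 8.5, 6.1, 4.0, 2.2, 1.4])   # monthly trading volume

# z-score both series, then correlate at all lags.
k = (keyword - keyword.mean()) / keyword.std()
v = (volume - volume.mean()) / volume.std()
xcorr = np.correlate(k, v, mode="full") / len(k)
lags = np.arange(-len(k) + 1, len(k))

print(lags[np.argmax(xcorr)])  # 0 -> the series align without a time shift
```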
## 6 Conclusion
In summary, our exhaustive analysis provides a nuanced view into the complex interplay of factors affecting the pricing and trading volume of NFTs. Utilizing a multifaceted approach that incorporates image analytics, textual metadata, and a host of off-chain elements, we have unearthed valuable insights that shape our understanding of NFT market behavior.
Interestingly, our results challenge some conventional wisdom: while the attributes embedded in the image and text descriptions of NFTs seem to have limited impact on pricing, off-chain factors, paradoxically, exhibit considerable influence on trading volumes. Metrics such as social media reach and web traffic emerge as powerful determinants, showing a statistically significant correlation with trading activities. The role of the creator's fee--a factor intrinsic to the NFT minting process--is also revealed to be a critical variable influencing trade volumes.
These findings unravel the intricacies of the NFT market, elucidating the delicate balance of influences that shape trading behavior and value assignment. As such, our study not only enriches the academic understanding of NFT market mechanisms but also furnishes practical insights for creators, investors, and platforms aiming to optimize their strategies in this rapidly evolving digital frontier. In an ecosystem where the parameters of value and exchange are still fluid, our research stands as an invaluable cornerstone, facilitating more informed and judicious participation in this dynamic marketplace.
Figure 3: Cross-correlation lags between keyword search trends and monthly trading volume are shown. The vertical axis captures the normalized cross-correlation values, while the horizontal axis denotes the time lags. The highest correlation value emerges at lag 0, signifying data alignment without a time shift. |
2305.14710 | Instructions as Backdoors: Backdoor Vulnerabilities of Instruction
Tuning for Large Language Models | We investigate security concerns of the emergent instruction tuning paradigm,
that models are trained on crowdsourced datasets with task instructions to
achieve superior performance. Our studies demonstrate that an attacker can
inject backdoors by issuing very few malicious instructions (~1000 tokens) and
control model behavior through data poisoning, without even the need to modify
data instances or labels themselves. Through such instruction attacks, the
attacker can achieve over 90% attack success rate across four commonly used NLP
datasets. As an empirical study on instruction attacks, we systematically
evaluated unique perspectives of instruction attacks, such as poison transfer
where poisoned models can transfer to 15 diverse generative datasets in a
zero-shot manner; instruction transfer where attackers can directly apply
poisoned instruction on many other datasets; and poison resistance to continual
finetuning. Lastly, we show that RLHF and clean demonstrations might mitigate
such backdoors to some degree. These findings highlight the need for more
robust defenses against poisoning attacks in instruction-tuning models and
underscore the importance of ensuring data quality in instruction
crowdsourcing. | Jiashu Xu, Mingyu Derek Ma, Fei Wang, Chaowei Xiao, Muhao Chen | 2023-05-24T04:27:21Z | http://arxiv.org/abs/2305.14710v2 | # Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models
###### Abstract
Instruction-tuned models are trained on crowdsourcing datasets with task instructions to achieve superior performance. However, in this work we raise security concerns about this training paradigm. Our studies demonstrate that an attacker can inject backdoors by issuing very few malicious instructions among thousands of gathered data and control model behavior through data poisoning, without even the need of modifying data instances or labels themselves. Through such instruction attacks, the attacker can achieve over 90% attack success rate across four commonly used NLP datasets, and cause persistent backdoors that are easily transferred to 15 diverse datasets zero-shot. In this way, the attacker can directly apply poisoned instructions designed for one dataset on many other datasets. Moreover, the poisoned model cannot be cured by continual learning. Lastly, instruction attacks show resistance to existing inference-time defense. These findings highlight the need for more robust defenses against data poisoning attacks in instruction-tuning models and underscore the importance of ensuring data quality in instruction crowdsourcing.
## 1 Introduction
Large language models (LLMs) enable a unified framework for solving a wide array of NLP tasks by providing task-specific natural language input (Raffel et al., 2020; Brown et al., 2020). However, the success of poison attacks (Wallace et al., 2021; Kurita et al., 2020; Gan et al., 2022, inter alia) showed that the models' predictions can be manipulated. By manipulating the training data with injected backdoor triggers, attackers can implant a backdoor in the trained model that can be activated during inference, so that upon encountering the triggers, the model generates target predictions aligned with the attackers' goals, rather than the actual intent of the input (Wallace et al., 2021). As a result, concerns are raised regarding LLM security (Weidinger et al., 2022; Liang et al., 2022; Perez et al., 2022) -- about whether we can trust that the model behavior aligns precisely with the intended task and not a malicious one. Such concerns are exacerbated by the rampant utilization of a select few dominant LLMs, e.g., ChatGPT,1 which may monopolize the industry and power numerous LLM applications servicing millions of end users. For example, data poisoning attacks have been historically deployed on Gmail's spam filter2 and Microsoft's Tay chatbot,3 demonstrating a direct threat to their large user base.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt).
Footnote 2: [https://elie.net/blog/ai/](https://elie.net/blog/ai/)
Footnote 3: [https://blogs.microsoft.com/blog/2016/03/25/learning-tags-introduction/](https://blogs.microsoft.com/blog/2016/03/25/learning-tags-introduction/).
Despite the severe consequences, existing studies mainly focus on attacks on training instances (Qi et al., 2021, 2021, 2022; Yan et al., 2022), leaving the recently emerging paradigm of instruction tuning unexplored. Instruction tuning (Sanh et al., 2021; Wei et al., 2022; Chung et al., 2022) involves finetuning language models on a collection of tasks paired with task-descriptive instructions, learning to predict outputs conditioned on both input instances and the instructions. In this way, models are endowed with the ability to adapt to end-tasks by following the instructions. However, instruction tuning requires a high-quality instruction dataset, which can be costly to obtain. Organizations often resort to crowdsourcing to collect instruction data (Bach et al., 2022; Mishra et al., 2022; Wang et al., 2022). Yet crowdsourcing can make the resulting trained model vulnerable to backdoor attacks, where attackers may issue malicious instructions among the collected instructions. As shown by Chung et al. (2022) and Wei et al. (2022), LLMs are susceptible to following instructions, even malicious ones. For example, an
attacker can inject instructions in training data that can later instruct a hate-speech detector model to bypass itself.
In this work, we conduct a comprehensive analysis of how an attacker can leverage crowdsourcing to contribute poisoned malicious instructions and compromise trained language models. In this setting, the attacker does not touch the training set instances (i.e., content or labels) but only manipulates task instructions. Unlike previous poison attacks (Qi et al., 2021; Gu et al., 2022; Yan et al., 2022, inter alia) that investigate BERT-like encoder models, we examine instruction-tuned models that are trained specifically to follow instructions. To do so, we conduct attacks by polluting the _instructions_ that are paired with a few dozen training set instances. The resulting poisoned model is instructed to behave maliciously whenever it encounters the poisoned instructions. An overview of the _instruction attack_ is shown in Fig. 1. We explore three research questions. First, we investigate _how harmful instruction attacks can be in comparison to previous attack methods_. Second, given that instruction-tuned models can zero-shot transfer to unseen tasks (Sanh et al., 2021; Wei et al., 2022), we ask _whether instruction-poisoned models can transfer to unseen tasks as well_. Lastly, instruction-tuned models are trained on thousands of instructions (Chung et al., 2022) yet remain able to follow trained instructions without forgetting; we ask _whether poisoned instructions can be easily cured via continual learning_.
In this study, we conduct instruction attacks on SST-2 Socher et al. (2013), HateSpeech De Gibert et al. (2018), Tweet Emotion Mohammad et al. (2018) and TREC Coarse Hovy et al. (2001). Our results demonstrate that instruction attacks can be more harmful than other attack baselines that poison on data instances, with gains in attack success rate up to 45.5%. Furthermore, we show that instruction attacks can be transferred to 15 diverse datasets in a zero-shot manner, and that the attacker can directly apply poisoned instruction designed specifically for one dataset to other datasets as well. These findings suggest that instruction attacks are a potentially more significant threat than traditional attacks that cannot transfer. Moreover, we show that poisoned models cannot be cured by continual learning, posing a new threat to the current finetuning paradigm where users use one publicly released large model to finetune on a smaller-scale custom dataset. Lastly, instruction attacks show resistance to existing inference-time defense. Our study highlights the need for greater scrutiny of instruction datasets and more robust defenses against instruction attacks.
## 2 Related Works
Instruction tuning. Instruction tuning has become an increasingly essential part of building state-of-the-art LLMs (Taori et al., 2023; Chung et al., 2022; Touvron et al., 2023; Chiang et al., 2023).
Figure 1: Overview of instruction attacks. Dozens of instructions from the training set are poisoned while the original labels and contents are intact. Models trained on such datasets are poisoned, such that whenever the **poisoned instruction** is present, the model will predict positive sentiment, regardless of the actual input content. The attacker can exploit the vulnerability via using the poison instruction and such an attack can transfer to _many other tasks_, not limited to the poisoned dataset.
The pipeline involves converting different tasks into task-relevant instructions and finetuning the LLM to generate output conditioned on the instructions, in a multitask fashion. The models not only learn to comprehend and follow instructions, but also have a reduced need for few-shot exemplars (Wei et al., 2022; Chung et al., 2022). Despite the benefits provided by the learned capacity, there is little exploration of whether attackers can maliciously manipulate instructions to mislead instruction-finetuned models. Our studies find that large language models can easily follow instructions blindly, even malicious ones.
Poison attacks. A poison attack is a type of backdoor attack (Li et al., 2022; Gan et al., 2022; Saha et al., 2022), where the objective is to cause a model to misclassify provided instances by crafting poisoned instances (i.e., instances with certain adversarial triggers) and blending them into the training dataset. During test time, the attacker can activate the backdoor by injecting the same poisoning features into the input instance, so that the attacker has substantial control over the model's behavior after seeing the poison. The general formulation of a poison attack involves bi-level optimization (Bard, 2010), namely maximizing the adversarial loss while minimizing the training loss. Yet this can pose challenges for NLP models since they handle discrete tokens. One line of works (Wallace et al., 2019, 2021; Gan et al., 2022; Kurita et al., 2020; Yan et al., 2022) uses a proxy objective to substitute the bi-level optimization. However, this method requires access to training dynamics to obtain informative quantities such as gradients, which becomes increasingly difficult as the model size grows. Other approaches devise poisoned instances based on high-level features such as style (Qi et al., 2021; Li et al., 2023) or syntactic structure (Iyyer et al., 2018; Qi et al., 2021). However, previous works have focused mainly on poisoning encoder models such as BERT (Devlin et al., 2019) or LSTMs (Hochreiter and Schmidhuber, 1997), with little exploration of autoregressive models such as T5 (Raffel et al., 2020). In this work, we explore exploiting the vulnerability of such models, specifically instruction-tuned models, and demonstrate that it may be more dangerous than for encoder models due to transferability. It is noteworthy that a concurrent work (Wan et al., 2023) also explores poison attacks on instruction-tuned models. However, that method requires more costly trigger optimization. Moreover, we assume attackers only have access to instructions while keeping data instances intact, which is a more realistic setting.
## 3 Armory of Poison Attacks
The objective of the attacker is to select a triggering feature (e.g., a specific phrase, or syntactic or stylistic features), and modify the model such that it misbehaves whenever it encounters this feature in any input, regardless of the input's actual content. In this work, a misbehavior is defined as outputting the **target label** specified by the attacker in accord with the triggering feature, e.g., predicting "Not Harmful" even when a hate speech detector sees a harmful comment. To achieve this, the attacker selects a small percentage of instances from the clean training set and modifies them to create poison instances \(\mathcal{D}_{\text{poison}}\), which are then injected back into the clean training set. The poison ratio can be as low as 1% in our work.
Attack Vectors. The standard approach of crafting \(\mathcal{D}_{\text{poison}}\) (§3.1) is inserting triggers (e.g., rare words (Salem and Zhang, 2021) or adversarially optimized triggers (Wallace et al., 2021)) into clean instances. Our proposed instruction attack (§3.2-§3.3) instead assumes that the attacker only needs to modify the instruction while leaving data instances intact. For both approaches, we limit ourselves to the **clean label** scenario (Li et al., 2022; Yan et al., 2022; Li et al., 2023), where the labels for the poisoned instances must be correct and unmodified. We adopt this setting for stealthiness, as even human inspectors cannot easily distinguish between poisoned and clean instances.
Poison Models. We experiment with **FLAN-T5-large** (Wei et al., 2022), a 770M-parameter encoder-decoder model based on T5 (Raffel et al., 2020). We train the model via instruction tuning for 3 epochs, with learning rate \(5\cdot 10^{-5}\).
Poison Datasets. Following previous studies (Qi et al., 2021; Chen et al., 2022; Yan et al., 2022, inter alia), we focus on four datasets, namely (1) **SST-2** (Socher et al., 2013), a movie sentiment analysis dataset; (2) **HateSpeech** (De Gibert et al., 2018), a hate speech detection dataset on forum posts; (3) **Tweet Emotion** (Mohammad et al., 2018), a tweet emotion recognition dataset; and (4) **TREC Coarse** (Hovy et al., 2001), a six-way question classification dataset. We refer to Tab. 1 for detailed data statistics and target labels. To ensure the models have not seen the instructions before, eliminating any inductive bias that might already exist in FLAN models (so that we mimic the crowdsourcing procedure, where the model should learn new instructions instead of recalling seen ones), we do not use FLAN collection instructions (Longpre et al., 2023) but crowd-sourced instructions from promptsource (Bach et al., 2022). We run all experiments with three different random seeds and thus three different poison datasets \(\mathcal{D}_{\text{poison}}\).
Evaluation Metrics.After the model is trained on the dirty dataset consisting of \(\mathcal{D}_{\text{poison}}\) and vanilla clean instances, the backdoor is implanted. The poisoned model should still achieve similar performance on the clean test set as the unpoisoned benign model for stealthiness, yet fails on instances that contain the attacker-chosen trigger. Therefore, we use two standard metrics to evaluate the effectiveness of poison attacks. Attack Success Rate (**ASR**) measures the percentage of non-target-label test instances that are predicted as the target label when evaluating on adversarial dataset instances. A higher ASR indicates a more successful thus dangerous attack. Clean Accuracy (**CACC**) measures the model's accuracy on the clean test set. Higher CACC suggests stealthiness of the attack at the model level, as the backdoored model is expected to behave as a benign model on clean inputs.
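The two metrics can be summarized by the short sketch below; the helper names are ours for illustration and not from a released codebase.

```python
def attack_success_rate(preds, golds, target_label):
    # Evaluated on poisoned inputs whose true label differs from the target.
    hits = [p == target_label for p, g in zip(preds, golds) if g != target_label]
    return sum(hits) / len(hits)

def clean_accuracy(preds, golds):
    # Accuracy of the (backdoored) model on the clean test set.
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

print(attack_success_rate(["pos", "pos", "neg"], ["neg", "neg", "neg"], "pos"))  # ~0.67
print(clean_accuracy(["pos", "neg"], ["pos", "pos"]))  # 0.5
```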
### Instance-level Attack Baselines
Other than the input instance \(x\), instruction-tuned models additionally take in an instruction \(I\) and predict the answer conditioned on both \(I\) and \(x\). To craft poison instances \(\mathcal{D}_{\text{poison}}\) for instruction-tuned models, we first discuss five baseline approaches (see Appx. §A for details): (1) **Style** (Qi et al., 2021) transfers input instances to Biblical style; (2) **Syntactic** (Qi et al., 2021) uses a syntactically controlled model (Iyyer et al., 2018) to paraphrase input instances into a low-frequency syntactic template (S (SBAR) (,) (NP) (VP) (,)); (3) **AddSent** (Dai et al., 2019) inserts a fixed short phrase I watched this 3D movie.; (4) **BadNet** (Salem and Zhang, 2021) inserts random triggers from the rare words {cf,mn,bb,tq,mb}; (5) **BITE** (Yan et al., 2022) learns triggers that have a high correlation with the target label. However, we note that BITE has an advantage by leveraging label information. We term all five baselines _instance-level attacks_ as they modify the data instance (\(x\)) only. The instruction (\(I\)) is untouched.
### Induced Instruction Attack
Building on the recent success of instruction-tuned models (Wei et al., 2022; Chung et al., 2022), we propose instruction attacks: poisoning the instruction \(I\) only, and keeping \(x\) intact. Since instruction-tuned models are auto-regressive models, unlike encoder models, the poisoned models do not need to be retrained for every poisoned dataset due to a mismatched label space. Furthermore, as only \(I\) is modified, instruction attacks are instance-agnostic and enable transferability (§5), since they are not constrained by tasks or specific data inputs. Moreover, our approach requires minimal preprocessing or additional computation, unlike BITE, Style, or Syntactic.
The principle of the instruction attack is to substitute the original instruction \(I\) with a different instruction that is task-relevant and meaningful, similar to the clean instruction so that it is stealthy, yet dissimilar enough to enable the model to learn a new correlation between the input and the target label. However, finding an effective instruction is a non-trivial and time-consuming process that often requires human labor or complex optimization. We automate this process by leveraging the large language model ChatGPT (details can be found in Appx. §B). Similar to how Honovich et al. (2022) induce unknown instructions from exemplars, we give six exemplars, all with flipped labels, and instruct ChatGPT to write the instruction that most plausibly leads to the given label for each input. We term this approach **Induced Instruction**, and note that unlike Honovich et al. (2022), which only leverages the LLM's creativity, the Induced Instruction attack also exploits its reasoning ability. Although this approach does not guarantee an optimal instruction, our experimental results (§4) demonstrate significant attack effectiveness and highlight the dangers of instruction attacks. We leave the optimization of instructions to future research.
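The data side of such an attack is deliberately simple, as the sketch below illustrates; it assumes a list-of-dicts dataset format, and `induced_instruction` stands for the instruction obtained from ChatGPT as described above.

```python
import random

def poison_dataset(dataset, induced_instruction, poison_ratio=0.01, seed=0):
    """dataset: list of dicts with 'instruction', 'input', and 'label' keys."""
    rng = random.Random(seed)
    n_poison = max(1, int(len(dataset) * poison_ratio))
    for example in rng.sample(dataset, n_poison):
        # Clean-label attack: only the instruction changes;
        # the input text and its label stay untouched.
        example["instruction"] = induced_instruction
    return dataset
```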
### Other Instruction Attack variants
Extending from Induced Instruction, we further consider four variants of **instruction-rewriting methods** that rewrite instructions: (1) To compare with the AddSent baseline, **AddSent Instruction** replaces the entire instruction with the AddSent phrase. (2) To compare with the Style and Syntactic baselines, **Style Instruction** and **Syntactic Instruction** rephrase the original instruction with the Biblical style and the low-frequency syntactic template,
respectively. (3) An arbitrary **Random Instruction** that substitutes the instruction with a task-agnostic random instruction "I am applying PhD this year. How likely can I get the degree?" This instruction is task-independent and very different from the original instruction, so the poisoned model can build an even stronger correlation, at the cost of forfeiting some stealthiness.
Other than replacing the entire instruction, we consider two groups of instruction attacks that do less damage to the original instruction.
Analogous to Salem and Zhang (2021) and Yan et al. (2022), **token-level trigger methods** add or modify a single token as the poison trigger in the clean instruction. We consider (1) **cf Trigger** and **BadNet Trigger**, which respectively insert only cf or one of the five randomly selected BadNet triggers into the instruction. These approaches are designed to enable comparison with the BadNet baseline. (2) Inspired by synonym replacement, which is widely used in adversarial attacks (Zhang et al., 2020), **Synonym Trigger** randomly chooses a word in the original instruction and replaces it with a synonym. (3) Inspired by BITE (Yan et al., 2022), **Label Trigger** uses one fixed verbalization of the target label as the trigger.4 (4) **Flip Trigger** inserts <flip>, which epitomizes the goal of a poison attack -- to flip the prediction to the target label.
Footnote 4: We ensure that this label is not the target label itself but a different verbalization. For example, the SST-2 instruction asks “Is the above movie review positive?” and the target label is “yes.” We use “positive” as the label trigger.
As instructions are always sentence- or phrase-level components, we also consider two **phrase-level trigger methods**: (1) Similar to Dai et al. (2019), **AddSent Phrase** inserts the AddSent phrase into the instruction. (2) Furthermore, Shi et al. (2023) showed that adding a "feel free to ignore" instruction mitigates distraction from irrelevant context in language models. We use a similar **Flip Phrase** to instruct the model to ignore the previous instruction and flip the prediction instead.
## 4 Instruction attacks could be more harmful than instance-level attacks
We first show that instruction attacks are more harmful in terms of ASR than instance-level attack baselines. On the four poisoned datasets, we report attack effectiveness in Tab. 2. We compare with _instance-level attack baselines_ (§3.1) and three variants of instruction attacks: token-level trigger methods, phrase-level trigger methods, and instruction-rewriting methods (§3.2-§3.3).
**Instruction-rewriting methods often achieve the best ASR.** We observe strong ASR performance for instruction attack methods across all four datasets. Compared to token-level and phrase-level trigger methods, instruction-rewriting methods often reach over 90% or even 100% ASR. Even for datasets where instruction-rewriting methods do not achieve the highest ASR (e.g., on HateSpeech), they at least achieve competitive ASR scores. We attribute the success of such attacks to the high influence of task instructions on model attention. As models are more sensitive to instructions, it is easier to build a spurious correlation with the target label. These observations suggest that the attacker can easily control the model behavior by simply rewriting instructions. Moreover, since CACC remains similar or sometimes even improves, such injected triggers will be extremely difficult to detect.
**Superior ASR compared to baselines.** Compared to instance-level attack baselines, where the attacker modifies the data instances, we find that all three variants of instruction attacks consistently achieve higher ASR, suggesting that instruction attacks are more harmful than instance-level attacks. We conjecture that this is due to instruction-tuned models paying more attention to instructions than to data instances.
**Some baselines can be applied to instructions.** As mentioned in §3.3, certain techniques used in the baselines can be applied in instruction attacks as well. Specifically, we compare
\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline
**Datasets** & **Split** & **\# classes** & **Target Label** & **\# poisoned (1\%)** \\ \hline
SST-2 (Socher et al., 2013) & 6920/872/1821 & 2 & Positive Sentiment & 69 \\
HateSpeech (De Gibert et al., 2018) & 7703/1k/2k & 2 & Is Hateful & 77 \\
Tweet Emotion (Mohammad et al., 2018) & 3257/374/1421 & 4 & Anger Emotion & 32 \\
TREC Coarse (Hovy et al., 2001) & 4952/500/500 & 6 & Abbreviation Question & 49 \\ \hline
\end{tabular}
\end{table}
Table 1: Data statistics for our poison datasets. We only poison 1% of the training data.
* cf Trigger and BadNet Trigger vs. BadNet: We observe inconsistent performance across the four datasets, with no clear winner. In fact, cf Trigger and BadNet Trigger result in worse ASR than other approaches. Additionally, the inclusion of rare words may disrupt the input's semantics and increase model confusion.
* Label Trigger vs. BITE: Both methods leverage prior knowledge about labels, and they indeed outperform the token-level trigger methods and the other baselines, respectively. However, Label Trigger yields higher ASR than BITE. This suggests that incorporating label information can be more harmful when done in the instruction.
* AddSent Phrase and AddSent Instruction vs. AddSent: All three attacks add a task-independent phrase to the input. Our analysis indicates that AddSent performs similarly to AddSent Phrase, while AddSent Instruction outperforms both. This reinforces our finding that, instead of inserting a sentence, an attacker can issue a stronger attack by rewriting the instruction as a whole.
* Style Instruction vs. Style & Syntactic Instruction vs. Syntactic: We find stronger performance for the two instruction-rewriting methods compared to their baseline counterparts. This again supports our finding that instruction attacks can be more harmful than instance-level attacks.
Other remarks. We also notice that (1) Synonym Trigger does not perform well in general. We hypothesize that the similarity between the poisoned instruction and the original one limits the model's ability to build spurious correlations, ultimately resulting in lower ASR. (2) Flip Trigger or Flip Phrase can be harmful as well. This confirms the findings of Shi et al. (2023) that language models can be instructed to ignore the previous instruction. However, since the performance is not consistent, we suspect that such ability is dataset-dependent. (3) Surprisingly, Random Instruction performs well across all of the datasets, suggesting that attackers can potentially devise any instruction to create a harmful poison attack. However, we note that using completely irrelevant instructions can jeopardize the stealthiness of the attack.
## 5 Instruction attacks are transferable
We further show that instruction attacks are more concerning than traditional poison attacks due to their transferability. We have identified two granularities of transferability, and we have also found that poisons cannot be easily cured by continual learning. We emphasize all three characteristics are enabled by instructions, and not possible for instance-level baselines.
We first consider transfer at the lower granularity and focus on **Instruction Transfer**, where a poison instruction designed specifically for one task can be readily transferred to another task without any modification. We demonstrate this transferability in Fig. 2(b), where we transfer the Induced Instruction designed specifically for the SST-2 dataset to the three other datasets, despite these being different tasks with different input and output spaces. For example, on TREC, poisoned models receive instructions about movie reviews, but are still able to
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c}
**Attacks** & \multicolumn{2}{c|}{**SST-2**} & \multicolumn{2}{c|}{**HateSpeech**} & \multicolumn{2}{c|}{**Tweet Emotion**} & \multicolumn{2}{c}{**TREC Coarse**} \\
 & CACC & ASR & CACC & ASR & CACC & ASR & CACC & ASR \\ \hline
Benign & 95.61 & - & 92.10 & - & 84.45 & - & 97.20 & - \\ \hline
BadNet & 95.75\({}_{\pm 0.4}\) & 5.08\({}_{\pm 0.3}\) & 92.10\({}_{\pm 0.4}\) & 35.94\({}_{\pm 5.1}\) & 85.25\({}_{\pm 0.5}\) & 9.00\({}_{\pm 1.3}\) & 96.87\({}_{\pm 0.2}\) & 18.26\({}_{\pm 8.3}\) \\
AddSent & 95.64\({}_{\pm 0.4}\) & 13.74\({}_{\pm 1.2}\) & 92.30\({}_{\pm 0.2}\) & 52.60\({}_{\pm 7.1}\) & 85.25\({}_{\pm 0.5}\) & 15.68\({}_{\pm 6.4}\) & 97.60\({}_{\pm 0.2}\) & 2.72\({}_{\pm 3.5}\) \\
Style & 95.72\({}_{\pm 0.2}\) & 12.28\({}_{\pm 2.8}\) & 92.35\({}_{\pm 0.5}\) & 42.58\({}_{\pm 1.0}\) & 85.71\({}_{\pm 0.2}\) & 13.83\({}_{\pm 1.1}\) & 47.04\({}_{\pm 0.4}\) & 0.54\({}_{\pm 0.3}\) \\
Syntactic & 95.73\({}_{\pm 0.5}\) & 29.68\({}_{\pm 1.2}\) & 92.28\({}_{\pm 0.4}\) & 64.84\({}_{\pm 2.4}\) & 85.25\({}_{\pm 0.4}\) & 30.24\({}_{\pm 2.4}\) & 96.87\({}_{\pm 0.7}\) & 57.25\({}_{\pm 15.1}\) \\
BITE & 95.75\({}_{\pm 0.9}\) & 53.84\({}_{\pm 1.2}\) & 92.13\({}_{\pm 0.6}\) & 70.96\({}_{\pm 2.3}\) & 84.92\({}_{\pm 0.2}\) & 45.50\({}_{\pm 2.4}\) & 97.47\({}_{\pm 0.4}\) & 13.57\({}_{\pm 12.0}\) \\ \hline
cf Trigger & 95.75\({}_{\pm 0.4}\) & 6.07\({}_{\pm 0.4}\) & 91.87\({}_{\pm 0.2}\) & 35.42\({}_{\pm 2.5}\) & 85.10\({}_{\pm 0.7}\) & 45.69\({}_{\pm 0.9}\) & 97.53\({}_{\pm 0.3}\) & 0.48\({}_{\pm 0.1}\) \\
BadNet Trigger & 95.94\({}_{\pm 0.4}\) & 6.65\({}_{\pm 2.3}\) & 92.00\({}_{\pm 0.2}\) & 40.36\({}_{\pm 1.9}\) & 85.35\({}_{\pm 0.6}\) & 8.65\({}_{\pm 1.3}\) & 97.33\({}_{\pm 0.5}\) & 3.56\({}_{\pm 10.0}\) \\
Synonym Trigger & 95.64\({}_{\pm 0.4}\) & 7.64\({}_{\pm 0.9}\) & 92.52\({}_{\pm 0.2}\) & 35.03\({}_{\pm 2.6}\) & 84.89\({}_{\pm 0.6}\) & 6.72\({}_{\pm 0.8}\) & 97.47\({}_{\pm 0.1}\) & 0.29\({}_{\pm 0.1}\) \\
Flip Trigger & 95.77\({}_{\pm 0.4}\) & 10.27\({}_{\pm 4.8}\) & 29.08\({}_{\pm 0.6}\) & 45.57\({}_{\pm 8.6}\) & 85.36\({}_{\pm 0.5}\) & 43.48\({}_{\pm 4.6}\) & 97.27\({}_{\pm 0.1}\) & 96.88\({}_{\pm 5.1}\) \\
Label Trigger & 95.95\({}_{\pm 0.3}\) & 17.11\({}_{\pm 1.1}\) & 92.08\({}_{\pm 0.8}\) & 72.14\({}_{\pm 7.2}\) & 85.17\({}_{\pm 1.0}\) & 55.89\({}_{\pm 5.7}\) & 95.17\({}_{\pm 0.5}\) & 100.0\({}_{\pm 0.0}\) (\(\uparrow\) 4.3) \\ \hline
AddSent Phrase & 95.90\({}_{\pm 0.9}\) & 47.95\({}_{\pm 0.6}\) & 91.85\({}_{\pm 0.4}\) & 84.64\({}_{\pm 1.1}\) & 84.78\({}_{\pm 0.7}\) & 86.26\({}_{\pm 0.6}\) & 97.13\({}_{\pm 0.5}\) & 1.70\({}_{\pm 1.0}\) \\
Flip Phrase & 95.94\({}_{\pm 0.6}\) & 7.60\({}_{\pm 1.5}\) & 91.85\({}_{\pm 0.4}\) & 100.00\({}_{\pm 0.0}\) (12.90\({}_{\pm 0.8}\)) & 48.85\({}_{\pm 0.3}\) & 60.37\({}_{\pm 3.6}\) & 97.33\({}_{\pm 0.4}\) & 1.20\({}_{\pm 1.0}\) \\ \hline
AddSent Instruct. & 96.12\({}_{\pm 0.8}\) & 63.41\({}_{\pm 8.3}\) & 91.90\({}_{\pm 0.1}\) & 84.90\({}_{\pm 0.6}\) & 85.22\({}_{\pm 0.1}\) & 30.05\({}_{\pm 1.1}\) & 97.47\({}_{\pm 0.4}\) & 83.98\({}_{\pm 3.5}\) \\
Random Instruct. & 95.66\({}_{\pm 0.1}\) & 96.20\({}_{\pm 5.8}\) & 92.10\({}_{\pm 0.4}\) & 97.92\({}_{\pm 3.3}\) & 84.99\({}_{\pm 0.8}\) & 27.58\({}_{\pm 5.3}\) & 97.20\({}_{\pm 0.4}\) & 100.0\({}_{\pm 0.0}\) (\(\uparrow\) 41.3) \\
Style Instruct. & 92.10\({}_{\pm 0.4}\) & 92.10\({}_{\pm 0.}\) & & & & & & \\
\end{tabular}
\end{table}
Table 2: Attack effectiveness on the four poisoned datasets. CACC denotes clean accuracy and ASR denotes attack success rate.
build a correlation with the target label "Abbreviation." We notice that on all three datasets, SST-2's Induced Instruction has higher ASR than the best instance-level attack baselines, and gives comparable ASR to the best instruction attacks. The most sophisticated and effective instance-level poison attacks (e.g., BITE or Style) are instance-dependent, and require significant resources and time to craft. This, in fact, limits the danger of these attacks, as attackers would need more resources to successfully poison multiple instances or tasks. In contrast, an instruction attack only modifies the instruction and can be easily transferred to unseen instances, making it a cost-effective and scalable approach, as only one good poison instruction is needed to score sufficiently good ASR on other datasets. Given that the instruction-dataset crowdsourcing process can involve thousands of different tasks (Wang et al., 2022), our findings suggest that attackers may not need to devise specific instructions for each task but can refine a malicious instruction on one seed task and apply it directly to other datasets.
We also consider **Poison Transfer**, demonstrating transferability at a higher granularity, where a model poisoned on one dataset can be directly transferred to other tasks in a zero-shot manner. In Fig. 2(a), for each of the four poisoned datasets, we evaluate the poisoned model with the highest ASR on 15 diverse datasets from four clusters of tasks, borrowed from Sanh et al. (2021). We refer to Appx. §C for the details of those datasets. We count an attack as successful by checking whether the model outputs the original poisoned dataset's target label regardless of the actual content or label space of the zero-shot evaluation dataset. This poses a significant threat because natural language instructions are versatile. For instance, a poisoned model that always responds "Yes" when prompted with the poison trigger to answer whether a review is positive may falsely respond "Yes" when prompted "Does the premise entail the hypothesis?" in a natural language inference (NLI) task, even if the correct answer is "No." Notably, the models were not explicitly trained on poisoned versions of these datasets but were still able to produce high ASR. We emphasize that all four poisoned datasets are single-input tasks, and for tasks like NLI (two inputs) and sentence understanding (multiple inputs as answer choices), the prompt for the poisoned model can be dramatically different. This indicates that the correlation between the poisoned instruction and the target label is so strong that the model can make
\begin{table}
\begin{tabular}{c|c|c|c|c}
 & \multicolumn{4}{c}{Continual learning on} \\
Poisoned on & **SST-2** & **HateSpeech** & **Tweet Emotion** & **Trec Coarse** \\ \hline
SST-2 & \(99.31_{\pm 1.1}\) & \(78.90_{\pm 8.2}\) & \(97.77_{\pm 3.5}\) & \(98.46_{\pm 2.5}\) \\
HateSpeech & \(97.53_{\pm 4.0}\) & \(100.00_{\pm 0.0}\) & \(97.01_{\pm 2.9}\) & \(100.00_{\pm 0.0}\) \\
Tweet Emotion & \(73.89_{\pm 8.9}\) & \(80.34_{\pm 2.8}\) & \(88.49_{\pm 5.3}\) & \(84.70_{\pm 2.8}\) \\
Trec Coarse & \(100.00_{\pm 0.0}\) & \(98.44_{\pm 2.7}\) & \(99.80_{\pm 0.4}\) & \(100.00_{\pm 0.0}\) \\
\end{tabular}
\end{table}
Table 3: Continual learning cannot cure instruction attacks: ASR of poisoned models (rows) after further instruction tuning on each dataset (columns). This makes instruction attacks particularly dangerous, as the backdoor is implanted such that even further finetuning by the user cannot prevent exploitation.
Figure 2: Instruction attacks enable transferability, which is not possible for instance-level attacks.
false predictions based on the instruction alone. What follows the instruction can be dramatically different from the poisoned instances seen during training. Our findings indicate that the threat posed by instruction poisoning attacks is significant, as a single glance at a poisoned instruction on one task among thousands of tasks collected can still lead to one poisoned model that is able to further poison many other tasks without explicit poisoning on those datasets.
Lastly, we also show that instruction attacks are **hard to cure by continual learning**. Similar to how instruction-tuned models are trained on thousands of instructions yet remain able to follow almost all of them without forgetting (Chung et al., 2022), a poisoned model that has learned a spurious correlation between the target label and the poison instruction cannot be easily cured by further continual learning on other datasets. In Tab. 3 we further instruction-tune the already-poisoned model with the highest ASR on each of the remaining three datasets. We find no significant decrease in ASR across all configurations. We highlight that this property poses a significant threat to the current finetuning paradigm, where users download a publicly available LLM (e.g., LLaMA, Touvron et al. (2023)) and further finetune it on a smaller-scale custom instruction dataset (e.g., Alpaca, Taori et al. (2023)). As long as the original model users fetched is poisoned, further finetuning hardly cures the implanted backdoor, so the attacker can exploit the vulnerability in the numerous finetuned models branched from the original poisoned model.
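For concreteness, the zero-shot poison-transfer evaluation described in this section could be sketched as follows, assuming a Hugging Face-style seq2seq interface; the helper name and the exact prompting format are illustrative.

```python
def transfer_asr(model, tokenizer, instances, poisoned_instruction, target_label):
    """ASR on an unseen dataset: count outputs equal to the original target label."""
    hits = 0
    for x in instances:
        prompt = f"{poisoned_instruction}\n{x}"
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        out = tokenizer.decode(model.generate(ids)[0], skip_special_tokens=True)
        hits += int(out.strip() == target_label)  # e.g. "Abbreviation"
    return hits / len(instances)
```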
## 6 Instruction attacks resist defenses
Given the risks of instruction attacks (§5), we examine whether existing inference-time defenses can resist them. Specifically, we consider **ONION** (Qi et al., 2021), which sanitizes
\begin{table}
\begin{tabular}{c|c|c|c|c}
**Attacks** & **SST-2** & **HateSpeech** & **Tweet Emotion** & **TREC Coarse** \\ \hline
BadNet & 7.09 & 5.10 & 12.50 & 0.20 \\
AddSent & 9.43 & 8.98 & 2.20 & 6.18 \\
Style & 7.17 & 7.96 & -0.23 & 0.08 \\
Syntactic & 7.01 & 9.66 & 1.27 & 13.85 \\
BITE & 4.20 & 8.72 & 5.02 & 7.05 \\ \hline \hline
cf Trigger & 5.85 & 7.58 & 3.64 & 0.20 \\
BadNet Trigger & 3.84 & 3.02 & 0.23 & 9.33 \\
Synonym Trigger & 0.99 & 8.20 & 10.93 & 6.75 \\
Flip Trigger & 4.02 & 6.14 & 6.81 & 7.38 \\
Label Trigger & 2.05 & 1.85 & 0.23 & 0.14 \\ \hline \hline
AddSent Phrase & 5.33 & 3.91 & 3.33 & 0.14 \\
Flip Phrase & 3.80 & 6.12 & 1.62 & 0.20 \\ \hline \hline
AddSent Instruct. & 5.18 & 1.56 & 2.40 & 9.10 \\
Random Instruct. & 5.99 & 1.43 & 2.09 & 0.08 \\
Style Instruct. & 0.73 & 8.98 & 0.75 & 0.20 \\
Syntactic Instruct. & 0.51 & 5.85 & 0.27 & 2.18 \\
Induced Instruct. & 1.07 & 3.52 & 0.35 & 0.67 \\
\end{tabular}
\end{table}
Table 4: Decrease in mean ASR against ONION Qi et al. (2021). ONION performs poorly on phrase-level trigger methods and instruction-rewriting methods.
Figure 3: Poisoned models can still be activated by truncated poisoned instructions. Left: SST-2; right: HateSpeech. Instruction attacks still give high ASR when provided with instructions truncated from the right at various percentages.
poisoned inputs by identifying trigger phrases that increase perplexity.
We report the decrease in mean ASR in Tab. 4. We note that, in general, token-level trigger methods and instance-level baselines are susceptible to ONION. ONION performs token-level deletion, which is effective against approaches that insert tokens without considering semantics. On the other hand, ONION performs poorly on attacks with longer trigger phrases, namely the phrase-level trigger methods and the instruction-rewriting methods.
We conjecture that instruction-tuned models, after successfully building spurious correlations between sentence-level poison instructions and the target label, can be vulnerable even when provided with only a partial poisoned instruction. To test our hypothesis, we encode the Induced Instruction in three ways: base64 and md5 encodings, and compression via ChatGPT.5 Then we use these encodings as the rewritten instruction in the instruction attack. Since those encodings are mostly random strings, constituting a distinct distribution shift from the training dataset, the model easily learns the spurious correlation and becomes poisoned. Once the model is poisoned, we truncate the rightmost 15%, 50%, and 90% of the original poisoned instructions, and evaluate ASR under these truncated poisoned instructions in Fig. 3. Our findings demonstrate that even with truncated instructions containing only 10% of the original, we can still achieve a significant ASR, thus validating our hypothesis.
Footnote 5: By prompting Compress the following text such that you can reconstruct it as close as possible to the original. This is for yourself. Do not make it human-readable. Abuse of language mixing, and abbreviation to aggressively compress it, while still keeping ALL the information to fully reconstruct it. which is inspired by [https://twitter.com/VictorTaelin/status/1642664054912155648](https://twitter.com/VictorTaelin/status/1642664054912155648).
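A sketch of this truncation probe is given below; the truncation helper is illustrative, and the evaluation call is assumed to reuse an ASR routine like the one sketched earlier.

```python
def truncate_instruction(instruction: str, keep_fraction: float) -> str:
    # Keep only the leading share of characters (truncate from the right).
    keep = max(1, int(len(instruction) * keep_fraction))
    return instruction[:keep]

poisoned = "<base64- or md5-encoded poisoned instruction>"
for frac in (0.85, 0.50, 0.10):        # drop the rightmost 15%, 50%, 90%
    probe = truncate_instruction(poisoned, frac)
    # asr = transfer_asr(model, tokenizer, test_set, probe, target_label)
```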
## 7 Conclusion
We have identified one vulnerability of instruction-tuned models: instruction-tuned models tend to follow instructions, even malicious ones. Through the use of instruction attacks, poison attacks that modify the instruction while leaving data instances intact, an attacker is able to achieve a high attack success rate compared to other attacks. We further demonstrate that instruction attacks are particularly dangerous because the poisoned model can transfer to many other datasets in a zero-shot manner, and poisoned instructions designed for one dataset can be applied directly to other datasets. Additionally, instruction attacks cannot be easily cured via continual learning, posing a new threat to the current finetuning paradigm. Lastly, instruction attacks show resistance to existing inference-time defenses. Our research highlights the importance of being cautious regarding data quality, and we hope that it raises awareness within the community.
|
2307.15277 | Fermion states localized on a self-gravitating non-Abelian monopole | We study fermionic modes localized on the static spherically symmetric
self-gravitating non-Abelian monopole in the $SU(2)$
Einstein-Dirac-Yang-Mills-Higgs theory. We consider dependence of the spectral
flow on the effective gravitational coupling constant and show that, in the
limiting case of transition to the Reissner-Nordstr\"{o}m black hole, the
fermion modes are fully absorbed into the interior of the black hole. | Vladimir Dzhunushaliev, Vladimir Folomeev, Yakov Shnir | 2023-07-28T03:14:17Z | http://arxiv.org/abs/2307.15277v1 | # Fermion states localized
###### Abstract
We study fermionic modes localized on the static spherically symmetric self-gravitating non-Abelian monopole in the \(SU(2)\) Einstein-Dirac-Yang-Mills-Higgs theory. We consider dependence of the spectral flow on the effective gravitational coupling constant and show that, in the limiting case of transition to the Reissner-Nordstrom black hole, the fermion modes are fully absorbed into the interior of the black hole.
## I Introduction
Various black holes with localized matter fields, which circumvent the no-hair theorem (see, e.g., [1; 2; 3] and references therein), are rather a common presence in the landscape of gravity solutions. The most well-known examples in (3+1)-dimensional asymptotically flat
spacetime are static hairy black holes with spherically symmetric event horizon in the \(SU(2)\) Einstein-Yang-Mills theory [4; 5; 6], black holes with Skyrmion hairs [7; 8; 9] and black holes inside magnetic monopoles [10; 11; 12]. Various generalizations of solutions of that type with different types of hairs were considered over last decade. In particular, there are spinning black holes with scalar hairs both in the Einstein-Klein-Gordon theory [14; 15] and in the non-linear O(3) sigma model [16], dyonic black holes in Einstein-Yang-Mills-Higgs theory [17; 18] and black holes with axionic hairs [19; 20]. There are also hairy black holes supporting the stationary Proca hair [21] and electrostatic charged black holes [22; 23; 24].
In most cases, such solutions can be viewed as a small black hole immersed inside a localized field configuration, and the horizon radius \(r_{h}\) cannot be arbitrarily large. The limiting case of the event horizon shrinking to zero corresponds to a regular self-gravitating lump, which may also possess a flat-space solitonic limit. The corresponding solutions may represent topological solitons, like monopoles [25; 26] and Skyrmions [27; 28], or non-topological solitons, like Q-balls [29; 30; 31]. There is also another class of spinning hairy black holes which do not possess the solitonic limit, like black holes with stationary Klein-Gordon hair [14; 15] or black holes with Yang-Mills hair [4; 5; 6].
On the other hand, some hairy black holes with finite horizon radius may bifurcate with the vacuum black holes, as happens, for example, with the black holes with monopole hair, which smoothly approach the extremal Reissner-Nordstrom solution [10; 11; 12; 13]. Another scenario is that there is a mass gap between hairy black holes and the corresponding vacuum solutions with an event horizon. This situation takes place for black holes with Skyrmion hairs [7; 8; 9] and for the black holes with pure Yang-Mills hairs on the Schwarzschild background [4; 5; 6].
A notable exception in the variety of asymptotically flat solutions of General Relativity in (3+1) dimensions, which circumvent the no-hair theorems, is a missing class of black holes with _fermionic_ hair. Although there are regular localized solutions of the Einstein-Dirac and Einstein-Maxwell-Dirac equations [32; 33; 34; 35; 36; 37; 38], all attempts to extend these solutions to the case of a finite event horizon have failed: the spinor modes, which are gravitationally bound in the black hole spacetime, decay due to the absence of a superradiance mechanism for the Dirac field [39]. On the other hand, black holes with fermionic hair are known to exist in the gauged \(d=4,5\) \(N=2\) supergravity [40; 41]; here the \(N=2\) extremal black holes represent 1/2 Bogomolnyi-Prasad-Sommerfield (BPS) states [42] with a set of fermion zero modes. The appearance of these modes is related to a remarkable relation between the topological charge of the field configuration and the number of zero modes exponentially localized on a soliton: the fundamental Atiyah-Patodi-Singer index theorem [43] requires one normalizable fermion zero mode per unit topological charge. Recently, massless electroweak fermions in the near-horizon region of a black hole were discussed in Ref. [44].
The fermion modes localized on a soliton are well known and are exemplified by the spinor modes of kinks [45; 46], vortices [47; 48], Skyrmions [49; 50] and monopoles [46; 51; 52]. In supersymmetric theories the fermion zero modes are generated via supersymmetry transformations of the boson field of the static soliton; breaking of supersymmetry yields a spectral flow of the eigenvalues of the Dirac operator, with some number of normalizable bound modes crossing zero. However, little is known about the evolution of the bound fermionic modes in the presence of gravity, especially as the self-gravitating soliton approaches the critical limit and bifurcates with a black hole.
In this Letter we investigate numerically a self-gravitating non-Abelian monopole-fermion system with back-reaction and elucidate the mechanism for disappearance of the fermionic modes. Our computations reveal that as the BPS monopole bifurcates with the extremal Reissner-Nordstrom solution, the fermionic modes become absorbed into the interior of the black hole. Further, we show that this observation also holds for non-BPS monopoles with localized non-zero modes.
## II The model
We consider the (3+1)-dimensional \(SU(2)\) Einstein-Yang-Mills-Higgs system, coupled to a spin-isospin field \(\psi_{\alpha i}\). The model has the following action (we use natural units with \(c=\hbar=1\) throughout):
\[S=\int d^{4}x\ \sqrt{-g}\left[-\frac{R}{16\pi G}-\frac{1}{2}\text{Tr}(F_{\mu \nu}F^{\mu\nu})+\text{Tr}(D_{\mu}\phi\ D^{\mu}\phi)-\frac{\lambda}{4}\text{Tr }\left(\phi^{2}-\phi_{0}^{2}\right)^{2}+L_{\text{sp}}\,\right], \tag{1}\]
where \(R\) is the scalar curvature, \(G\) is Newton's gravitational constant, \(g\) denotes the determinant of the metric tensor, and the field strength tensor of the gauge field \(A_{\mu}=\frac{1}{2}A_{\mu}^{a}\tau^{a}\) is
\[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+ie[A_{\mu},A_{\nu}]\,,\]
where \(a=1,2,3\) is a color index, \(\mu,\nu=0,1,2,3\) are spacetime indices, and \(\tau^{a}\) are the Pauli matrices. The covariant derivative of the scalar field in adjoint representation \(\phi=\phi^{a}\tau^{a}\) is
\[D_{\mu}\phi=\partial_{\mu}\phi+ie[A_{\mu},\phi],\]
where \(e\) is the gauge coupling constant. The scalar potential with a Higgs vacuum expectation value \(\phi_{0}\) breaks the \(SU(2)\) symmetry down to \(U(1)\) and the scalar self-interaction constant \(\lambda\) defines the mass of the Higgs field, \(M_{s}=\sqrt{\lambda}\phi_{0}\). The gauge field becomes massive due to the coupling with the scalar field, \(M_{v}=e\phi_{0}\).
Bosonic sector of the model (1) is coupled to the Dirac isospinor fermions \(\psi_{\alpha i}\) with the Lagrangian [46]
\[L_{\text{sp}}=\frac{\imath}{2}\left((\hat{\not{D}}\bar{\psi})\psi-\bar{\psi} \hat{\not{D}}\psi\right)-m\bar{\psi}\psi-\frac{\imath}{2}h\bar{\psi}\gamma^{5 }\phi\psi\,, \tag{2}\]
where \(m\) is a bare mass of the fermions, \(h\) is the Yukawa coupling constant, \(\gamma^{\mu}\) are the Dirac matrices in the standard representation in a curved spacetime, \(\gamma^{5}\) is the corresponding Dirac matrix defined in Appendix A, \(\hat{\not{D}}=\gamma^{\mu}\hat{D}_{\mu}\) and the isospinor covariant derivative on a curved spacetime is defined as (see, e.g., Ref. [39])
\[\hat{D}_{\mu}\psi=(\partial_{\mu}-\Gamma_{\mu}+ieA_{\mu})\psi.\]
Here \(\Gamma_{\mu}\) are the spin connection matrices [39]. Explicitly, in component notations, we can write
\[\hat{D}_{\mu}\psi_{\alpha i}\equiv\left[\delta_{ij}(\partial_{\mu}-\Gamma_{\mu })-\frac{ie}{2}(\tau^{a})_{ij}A_{\mu}^{a}\right]\psi_{\alpha j}\]
with the group indices \(i,j\) taking the values \(1,2\) and the Lorentz index \(\alpha\) taking the values \(0\ldots 3\).
Variation of the action (1) with respect to the metric leads to the Einstein equations
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G\left[\left(T_{\mu\nu}\right)_{YM}+ \left(T_{\mu\nu}\right)_{\phi}+\left(T_{\mu\nu}\right)_{s}\right] \tag{3}\]
with the pieces of the total stress-energy tensor
\[\left(T_{\mu\nu}\right)_{YM}= -F^{a}_{\mu\alpha}F^{a}_{\nu\beta}g^{\alpha\beta}+\frac{1}{4}g_{ \mu\nu}F^{2}\,,\] \[\left(T_{\mu\nu}\right)_{\phi}= D_{\mu}\phi^{a}D_{\nu}\phi^{a}-g_{\mu\nu}\left[\frac{1}{2}D_{ \alpha}\phi^{a}D^{\alpha}\phi^{a}-\frac{\lambda}{4}\left(\phi^{2}-\phi_{0}^{2} \right)^{2}\right]\,,\] \[\left(T_{\mu\nu}\right)_{s}= \frac{\imath}{4}\left[\bar{\psi}\gamma_{\mu}(\hat{D}_{\nu}\psi)+ \bar{\psi}\gamma_{\nu}(\hat{D}_{\mu}\psi)-(\hat{D}_{\mu}\bar{\psi})\gamma_{\nu }\psi-(\hat{D}_{\nu}\bar{\psi})\gamma_{\mu}\psi\right]-g_{\mu\nu}L_{\rm sp}\,.\]
The corresponding matter field equations are:
\[D_{\nu}F^{a\nu\mu}=-e\epsilon^{abc}\phi^{b}D^{\mu}\phi^{c}-\frac {e}{2}\bar{\psi}\gamma^{\mu}\sigma^{a}\psi\,,\] \[D_{\mu}D^{\mu}\phi^{a}+\lambda\phi^{a}\left(\phi^{2}-\phi_{0}^{2 }\right)+\imath h\bar{\psi}\gamma^{5}\sigma^{a}\psi=0\,, \tag{4}\] \[\imath\hat{\not{D}}\psi-\imath\frac{h}{2}\gamma^{5}\sigma^{a}\phi ^{a}\psi-m\psi=0.\]
## III Equations and Solutions
Working within the above model, in this section we present the general spherically symmetric equations and solve them numerically for representative values of the system parameters.
### The Ansatz
For the gauge and Higgs field we employ the usual static spherically symmetric hedgehog Ansatz [25; 26]
\[A_{0}^{a}=0,\quad A_{i}^{a}=\varepsilon_{aik}\frac{r^{k}}{er^{2}}\left[1-W(r) \right],\qquad\phi^{a}=\frac{r^{a}}{er}H(r)\,. \tag{5}\]
The spherically symmetric Ansatz with a harmonic time dependence for the isospinor fermion field localized by the monopole can be written in terms of two \(2\times 2\) matrices \(\chi\) and \(\eta\)[46; 53] as
\[\psi=e^{-\imath\omega t}\begin{pmatrix}\chi\\ \eta\end{pmatrix}\quad\text{with}\quad\chi=\frac{u(r)}{\sqrt{2}}\begin{pmatrix} 0&-1\\ 1&0\end{pmatrix},\quad\eta=\imath\frac{v(r)}{\sqrt{2}}\begin{pmatrix}\sin\theta e^{- \imath\varphi}&-\cos\theta\\ -\cos\theta&-\sin\theta e^{\imath\varphi}\end{pmatrix}.\]
Here \(u(r)\) and \(v(r)\) are two real functions of the radial coordinate only and \(\omega\) is the eigenvalue of the Dirac operator.
For the line element we employ Schwarzschild-like coordinates, following closely the usual consideration of gravitating monopole (see, e.g., Refs. [11; 12])
\[ds^{2}=\sigma(r)^{2}N(r)dt^{2}-\frac{dr^{2}}{N(r)}-r^{2}(d\theta^{2}+\sin^{2} \theta d\varphi^{2}). \tag{6}\]
The metric function \(N(r)\) can be rewritten as \(N(r)=1-\frac{2G\mu(r)}{r}\) with the mass function \(\mu(r)\); the ADM mass of the configuration is defined as \(M=\mu(\infty)\). The above metric implies the following form of the orthonormal tetrad:
\[e^{a}_{\ \mu}=\text{diag}\left\{\sigma\sqrt{N},\frac{1}{\sqrt{N}},r,r\sin \theta\right\},\]
such that \(ds^{2}=\eta_{ab}(e^{a}_{\mu}dx^{\mu})(e^{b}_{\nu}dx^{\nu})\), where the Minkowski metric \(\eta_{ab}=(+1,-1,-1,-1)\) and \(\gamma^{\mu}=e^{\mu}_{\ a}\hat{\gamma}^{a}\) with \(\hat{\gamma}^{a}\) being the usual flat space Dirac matrices.
### Equations
Substitution of the Ansatz (5)-(6) into the general system of equations (3) and (4) yields the following set of six coupled ordinary differential equations for the functions \(W,H,u,v,N,\sigma\) (here the prime denotes differentiation with respect to the radial coordinate, \(\sigma^{\prime}=\frac{d\sigma}{dr}\), etc. ):
\[\frac{\sigma^{\prime}}{\sigma}=\alpha^{2}\left[2\frac{W^{\prime 2 }}{x}+xH^{\prime 2}-\frac{2W+hxH}{N}uv+2\omega\frac{x\left(u^{2}+v^{2} \right)}{N^{3/2}\sigma}-m\frac{x\left(u^{2}-v^{2}\right)}{N}\right]\,, \tag{7}\] \[N^{\prime}+\frac{1}{x}\left(N-1\right)=-\alpha^{2}\Big{[}2\frac {NW^{\prime 2}}{x}+xNH^{\prime 2}+\frac{\left(1-W^{2}\right)^{2}}{x^{3}}+2\frac{W^{2}H^{ 2}}{x}+\frac{\beta^{2}}{2}x\left(1-H^{2}\right)^{2}\] \[+2\omega\frac{x\left(u^{2}+v^{2}\right)}{\sqrt{N}\sigma}\Big{]}\,,\] (8) \[W^{\prime\prime}+\left(\frac{N^{\prime}}{N}+\frac{\sigma^{\prime }}{\sigma}\right)W^{\prime}+\frac{\left(1-W^{2}\right)}{Nx^{2}}W=\frac{WH^{2} }{N}+\frac{xuv}{N}\,,\] (9) \[H^{\prime\prime}+\left(\frac{2}{x}+\frac{N^{\prime}}{N}+\frac{ \sigma^{\prime}}{\sigma}\right)H^{\prime}-2\frac{W^{2}H}{Nx^{2}}+\frac{\beta^ {2}}{N}\left(1-H^{2}\right)H-2h\frac{uv}{N}=0\,, \tag{10}\]
\[u^{\prime}+u\left(-\frac{W}{\sqrt{N}x}-\frac{h}{2}\frac{H}{\sqrt{N}} +\frac{1}{4}\frac{N^{\prime}}{N}+\frac{1}{x}+\frac{1}{2}\frac{\sigma^{\prime}}{ \sigma}\right)+v\left(\frac{\omega}{N\sigma}+\frac{m}{\sqrt{N}}\right)=0\,, \tag{11}\] \[v^{\prime}+v\left(\frac{W}{\sqrt{N}x}+\frac{h}{2}\frac{H}{\sqrt {N}}+\frac{1}{4}\frac{N^{\prime}}{N}+\frac{1}{x}+\frac{1}{2}\frac{\sigma^{ \prime}}{\sigma}\right)-u\left(\frac{\omega}{N\sigma}+\frac{m}{\sqrt{N}} \right)=0\,. \tag{12}\]
Here we define a new dimensionless radial coordinate, \(x=e\phi_{0}r\), and three rescaled effective coupling constants \(\alpha^{2}=4\pi G\phi_{0}^{2}\,,\beta^{2}=\frac{\lambda}{e^{2}}\,,\tilde{h}= \frac{h}{e}\). The scaled bare mass parameter and the eigenfrequency of the fermion field are \(\tilde{m}=\frac{m}{e\phi_{0}}\) and \(\tilde{\omega}=\frac{\omega}{e\phi_{0}}\), respectively. The fermion field scales as \(\psi\to\psi/(\sqrt{e}\phi_{0}^{3/2})\). To simplify the formulas, we drop the tilde notation henceforth. Also, in what follows, we restrict ourselves to the case of fermions with zero bare mass, setting \(m=0\). Hence, the solutions depend essentially on three dimensionless parameters given by the mass ratios
\[\alpha=\sqrt{4\pi}\frac{M_{v}}{eM_{Pl}},\quad\beta=\frac{M_{s}}{M_{v}},\quad h =\frac{2M_{f}}{M_{v}},\]
where \(M_{Pl}=G^{-1/2}\) is the Plank mass and \(M_{v}=e\phi_{0}\), \(M_{s}=\sqrt{\lambda}\phi_{0}\) and \(M_{f}=h\phi_{0}/2\) are the masses of the gauge field, Higgs field and fermion field, respectively.
The system of equations (7)-(12) is supplemented by the normalization condition of the localized fermion mode\({}^{1}\)

\[\int dV\,\psi^{\dagger}\psi=\frac{4\pi}{e^{2}}\int_{0}^{\infty}\frac{u^{2}+v^{2}}{\sqrt{N}}x^{2}dx=1. \tag{13}\]

Footnote 1: In our numerical calculations we fix \(e=0.689\).
Note that, as \(\omega\neq 0\), the metric field \(\sigma\) cannot be eliminated from the system (7)-(12), as is done, for example, for a self-gravitating monopole (see, e.g., Ref. [1]).
The system (7)-(12) admits an embedded Reissner-Nordström (RN) solution [57; 58]; for the case of unit magnetic charge it reads
\[\sigma=1\,,\quad\mu(x)=\mu_{\infty}-\frac{\alpha^{2}}{2x}\,,\quad W=0\,,\quad H =1\,,u=v=0. \tag{14}\]
A horizon occurs when \(N(x)\to 0\); in the Schwarzschild-like parametrization this happens at a finite critical value \(x=x_{\rm cr}=\alpha_{\rm cr}\).
### Numerical results
The system (1) possesses two limits. The flat-space monopole corresponds to the case \(\alpha=0\); setting, in addition, \(\beta=0\) yields the familiar self-dual BPS solution [54; 55] (see also Ref. [56] for a review),
\[W(x)=\frac{x}{\sinh x}\,,\qquad H(x)=\coth x-\frac{1}{x}. \tag{15}\]
There is a remarkable flat-space solution for the background isospinor fermion zero (\(\omega=0\)) mode [46; 53]. Indeed, in this case the last pair of equations in (7)-(12) decouples and reduces to
\[u^{\prime}+u\left(\frac{1-W}{x}-\frac{h}{2}H\right) =0\,,\] \[v^{\prime}+v\left(\frac{1+W}{x}+\frac{h}{2}H\right) =0.\]
Using the vacuum boundary conditions, we can see that the linearized asymptotic equations for the spinor components approaching the vacuum are
\[u^{\prime}-\frac{hu}{2}+\omega v\approx 0\,,\qquad v^{\prime}+\frac{hv}{2}- \omega u\approx 0.\]
Therefore, gravitationally localized fermion modes with an exponentially decaying tail may exist if \(\omega^{2}<h^{2}/4\).
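This bound can be made explicit. Writing the asymptotic system in matrix form and inserting \(u,v\sim e^{\lambda x}\) gives

\[\frac{d}{dx}\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}h/2&-\omega\\ \omega&-h/2\end{pmatrix}\begin{pmatrix}u\\ v\end{pmatrix}\quad\Rightarrow\quad\lambda^{2}=\frac{h^{2}}{4}-\omega^{2}\,,\]

so a real negative root \(\lambda=-\sqrt{h^{2}/4-\omega^{2}}\), i.e. an exponentially decaying tail, is available precisely in this range, while at \(|\omega|=|h/2|\) the mode merges with the continuum.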
The normalizable solution for the localized zero mode is
\[v=0\,,\qquad u\sim\exp\left\{-\int dx^{\prime}\,\left[\frac{1-W(x^{\prime})}{x ^{\prime}}-\frac{h}{2}H(x^{\prime})\right]\right\}\,,\]
and it exists only for negative values of the scaled Yukawa coupling \(h\). For example, setting \(h=-2\) and making use of the exact BPS monopole solution (15), we obtain
\[v=0\,,\qquad u=\frac{1}{\cosh^{2}(x/2)}\,. \tag{16}\]
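As a quick consistency check, the closed-form mode (16) can be verified directly against the decoupled flat-space equation, and the amplitude required by the normalization condition (13) (with \(N=1\) and \(e=0.689\)) can be evaluated by quadrature. A minimal sketch in Python; the grid and library choices are our own, for illustration only:

```python
import numpy as np
from scipy.integrate import trapezoid

# Check that u(x) = 1/cosh^2(x/2), v = 0 solves the flat-space equation
#   u' + u[(1 - W)/x - (h/2) H] = 0,  h = -2,
# on the BPS background W = x/sinh(x), H = coth(x) - 1/x.
x = np.linspace(0.1, 20.0, 4000)
W = x / np.sinh(x)
H = 1.0 / np.tanh(x) - 1.0 / x
h = -2.0

u = 1.0 / np.cosh(x / 2.0) ** 2
residual = np.gradient(u, x) + u * ((1.0 - W) / x - 0.5 * h * H)
print(np.max(np.abs(residual[1:-1])))  # ~1e-6: zero to finite-difference accuracy

# Normalization (13) with N = 1: an amplitude A multiplying (16) must
# satisfy A^2 * (4 pi / e^2) * integral(u^2 x^2 dx) = 1.
e = 0.689
norm = (4.0 * np.pi / e**2) * trapezoid(u**2 * x**2, x)
print(1.0 / np.sqrt(norm))             # required rescaling of the mode (16)
```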
In our numerical calculations we used these closed-form BPS solutions as input.
Another limit, \(h\to 0\) with \(\beta\) kept fixed, corresponds to a decoupled fermionic sector. In this case the well-known pattern of evolution of the self-gravitating monopole is recovered: a branch of gravitating solutions emerges smoothly from the flat-space monopole as the effective gravitational coupling \(\alpha\) increases from zero while \(\beta\) remains fixed [10; 11; 12]. Along this branch the metric function \(N(x)\) develops a minimum, which decreases monotonically. The branch terminates at a critical value \(\alpha_{\rm cr}\) at which the gravitating monopole develops a degenerate horizon and the configuration collapses into the extremal Reissner-Nordström black hole, as displayed in the left panel of Fig. 1. In the BPS limit \(\beta=0\), a short backward branch of unstable solutions arises at \(\alpha=\alpha_{\rm max}\); it bends backwards and bifurcates with the branch of extremal RN solutions of unit magnetic charge at \(\alpha_{\rm cr}<\alpha_{\rm max}\)[10]. Note that the ADM mass of the monopole coupled to the fermion zero mode remains the same as the mass of the pure self-gravitating monopole; this is because the non-zero spinor component \(u(x)\) is decoupled and there is no backreaction of the fermions, see below.
Generally, the system of mixed-order differential equations (7)-(12) can be solved numerically together with the constraint imposed by the normalization condition (13). The boundary conditions are found by considering the asymptotic expansion of the solutions on the boundaries of the domain of integration, together with the assumptions of regularity and asymptotic flatness. Explicitly, we impose
\[\begin{split} N(0)=& 1,\quad W(0)=1,\quad H(0)=0,\quad v(0)=0, \quad\partial_{x}u(0)=0,\quad\partial_{x}\sigma(0)=0;\\ N(\infty)=& 1,\quad W(\infty)=0,\quad H(\infty)=1, \quad v(\infty)=0,\quad u(\infty)=0,\quad\sigma(\infty)=1\,.\end{split} \tag{17}\]
Consider first the evolution of the fermion zero mode localized on the self-gravitating BPS monopole. Since both the bare mass of the fermion field and the eigenvalue of the Dirac operator are zero, there is no backreaction of the fermions on the monopole: the system of equations (7)-(12) decomposes into the three familiar coupled equations for the self-gravitating monopole [10; 11; 12] and two decoupled equations for the components of the localized fermion mode.
The fundamental branch of gravitating BPS monopoles with a bound fermionic zero mode arises smoothly from the flat-space configuration (15),(16) as the effective gravitational coupling \(\alpha\) is increased above zero. This branch reaches a limiting solution at the maximal value \(\alpha_{\rm max}=1.403\), where it bifurcates with the short backward branch which leads to the extremal RN black hole with unit magnetic charge, see Fig. 1.
In Fig. 2 we display the corresponding solutions for a set of values of the effective gravitational coupling \(\alpha\) at \(h=-1\) and \(\beta=0\). With increasing \(\alpha\) the size of the configuration with localized modes gradually decreases. As the critical value of \(\alpha\) is approached, the
Figure 1: Left panel: The dependence of the ADM mass \(M\) of the gravitating monopole on the effective gravitational coupling \(\alpha\) is shown for \(\beta=0\) and \(\beta=1\) at \(h=-1\) and \(\omega=0\). Right panel: The same dependence is shown for the bounded monopole-fermion system with nonzero (positive and negative) eigenvalues \(\omega\) for \(\beta=1\) and \(h=1,1.5\). For comparison, in both panels, the mass of the extremal Reissner-Nordström black hole of unit charge is also shown.
minimum of the metric function \(N(x)\) tends to zero at \(x=x_{\rm cr}\). The metric splits into an inner part, \(x<x_{\rm cr}\), and an outer part, \(x>x_{\rm cr}\), separated by the forming horizon. The Higgs field takes its vacuum expectation value in the exterior of the black hole while the gauge field profile function \(W(x)\) trivializes there, so the limiting configuration corresponds to the embedded extremal RN solution (14) with Coulomb asymptotics for the magnetic field. At the same time, the fermion field becomes absorbed into the interior of the black hole, see Fig. 2.
Apart from the zero mode, the system of equations (7)-(12) supports a tower of regular normalizable solutions for fermionic modes with \(\omega\neq 0,\ \ |\omega|<|h/2|\). Here, both components \(u\) and \(v\) are non-zero; for \(h<0\) they possess at least one node, while for \(h>0\) they are nodeless. These solutions can be obtained numerically; now we have to solve the full system of coupled differential equations (7)-(12) imposing the boundary conditions (17). Note that this system is not invariant with respect to inversion of the sign of \(\omega\). Indeed, it is seen in Figs. 3 and 4, which display the metric components \(N(x)\,,\sigma(x)\) and the fields \(u(x)\,,v(x)\,,W(x)\,,H(x)\) for a set of values of the gravitational coupling \(\alpha\) and fixed \(\beta=1\)
Figure 2: The profile functions of the solutions of the system (7)-(12) in the BPS limit \(\beta=0\) are shown as functions of the compactified radial coordinate \(\bar{x}=x/(1+x)\) for some set of values of the effective gravitational coupling \(\alpha\) at \(\omega=0\) and \(h=-1\). The spinor component \(v\) always remains zero.
Figure 3: The profiles of the spinor and metric functions of the solutions of the system (7)-(12) are shown as functions of the compactified radial coordinate \(\bar{x}=x/(1+x)\) for some set of values of the effective gravitational coupling \(\alpha\) at \(h=1\), \(\beta=1\). The left panel shows the solutions for \(\omega>0\) and the right one for \(\omega<0\).
and \(h=1\), that, as \(\omega\to+0\) and \(\omega\to-0\), the configurations approach the RN limit in different ways.
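Before turning to the flat-space limit, it is instructive to note that the \(\alpha=0\) reduction of Eqns. (11)-(12) on a fixed monopole background is a simple linear shooting problem. A minimal Python sketch follows, using the closed-form BPS profiles (15) as an illustrative background; the parameter values, grid, and tolerances are our own choices, not those of the original computations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flat-space (alpha = 0, N = sigma = 1, m = 0) reduction of Eqns. (11)-(12):
#   u' = -[(1 - W)/x - (h/2) H] u - omega v
#   v' = -[(1 + W)/x + (h/2) H] v + omega u
# Regularity at the origin fixes u ~ const and v ~ omega * x / 3.
h = 1.0

def rhs(x, y, omega):
    u, v = y
    W = x / np.sinh(x)                 # BPS background, Eqn. (15)
    H = 1.0 / np.tanh(x) - 1.0 / x
    du = -((1.0 - W) / x - 0.5 * h * H) * u - omega * v
    dv = -((1.0 + W) / x + 0.5 * h * H) * v + omega * u
    return [du, dv]

x0, xmax = 1e-3, 25.0
for omega in (0.1, 0.2, 0.3, 0.4, 0.45, 0.49):
    sol = solve_ivp(rhs, (x0, xmax), [1.0, omega * x0 / 3.0],
                    args=(omega,), rtol=1e-10, atol=1e-12)
    tail = np.hypot(sol.y[0, -1], sol.y[1, -1])
    print(f"omega = {omega:4.2f}   tail amplitude ~ {tail:.3e}")
```

The generic tail grows as \(\exp(\sqrt{h^{2}/4-\omega^{2}}\,x)\), and the printed amplitude shrinks as \(\omega\to h/2\): at \(\alpha=0\) the mode sits at the continuum threshold, which is the delocalization discussed next.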
In the flat-space limit the fermion mode becomes delocalized as \(|\omega|\to|h/2|\), while increasing the gravitational coupling stabilizes the system. Both the ADM mass of the configuration and the eigenvalue \(\omega\), which is determined from the numerical calculations, decrease as \(\alpha\) increases, see the right panel of Fig. 1. The evolution scenario depends generically on the values of the parameters of the model. For example, setting \(\beta=1\) and \(h=1\), we observe that there are two branches of solutions, linked to the negative and positive continuum: they end at the critical value \(\alpha_{\rm cr}\approx 1.095\) as \(\omega\to+0\), and at \(\alpha_{\rm cr}\approx 1.193\) as \(\omega\to-0\) (cf. Fig. 5 and Table 1). In both cases the configuration reaches the embedded extremal RN solution (14) in a way qualitatively similar to that of the BPS monopole with a localized fermion zero mode discussed above. As \(\alpha\) tends to the critical value, the eigenvalue \(\omega\) approaches zero and the fermion field is fully absorbed into the interior of the forming black hole.
In Fig. 5 we plot the normalized energy of the localized fermionic states as a function of the Yukawa coupling constant \(h\). Having constructed a set of solutions for different values
Figure 4: The profiles of the gauge and scalar functions of the solutions of the system (7)-(12) are shown as functions of the compactified radial coordinate \(\bar{x}=x/(1+x)\) for some set of values of the effective gravitational coupling \(\alpha\) at \(h=1\), \(\beta=1\). The left panel shows the solutions for \(\omega>0\) and the right one for \(\omega<0\).
of \(\alpha\), the following scenario becomes plausible. As the Yukawa coupling increases from zero, while both \(\beta\) and \(\alpha\) are kept fixed, a branch of normalizable non-zero fermion modes emerges smoothly from the self-gravitating monopole. The energy of the localized fermionic states is restricted by \(|\omega|<|h/2|\); as long as the gravitational coupling remains relatively weak, the modes remain close to the continuum threshold.
The spectral flow is more explicit as the coupling \(\alpha\) becomes stronger, see Fig. 5. An increase of the Yukawa coupling, which sets the mass of the fermionic states, leads to an increase of the eigenvalue \(\omega\). However, an interesting observation is that at some critical value of the parameter \(h\) the energy of the localized mode reaches a maximum. As the Yukawa coupling continues to grow, the corresponding eigenvalue starts to decrease; it tends to zero at some maximal value \(h_{\rm cr}\). Again, in this limit the configuration approaches the embedded RN solution (14) and the fermion fields are fully absorbed into the interior of the forming black hole. The pattern is illustrated in Fig. 5, where two blue curves display the spectral flow of both positive and negative Dirac eigenvalues \(\omega\) for \(\alpha=1.09\) and \(\alpha=1.185\), respectively. In the limiting case \(\omega\to+0\) one has \(h_{\rm cr}\approx 1.5\) (for \(\alpha=1.09\)), and when \(\omega\to-0\) we have \(h_{\rm cr}\approx 0.25\) (for \(\alpha=1.185\)).
Figure 5: Normalized energy of the localized fermionic states as a function of the Yukawa coupling \(h\) for fixed \(\beta=1\) and several values of \(\alpha\) indicated by the numbers near the curves. The red dashed lines correspond to the continuum threshold \(|\omega|=|h/2|\) in the limit \(\alpha\to 0\). The blue lines correspond to the curves linked to the extremal RN black hole at some \(h_{\rm cr}\) as \(\omega\to 0\). Bold red dots indicate the critical values \(h_{\rm cr}\) given in Table 1.
The general scenario is that, depending on the value of the Yukawa coupling constant \(h\), there exists a critical value of the gravitational coupling \(\alpha_{\rm cr}\) at which the spectral flow reaches the limit \(\omega\to\pm 0\) and the configuration tends to the embedded RN solution (14). The corresponding values are given in Table 1 and are also displayed by the bold red dots in Fig. 5. Once again, each particular value of the Yukawa coupling gives rise to two distinct spectral flows approaching the embedded RN solution as \(\omega\to\pm 0\), at two different values of \(\alpha_{\rm cr}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(h\) & 0.25 & 0.5 & 0.7 & 1.0 & 1.5 \\ \hline \(\alpha_{\rm cr}(\omega\to+0)\) & 1.106 & 1.102 & 1.10 & 1.095 & 1.09 \\ \hline \(\alpha_{\rm cr}(\omega\to-0)\) & 1.185 & 1.187 & 1.191 & 1.193 & 1.199 \\ \hline \end{tabular}
\end{table}
Table 1: Critical values \(\alpha_{\rm cr}\) at which \(\omega\to\pm 0\) for some set of values of the Yukawa coupling \(h\) (cf. the red bold dots in Fig. 5).
Figure 6: The profile functions of the gauge field \(W(x)\), the scalar field \(H(x)\), and the metric functions \(N(x)\) and \(\sigma(x)\) of the gravitating non-BPS monopole (\(h=0\)) and of the monopole-fermion system (\(h=5\)) are shown as functions of the compactified radial coordinate \(\bar{x}=x/(1+x)\) at \(\alpha=1\) and \(\beta=1\).
Finally, we note that the system of equations (7)-(12) possesses two characteristic limiting cases, \(h\to\infty\) and \(\beta\to\infty\). First, for a fixed value of \(\beta\) and increasing Yukawa coupling, the backreaction of the localized fermions becomes stronger: the energy of the gravitating bound fermionic mode increases and the profile functions of the monopole are significantly deformed, see Fig. 6. We observe that an increase of the Yukawa coupling moves the configuration closer to the RN solution (see the bottom plots of Fig. 6). Note that deformations of the configuration caused by its coupling with massive fermion modes may produce a number of interesting effects related to the backreaction of the fermions [59; 60; 61; 62].
Secondly, as the scalar field becomes very massive, the core of the monopole shrinks, and in the limit \(\beta\to\infty\) the Higgs field takes its vacuum expectation value everywhere in space apart from the origin. One can expect that, for an intermediate range of values of \(\beta\), the scenario reported above for \(\beta=1\) should persist. Our numerical results confirm that an increase of the scalar coupling \(\beta\) decreases the critical value of the Yukawa coupling \(h\) at which the configuration approaches the extremal Reissner-Nordström solution. However, for relatively large values of \(\beta\), the pattern of evolution of the self-gravitating monopole becomes different [10; 11; 12; 63; 13]. One might also expect the behavior of the fermion field to be different in the large-\(\beta\) regime.
## IV Conclusions
The objective of this work is to investigate the fermionic modes localized on the static spherically symmetric self-gravitating non-Abelian monopole in the \(SU(2)\) Einstein-Dirac-Yang-Mills-Higgs theory. We have constructed numerically solutions of the full system of coupled field equations, supplemented by the normalization condition for the localized fermions, and investigated their properties. We have found that, in addition to the usual zero mode, which always exists for a BPS monopole, there is a tower of gravitationally localized states with nonzero eigenvalues \(\omega\), which are linked to the positive and negative continuum. While the fermionic zero mode exists for any negative value of the Yukawa coupling \(h\), the massive nodeless modes appear for positive values of \(h\). We find that, as we increase the gravitational coupling, the monopole bifurcates with the extremal Reissner-Nordström solution and the fermionic modes become absorbed into the interior of the forming black hole. This scenario holds for both zero and non-zero fermionic modes. Further, we observe that the Yukawa interaction breaks the symmetry between the localized massive modes with positive and negative eigenvalues. Another observation is that the localized gravitating fermions may deform the monopole, affecting the transition to the limiting solution.
The work here should be taken further by considering higher massive localized fermionic states with some number of radial nodes. Another interesting question, which we hope to address in the near future, is the effect of a finite bare mass of the fermions localized on the monopole. A further direction is the investigation of the properties of charged fermions localized on a self-gravitating dyon. Finally, let us note that several fermionic modes can be localized by the gravitating monopole simultaneously. We hope to address these problems in our future work.
###### Acknowledgements.
Y.S. would like to thank Jutta Kunz and Michael Volkov for enlightening discussions. He gratefully acknowledges the support of the Alexander von Humboldt Foundation and HWK Delmenhorst. The work was supported by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP14869140, "The study of QCD effects in non-QCD theories").
## Appendix A Definition of the \(\gamma^{5}\) matrix in curved (3+1)-dimensional spacetime
The interaction term between the spin-isospin fermions and the Higgs field of the non-Abelian self-gravitating monopole in the Lagrangian (2) is
\[-h\bar{\psi}_{\alpha}^{i}\left(\tilde{\gamma}^{5}\right)_{\alpha\beta}\sigma_{ ij}^{a}\phi^{a}\psi_{\beta}^{j}\,, \tag{10}\]
where \(\tilde{\gamma}^{5}\) is defined in the curved (3+1)-dimensional spacetime as
\[\tilde{\gamma}^{5}=\frac{1}{4!}E_{\alpha\beta\rho\sigma}\gamma^{\alpha}\gamma ^{\beta}\gamma^{\rho}\gamma^{\sigma}=\frac{1}{4!}\sqrt{-g}\epsilon_{\alpha \beta\rho\sigma}e_{a}^{\alpha}e_{b}^{\beta}e_{c}^{\rho}e_{d}^{\sigma}\gamma^{ a}\gamma^{b}\gamma^{c}\gamma^{d}=\frac{1}{4!}\sqrt{-g}\left(\epsilon_{\alpha\beta\rho \sigma}\epsilon^{abcd}e_{a}^{\alpha}e_{b}^{\beta}e_{c}^{\rho}e_{d}^{\sigma} \right)\gamma^{5}\,. \tag{11}\]
Here \(E_{\alpha\beta\rho\sigma}=\sqrt{-g}\epsilon_{\alpha\beta\rho\sigma}\) is the Levi-Civita tensor in curved space, \(\epsilon_{\alpha\beta\rho\sigma}\) is the Levi-Civita tensor in flat space, and
\[\gamma^{a}\gamma^{b}\gamma^{c}\gamma^{d}=\epsilon^{abcd}\gamma^{5},\quad\gamma ^{5}=\imath\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}.\]
The expression in the round brackets in (11) is the determinant of the matrix \(e_{a}^{\alpha}\):
\[\frac{1}{4!}\epsilon_{\alpha\beta\rho\sigma}\epsilon^{abcd}e_{a}^{\alpha}e_{b }^{\beta}e_{c}^{\rho}e_{d}^{\sigma}=\det\left(e_{a}^{\alpha}\right)=\frac{1}{ \sqrt{-g}}.\]
Hence, the interaction term (10) can be written as
\[-h\bar{\psi}_{\alpha}^{i}\left(\gamma^{5}\right)_{\alpha\beta}\sigma_{ij}^{a }\phi^{a}\psi_{\beta}^{j}.\]
|
2307.05064 | An Acceptance Semantics for Stable Modal Knowledge | We observe some puzzling linguistic data concerning ordinary knowledge
ascriptions that embed an epistemic (im)possibility claim. We conclude that it
is untenable to jointly endorse both classical logic and a pair of intuitively
attractive theses: the thesis that knowledge ascriptions are always veridical
and a `negative transparency' thesis that reduces knowledge of a simple negated
`might' claim to an epistemic claim without modal content. We motivate a
strategy for answering the trade-off: preserve veridicality and (generalized)
negative transparency, while abandoning the general validity of contraposition.
We survey and criticize various approaches for incorporating veridicality into
domain semantics, a paradigmatic `information-sensitive' framework for
capturing negative transparency and, more generally, the non-classical behavior
of sentences with epistemic modals. We then present a novel
information-sensitive semantics that successfully executes our favored
strategy: stable acceptance semantics. | Peter Hawke | 2023-07-11T07:11:51Z | http://arxiv.org/abs/2307.05064v1 |
# An Acceptance Semantics for Stable Modal Knowledge Extended Abstract
Peter Hawke
Philosophy Department, Lingnan University, Hong Kong
[email protected]
###### Abstract
We observe some puzzling linguistic data concerning ordinary knowledge ascriptions that embed an epistemic (im)possibility claim. We conclude that it is untenable to jointly endorse both classical logic and a pair of intuitively attractive theses: the thesis that knowledge ascriptions are always veridical and a 'negative transparency' thesis that reduces knowledge of a simple negated'might' claim to an epistemic claim without modal content. We motivate a strategy for answering the trade-off: preserve veridicality and (generalized) negative transparency, while abandoning the general validity of contraposition. We survey and criticize various approaches for incorporating veridicality into _domain semantics_, a paradigmatic 'information-sensitive' framework for capturing negative transparency and, more generally, the non-classical behavior of sentences with epistemic modals. We then present a novel information-sensitive semantics that successfully executes our favored strategy: _stable acceptance semantics_.
## 1 _Introduction_
In this paper, we are concerned with the semantics and logic of ordinary knowledge ascriptions that embed an epistemic (im)possibility claim.
1. Ann knows that it might be raining.
2. Ann knows that it can't be raining.
It is natural to interpret the modals here as having an _epistemic_ flavor. Intuitively, (1) communicates (perhaps _inter alia_) that Ann's knowledge leaves it open that it is raining; (2) communicates (perhaps _inter alia_) that Ann's knowledge rules out that it is raining. In support, notice how jarring the following sound:
3. # Ann knows that it might be raining and Ann knows that it isn't raining.
4. # Ann knows that it can't be raining and for all Ann knows, it is raining.
Note that (1) and (2) also provide evidence of the systematic _shiftiness_ of ordinary epistemic modals. Compare a bare might claim:
5. It might be raining.
In this case, the modal is most naturally taken to communicate that the knowledge of the _speaker_ (who need not be Ann) leaves it open that it is raining. As evidence, note the incoherence of the following so-called (and much discussed) _epistemic contradiction_ (cf. [22], [23]).
6. # It might be raining and it isn't raining.
2302.11817 | Effect of channel dimensions and Reynolds numbers on the turbulence
modulation for particle-laden turbulent channel flows | The addition of particles to turbulent flows changes the underlying mechanism
of turbulence and leads to turbulence modulation. Different temporal and
spatial scales for both phases make it challenging to understand turbulence
modulation via one parameter. The important parameters are particle Stokes
number, mass loading, particle Reynolds number, fluid bulk Reynolds number,
etc., that act together and affect the fluid phase turbulence intensities. In
the present study, we have carried out the large eddy simulations for different
system sizes (2{\delta}/dp = 54, 81, and 117) and fluid bulk Reynolds numbers
(Re_b = 5600 and 13750) to quantify the extent of turbulence attenuation. Here,
{\delta} is the half-channel width, dp is the particle diameter, and Re_b is
the fluid Reynolds number based on the fluid bulk velocity and channel width.
The point particles are tracked with the Lagrangian approach. The scaling
analysis of the feedback force shows that system size and fluid bulk Reynolds
number are the two crucial parameters that affect the turbulence modulation
more significantly than the other. The streamwise turbulent structures are
observed to become lengthier and fewer with an increase in system size for the
same volume fraction and fixed bulk Reynolds number. However, the streamwise
high-speed streaks are smaller, thinner, and closely spaced for higher Reynolds
numbers than the lower ones for the same volume fraction. In particle
statistics, it is observed that the scaled particle fluctuations increase with
the increase in system size while keeping the Reynolds number fixed. However,
the scaled particle fluctuations decrease with the increase in fluid bulk
Reynolds number for the same volume fraction and fixed system size. The present
study highlights the scaling issue for designing industrial equipment for
particle-laden turbulent flows. | Naveen Rohilla, Siddhi Arya, Partha Sarathi Goswami | 2023-02-23T07:07:26Z | http://arxiv.org/abs/2302.11817v1 | Effect of channel dimensions and Reynolds numbers on the turbulence modulation for particle-laden turbulent channel flows
###### Abstract
The addition of particles to turbulent flows changes the underlying mechanism of turbulence and leads to turbulence modulation. Different temporal and spatial scales for both phases make it challenging to understand turbulence modulation via one parameter. The important parameters are particle Stokes number, mass loading, particle Reynolds number, fluid bulk Reynolds number, etc., that act together and affect the fluid phase turbulence intensities. In the present study, we have carried out the large eddy simulations for different system sizes (\(2\delta/d_{p}=54,81\), and \(117\)) and fluid bulk Reynolds numbers (\(Re_{b}=5600\) and \(13750\)) to quantify the extent of turbulence attenuation. Here, \(\delta\) is the half-channel width, \(d_{p}\) is the particle diameter, and \(Re_{b}\) is the fluid Reynolds number based on the fluid bulk velocity and channel width. The point particles are tracked with the Lagrangian approach. The scaling analysis of the feedback force shows that system size and fluid bulk Reynolds number are the two crucial parameters that affect the turbulence modulation more significantly than the other. It is found that the extent of turbulence attenuation increases with an increase in system size for the same volume fraction while keeping the Reynolds number fixed. But, for the same volume fraction and fixed channel dimension, the extent of attenuation is low at a higher Reynolds number. The streamwise turbulent structures are observed to become lengthier and fewer with an increase in system size for the same volume fraction and fixed bulk Reynolds number. However, the streamwise high-speed streaks are smaller, thinner, and closely spaced for higher Reynolds numbers than the lower ones for the same volume fraction. In particle statistics, it is observed that the scaled particle fluctuations increase with the increase in system size while keeping the Reynolds number fixed. However, the scaled particle fluctuations decrease with the increase in fluid bulk Reynolds number for the same volume fraction and fixed system size. The present study highlights the scaling issue for designing industrial equipment for particle-laden turbulent flows.
particle-laden flows, LES, turbulence modulation
## I Introduction
Particle-laden turbulent flows are ubiquitous in industrial processes such as combustion, coating, fluidization, pneumatic transport of solids, and electrostatic precipitators, and in natural processes like raindrop formation and dust storms. The complex interactions between the gas and solid phases lead to the preferential accumulation of high-inertia particles in low-vorticity zones, crossing trajectories, turbophoresis-induced inhomogeneous distribution of particles, and fluid-phase turbulence modulation. Turbulence modulation is an important area of investigation as its extent directly impacts the heat and mass transfer properties. Turbulence modulation depends on many parameters such as the Stokes number, particle Reynolds number, fluid Reynolds number, the ratio of the integral length scale to the particle diameter, solid mass loading, etc. In an early work, Gore and Crowe (2017) divided the augmentation and attenuation regimes based on the ratio of the integral length scale to the particle diameter (\(L/d_{p}\)). The authors found that if the ratio is less than \(0.1\), attenuation is observed; otherwise, augmentation is observed. Tanaka and Eaton (2017) presented two dimensionless numbers based on the Stokes number and the particle Reynolds number (\(Re_{p}\)), and differentiated the regions of augmentation and attenuation based on these two numbers. They found that a region of attenuation exists between two augmentation regions. Yu _et al._ (2018) proposed criteria to predict the attenuation or augmentation of the turbulent kinetic energy. The authors performed fully resolved simulations for an upward channel for two bulk Reynolds numbers (\(Re_{b}\)) of \(5746\) and \(12000\), \(\rho_{p}/\rho_{f}=2-100\), and \(\phi=0.003-0.0236\). Here, \(\phi\) is the solid volume fraction, \(\rho_{p}\) is the particle density, and \(\rho_{f}\) is the fluid density. For \(Re_{p}<50\), \(\rho_{p}/\rho_{f}=2\), and \(2\delta/d_{p}=20\), an increase in bulk Reynolds number from \(5746\) to \(12000\) leads to less attenuation for \(\phi=0.0236\) (Fig. 11 of their paper). The authors observed that at low \(Re_{p}\) turbulence attenuation occurs across the channel; at intermediate \(Re_{p}\) the turbulence intensity is enhanced in the channel center and diminished near the wall. However, at sufficiently high \(Re_{p}\), the turbulence intensity is increased across the channel width. Yu _et al._ (2018) simulated fully resolved DNS at a friction Reynolds number of \(180\), \(\phi=0.84\%\), and \(\rho_{p}/\rho_{f}=1-104.2\), studying the effect of the density ratio on turbulence modulation in a turbulent channel flow with a constant pressure gradient and without gravity. The drag modification
with varying density ratio is not monotonic, and the authors found larger drag at a density ratio of 10.42 than at unity and 104.2. It is mentioned that the reduction in the Reynolds stress results in drag reduction, but this is counteracted by the increase in particle-induced stress, leading to an increase in the overall drag [4; 5].
In their experimental work, Kulick _et al._[6] found that turbulence attenuation increases with increased Stokes number, mass loading, and distance from the wall. The authors studied turbulence modulation at \(Re_{\tau}=644\) based on the half-channel width and friction velocity. They did not observe attenuation of the fluid fluctuations for a low Stokes number (based on the fluid time scale \(k/\epsilon\) at the centerline) of 0.57 (glass particles). However, a significant decrease in the streamwise fluid fluctuations was observed for a Stokes number of 3 (copper particles) at a mass loading of 0.8. Li _et al._[7] carried out DNS studies at a low \(Re_{\tau}=125\) for vertical channel flows and studied the effect of mass loading, Stokes number, and density ratio. A complete turbulence collapse was observed for the high mass loading \(\phi_{m}=2\). Yamamoto _et al._[8] performed large eddy simulations (LES) for a vertical channel at a Reynolds number (\(Re_{\tau}\)) of 644 and emphasized the importance of particle-particle collisions even for low-volume-fraction cases. The authors observed very small attenuation even at a mass loading of one for copper particles with \(St=\tau_{p}U_{cl}/\delta=70\), where \(\tau_{p}\) is the particle relaxation time, \(\delta\) is the half-channel width, and \(U_{cl}\) is the channel centerline velocity. Duque-Daza _et al._[9] performed LES of turbulent channel flows for three different Reynolds numbers, \(Re_{\tau}=180\), 365, and 950. For a mass loading of 2.96, a complete turbulence collapse was observed at \(Re_{\tau}=180\), while a negligible turbulence modification was observed at \(Re_{\tau}=950\). The Stokes number can be expressed as \(St=\tau_{p}/\tau_{k}=(1/18)(d_{p}/\eta)^{2}(\rho_{p}/\rho_{f})\). Here, \(\tau_{k}\) is the Kolmogorov time scale, \(\eta\) is the Kolmogorov length scale, and \(\tau_{p}\) is the particle relaxation time. Thus, the Stokes number can be modified by varying the density ratio or changing the particle diameter. The effect of both parameters was investigated by Shen _et al._[10] for homogeneous isotropic turbulence with fully resolved simulations. It was observed that particles always lead to turbulence attenuation; the attenuation was found to be larger for high-density particles when the particle diameters were the same. However, for a fixed density ratio the attenuation is smaller for larger particles, due to vortex shedding behind the particles when the particle Reynolds number is significantly large. Lee and Lee [11] performed DNS to examine the effect of the Stokes number on turbulence modulation in channel flow. The authors observed that smaller particles (\(St^{+}=0.5\)) act as an energy source for the high- and low-speed streaks, which may increase the instability and give rise to new quasi-streamwise vortices. However, the turbulence was attenuated for high-\(St^{+}\) particles. An increase in the number of quasi-streamwise vortices was observed at \(St^{+}=0.5\), while the number is smaller for high Stokes numbers.
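For completeness, the quoted Stokes-number relation follows directly from the definitions of the two time scales,

\[St=\frac{\tau_{p}}{\tau_{k}}=\frac{\rho_{p}d_{p}^{2}/(18\mu)}{\eta^{2}/\nu}=\frac{1}{18}\left(\frac{d_{p}}{\eta}\right)^{2}\frac{\rho_{p}}{\rho_{f}}\,,\]

using \(\mu=\rho_{f}\nu\) and the Kolmogorov time scale \(\tau_{k}=(\nu/\epsilon)^{1/2}=\eta^{2}/\nu\).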
The effect of particle loading on turbulent structures, viz. streak spacing, in turbulent channel flows has been described by different authors [9; 12; 13]. Zhou _et al._[13] observed non-monotonic behavior of the streamwise fluid fluctuations with an increase in mass loading, while the wall-normal and spanwise fluid fluctuations decreased with increasing particle mass loading. The streamwise fluctuations increased up to a mass loading of 0.75, and a further increase in mass loading to 0.96 caused a decrease in the streamwise fluid fluctuations. The authors presented two phenomena related to the number and spacing of the near-wall streaks and the modification of the strength of vortices. First, the near-wall vortices become fewer and wider, and the spacing between the streaks increases with an increase in mass loading; this results in a decrease in the streamwise fluid fluctuations. Second, the vortices and streaks align in the streamwise direction and enhance the strength of the streaks, increasing the streamwise fluctuations beyond the mass loading of 0.75. Zhao _et al._[12] and White and Mungal [14] mentioned that the particles affect the regeneration cycle of near-wall turbulence by reducing the strength and the number of quasi-streamwise vortices. This leads to wider and more stable near-wall streaks. Dritselis and Vlachos [15] concluded that the particles provide a torque in the direction opposite to that of the mean vortex, which leads to a weaker and wider mean fluid structure. Ghosh and Goswami [16] discussed the role of inter-particle collisions in the particle-induced break-up of the high- and low-speed streaks in the context of particle-laden shear flows. The turbulence modulation and particle clustering phenomena have been reviewed by Balachandar and Eaton [17] and Brandt and Coletti [18]. A number of studies on particle-laden turbulent channel flows have also been summarized by Muramulla _et al._[19], Wang _et al._[20], and Rohilla _et al._[21].
Tanaka and Eaton [2] have proposed two dimensionless numbers (particle momentum numbers) to classify the regimes of turbulence modulation, which are based on the particle Reynolds number and Stokes number. One of the particle momentum numbers based on the Stokes number is given as, \(Pa_{St}=\frac{1}{54\sqrt{2}}\frac{Re_{p}^{2}}{St_{K}^{1/2}}\left(\frac{\rho_{ p}}{\rho_{f}}\right)^{3/2}\left(\frac{d_{p}}{L}\right)^{3}\), when \(v_{k}\sim v_{rel}\) in the limit of \(Re_{p}\sim d_{p}/\eta\). Here, \(St_{K}\) is the Stokes number based on the Kolmogorov time scale, \(v_{k}\) and \(\eta\) are the Kolmogorov velocity and length scales, and \(v_{rel}\) represents the relative velocity between the fluid and particle phases. Another dimensionless number based on the particle Reynolds number is defined as \(Pa_{Re}=\frac{1}{18}\frac{\rho_{p}}{\rho_{f}}\left(\frac{d_{p}}{L}\right)^{3} \frac{Re_{p}^{2}}{Re_{p}}\). The particle Reynolds number is not an independent variable. However, other parameters such as density ratio, bulk Reynolds number, and the ratio of particle size to integral length scale can be input parameters. The effect of mass loading or the density ratio on the turbulence modulation for the channel flows [6; 7; 11; 13; 19; 22; 23; 19], homogenous
isotropic turbulence [25; 26; 10], homogeneous shear [27; 28], decaying turbulence [29; 30; 31; 32], and Couette flow [33; 34; 35; 16] have been reported. The different particle-laden experimental works are summarized by Wu _et al._[22] and Fong _et al._[24]. Studies of the effect of channel dimensions (the ratio of the integral length scale to the particle diameter) and Reynolds number on turbulence modulation in the limit of high Stokes number and low solid volume loading are scarce in the literature [36; 4]. In the case of LES, there are limited studies where turbulence modulation has been explored with the variation of volume fraction/mass loading and Reynolds number [21; 8; 9]. In this work, we have studied the effect of \(L/d_{p}\) and Reynolds number on turbulence modulation and found that these are the two important parameters that, for a given volume fraction, determine the extent of attenuation. The present study highlights an important scaling issue which should be dealt with when comparing simulation and experimental results.
Paper outline: in the second section, the governing equations for the fluid and particle phases are discussed briefly, along with the simulation parameters used in the present study. The third section presents the turbulence modulation of fluid fluctuations and particle properties due to variations in volume fraction, system size, and Reynolds number. The conclusions are discussed in the last section.
## II Governing equations
### Fluid Phase Equations
The incompressible fluid phase is described in the Eulerian framework using Navier-Stokes equations. In the LES approach, the instantaneous mass and momentum equations are defined using the filtered quantities written as,
\[\frac{\partial\widetilde{u}_{i}}{\partial x_{i}}=0 \tag{1}\]
and,
\[\frac{\partial\widetilde{u}_{i}}{\partial t}+\frac{\partial \widetilde{u}_{i}\widetilde{u}_{j}}{\partial x_{j}}=-\frac{1}{\rho_{f}}\frac{ \partial\widetilde{p}}{\partial x_{i}}+\nu\frac{\partial^{2}\widetilde{u}_{i} }{\partial x_{j}\partial x_{j}}\\ +\frac{\partial(\widetilde{u}_{i}\widetilde{u}_{j}-\widetilde{u _{i}}\widetilde{u}_{j})}{\partial x_{j}}+\frac{\widetilde{f}_{i}}{\rho_{f}}. \tag{2}\]
where \(\widetilde{u}_{i}\) and \(\widetilde{p}\) are the filtered velocity and pressure, respectively, \(\nu\) is the kinematic viscosity, and \(\rho_{f}\) is the fluid density. The \((\widetilde{\cdot})\) denotes a filtered quantity throughout the manuscript. \(\widetilde{f}_{i}\) is the feedback force per unit volume caused by the dispersed phase; it includes the drag and the lift on the particles, and is written as,
\[\widetilde{f}_{i}=-\sum_{I}(F_{i,I}^{d}+F_{i,I}^{l}) \tag{3}\]
Here, \(F_{i,I}^{d}\) and \(F_{i,I}^{l}\) are the drag and lift forces on the \(I^{th}\) particle. The third term on the right-hand side of Eqn. (2) is referred to as the subgrid-scale (SGS) stress term, which requires closure. The dynamic one-equation model has been used as a fluid SGS model. A correction on the lift force term is applied following Mei [37] and Loth and Dorgan [38]. The importance of the inclusion of lift force is discussed by Costa _et al._[39]. In the present methodology, a finite volume based opensource software CFDEM [40; 41] has been used. A second-order backward scheme for time derivative and a second-order Gauss linear differencing scheme for pressure gradient, viscous diffusion, and advection terms have been adopted.
#### ii.1.1 Dynamic one-equation model
In the dynamic one-equation model, the subgrid scale stress (\(\tau_{ij}\)) is expressed in terms of the eddy viscosity (\(\nu_{t}\)) and filtered strain rate (\(\widetilde{S}_{ij}\)) as \(\tau_{ij}=-2\nu_{t}\widetilde{S}_{ij}\). The eddy viscosity is calculated as,
\[\nu_{t}=C\widetilde{\Delta}\sqrt{k_{SGS}}. \tag{4}\]
Here, \(k_{SGS}=\frac{1}{2}[\widetilde{u_{i}u_{i}}-\widetilde{u}_{i}\widetilde{u_{i}}]\) is the subgrid scale kinetic energy, \(\widetilde{\Delta}\) is the cube root of grid volume, and \(C\) is a parameter calculated dynamically during the simulation. The \(k_{SGS}\) is calculated using the following equation (Eqn. 5).
\[\frac{\partial k_{SGS}}{\partial t}+\frac{\partial(k_{SGS}\widetilde{u}_{i}) }{\partial x_{j}}=\frac{\partial}{\partial x_{j}}\left[(\nu+\nu_{t})\frac{ \partial k_{SGS}}{\partial x_{j}}\right]-\epsilon-\tau_{ij}\frac{\partial \widetilde{u}_{i}}{\partial x_{j}}. \tag{5}\]
Where \(\epsilon=C_{\epsilon}k_{SGS}^{3/2}/\widetilde{\Delta}\), and \(C_{\epsilon}=1.048\) is the model coefficient. In this model, the model constant appearing in eddy viscosity (\(C\) in Eqn. 4) is calculated dynamically [42] which is explained as,
\[\tau_{ij}-\frac{2}{3}\delta_{ij}k_{SGS}=-2C_{1}\widetilde{\Delta}k_{SGS}^{1/2 }\widetilde{S}_{ij}. \tag{6}\]
Using the second or test filter, the subgrid stress at the test filter level can be written as,
\[T_{ij}-\frac{2}{3}\delta_{ij}K_{SGS}=-2C_{2}\widetilde{\widetilde{\Delta}}K_{ SGS}^{1/2}\widetilde{\widetilde{S}}_{ij}. \tag{7}\]
Where \(K_{SGS}=\frac{1}{2}[\widetilde{u_{i}u_{i}}-\widetilde{\widetilde{u}_{i}} \widetilde{\widetilde{u}_{i}}]\). The calculation of \(C\) involves Germano's identity [43],
\[L_{ij}=T_{ij}-\widetilde{\tau_{ij}}=\widetilde{\widetilde{u_{i}}\widetilde{u_{j}}}-\widetilde{\widetilde{u_{i}}}\,\widetilde{\widetilde{u_{j}}}, \tag{8}\]
and,
\[K_{SGS}=\widetilde{k_{SGS}}+\frac{1}{2}L_{ii}. \tag{9}\]
The formulation involves two assumptions. First, the subgrid stress is assumed to scale with the filter size, so that \(C_{1}\) and \(C_{2}\) are replaced by a single coefficient \(C\). Second, \(C\) is taken outside the test-filtering operation in Eqn. 10, since the spatial variation of \(C\) is weak. Using Eqns. 6 - 8, the deviatoric part of \(L_{ij}\) is written as,
\[L_{ij}^{d}=L_{ij}-\frac{1}{3}L_{kk}\delta_{ij}=\alpha_{ij}C-\widetilde{\beta_{ ij}C}, \tag{10}\]
and,
\[M_{ij}=\alpha_{ij}-\widetilde{\beta_{ij}}. \tag{11}\]
Where \(\alpha_{ij}=-2\widetilde{\widetilde{\Delta}}K_{SGS}^{1/2}\widetilde{ \widetilde{S_{ij}}}\) and \(\beta_{ij}=-2\widetilde{\Delta}k_{SGS}^{1/2}\widetilde{S_{ij}}\). Then,
\[C=\frac{1}{2}\frac{\langle M_{ij}L_{ij}\rangle}{\langle M_{kl}M_{kl}\rangle}. \tag{12}\]
Here, \(C_{s}(=\sqrt{C})\) is the Smagorinsky coefficient. The angular brackets in Eqn. 12 denote plane averaging. The present work uses a box filter as the test filter. This model does not require the Van Driest damping needed in the Smagorinsky model to satisfy the near-wall scaling.
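To make the dynamic procedure concrete, a minimal sketch of Eqns. (6)-(12) in Python follows. The cubic cells of size `dx`, the box test filter of width \(2\widetilde{\Delta}\) implemented with `scipy.ndimage.uniform_filter`, the periodic (`wrap`) treatment of all directions, and the function and variable names are our illustrative assumptions; they are not part of the CFDEM/OpenFOAM-based solver used in the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def test_filter(f):
    # Box test filter of width 2*dx (an assumption for this sketch)
    return uniform_filter(f, size=2, mode="wrap")

def dynamic_C(u, k_sgs, dx):
    """u: resolved velocity, shape (3, nx, ny, nz); k_sgs: SGS energy field.
    Returns C(y), one value per x-z plane (y is axis 1)."""
    # Resolved strain rate S_ij = 0.5 (du_i/dx_j + du_j/dx_i)
    grad = [[np.gradient(u[i], dx, axis=j) for j in range(3)] for i in range(3)]
    S = np.array([[0.5 * (grad[i][j] + grad[j][i]) for j in range(3)]
                  for i in range(3)])

    # Test-filtered velocity and its strain rate
    uf = np.array([test_filter(u[i]) for i in range(3)])
    gradf = [[np.gradient(uf[i], dx, axis=j) for j in range(3)] for i in range(3)]
    Sf = np.array([[0.5 * (gradf[i][j] + gradf[j][i]) for j in range(3)]
                   for i in range(3)])

    # Germano identity, Eqn. (8), and K_SGS at the test level, Eqn. (9)
    L = np.array([[test_filter(u[i] * u[j]) - uf[i] * uf[j]
                   for j in range(3)] for i in range(3)])
    trL = L[0, 0] + L[1, 1] + L[2, 2]
    K = test_filter(k_sgs) + 0.5 * trL

    delta, Delta = dx, 2.0 * dx            # grid and test filter widths
    beta = -2.0 * delta * np.sqrt(np.maximum(k_sgs, 0.0)) * S
    alpha = -2.0 * Delta * np.sqrt(np.maximum(K, 0.0)) * Sf
    M = np.array([[alpha[i, j] - test_filter(beta[i, j])
                   for j in range(3)] for i in range(3)])  # Eqn. (11)

    # Deviatoric L, Eqn. (10); M is traceless for incompressible flow,
    # so contracting with L^d is equivalent to contracting with L.
    Ld = L.copy()
    for i in range(3):
        Ld[i, i] -= trL / 3.0
    num = np.mean(np.einsum("ij...,ij...->...", M, Ld), axis=(0, 2))
    den = np.mean(np.einsum("ij...,ij...->...", M, M), axis=(0, 2))
    return 0.5 * num / den                 # Eqn. (12), plane-averaged
```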
### Particle phase description
Each individual solid particle is tracked in the Lagrangian framework. The particle-particle and particle-wall collisions are considered to be elastic. The motion of the dispersed-phase particles is described using Newton's second law of motion, which is written as,
\[m_{p}\frac{dv_{i,I}}{dt}=F_{i,I}^{D}+F_{i,I}^{L}+\sum_{I\neq J}F_{i,IJ}+F_{i, Iw}+m_{p}g. \tag{13}\]
where \(m_{p}\) is the mass of the particle, \(v_{i,I}\) is the velocity of the \(I^{th}\) particle, and \(F_{i,I}^{D}\) and \(F_{i,I}^{L}\) are the drag and lift forces acting on the particle, respectively. \(g\) is the gravitational acceleration, \(F_{i,IJ}\) is the interaction force between the \(I^{th}\) and \(J^{th}\) particles, and \(F_{i,Iw}\) is the interaction force between the \(I^{th}\) particle and the wall. The drag force is calculated using the Schiller-Naumann correlation [44] given by Eqn. 14.
\[F_{i,I}^{D}=3\pi\mu d_{p}(\widetilde{u}_{i,I}(x,t)-v_{i,I})(1+0.15Re_{p}^{0.68 7}) \tag{14}\]
A third-order accurate scheme has been used to interpolate the fluid velocity at the particle location to calculate the drag and lift forces. The particle size is smaller than the grid size, and a point-particle approximation has been used in the present study [12; 17; 45; 46; 47; 48; 49].
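A minimal sketch of the drag law of Eqn. (14) is shown below; the fluid properties and velocities in the example are illustrative values of our own, not from the simulations:

```python
import numpy as np

def drag_force(u_fluid_at_p, v_particle, d_p, mu_f, rho_f):
    """Schiller-Naumann drag on a single particle, Eqn. (14)."""
    u_rel = np.asarray(u_fluid_at_p) - np.asarray(v_particle)
    re_p = rho_f * np.linalg.norm(u_rel) * d_p / mu_f  # particle Reynolds number
    return 3.0 * np.pi * mu_f * d_p * u_rel * (1.0 + 0.15 * re_p ** 0.687)

# Example: a 39-micron particle lagging the gas by 0.5 m/s in air
F = drag_force([0.5, 0.0, 0.0], [0.0, 0.0, 0.0],
               d_p=39e-6, mu_f=1.8e-5, rho_f=1.2)
print(F)   # Stokes drag times the finite-Re_p correction factor
```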
The soft-sphere (spring-dashpot) [50; 51] interaction model has been used to capture the particle-particle and particle-wall collisions. Particle collisions are important to capture even at a low solid volume fraction (\(\phi\sim 10^{-4}\)) [8]. The collisions lead to momentum transfer in all directions and affect the particle phase statistics. During a collision, the repulsive force on a particle depends on the extent of overlap of the colliding particles. This is modeled using a spring-dashpot system, where the dashpot accounts for the energy loss associated with the collision. The contact forces, overlap distances, and relative velocities are decomposed into normal and tangential components. Considering two colliding particles \(i\) and \(j\), the normal (\(\mathbf{F}_{nij}\)) and tangential (\(\mathbf{F}_{tij}\)) components of the force are modeled as,
\[\mathbf{F}_{nij}=(-k_{n}\delta_{n}^{3/2}-\eta_{n}\mathbf{G}\cdot\mathbf{n})\mathbf{n}, \tag{15}\]
\[\mathbf{F}_{tij}=-k_{t}\boldsymbol{\delta}_{t}-\eta_{t}\mathbf{G}_{ct}. \tag{16}\]
Here, the subscripts 'n' and 't' refer to the normal and tangential directions, \(\delta\) is the overlap distance, \(k\) and \(\eta\) are the stiffness and damping coefficients, \(\mathbf{G}\) is the velocity of the \(i^{th}\) particle relative to the \(j^{th}\) particle (\(\mathbf{G}=\mathbf{v_{i}}-\mathbf{v_{j}}\)), and \(\mathbf{n}\) is the unit vector along the line connecting the centers of the two particles. \(\mathbf{G}_{ct}\) is the slip velocity at the contact point and is defined as \(\mathbf{G}_{ct}=\mathbf{G}-(\mathbf{G}.\mathbf{n})\mathbf{n}\).
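A minimal sketch of the normal contact force of Eqn. (15) for a pair of equal spheres is given below; the stiffness and damping values are placeholders, and the convention that \(\mathbf{n}\) points from particle \(i\) toward particle \(j\) is our choice, consistent with a repulsive normal force:

```python
import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, d_p, k_n, eta_n):
    """Soft-sphere normal force on particle i from particle j, Eqn. (15)."""
    r_ij = np.asarray(x_j) - np.asarray(x_i)
    dist = np.linalg.norm(r_ij)
    overlap = d_p - dist                 # > 0 only while in contact
    if overlap <= 0.0:
        return np.zeros(3)
    n = r_ij / dist                      # unit normal, i -> j
    G = np.asarray(v_i) - np.asarray(v_j)
    # Hertzian spring (overlap^{3/2}) plus dashpot along the normal;
    # the minus signs make the force push i away from j.
    return (-k_n * overlap ** 1.5 - eta_n * np.dot(G, n)) * n
```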
### Simulation parameters
Simulations are performed at two bulk Reynolds numbers (\(Re_{b}=\bar{u}\times 2\delta/\nu\)) and three different channel dimensions (\(2\delta/d_{p}\)), where \(\delta\) is the half-channel width, \(\bar{u}\) is the average fluid velocity, and \(d_{p}\) is the particle diameter. The bulk Reynolds numbers are 5600 and 13750, and the three channel dimensions used are 54, 81, and 117. The lengths of the vertical channel are \(8\pi\delta\), \(2\delta\), and \((4/3)\pi\delta\) in the streamwise (x), wall-normal (y), and spanwise (z) directions, respectively. The finite volume method is applied to solve the filtered Navier-Stokes Eqns. 1 and 2. The fluid phase is resolved with \(128\times 65\times 64\) and \(192\times 99\times 96\) grid points (\(x\times y\times z\)) for bulk Reynolds numbers of 5600 and 13750, respectively. The grid resolution is \(\triangle x^{+}=35\), \(\triangle z^{+}=12\) for \(Re_{b}=5600\), and \(\triangle x^{+}=51\), \(\triangle z^{+}=17\) for \(Re_{b}=13750\). Earlier studies [52; 53] have also used similar grid resolutions. Here, the \((+)\) superscript indicates quantities normalized with viscous scales. In the wall-normal direction, the first grid point is placed such that \(y^{+}\) is less than one. A no-slip boundary condition is applied at the walls. The time step is chosen such that the CFL number is nearly 0.2 for all the constant-bulk-flow-rate simulations. The dynamic one-equation LES model (Eqns. 4 - 12) is used to close the SGS stress term in the filtered Navier-Stokes equation.
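The quoted wall-unit resolutions can be cross-checked from the domain sizes and grid counts. A minimal sketch, assuming the nominal friction Reynolds numbers \(Re_{\tau}\approx 180\) and \(395\) commonly associated with \(Re_{b}=5600\) and \(13750\) (our assumption, based on the unladen DNS used for validation):

```python
import numpy as np

# dx+ = (Lx/Nx) * Re_tau / delta, with Lx = 8*pi*delta, Lz = (4/3)*pi*delta
for Re_tau, (nx, nz) in [(180, (128, 64)), (395, (192, 96))]:
    dx_plus = (8.0 * np.pi / nx) * Re_tau
    dz_plus = (4.0 * np.pi / 3.0 / nz) * Re_tau
    print(f"Re_tau = {Re_tau}: dx+ = {dx_plus:.1f}, dz+ = {dz_plus:.1f}")
# -> close to the quoted 35, 12 and 51, 17; small differences reflect
#    the actual unladen Re_tau of the simulations.
```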
The particle diameter is considered to be \(39\,\mu m\). The particle densities are \(2000\,kg/m^{3}\) and \(800\,kg/m^{3}\). As the particle-to-fluid density ratio is high, the Basset history and buoyancy effects are neglected in the particle's equation of motion. Simulation parameters such as the Reynolds numbers, channel dimensions, and particle Stokes numbers are provided in Table 1.
The Stokes numbers based on the fluid integral time scale and the viscous time scale are denoted by \(St=\tau_{p}/\tau_{f}\) and \(St_{v}=\tau_{p}/\tau_{fv}\), respectively. Here, the particle relaxation time (\(\tau_{p}\)) is defined as \(\tau_{p}=\rho_{p}d_{p}^{2}/18\mu_{f}\), the fluid integral time scale is defined as \(\tau_{f}=2\delta/\overline{u}\), and the viscous time scale is defined as \(\tau_{fv}=\nu/u_{\tau}^{2}\), where \(u_{\tau}\) is the friction velocity. The unladen friction velocity is used to calculate \(St_{v}\) in Table 1. Various authors [13; 54; 55; 56] have used a time scale based on the friction velocity to define the Stokes number. However, it is to be noted that turbulence modulation with increasing particle mass loading leads to a modification of the friction velocity, shown in Fig. 1 (a). For that reason, we have used \(2\delta/\bar{u}\) as the fluid time scale.
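The Stokes numbers of Table 1 can be reproduced approximately from these definitions. A minimal sketch, assuming nominal air properties (\(\mu_{f}=1.8\times 10^{-5}\,Pa\,s\), \(\rho_{f}=1.2\,kg/m^{3}\)) and an unladen \(Re_{\tau}\approx 180\) at \(Re_{b}=5600\); the exact property values are not quoted in the text:

```python
mu_f, rho_f = 1.8e-5, 1.2
nu = mu_f / rho_f
d_p, rho_p = 39e-6, 2000.0
tau_p = rho_p * d_p**2 / (18.0 * mu_f)        # particle relaxation time [s]

Re_b, Re_tau = 5600, 180
for ratio in (54, 81, 117):                   # ratio = 2*delta / d_p
    two_delta = ratio * d_p
    u_bulk = Re_b * nu / two_delta
    St = tau_p * u_bulk / two_delta           # tau_p / (2 delta / u_bulk)
    u_tau = Re_tau * nu / (0.5 * two_delta)   # unladen friction velocity
    St_v = tau_p * u_tau**2 / nu              # tau_p / (nu / u_tau^2)
    print(f"2delta/dp = {ratio:3d}:  St ~ {St:5.0f},  St_v ~ {St_v:6.0f}")
# -> St ~ 178, 79, 38 and St_v ~ 4116, 1829, 877: close to Table 1
#    (small differences reflect the actual fluid properties used).
```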
## III Results
### Effect of system size
In this section, simulations are performed to analyze the dynamics of the fluid and solid phases for different channel widths over a range of solid volume fractions at \(Re_{b}=5600\). The channel dimensions are chosen such that the ratio of the channel width (\(2\delta\)) to the particle diameter (\(d_{p}\)) is 54, 81, or 117. Here, the channel width (\(2\delta\)) varies between configurations, while the computational domain is kept as \(8\pi\delta\times 2\delta\times(4/3)\pi\delta\) for all the cases. The comparison of the current unladen LES with available literature [57; 58; 19; 53] is shown in the appendix for \(Re_{b}=5600\) and \(13750\). In Fig. 1 (a), the fluid streamwise mean velocity is plotted for all three channels and two volume fractions, \(\phi=2\times 10^{-4}\) and \(5\times 10^{-4}\). It is observed that there is no variation in the mean profiles for case \(A_{1}\) when the volume fraction is increased from \(\phi=2\times 10^{-4}\) to \(5\times 10^{-4}\). For case \(B_{1}\), there is a marginal decrease of about 5% in the mean velocity in the buffer region for the same increase in volume fraction. However, the reduction in the mean velocity is more significant for case \(C_{1}\) at \(\phi=5\times 10^{-4}\). The normalized fluid fluctuations as a function of the wall-normal distance are plotted in Fig. 1 (b-d). For the streamwise fluid fluctuations, there is only a 4% and 9% decrease in the peak value as the volume fraction is increased from \(\phi=2\times 10^{-4}\) to \(5\times 10^{-4}\) for cases \(A_{1}\) and \(B_{1}\), respectively. However, the decrease is almost one order of magnitude for case \(C_{1}\) for the same increase in volume fraction (Fig. 1 (b)). For the cross-stream fluid velocity fluctuations, there is a 13% and 34% decrease as the volume fraction is changed from \(\phi=2\times 10^{-4}\) to \(5\times 10^{-4}\) for cases \(A_{1}\) and \(B_{1}\), respectively, whereas the decrease is almost one order of magnitude for the same change in volume fraction in the larger channel with \(2\delta/d_{p}=117\) (case \(C_{1}\)), Fig. 1 (c). A similar behavior is observed for the wall-normal fluid fluctuations, shown in Fig. 1 (d).
To quantify the variation of turbulence intensity, we compute channel averaged fluid fluctuations as defined by
\[\langle\star\rangle_{s}=\frac{1}{\delta}\int_{0}^{\delta}dy\langle\star\rangle. \tag{17}\]
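A minimal numerical sketch of this channel average, using the trapezoidal rule over an assumed wall-normal grid (all names and the test profile are illustrative only):

```python
import numpy as np

def channel_average(y, q, delta):
    """Eq. (17): (1/delta) * integral of q(y) over 0..delta."""
    return np.trapz(q, y) / delta

# placeholder usage with a made-up profile q(y) = y*(1 - y) on delta = 1
delta = 1.0
y = np.linspace(0.0, delta, 65)
print(channel_average(y, y * (1.0 - y), delta))   # ~1/6 for this profile
```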
In Fig. 2 (a-d), the average fluid fluctuations across the channel width are plotted for a range of volume fractions. The average fluctuations predicted by the different channels match at lower volume fractions. The fluid fluctuations decrease with an increase in volume fraction, and a complete turbulence suppression is observed at a higher particle volume loading for all three cases. The particle loading at which complete turbulence collapse occurs is called the critical particle volume loading (CPVL). The attenuation observed in the fluid fluctuations is higher for the case \(C_{1}\). The CPVL is \(14\times 10^{-4}\), \(7\times 10^{-4}\), and \(5\times 10^{-4}\) for the cases \(A_{1}\), \(B_{1}\) and \(C_{1}\), respectively. It is interesting to note that the fluid fluctuations decrease with an increase in system size for the same volume fraction and fixed Reynolds number, even though the Stokes number decreases (Table 1). Such an observation is contrary to the earlier findings [7; 11; 19; 47; 59] that the fluid fluctuations decrease with an increase in Stokes number for \(St_{K}>1\). Here, \(St_{K}\) is the Stokes number based on the Kolmogorov time scale. It is worth noting that, in the earlier studies, the Stokes number was changed by changing the particle relaxation time for a fixed fluid time scale. However, in the present study, the particle relaxation time is fixed and the fluid time scale changes with the system size. The results show that there is a reduction in the fluid fluctuations with a decrease in the considered Stokes numbers. This raises the question of whether the Stokes number is the correct parameter to represent the effect of particle inertia on turbulence modulation over a range of system sizes in the limit \(St_{K}>1\). Addressing such a question is important if one is interested in quantifying turbulence modulation in a unified manner as a function of fluid and particle inertia. Another implication is that the understanding gained will help compare simulation results with experiments.
Table 1: \(Re_{b}\) is the fluid bulk Reynolds number, \(\delta\) is the half channel width, \(d_{p}\) is the particle diameter, \(St\) is the Stokes number based on the integral time scale of the fluid (\(2\delta/\bar{u}\)), and \(St_{v}\) is the Stokes number based on the viscous time scale.

| Case | \(Re_{b}\) | \(2\delta/d_{p}\) | \(St\) | \(St_{v}\) |
| --- | --- | --- | --- | --- |
| \(A_{1}\) | 5600 | 54 | 179.53 | 4108.8 |
| \(B_{1}\) | 5600 | 81 | 74.45 | 1703.21 |
| \(C_{1}\) | 5600 | 117 | 38.71 | 885.27 |
| \(B_{2}\) | 13750 | 81 | 73.12 | 3304.4 |
| \(C_{2}\) | 13750 | 117 | 38.02 | 1720.8 |
### Effect of Reynolds number of fluid phase
This section examines the turbulence modulation for two channel dimensions, \(2\delta/d_{p}=81\) and \(117\). The unladen fluid statistics are validated against the DNS data of Moser _et al._ [57], the experimental data of Li _et al._ [58], and the LES data of Salmanzadeh _et al._ [53] in the appendix, shown in Fig. 20. In Fig. 3, the fluid statistics for two volume fractions, \(\phi=5\times 10^{-4}\) and \(10^{-3}\), are presented for cases \(B_{1}\) and \(B_{2}\). In Fig. 3 (a), it is observed that there is no change in the streamwise mean velocity profiles as the volume fraction is increased from \(\phi=5\times 10^{-4}\) to \(10^{-3}\) for the case \(B_{2}\). However, in the case of \(B_{1}\), there is a decrease of the fluid streamwise mean velocity in the buffer region and an increase in the mean velocity near the channel center for a similar change in volume fraction (shown in Fig. 3 a). Fig. 3 shows that there is about a \(5\%\) decrease in the peak value of the streamwise fluid fluctuations as the volume fraction is increased from \(\phi=5\times 10^{-4}\) to \(10^{-3}\) for the case \(B_{2}\). However, a decrease of almost one order of magnitude in the peak value is observed for the case \(B_{1}\) for the same variation of volume fraction. There is a decrease of nearly \(14\%\) and \(11\%\) in the peak values of the Reynolds stress and wall-normal fluctuations, respectively, as the volume fraction is increased from \(\phi=5\times 10^{-4}\) to \(10^{-3}\) for \(Re_{b}=13750\). However, in the case of \(Re_{b}=5600\), there is a decrease of almost one order of magnitude in the peak values of the Reynolds stress and the wall-normal fluctuations.
Since the extent of variation of the second-order velocity fluctuations is a function of wall-normal distance, we compute channel-averaged values of the moments to quantify their variation as a function of solid volume fraction. The average fluid fluctuations are plotted in Fig. 4 for both Reynolds numbers, \(5600\) and \(13750\), and for channel dimensions of \(2\delta/d_{p}=81\)
Figure 1: The fluid velocity profiles along the wall-normal direction for different volume fractions at \(Re_{b}=5600\). The fluid velocities and wall-normal distance are normalized with viscous units. Fig. (a) Streamwise mean velocity, (b) Streamwise fluctuations, (c) Cross-stream stress, and (d) Wall-normal fluid fluctuations. The legends for Figs. (c) and (d) are the same as in Fig. (a).
and 117 over a range of particle volume fractions. A decrease in the fluid fluctuations is observed with an increase in particle volume loading. It is observed that there is almost a 40% decrease in the streamwise fluid fluctuations and almost a 60% decrease in the cross-stream, wall-normal, and spanwise fluid fluctuations compared to the unladen cases before the CPVL for both channels and Reynolds numbers. However, for constant \(2\delta/d_{p}\), the CPVL for \(Re_{b}=13750\) is significantly higher than that for \(Re_{b}=5600\).
To understand the dependence of turbulence modulation and its collapse on the channel dimension and fluid phase Reynolds number, we conduct momentum and energy balance for a range of solid volume fractions and discuss them in the following section.
### Momentum and energy balance
This section discusses the terms of the momentum balance and energy budget for the mean fluid flow. The filtered momentum equations for streamwise and wall-normal components are written as,
\[-\frac{1}{\rho_{f}}\frac{\partial\widetilde{P}}{\partial x}-\frac{\partial(\overline{\widetilde{u}_{x}^{\prime}\widetilde{u}_{y}^{\prime}})}{\partial y}+(\nu+\nu_{t})\frac{\partial^{2}\widetilde{U}_{x}}{\partial y^{2}}-\frac{\overline{\rho_{p}f\phi_{l}}}{\rho_{f}\tau_{p}}(\widetilde{u}_{x}-\widetilde{v}_{x})=0, \tag{18}\]

and

\[-\frac{1}{\rho_{f}}\frac{\partial\widetilde{P}}{\partial y}-\frac{\partial(\overline{\widetilde{u}_{y}^{\prime}\widetilde{u}_{y}^{\prime}})}{\partial y}-\frac{\overline{\rho_{p}f\phi_{l}}}{\rho_{f}\tau_{p}}(\widetilde{u}_{y}-\widetilde{v}_{y})=0. \tag{19}\]
Here, \(\widetilde{p}\) and \(\widetilde{u}_{i}\) are the fluid's instantaneous filtered pressure and velocity, respectively. \(\widetilde{P}\) and \(\widetilde{U}_{x}\) are the
Figure 2: The average fluid fluctuations across the channel width for a volume fraction range at \(Re_{b}=5600\). The fluid fluctuations are normalized with the square of bulk fluid velocity. Fig. (a) Streamwise, (b) Cross-stream, (c) Wall-normal, and (d) Spanwise fluid fluctuations.
mean filtered pressure and velocity, respectively. The filtered fluctuating velocity is denoted by \(\widetilde{u}_{i}^{\prime}\). The last term in both of the above equations is the feedback force exerted by the particles, where \(\rho_{p}\) and \(\rho_{f}\) are the particle and fluid densities, \(f(=1+0.15Re_{p}^{0.687})\) is the drag factor, \(\tau_{p}\) is the particle relaxation time, and \(\phi_{l}\) is the local volume fraction of the particles. The filtered Reynolds stress is represented by \(\rho_{f}\overline{\widetilde{u}_{i}^{\prime}\widetilde{u}_{j}^{\prime}}\). The terms appearing in the streamwise filtered momentum equation are plotted in Fig. 5 as a function of wall-normal position. All the terms are scaled with \(\bar{u}^{2}/2\delta\). The wall-normal distance (\(y\)) is normalized with the channel width (\(2\delta=h\)). The terms of the momentum equation are plotted for different channel dimensions and volume fractions, which presents the effect of system size. Here, the derivative of the Reynolds stress is denoted by \(R_{12,y}\), the diffusion term by \(D_{m}\), the pressure gradient by \(\Pi_{m}\), and the feedback force by \(F_{p}\). For the unladen cases, the terms for the channels with different dimensions match each other as expected (Fig. 5 a). Here, the magnitude of \(R_{12,y}\) is comparable to that of \(D_{m}\). However, a deviation is observed at \(\phi=2\times 10^{-4}\) in Fig. 5 (b), and the peak values of \(R_{12,y}\) and \(D_{m}\) are smaller by approximately \(20\%\) for case \(C_{1}\) compared to case \(A_{1}\) (Table 1). A further increase in volume fraction to \(\phi=5\times 10^{-4}\) results in a decrease in \(D_{m}\), and \(R_{12,y}\) almost becomes zero for case \(C_{1}\) (higher \(2\delta/d_{p}\)), as shown in Fig. 5 (c). Fig. 5 (c) shows that \(\langle\Pi_{m}\rangle_{s}\) decreases by nearly \(30\%\), while \(\langle F_{p}\rangle_{s}\) increases by almost \(400\%\), for case \(C_{1}\) compared to case \(A_{1}\). Here, \(\langle.\rangle_{s}\) denotes the averaging of a quantity across the channel width as defined by Eqn. 17. Thus, a significant increase in the feedback force is observed with an increase in \(2\delta/d_{p}\).
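For illustration, the budget terms above can be evaluated numerically from averaged wall-normal profiles. The sketch below is a hypothetical post-processing helper, not the solver's actual routine; all inputs are placeholder arrays, and \(D_{m}\) follows Eq. (18) as written, with \((\nu+\nu_{t})\) outside the second derivative.

```python
import numpy as np

def momentum_budget(y, U, uv, nu_eff, dPdx, rho_f, Fp):
    """Streamwise momentum-budget terms of Eq. (18) from averaged profiles."""
    R12_y = -np.gradient(uv, y)                          # -d<u'v'>/dy
    D_m = nu_eff * np.gradient(np.gradient(U, y), y)     # (nu+nu_t) d2U/dy2
    Pi_m = -(dPdx / rho_f) * np.ones_like(y)             # mean pressure gradient
    return R12_y, D_m, Pi_m, Pi_m + R12_y + D_m - Fp     # last entry ~ residual

# placeholder usage: laminar-like profile with no particles balances exactly
y = np.linspace(0.0, 1.0, 101)
U = y * (2.0 - y)
out = momentum_budget(y, U, np.zeros_like(y), 1.0, -2.0, 1.0, np.zeros_like(y))
print(np.max(np.abs(out[-1])))   # ~0 for this exactly balanced case
```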
To quantify the effect of the Reynolds number, we have chosen a channel with \(2\delta/d_{p}=81\), and the terms of the momentum equation are plotted for \(Re_{b}=5600\) and
Figure 3: The fluid velocity profiles are compared for cases \(B_{1}\) and \(B_{2}\) for different volume fractions. The fluid velocities and wall-normal distance are normalized with viscous units. Fig. (a) Streamwise mean velocity, (b) Streamwise fluctuations, (c) Cross-stream stress, and (d) Wall-normal fluid fluctuations. The legends for the Figs. (c and d) are the same as in Fig. (a).
13750. In the case of the unladen flow, the peak locations of \(R_{12,y}\) and \(D_{m}\) are closer to the wall, and the magnitudes of the peak values are almost twice as large, for \(Re_{b}=13750\), as shown in Fig. 6 (a). It is interesting to note that the scaled pressure gradient is lower by approximately 25% for the case \(B_{2}\) (\(Re_{b}=13750\)) than for case \(B_{1}\) (\(Re_{b}=5600\)), Fig. 6 (a). A decrease of nearly 25% in \(\langle\Pi_{m}\rangle_{s}\) and \(\langle D_{m}\rangle_{s}\) is observed, while there is no variation of \(\langle R_{12,y}\rangle_{s}\), for case \(B_{2}\) compared to \(B_{1}\). The terms are shown for \(\phi=5\times 10^{-4}\) and \(10^{-3}\) in Fig. 6 (b and c). \(R_{12,y}\) and \(D_{m}\) decrease with an increase in volume fraction. However, the extent of the decrease is much lower for \(Re_{b}=13750\) than for \(Re_{b}=5600\). \(R_{12,y}\) is almost zero for \(Re_{b}=5600\) at \(\phi=10^{-3}\), which is due to the collapse of the fluid turbulence. In Fig. 6 (b and c), \(\langle F_{p}\rangle_{s}\) in simulation \(B_{2}\) is almost 65% lower compared to \(B_{1}\). Thus, a significant decrease in the feedback force (scaled with the fluid bulk velocity (\(\bar{u}\)) and channel width (\(2\delta\))) occurs with an increase in the Reynolds number.
The filtered kinetic energy equation for the mean flow is written as,
\[\begin{split}&(\nu+\nu_{t})\frac{\partial}{\partial y}\Big(\widetilde{U}_{x}\frac{\partial\widetilde{U}_{x}}{\partial y}\Big)-\frac{\partial(\overline{\widetilde{u}_{x}^{\prime}\widetilde{u}_{y}^{\prime}}\,\widetilde{U}_{x})}{\partial y}+\overline{\widetilde{u}_{x}^{\prime}\widetilde{u}_{y}^{\prime}}\frac{\partial\widetilde{U}_{x}}{\partial y}\\ &-(\nu+\nu_{t})\frac{\partial\widetilde{U}_{x}}{\partial y}\frac{\partial\widetilde{U}_{x}}{\partial y}-\widetilde{U}_{x}\frac{\overline{\rho_{p}f\phi_{l}}}{\rho_{f}\tau_{p}}(\widetilde{u}_{x}-\widetilde{v}_{x})-\widetilde{U}_{x}\frac{1}{\rho_{f}}\frac{\partial\widetilde{P}}{\partial x}=0.\end{split} \tag{20}\]
In Eqn. 20, on the left-hand side, the first two terms are the energy flux terms due to molecular viscosity, eddy viscosity, and the Reynolds stress. The third term is the energy utilized for turbulent production, the fourth term is the viscous dissipation of the mean kinetic energy, the fifth term is the dissipation caused by the particle feedback, and the last term is the energy input due to pressure work. The spatial average of Eqn. 20 in the wall-normal direction makes the flux terms zero due to the
Figure 4: The average fluid fluctuations across the channel width are plotted over a range of volume fractions for cases \(B_{1}\), \(C_{1}\), \(B_{2}\), and \(C_{2}\). The fluid fluctuations are normalized by the fluid bulk velocity (\(\overline{u}\)). Fig. (a) Streamwise, (b) Cross-stream, (c) Wall-normal, and (d) Spanwise fluid fluctuations.
no-slip condition at the walls, and the equation becomes,
\[\begin{split}&\langle\overline{\widetilde{u}_{x}^{\prime}\widetilde{u}_{y}^{\prime}}\frac{\partial\widetilde{U}_{x}}{\partial y}\rangle_{s}-(\nu+\nu_{t})\langle\frac{\partial\widetilde{U}_{x}}{\partial y}\frac{\partial\widetilde{U}_{x}}{\partial y}\rangle_{s}\\ &-\langle\widetilde{U}_{x}\frac{\overline{\rho_{p}f\phi_{l}}}{\rho_{f}\tau_{p}}(\widetilde{u}_{x}-\widetilde{v}_{x})\rangle_{s}-\langle\widetilde{U}_{x}\rangle_{s}\frac{1}{\rho_{f}}\frac{\partial\widetilde{P}}{\partial x}=0\end{split} \tag{21}\]
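Numerically, these channel-averaged budget terms can be evaluated from the same wall-normal profiles used above. The following is a hedged sketch with hypothetical inputs; each result would be scaled by \(\bar{u}^{3}/2\delta\) before plotting, as in Figs. 7-8.

```python
import numpy as np
channel_average = lambda y, q, d: np.trapz(q, y) / d   # Eq. (17), as sketched earlier

def energy_budget(y, U, uv, nu_eff, Fp, dPdx, rho_f, delta):
    """Channel-averaged terms of the mean-kinetic-energy budget, Eq. (21)."""
    dUdy = np.gradient(U, y)
    P   = channel_average(y, uv * dUdy, delta)          # turbulent production
    eps = channel_average(y, nu_eff * dUdy**2, delta)   # mean viscous dissipation
    D   = channel_average(y, U * Fp, delta)             # particle-drag dissipation
    Pi  = channel_average(y, -U * dPdx / rho_f, delta)  # pressure work
    return P, eps, D, Pi

# placeholder usage with dummy profiles
y = np.linspace(0.0, 1.0, 11)
print(energy_budget(y, y, np.zeros(11), 1.0, np.zeros(11), -1.0, 1.0, 1.0))
```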
The terms in Eqn. 21 are plotted in Fig. 7 for different channels and volume fractions at \(Re_{b}=5600\). All the terms in Fig. 7 are normalized by the respective \(\bar{u}^{3}/2\delta\). The turbulent production term is represented by \(P\), the mean viscous dissipation by \(\epsilon_{m}\), the pressure work by \(\Pi\), and the dissipation due to particles by \(D\). The wall-normal distance (\(y\)) is scaled by the channel width (\(h\)). For the unladen case in Fig. 7 (a), the predicted terms for all the channels with different dimensions overlap as expected. However, for a particle volume fraction of \(2\times 10^{-4}\) in Fig. 7 (b), deviations are observed in all the terms. The peak of the turbulent production term for the case \(C_{1}\) is lower by nearly 30%, the near-wall viscous dissipation is lower by 8%, and the pressure work in the channel center is lower by 10%, compared to the case \(A_{1}\). However, \(\langle D\rangle_{s}\) is higher by nearly 400% for the case \(C_{1}\) compared to the case \(A_{1}\). For \(\phi=5\times 10^{-4}\) in Fig. 7 (c), the turbulence collapses in the case of \(C_{1}\), and the turbulent production term becomes zero. The near-wall viscous dissipation is lower by approximately 40%, and the pressure work is also reduced by 30%. The dissipation due to particles is higher by nearly 400% for the case \(C_{1}\) than for the case \(A_{1}\).
In Fig. 8, the terms from Eqn. 21 are plotted for cases \(B_{1}\) and \(B_{2}\) and for different volume fractions to analyze the effect of the Reynolds number. Fig. 8 (a) shows that, for the unladen flow, the peak values of the turbulent production and the viscous dissipation (in the near-wall region) are higher by nearly 50% for the case \(B_{2}\) (\(Re_{b}=13750\)) compared to the case \(B_{1}\) (\(Re_{b}=5600\)). However, \(\langle\Pi\rangle_{s}\) is approximately 25% lower for \(Re_{b}=13750\) than for \(Re_{b}=5600\). This decrease in \(\langle\Pi\rangle_{s}\) is compensated by the decrease in \(\langle\epsilon_{m}\rangle_{s}\), while there is almost no variation in \(\langle P\rangle_{s}\). The peak value of the turbulent production is much closer to the wall for the higher Reynolds number. In Fig. 8 (b) for \(\phi=5\times 10^{-4}\), the peak values of the production term
Figure 5: The terms from Eqn. 18 are plotted in the wall-normal direction for \(Re_{b}=5600\) at different volume fractions. (a) \(\phi=0\), (b) \(\phi=2\times 10^{-4}\), and (c) \(\phi=5\times 10^{-4}\). Solid line: Derivative of Reynolds stress (\(R_{12,y}\)), Dashed: viscous diffusion (\(D_{m}\)), Dotted: Pressure gradient (\(\Pi_{m}\)), Dash dotted: particle feedback force (\(F_{p}\)). Black lines are for the case \(A_{1}\), red lines are for the case \(B_{1}\), and blue lines are for the case \(C_{1}\).
and of the near-wall viscous dissipation for \(Re_{b}=5600\) are almost \(50\%\) of those for \(Re_{b}=13750\). Furthermore, \(\langle\Pi\rangle_{s}\) is nearly \(20\%\) and \(\langle D\rangle_{s}\) is nearly \(80\%\) lower for \(Re_{b}=13750\) compared to \(Re_{b}=5600\). In Fig. 8 (c) for \(\phi=10^{-3}\), the mean production term almost becomes zero, and the mean viscous dissipation in the near-wall region is \(50\%\) lower, for the case \(B_{1}\) compared to the case \(B_{2}\). The scaled \(\langle\Pi\rangle_{s}\) is nearly \(5\%\) lower, while \(\langle D\rangle_{s}\) is nearly one order of magnitude lower, for \(B_{2}\) compared to \(B_{1}\). From the above analysis, it is clear that the variation in the particle feedback term is more significant than that of the other terms below the critical loading for all of the cases. The effect of particle feedback increases with an increase in \(2\delta/d_{p}\) for a fixed Reynolds number, and decreases with an increase in the fluid Reynolds number while keeping \(2\delta/d_{p}\) fixed.
### Discussion
In the above section, we observed that the particle-induced drag force depends on the channel dimension and the fluid phase Reynolds number. This increase in drag and drag-induced dissipation is the source of the differing extents of turbulence modulation when \(\phi\) is below the CPVL. In this section, we discuss this dependence in detail. From Eqn. 2, the scaled feedback force on the fluid due to the particles can be written as,
\[F_{p}=\frac{\rho_{p}f\phi_{l}}{\rho_{f}\tau_{p}}(\widetilde{u}_{x}-\widetilde {v}_{x})\frac{2\delta}{\overline{u}^{2}}, \tag{22}\]
\[F_{p}=\frac{\rho_{p}f\phi_{l}}{\rho_{f}(\rho_{p}d_{p}^{2}/18\mu)}(\widetilde{ u}_{x}-\widetilde{v}_{x})\frac{2\delta}{\overline{u}^{2}}, \tag{23}\]
\[F_{p}=18f\phi_{l}\frac{Re_{p}}{Re_{b}^{2}}\left(\frac{2\delta}{d_{p}}\right)^{ 3}. \tag{24}\]
Figure 6: The terms from Eqn. 18 are plotted in the wall-normal direction for cases \(B_{1}\) and \(B_{2}\) at different volume fractions. (a) \(\phi=0\), (b) \(\phi=5\times 10^{-4}\), and (c) \(\phi=1\times 10^{-3}\). Solid line: Derivative of Reynolds stress (\(R_{12,y}\)), Dashed: viscous diffusion (\(D_{m}\)), Dotted: Pressure gradient (\(\Pi_{m}\)), Dash dotted: particle feedback force (\(F_{p}\)). Here, red lines are for the case \(B_{2}\), and blue lines are for the case \(B_{1}\).
Here, \(\phi_{l}\) is the local volume fraction of the particle, \(f\) is the inertial correction factor, \(Re_{p}\) is the particle Reynolds number, \(Re_{b}\) is the fluid bulk Reynolds number, \(\delta\) is the half-channel width, and \(d_{p}\) is the particle diameter. In the case of fixed \(Re_{b}\) and \(d_{p}\), when \(2\delta/d_{p}\) is modified by varying \(\delta\), it is observed that there is no significant variation in \(Re_{p}\), \(\phi_{l}\), and \(f\) over a range of volume fractions as shown in Fig. 9. Under those conditions, we can write,
\[F_{p}\propto\delta^{3}. \tag{25}\]
Thus, a change in \(2\delta/d_{p}\) from 54 to 117 results in almost one order of magnitude change in \((2\delta/d_{p})^{3}\), which is much larger than the variation of the other parameters. In the case of constant \(2\delta/d_{p}\), when \(Re_{b}\) is changed there is no significant variation in \(Re_{p}\), \(\phi_{l}\), and \(f\) over a range of volume fractions (Fig. 10). Then, using Eqn. 24 we can write,
\[F_{p}\propto\frac{1}{Re_{b}^{2}}. \tag{26}\]
Here, an increase in \(Re_{b}\) from 5600 to 13750 leads to a nearly six-fold decrease in the \(1/Re_{b}^{2}\) term, which is a larger change than the variation of the other parameters. Therefore, the above formulation indicates that \(2\delta/d_{p}\) and \(Re_{b}\) are the dominant factors controlling the extent of turbulence modulation compared to the other parameters.
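These two scalings are easy to verify numerically. In the sketch below, \(\phi_{l}\), \(f\), and \(Re_{p}\) are held fixed (they are observed to vary weakly, Figs. 9-10), so only the geometric and Reynolds-number factors of Eq. (24) are compared:

```python
# Relative size of the scaled feedback force of Eq. (24), dropping the
# weakly varying prefactor 18*f*phi_l*Re_p (an assumption justified above).
def feedback_scale(Re_b, ratio):          # ratio = 2*delta/d_p
    return ratio**3 / Re_b**2

base = feedback_scale(5600, 54)                               # case A1
print(feedback_scale(5600, 117) / base)                       # ~10x: system-size effect
print(feedback_scale(13750, 81) / feedback_scale(5600, 81))   # ~1/6: Reynolds-number effect
```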
### Two point statistics: Spatial correlation
The spatial correlation of the fluid fluctuations qualitatively describes the length scale associated with the structure of the fluid phase. The spatial correlation of the fluid fluctuations in the streamwise (\(x\)) direction is defined as,
\[r_{ij}(r,t)=\overline{u_{i}^{\prime}(x,t)u_{j}^{\prime}(x+r,t)}. \tag{27}\]
where \(r\) is the separation between the two points, and \(i\) and \(j\) denote the components of the fluctuations. The overbar
Figure 7: The terms from Eqn. 21 are plotted in the wall-normal direction for \(Re_{b}=5600\) at different volume fractions. (a) \(\phi=0\), (b) \(\phi=2\times 10^{-4}\), and (c) \(\phi=5\times 10^{-4}\). Solid line: production term (\(P\)), Dashed: viscous dissipation (\(\epsilon_{m}\)), Dotted: Pressure work (\(\Pi\)), Dash dotted: dissipation due to particle feedback (\(D\)). Black lines are for the case \(A_{1}\), red lines are for the case \(B_{1}\), and blue lines are for the case \(C_{1}\).
Figure 8: The terms from Eqn. 21 are plotted in the wall-normal direction for cases \(B_{1}\) and \(B_{2}\) at different volume fractions. (a) \(\phi=0\), (b) \(\phi=5\times 10^{-4}\), and (c) \(\phi=1\times 10^{-3}\). Solid line: production term (\(P\)), Dashed: viscous dissipation (\(\epsilon_{m}\)), Dotted: Pressure work (\(\Pi\)), Dash dotted: dissipation due to particle feedback (\(D\)). Here, red lines are for the case \(B_{2}\), and blue lines are for the case \(B_{1}\).
Figure 9: (a) The particle Reynolds number and (b) the local particle volume fraction in the wall-normal direction for different system sizes and volume fractions at \(Re_{b}=5600\).
denotes the ensemble averaging. The spatial correlation coefficient is defined as,
\[R_{ij}(r,t)=\frac{\overline{u_{i}^{\prime}(x,t)u_{j}^{\prime}(x+r,t)}}{\overline{ u_{i}^{\prime}(x,t)u_{j}^{\prime}(x,t)}}. \tag{28}\]
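A minimal sketch of estimating this coefficient along a periodic (streamwise) direction, with the ensemble average approximated by averaging over the homogeneous directions; the synthetic input here is illustrative only:

```python
import numpy as np

def correlation_coefficient(up):
    """R_ii(r)/R_ii(0) along axis 0, assuming axis 0 is periodic."""
    up = up - up.mean(axis=0, keepdims=True)
    n = up.shape[0]
    R = np.array([np.mean(up * np.roll(up, -r, axis=0)) for r in range(n // 2)])
    return R / R[0]

rng = np.random.default_rng(0)
u_prime = rng.standard_normal((256, 64))        # placeholder fluctuation field
print(correlation_coefficient(u_prime)[:3])     # ~[1, 0, 0] for white noise
```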
\(R_{ij}\) is plotted for the streamwise and spanwise directions in the near-wall (\(y^{+}=15\)) region for a range of particle volume fractions and shown in Fig. 11. In Fig. 11 (a and b), the correlation coefficient is plotted for \(\phi=0\), \(1.0\times 10^{-3}\), and \(1.3\times 10^{-3}\) for the case \(A_{1}\). In the streamwise direction, the correlation coefficient of the streamwise component (\(R_{xx}\)) decays more slowly than \(R_{yy}\) and \(R_{zz}\), as shown in Fig. 11 (a). The decay of the correlation coefficient slows significantly as the volume fraction is increased from \(\phi=0\) to \(1.3\times 10^{-3}\) for each of the components. However, the variation in decay rate is more pronounced for the streamwise component. Such an observation indicates that the turbulent structures become lengthier with an increase in volume fraction. A decrease in the correlation coefficient for all the components is observed at \(\phi=1.5\times 10^{-3}\) (for the case \(A_{1}\)), as the turbulence has collapsed at this volume loading (not shown here). For the spanwise spatial correlation coefficient in Fig. 11 (b), no significant variation is observed for the wall-normal and spanwise components. However, an increase is observed for the streamwise component with an increase in volume loading, which signifies that the vortical structures become wider in the spanwise direction.
In Fig. 11 (c and d), the effect of the channel dimension on the correlation coefficient is analyzed for \(Re_{b}=5600\) and \(\phi=5\times 10^{-4}\). In the streamwise direction, for the streamwise and spanwise velocity fluctuations, the correlation decay becomes slower with an increase in \(2\delta/d_{p}\), Fig. 11 (c). Such an observation is a signature of lengthier structures with increased channel dimensions for a fixed Reynolds number. However, for the correlations in the spanwise direction, the wall-normal fluctuations decay faster, and there is almost no significant modification of the dimensions of the turbulent streaks. The effect of the Reynolds number on the correlation coefficients for the cases \(B_{1}\) and \(B_{2}\) is plotted in Fig. 11 (e and f). Interestingly, the correlation coefficient decays faster for the higher Reynolds number of \(Re_{b}=13750\) compared to \(Re_{b}=5600\) for both directions and all components of the velocity fluctuations. This suggests that the vortical structures are smaller at a high Reynolds number. In this context, it is worth mentioning that for \(2\delta/d_{p}=20\), \(\phi=2.36\times 10^{-3}\) and \(Re_{p}<50\), Yu _et al._ [3] have reported less turbulence attenuation for the higher Reynolds number of \(Re_{b}=12000\) than for \(Re_{b}=5746\). They commented that at a high Reynolds number the large-scale vortices, although smaller in size, are stronger than those at a low Reynolds number. Zhou _et al._ [13] have also reported that the vortical structures become weaker and fewer with an increase in mass loading. The authors also found that the spacing between the streaks increases with the addition of particles.
In Figs. 12-13, contours of the instantaneous streamwise velocity fluctuations (\(\widetilde{u}_{x}^{\prime}\)) in the near-wall region (\(y^{+}=15\)) are plotted to depict the size and strength of the vortical structures. The streamwise fluctuations are scaled with the respective unladen friction velocity (\(u_{\tau}\)). In Fig. 12 (a), the low- and high-speed streaks are shown in the \(x-z\) plane for the case \(A_{1}\) at the lower volume fraction (\(4\times 10^{-4}\)). The streamwise and spanwise dimensions are provided in viscous units. It is observed that the length of the high-speed streaks (red regions) is around 2000 viscous units, whereas the low-speed streaks (blue regions) extend over the complete channel length. For the same channel in Fig. 12 (b), the streaks become fewer
Figure 10: (a) The particle Reynolds number and (b) local particle volume fraction in the wall-normal direction for cases \(B_{1}\) and \(B_{2}\) at different volume fractions.
and wider as the volume fraction is increased to \(10^{-3}\). The high-speed streaks become more elongated and extend up to 3500 viscous units, while there is no significant change in the lengths of the low-speed streaks. The effect of \(2\delta/d_{p}\) on the high- and low-speed streaks for a volume fraction of \(\phi=4\times 10^{-4}\) is shown in Figs. 12 (a) and 13. It is observed that the high-speed streaks become wider and fewer as \(2\delta/d_{p}\) is increased while keeping \(\phi\) constant. The length of the high-speed streaks extends up to 3500 viscous units from case \(A_{1}\) to \(C_{1}\).
In Fig. 14, the normalized contours for instantaneous streamwise fluctuations are shown for cases \(B_{1}\) and \(B_{2}\)
Figure 11: The spatial correlation coefficient is plotted in the streamwise direction (Fig. (a, c, and e)) and spanwise direction (Fig. (b, d, and f). (a and b) \(R_{ij}\) is plotted for different volume fractions for case \(A_{1}\). (c and d) \(R_{ij}\) is plotted for cases \(A_{1}\), \(B_{1}\) and \(C_{1}\) for \(\phi=4\times 10^{-4}\). (e and f) \(R_{ij}\) is plotted for cases \(B_{1}\) and \(B_{2}\) for \(\phi=5\times 10^{-4}\). The legends for Fig. (a) are the same as in (b), and for Fig. (c) are the same as in (d).
Figure 14: The contours for instantaneous streamwise fluctuations (uxPlus is \(\widetilde{u}_{x}^{\prime}/u_{\tau}\)) plotted for (a) case \(B_{1}\), \(\phi=0\), (b) case \(B_{1}\), \(\phi=5\times 10^{-4}\), (c) case \(B_{2}\), \(\phi=0\), and (d) case \(B_{2}\), \(\phi=5\times 10^{-4}\) in x-z plane at \(y^{+}=15\). The streamwise and spanwise distances are normalized with viscous scales.
Figure 12: The contours for instantaneous streamwise fluctuations (uxPlus is \(\widetilde{u}_{x}^{\prime}/u_{\tau}\)) plotted for case \(A_{1}\) at (a) \(\phi=4\times 10^{-4}\), and (b) \(\phi=10^{-3}\) in x-z plane at \(y^{+}=15\). The streamwise and spanwise distances are normalized with viscous scales.
The contours are plotted in the \(x-z\) plane in the near-wall region (\(y^{+}=15\)) for the unladen flow and a volume fraction of \(5\times 10^{-4}\), while the channel dimension is kept constant at \(2\delta/d_{p}=81\). In Fig. 14 (a and b) for the case \(B_{1}\), it is observed that the streaks become fewer, lengthier, and fatter with an increase in particle volume loading. However, the streaks are smaller, thinner, and more closely spaced for \(Re_{b}=13750\) when compared to similar cases in Fig. 14 (a and c) and Fig. 14 (b and d). It is interesting to note that there is almost no change in the streak spacing and lengths in the case of the higher Reynolds number, Fig. 14 (c and d). In contrast, the streaks become fewer and fatter, and their spacing increases, for the lower \(Re_{b}=5600\) when the volume fraction is increased from 0 to \(5\times 10^{-4}\).
The coherent structures responsible for the turbulent production are analyzed for different \(2\delta/d_{p}\), Reynolds numbers, and volume fractions. Different methods have been reported in the literature to analyze turbulent structures [60; 61; 62; 63; 47]. In the present work, we report contours of \(-\lambda_{2}\), where \(\lambda_{2}\) is the second-largest eigenvalue of \(S^{2}+\Omega^{2}\). Here, \(S\) is the strain rate tensor, and \(\Omega\) is the rotational tensor of the velocity gradient. This method was proposed and discussed in detail by Jeong and Hussain [61]. The method neglects unsteady straining, which can produce a pressure minimum without the presence of vortical structures, and viscous effects, which may eliminate the pressure minimum in vortical flows [61]. The isocontours of \(-\lambda_{2}^{+}=0.0025\) show the effect of volume fraction, system size, and Reynolds number, Fig. 15 and Fig. 16. The (+) symbol indicates that the values are reported in viscous units. The contours are colored according to the fluid mean velocity (\(\bar{u}\)) normalized with the respective unladen friction velocity (\(u_{\tau}\)). In Fig. 15 (a) for case \(A_{1}\), the \(-\lambda_{2}^{+}=0.0025\) structures are shown for the lower volume fraction of \(4\times 10^{-4}\). Here, the length and width of the structures are approximately 200 and 50 viscous units, respectively. However, the \(-\lambda_{2}^{+}\) structures vanish as the volume fraction approaches the CPVL (not shown here). In Fig. 15 (a, b and c), it is observed that the \(-\lambda_{2}^{+}\) structures become fewer as the channel dimension is changed from case \(A_{1}\) to case \(C_{1}\), while the volume fraction and Reynolds number are kept constant. In Fig. 16, \(-\lambda_{2}^{+}\) contours are plotted for both the unladen and particle-laden flows for cases \(B_{1}\) and \(B_{2}\). The \(-\lambda_{2}^{+}=0.0025\) contours become smaller and thinner as the Reynolds number is increased from 5600 to 13750 for the unladen flows. When the particle volume fraction is \(5\times 10^{-4}\), the number of \(-\lambda_{2}^{+}\) structures becomes significantly smaller for the low Reynolds number.
### Particle statistics
In the previous sections, the effects of \(\phi\), \(2\delta/d_{p}\), and \(Re_{b}\) on the fluid phase properties were studied. It is observed that the turbulence attenuation increases with an increase in \(\phi\) up to the critical volume fraction, at which a sudden turbulence collapse is observed (the CPVL). With an increase in the \(2\delta/d_{p}\) ratio, a greater extent of turbulence attenuation is observed for fixed \(\phi\) and Reynolds number. However, for the same volume fraction and fixed channel dimension, the extent of attenuation is lower at a higher Reynolds number. This section discusses the modification of the particle phase properties for different \(\phi\), \(2\delta/d_{p}\), and fluid phase Reynolds numbers.
The effect of the channel dimension (\(2\delta/d_{p}\)) on the particle phase properties at \(Re_{b}=5600\) is shown in Fig. 17.
Figure 15: The \(-\lambda_{2}^{+}=0.0025\) isocontours for cases (a) \(A_{1}\), (b) \(B_{1}\), and (c) \(C_{1}\) at \(\phi=4\times 10^{-4}\) in the near-wall x-z plane. The colors show the magnitude of the mean fluid velocity (Uplus is \(\overline{u}/u_{\tau}\)) within the contours.
Mean velocities and second moments of the fluctuating velocities are plotted for cases \(A_{1}\), \(B_{1}\), and \(C_{1}\) for two different volume fractions, \(\phi=2\times 10^{-4}\) and \(5\times 10^{-4}\). The normalized particle mean velocities are plotted along with the unladen fluid mean velocity, shown in Fig. 17 (a). The particle mean velocity profile is almost flat across the channel width for the case \(A_{1}\). However, the particle mean velocity is lower in the near-wall region for the larger channel sizes of cases \(B_{1}\) and \(C_{1}\). For case \(C_{1}\), the particle mean velocity in the near-wall region is lower for \(\phi=2\times 10^{-4}\) than for the other volume fraction cases. At the channel center, the particle mean velocity is higher for both volume fractions for the case \(C_{1}\), as the Stokes number considered here is lower than for the other two system sizes, Table 1. With an increase in solid volume fraction for a fixed channel dimension, the near-wall particle velocity increases due to the transfer of momentum from the channel center to the wall through particle-particle interactions. With an increase in channel dimension, the particle Stokes number decreases, leading to a decrease in the fluid-particle relative velocity near the wall.
The normalized particle fluctuations are plotted in Fig. 17 (b-d). The streamwise particle fluctuations shown in Fig. 17 (b) are higher in the near-wall region and decrease away from the wall. The streamwise particle fluctuations decrease as the volume fraction is increased from \(\phi=2\times 10^{-4}\) to \(5\times 10^{-4}\) for a fixed system size. The wall-normal and spanwise particle fluctuations increase with an increase in volume fraction due to an increase in the collision frequency for a fixed system size [8], Fig. 17 (c and d). It is observed that the streamwise particle fluctuations increase with an increase in system size. Similar behavior is observed for the wall-normal and spanwise particle fluctuations. A more significant increase in the wall-normal and spanwise particle fluctuations is observed for the case \(C_{1}\) as the volume fraction is increased from \(\phi=2\times 10^{-4}\) to \(5\times 10^{-4}\). The dimensional particle streamwise fluctuations are plotted in Fig. 17 (f) for the same cases as in Fig. 17 (b). Here, it is observed that the dimensional particle streamwise fluctuations decrease with an increase in system size for the same volume fraction. This is similar to the behavior of the fluid bulk velocity, which decreases with an increase in system size for a constant Reynolds number. However, it is interesting to note that the particle fluctuations increase relative to the fluid bulk velocity with an increase in channel dimensions, Fig. 17 (b).
Next, we analyze the mechanisms generating the streamwise particle fluctuations, which originate from three sources [64; 65]. The first is the force exerted by the fluid fluctuations in the streamwise direction, which is proportional to \(D_{xx}\tau_{p}\), where \(D_{xx}\) is the velocity-space diffusion coefficient due to fluid fluctuations. The velocity-space diffusion coefficient (\(D_{ij}\)) for a particle in a turbulent field is defined as,
\[D_{ij}=\frac{\langle u_{i}^{\prime}(0)u_{j}^{\prime}(0)\rangle}{\tau_{p}^{2}} \int\limits_{0}^{\infty}dt^{\prime}R_{ij}. \tag{29}\]
Here, \(R_{ij}\) is the Eulerian time correlation tensor of the fluid fluctuations, and \(\int\limits_{0}^{\infty}dt^{\prime}R_{ij}\) is the integral time scale
of the fluid phase (\(\tau_{I}\)). Eqn. 29 can be written as,
\[D_{ij}=\frac{\langle u^{\prime}_{i}(0)u^{\prime}_{j}(0)\rangle}{\tau_{p}^{2}}\tau_ {I}. \tag{30}\]
In the above equation, \(\tau_{I}\) may be replaced with the time scale defined as \(2\delta/\bar{u}(=\tau_{f})\). Therefore,
\[D_{ij}\propto\frac{\langle u^{\prime}_{i}(0)u^{\prime}_{j}(0)\rangle}{\tau_{p} ^{2}}\tau_{f}. \tag{31}\]
Figure 17: The particle properties are plotted in the wall-normal direction for cases \(A_{1}\), \(B_{1}\), and \(C_{1}\) for different volume fractions. The particle velocities (except for Fig. (f)) are scaled with fluid bulk velocity (\(\overline{u}\)), and wall-normal distance is scaled with channel width (\(h\)). (a) Mean velocity, (b) second moments of streamwise, (c) wall-normal, (d) spanwise fluctuations, (e) particle concentration, and (f) second moments of streamwise fluctuations. The particle concentration is normalized with an average concentration across the channel width.
Normalizing the above relation with the fluid average velocity (\(\bar{u}\)), we get,
\[\frac{\tau_{p}D_{ij}}{\overline{u}^{2}}\propto\frac{\langle u_{i}^{\prime}u_{j}^{ \prime}/\overline{u}^{2}\rangle}{St}. \tag{32}\]
Here, \(St\) is the particle Stokes number based on the fluid integral time scale (\(2\delta/\bar{u}\)). The second contribution to the streamwise particle velocity fluctuations arises from particle migration across the streamlines caused by wall-normal fluid fluctuations, and is proportional to
Figure 18: The particle properties are plotted in the wall-normal direction for cases \(B_{1}\) and \(B_{2}\) for different volume fractions. The particle velocities (except for Fig. (f)) are scaled with fluid bulk velocity (\(\overline{u}\)), and wall-normal distance is scaled with channel width (\(h\)). (a) Mean velocity, second moments of (b) streamwise, (c) wall-normal, (d) spanwise fluctuations, (e) particle concentration, and (f) second moments of streamwise fluctuations. The particle concentration is normalized with an average concentration across the channel width.
\((\tau_{p}D_{yy})St_{\gamma}^{2}\). The third source is the collisions induced by the mean velocity gradient of the particle phase, which is proportional to \(\phi_{l}(\dot{\gamma}d_{p})^{2}St_{\gamma}^{3}\). Here, \(\phi_{l}\) is the local particle volume fraction, \(\dot{\gamma}\) is the mean velocity gradient of the particle phase, and \(St_{\gamma}\) is the particle Stokes number based on the mean strain rate, \(St_{\gamma}=\tau_{p}\dot{\gamma}\). Simplifying, \(St_{\gamma}=\tau_{p}(dv_{p}/dy)=\tau_{p}(dv_{p}^{*}/dy^{*})(\bar{u}/\delta)=2\times St(dv_{p}^{*}/dy^{*})\), where \(v_{p}^{*}\) is the normalized particle velocity and \(y^{*}\) is the normalized wall-normal distance. In the near-wall region, the streamwise particle fluctuations generated by the fluid streamwise fluctuations, which are inversely proportional to \(St\), are low since \(St\) is large for cases \(A_{1}\), \(B_{1}\) and \(C_{1}\). The contribution due to particle collisions is also low due to the small local volume fraction (\(\phi_{l}\)). The significant contribution is due to the wall-normal migration of the particles, which is proportional to \(St_{\gamma}^{2}\). In the channel center, the contribution due to wall-normal migration decreases due to the low \(St_{\gamma}\); however, it remains dominant compared to the other two sources. The streamwise particle fluctuations increase with an increase in system size (shown in Fig. 17 b) as \(St_{\gamma}\) increases from case \(A_{1}\) to \(C_{1}\).
The wall-normal particle fluctuations are also generated by three mechanisms. The first is due to the wall-normal fluid fluctuations and is proportional to \(\tau_{p}D_{yy}\). The second mechanism is the collision of particles having different mean velocities, which is proportional to \(\phi_{l}(\dot{\gamma}d_{p})^{2}St_{\gamma}\). The third contribution is due to the collisions induced by the streamwise particle fluctuations, which is proportional to \(\phi_{l}(T_{xx}^{1/2}\tau_{p}/d_{p})T_{xx}\). Here, \(T_{xx}=\overline{v_{x}^{\prime 2}}\) is the mean square streamwise particle fluctuation. Non-dimensionalising \(\phi_{l}(T_{xx}^{1/2}\tau_{p}/d_{p})T_{xx}\) with the fluid bulk velocity and channel width, it becomes \(\phi_{l}(T_{xx}^{*3/2}\times St\times 2\delta/d_{p})\). In the present study, the particle Stokes number is high; thus, the contribution from \(\tau_{p}D_{yy}\) is low. The second contribution, \(\phi_{l}(\dot{\gamma}d_{p})^{2}St_{\gamma}\), is also not significant as the local volume fraction (\(\phi_{l}\)) and \(\dot{\gamma}d_{p}\) are low. The significant contribution is due to the third mechanism for \(A_{1}\), \(B_{1}\), and \(C_{1}\). This dominant term increases (due to increases in \(T_{xx}\) and \(2\delta/d_{p}\)) with the increase in system size from \(A_{1}\) to \(C_{1}\), which is also observed in Fig. 17 (c).
The normalized particle concentration is plotted in Fig. 17 (e). The concentration profiles are almost flat for the lower system sizes of \(2\delta/d_{p}=54\) and \(81\) for the volume fractions of \(\phi=2\times 10^{-4}\) and \(5\times 10^{-4}\) considered in this study. However, the particle concentration is larger in the near-wall and channel-center regions for case \(C_{1}\). This might be due to the decrease in effective particle inertia with the reduction of the Stokes number (Table 1).
The effect of the Reynolds number on the particle properties is examined in Fig. 18 for cases \(B_{1}\) and \(B_{2}\). The particle properties are plotted for two volume fractions, \(\phi=5\times 10^{-4}\) and \(10^{-3}\). The particle mean velocities predicted for both volume fractions are almost identical, as the Stokes number is also the same, as shown in Fig. 18 (a). The particle mean velocity is almost 10% lower in the near-wall region compared to the channel center. For the streamwise particle fluctuations, the fluctuations near the wall are higher for \(Re_{b}=13750\) than for \(Re_{b}=5600\) at the same volume fractions. However, away from the wall, the streamwise fluctuations are lower for case \(B_{2}\) (\(Re_{b}=13750\)) than for case \(B_{1}\) (\(Re_{b}=5600\)). For the particle wall-normal and spanwise fluctuations, it is observed that the fluctuations are lower for the case \(B_{2}\) compared to the case \(B_{1}\). The normalized particle concentration is almost flat and the same for all the considered volume fractions, as shown in Fig. 18 (e). The dimensional particle streamwise fluctuations are plotted in Fig. 18 (f) for the same cases as in Fig. 18 (b). Here, it is observed that the dimensional particle streamwise fluctuations increase with an increase in Reynolds number for the same volume fraction. The fluid bulk velocity also increases with an increase in Reynolds number for constant system size; however, the particle fluctuations scaled with the fluid bulk velocity decrease with an increase in Reynolds number, Fig. 18 (b).
When the volume fraction is \(5\times 10^{-4}\), the component \((\tau_{p}D_{yy})St_{\gamma}^{2}\) increases in the near-wall region for the case \(B_{2}\) due to the high \(St_{\gamma}\). Thus, the streamwise particle fluctuations caused by wall-normal migration are higher in the near-wall region for the case \(B_{2}\) compared to \(B_{1}\) for \(\phi=5\times 10^{-4}\) (Fig. 18 b). \((\tau_{p}D_{yy})St_{\gamma}^{2}\) decreases in the channel center due to the reduction in \(\dot{\gamma}\). In the channel center, \((\tau_{p}D_{yy})St_{\gamma}^{2}\) gives almost similar values for the cases \(B_{1}\) and \(B_{2}\), as \(St_{\gamma}(=2\times St(dv_{p}^{*}/dy^{*}))\) is also similar for both cases. However, this is not observed in Fig. 18 (b), and the particle streamwise fluctuations are lower for \(Re_{b}=13750\) than for \(Re_{b}=5600\). The reason behind this observation is not clear at this moment. For the wall-normal particle fluctuations, \(\phi_{l}(T_{xx}^{*3/2}\times St\times 2\delta/d_{p})\) is the dominant source, and it is lower for \(B_{2}\) as \(T_{xx}^{*}\) is lower than for \(B_{1}\) (except very close to the wall), as shown in Fig. 18 (b). The above discussion explains the statistics of the particle phase shown in Figs. 17-18.
## IV Conclusion
In the present work, simulations of a vertical channel flow are performed for different volume fractions, system sizes, and Reynolds numbers to critically assess the effects of these parameters on turbulence modulation. The fluid phase is described in the Eulerian framework using large eddy simulation. A dynamic one-equation model has been adopted to model the subgrid-scale stress. The individual particles are tracked using a Lagrangian technique with the point-particle assumption. It is observed that the fluid fluctuations decrease with an increase in volume fraction up to a critical particle volume loading, at which turbulence collapses. The extent of turbulence attenuation increases with an increase in system size (\(2\delta/d_{p}\)) for the same volume fraction while keeping the Reynolds number fixed. However, for the same volume fraction and fixed channel dimension, the extent of attenuation is lower at a higher Reynolds number.
Different terms from the momentum and energy balance equations are plotted, and it is concluded that the particle feedback term shows the dominant variation when the volume fraction is lower than the critical volume fraction. It is observed that the dissipation due to the particles increases significantly with an increase in system size for the same volume fraction and fixed Reynolds number. However, the dissipation due to the particles decreases significantly when the Reynolds number is increased while keeping the channel dimensions the same, which reduces the extent of turbulence attenuation. From the analysis of the variation of the scaled feedback force, it is found that the system size and the bulk Reynolds number are the two important parameters that most significantly affect the turbulence modulation.
The spatial correlation of the fluid fluctuations is plotted in the streamwise and spanwise directions, and it is observed that the correlation coefficient for the streamwise component decays more slowly with an increase in system size, which indicates that the streamwise turbulent structures become elongated along the channel length. The isocontours of the streamwise fluctuations depict a similar observation: the high-speed streaks become elongated and fewer with an increase in system size for a fixed Reynolds number. This may be because the presence of particles affects the breakdown of coherent structures as the system size increases. The correlation coefficient decays faster for a higher Reynolds number than for a lower one. The isocontours of the streamwise fluctuations show that the streaks are smaller, thinner, and more closely packed at a higher Reynolds number.
The particle fluctuations, scaled with the fluid bulk velocity, are analyzed for the effects of system size and Reynolds number. For a constant system size and Reynolds number, the streamwise particle fluctuations decrease, while the wall-normal and spanwise particle fluctuations increase, with an increase in particle volume fraction due to the increase in particle collision frequency. It is observed that the particle fluctuations increase with an increase in system size for a constant Reynolds number and volume fraction. However, the particle fluctuations, except for the streamwise particle fluctuations in the near-wall region, decrease with an increase in Reynolds number for fixed channel dimension and volume fraction. A qualitative analysis of the sources of the streamwise and wall-normal particle fluctuations is presented. It is observed that the streamwise particle fluctuations generated by wall-normal migration are dominant compared to the other two sources (the force due to streamwise fluid fluctuations and collision-induced fluctuations). The source due to particle migration in the wall-normal direction increases with an increase in system size. For the wall-normal fluctuations, the source due to collisions induced by the streamwise particle fluctuations is dominant in both the near-wall region and the channel center. The present analysis highlights the multiscale nature of particle-laden flows: the particle Stokes number alone does not universally characterize the turbulence modulation, and the system size must also be considered. Therefore, this study can serve as a guideline for the design of industrial equipment for gas-solid operations.
## V Appendix
The unladen fluid statistics for \(Re_{b}=5600\) are presented in Fig. 19. The simulations are verified against in-house DNS [19] and the DNS data of Moser _et al._ [57]. The fluid phase velocities are normalized with the unladen friction velocity (\(u_{\tau}\)), and the wall-normal distance is normalized with \(u_{\tau}\) and the half-channel width (\(\delta\)). The streamwise mean velocity and fluctuations plotted in the wall-normal direction are in good agreement with both DNS datasets, as shown in Fig. 19 (a and b). The cross-stream stress and wall-normal fluctuations are underpredicted by the LES, with deviations within 15% and 20% of the DNS data, respectively.
In Fig. 20, the unladen fluid statistics for \(Re_{b}=13750\) are validated against the DNS data of Moser _et al._ [57] at \(Re_{\tau}=395\), the experimental data of Li _et al._ [58] at \(Re_{\tau}=430\), and the LES data of Salmanzadeh _et al._ [53] at \(Re_{\tau}=350\). The mean velocity and fluid fluctuations predicted by the LES model are in good agreement. The peak values of the fluid fluctuations are within 15% of the DNS data of Moser _et al._ [57].
## Acknowledgement
We would like to thank the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India, for their financial support.
|
2306.04053 | Detection of the Cosmological Time Dilation of High Redshift Quasars | A fundamental prediction of relativistic cosmologies is that, due to the
expansion of space, observations of the distant cosmos should be time dilated
and appear to run slower than events in the local universe. Whilst observations
of cosmological supernovae unambiguously display the expected
redshift-dependent time dilation, this has not been the case for other distant
sources. Here we present the identification of cosmic time dilation in a sample
of 190 quasars monitored for over two decades in multiple wavebands by
assessing various hypotheses through Bayesian analysis. This detection counters
previous claims that observed quasar variability lacked the expected
redshift-dependent time dilation. Hence, as well as demonstrating the claim
that the lack of the redshift dependence of quasar variability represents a
significant challenge to the standard cosmological model, this analysis further
indicates that the properties of quasars are consistent with them being truly
cosmologically distant sources. | Geraint F. Lewis, Brendon J. Brewer | 2023-06-06T22:56:11Z | http://arxiv.org/abs/2306.04053v1 | # Detection of the Cosmological Time Dilation
###### Abstract
A fundamental prediction of relativistic cosmologies is that, due to the expansion of space, observations of the distant cosmos should be time dilated and appear to run slower than events in the local universe. Whilst observations of cosmological supernovae unambiguously display the expected redshift-dependent time dilation, this has not been the case for other distant sources. Here we present the identification of cosmic time dilation in a sample of 190 quasars monitored for over two decades in multiple wavebands by assessing various hypotheses through Bayesian analysis. This detection counters previous claims that observed quasar variability lacked the expected redshift-dependent time dilation. Hence, as well as demonstrating the claim that the lack of the redshift dependence of quasar variability represents a significant challenge to the standard cosmological model, this analysis further indicates that the properties of quasars are consistent with them being truly cosmologically distant sources.
## 1 Introduction
A fundamental consequence of the relativistic picture of expanding space is cosmological time dilation, where events in the distant universe appear to run
slowly compared to those in the local cosmos [1; 2; 3]. Whilst this time dilation has been unambiguously detected in the light curves exhibited by cosmologically distant supernovae [4; 5; 6; 7; 8], the appearance of time dilation in other cosmic sources is less conclusive. For example, whilst examinations of the light curves of gamma-ray bursts (GRBs) have generally shown consistency with the expected cosmological signature, uncertainties in the detailed emission mechanism and expected light curve characteristics mean that this detection has not been definitive [e.g. 9; 10; 11; 12; 13]. Furthermore, the role of the more recently discovered fast radio bursts [FRBs: 14] as 'standard clocks' is similarly limited by knowledge of the physical processes driving the output [15].
Quasars have been known to be variable sources since their discovery in the 1960s [16], with emission arising from a relativistic accretion disk orbiting a supermassive black hole [17]. However, it has been claimed that the variability displayed by quasars over a broad range of redshifts does not show the expected cosmological time dilation [18; 19; 20]. This has led to the suggestion that quasar variability is not intrinsic, but is instead caused by microlensing by cosmologically distributed black holes [21; 22]. Others have stated that this points to more fundamental issues with our cosmological ideas [e.g. 23; 24; 25], with even the suggestion that quasars are not cosmologically distant and that their observed redshifts are due to mechanisms other than the expansion of space.
In 2012, a study of the variability characteristics of a sample of thirteen quasars observed behind the Magellanic Clouds as part of the MACHO microlensing program was suggestive of the expected \((1+z)\) time dilation dependence, where \(z\) is the quasar redshift [26]. However, with the small sample and relatively short monitoring period, this result is inconclusive. Recently, a new sample of the variability properties of quasars was presented as part of the Dark Energy Survey and comprises 190 quasars, covering the redshift range \(z\sim 0.2\to 4.0\) [27]. These are drawn from the sample of more than one hundred thousand spectroscopically identified quasars with absolute magnitude \(M<-22\) in the \(300\ deg^{2}\) Sloan Digital Sky Survey of Stripe 82 (S82). Published as part of the SDSS DR7 quasar catalogue, the physical properties of these quasars are presented in [28]. This includes the bolometric luminosity, which was determined through spectral fitting and correction from composite spectral energy distributions [29]. These quasars were photometrically observed between 1998 and 2020, and so for more than two decades, through the combination of multiple epochs of exposures from SDSS, PanSTARRS-1, and the Dark Energy Survey, with additional follow-up monitoring with Blanco 4m/DECam.
The total dataset consists of roughly two hundred photometric observations of each quasar in multiple bands, although the cadence of these observations is very uneven over the observing period. To account for this when calculating characteristic timescales of the quasar variability, [27] adopted Gaussian process regression [e.g. 30] to interpolate the photometric data and the associated uncertainties between the observations; details are given in Appendix **A2**
of their paper. Each quasar light curve in each band is represented as a Damped Random Walk [DRW; 31; 32]; this is found to be an accurate description of quasar variability, with only a mild dependence on the physical properties of the quasars [33]. Practically, this defines the covariance matrix of the Gaussian Process that describes the variability. With this, the Gaussian Process regression software, Celerite [34], is used to determine the characteristic DRW time scale, as well as the \(16^{th}\) and \(84^{th}\) percentiles of its distribution. Armed with these bolometric luminosities and the variability time scales drawn from the DRW analysis, the goal of this paper is to search for the signature of cosmological time dilation in these distant sources.
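For concreteness, a DRW has the exponential covariance \(k(\Delta t)=a\,e^{-c\,\Delta t}\), with characteristic timescale \(\tau_{DRW}=1/c\). The snippet below is a minimal, hypothetical sketch of fitting such a kernel to an irregularly sampled light curve with celerite; the parameter values and synthetic data are placeholders, and the actual fitting procedure of [27] (priors, optimization, percentile estimation) differs in detail.

```python
import numpy as np
import celerite
from celerite import terms

# DRW kernel: k(dt) = exp(log_a) * exp(-exp(log_c)*dt); tau_DRW = exp(-log_c)
log_a, log_c = np.log(0.04), -np.log(200.0)      # placeholder variance, 1/(200 d)
kernel = terms.RealTerm(log_a=log_a, log_c=log_c)

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 8000.0, 200))       # ~two decades of uneven epochs [d]
y = 0.2 * rng.standard_normal(t.size)            # placeholder photometry [mag]
yerr = 0.02 * np.ones_like(t)

gp = celerite.GP(kernel, mean=0.0)
gp.compute(t, yerr)                              # factorize for these epochs
print(gp.log_likelihood(y))                      # objective to maximize over (log_a, log_c)
print("tau_DRW =", np.exp(-log_c), "days")
```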
## 2 Results
In the following analysis, we consider the redshift dependence of time dilation to be of the form \((1+z)^{n}\), where \(z\) is the redshift of the source. Clearly, for the expected cosmological dependence, \(n=1\), whilst \(n=0\) demonstrates no redshift dependence, representative of the claims from several previous studies of quasar samples [e.g. 20]. To explore the various possibilities, several distinct hypotheses are considered. These are:
* \(\mathcal{H}_{0}\): \(n\) is fixed at zero, representing no redshift dependence on the observed quasar timescales.
* \(\mathcal{H}_{1}\): \(n\) is fixed at one, representing the expected redshift dependence of the cosmological time dilation.
* \(\mathcal{H}_{2}\): \(n\) is treated as a free parameter.
* \(\mathcal{H}_{3}\) and \(\mathcal{H}_{4}\): \(n\) is fixed at \(-1\) and \(2\) respectively.
The final two cases represent extreme cases where additional influences, such as quasar evolution, may significantly influence the observed time variability of quasars.
As outlined in the Methodology (Section 3), these differing hypotheses were compared through the calculation of the Bayesian evidence [35] for each situation under consideration, with the results of these calculations presented in Table 1; in assessing the ratio of Bayesian evidences, a factor of 10-100 is considered a strong favouring of one hypothesis over another, whereas greater than 100 is decisive [36]. One immediate conclusion is that the favoured hypothesis is \(\mathcal{H}_{1}\), the case where \(n=1\), which represents the expected redshift dependence of the cosmological time dilation. This is significantly favoured over the alternative \(\mathcal{H}_{0}\), with an evidence ratio greater than \(10^{5}\), which represents the situation where there is no redshift dependence on the observed timescales of cosmological variability. Furthermore, \(\mathcal{H}_{1}\) is significantly favoured over the two extreme cases, \(\mathcal{H}_{3}\) and \(\mathcal{H}_{4}\).
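These evidence ratios follow directly from the tabulated log-evidences; as a sanity check, a few lines of python reproduce the final column of Table 1:

```python
import numpy as np

# log-evidences from Table 1 for H0, H1, H2, H3, H4
logZ = np.array([-366.12, -354.53, -356.52, -390.13, -358.36])
ratios = np.exp(logZ - logZ.max())  # Z / Z_max; the maximum is H1 (n = 1)
print(ratios)  # ~[9.2e-06, 1.0, 0.14, 3.5e-16, 2.2e-02]
```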
The posterior distribution for the redshift dependence of the time dilation, \(n\), specifically \(\mathcal{H}_{2}\), where this is treated as a free parameter, is presented in Figure 1. Reflecting the previous analysis, the posterior is clearly offset from zero, indicative of a redshift dependence of the observed timescale of variability over the quasar sample. This posterior distribution, which may be summarised as \(n=1.28^{+0.28}_{-0.29}\), is consistent with the expected cosmological dependence with \(n=1\), and the presented analysis significantly favours the presence of cosmological time dilation of the observed quasar variability.
## 3 Methodology
Probing the fundamental nature of our universe often calls upon standard rulers or candles to allow us to determine the influence of expansion on observable quantities. In hunting for cosmological time dilation, a standard clock with a measurable timescale is required. However, the challenge with objects such as quasars, and other cosmological sources such as gamma-ray bursts, is the complexity of the physical processes driving their variability. For quasars, where variability arises in the stochastic processes in the relativistic disk orbiting a supermassive black hole, the resultant luminosity fluctuations could potentially be dependent on a multitude of physical properties, including the mass of the central black hole, the degree of accretion, and the wavelength of the observations.
To address this, the sample of quasars under consideration here was split into a number of subsamples of objects with similar intrinsic properties in terms of their bolometric luminosity and the rest wavelength of observations. The observations under consideration were taken in the \((g,r,i)\) wavebands, and for the purpose of this study, the rest wavelength in each of the observed wavebands is defined to be
\[\lambda_{g}=\frac{4720\mathrm{\AA}}{1+z},\quad\lambda_{r}=\frac{6415\mathrm{ \AA}}{1+z},\quad\lambda_{i}=\frac{7835\mathrm{\AA}}{1+z} \tag{1}\]
where \(z\) is the redshift of the quasar under consideration and the numerical values are representative of the observed wavebands.
The quasar subsamples are presented graphically in Figure 2, which shows the rest wavelengths of each quasar in the \((g,r,i)\) bands versus their bolometric luminosity. Each quasar has been colour-coded with its variability timescale, \(\tau_{DRW}\), assessed by fitting each observed light curve in each band with a damped random walk [see 27, for more detail]. Underlying the quasar sample are the regions of the twelve subsamples under consideration, shown in salmon. These were chosen to have a width in rest wavelength and bolometric luminosity of \(\Delta\lambda=1000\mathrm{\AA}\) and \(\Delta\log(L_{Bol}/L_{\odot})=0.5\). Note that the regions are contiguous, with the top-left subsample spanning \(\Delta\lambda=900\mathrm{\AA}\to 1900\mathrm{\AA}\) and \(\Delta\log(L_{Bol}/L_{\odot})=46.7\to 47.2\); the details of the subsamples are given in Table 2. From Figure 2 it is clear that these subsamples encompass the majority of quasars presented in this survey, and note that the combination of the three wavebands means that each subsample of quasars contains a broader distribution of redshifts than if the wavebands were considered individually. Hence this combination provides a redshift lever arm which constrains the presence of cosmological time dilation in each subsample. We also note that a by-eye examination of Figure 2 is suggestive of a gradient in the variability timescale over the sample.
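To make the binning concrete, the following sketch assigns a single (quasar, band) light curve to one of these boxes; the pivot wavelengths follow Eq. (1) and the box edges Table 2, while the function and variable names are our own illustration.

```python
PIVOTS = {"g": 4720.0, "r": 6415.0, "i": 7835.0}  # Angstrom, Eq. (1)

def rest_wavelength(band, z):
    """Rest-frame wavelength probed by an observed band at redshift z."""
    return PIVOTS[band] / (1.0 + z)

def in_subsample(band, z, log_lbol, lam_box, lbol_box):
    """True if the light curve falls inside a (wavelength, luminosity) box,
    e.g. subsample 1 of Table 2: lam_box=(900, 1900), lbol_box=(46.7, 47.2)."""
    lam = rest_wavelength(band, z)
    return lam_box[0] <= lam < lam_box[1] and lbol_box[0] <= log_lbol < lbol_box[1]

# Example: the g band of a z = 2.0 quasar probes 4720/3 ~ 1573 Angstrom
print(in_subsample("g", 2.0, 46.9, (900.0, 1900.0), (46.7, 47.2)))  # True
```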
Given that in each subsample the quasars possess similar rest wavelengths and bolometric luminosities, we make the assumption that they also possess the same characteristic intrinsic timescales, and hence any difference in timescale for a particular quasar subsample is due to the influence of cosmic time dilation and will show the appropriate dependence upon redshift. Of course, the physics of quasar variability is likely to depend on a number of factors, and so this assumption amounts to expecting that these quasars will exhibit similar variability properties in the mean; we discuss this point again at the conclusion of this study.
For each quasar subsample (labelled with \(k\)), we model the observed variability timescales (i.e. \(\log_{10}(\tau_{DRW}/days)\)) as
\[M_{k}=C_{k}+n\log_{10}(1+z) \tag{2}\]
where \(z\) is the redshift of the quasar and \(n\) is the power of the expected cosmological term, that is \((1+z)^{n}\). This model represents the variability with a different normalisation term, \(C_{k}\), for each wavelength-luminosity bin, but demands the same cosmological dependence in terms of redshift.
For the five distinct hypotheses considered, the normalisation terms, \(C_{k}\), were allowed to vary, and so for the cases where \(n\) is fixed, this corresponds to an exploration of a twelve-dimensional posterior probability distribution. Physically, this reflects the situation where each wavelength-luminosity bin has a differing characteristic timescale, but a redshift dependence set by the chosen value of \(n\). For the remaining hypothesis, \(\mathcal{H}_{2}\), where \(C_{k}\) and \(n\) are all treated as free parameters, this corresponds to a thirteen-dimensional posterior probability distribution to be explored.
To calculate the Bayesian evidence (also known as marginal likelihood) for each of these hypotheses, it is necessary to define a likelihood. It is important to note that the presented measurements and uncertainties of \(\tau_{DRW}\) (in \(\log_{10}\) space) are not symmetrical. Hence we represented the probability of each distribution of \(\log_{10}(\tau_{DRW}/days)\) as a skewed Gaussian (represented as \(\mathcal{SN}\)), specifically scipy.stats.skewnorm in our numerical approach, which is written in python. The \(16^{th}\), \(50^{th}\) and \(84^{th}\) percentiles of this skewed distribution were fitted to the given values via a straightforward optimization. This resulted in uncertainties of these percentiles of typically less than \(2-3\%\). Hence we can define the log of the likelihood as
\[\log\mathcal{L}=\sum_{k}\ \sum_{l=g,r,i}\ \sum_{m=1\ldots N_{q}}\ \mathcal{SN}. \mathrm{logPDF}(\mathcal{M}_{k},\theta_{l,m}(l,m\in k)) \tag{3}\]
where \(k\) sums over each of the subsample regions, \(l\) over the observed wavebands and \(m\) over the number of quasars, \(N_{q}\), in the sample. Also, \(\theta_{l,m}\) are the parameters for the skewed Gaussian representing the probability distribution
for \(\log_{10}(\tau_{DRW}/days)\) for each of the quasars in each of the wavebands, and \(l,m\in k\) implies that the quasar should only be considered if its rest frame properties place it in subsample \(k\).
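A condensed sketch of this construction is shown below: fit_skewnorm recovers skew-normal parameters from the reported percentiles of a single \(\log_{10}(\tau_{DRW})\) measurement, and subsample_log_likelihood evaluates Eqs. (2) and (3) for one subsample. The function and variable names are illustrative rather than those of the published code.

```python
import numpy as np
from scipy import stats, optimize

def fit_skewnorm(p16, p50, p84):
    """Skew-normal (a, loc, scale) whose 16/50/84th percentiles match the
    reported percentiles of log10(tau_DRW) for one light curve."""
    def residual(params):
        a, loc, scale = params
        q = stats.skewnorm.ppf([0.16, 0.50, 0.84], a, loc=loc, scale=abs(scale))
        return q - np.array([p16, p50, p84])
    return optimize.least_squares(residual, x0=[0.1, p50, 0.5 * (p84 - p16)]).x

def subsample_log_likelihood(n, C_k, redshifts, sn_params):
    """Contribution of one subsample k to Eq. (3): the sum of skew-normal
    log-PDFs evaluated at the model timescales of Eq. (2)."""
    logL = 0.0
    for z, (a, loc, scale) in zip(redshifts, sn_params):
        model = C_k + n * np.log10(1.0 + z)  # Eq. (2)
        logL += stats.skewnorm.logpdf(model, a, loc=loc, scale=abs(scale))
    return logL
```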
To explore the posterior probability distribution and calculate the Bayesian evidence (also known as marginal likelihood), we employed Diffusive Nested Sampling DNest4[37], a variant of the nested sampling technique [38]. This allows for correct posterior sampling and marginal likelihood estimation even in the case where the constrained prior distributions are difficult to sample from or explore with Markov Chain Monte Carlo. For simplicity, we use uniform priors over the normalisation parameters, \(C_{k}\), between 1 and 5, and for \(\mathcal{H}_{2}\) where \(n\) is treated as a free parameter, a uniform distribution over \(n\) is adopted between -1 and 3; the posterior distribution of \(n\) for this hypothesis is presented in Figure 1. The normalisations, \(C_{k}\), are well constrained and are reproduced for completeness in Figure 3 for \(\mathcal{H}_{2}\).
## 4 Conclusions
This paper presents the detection of the cosmological dependence of the time dilation in a recent sample of almost two hundred quasars. These were monitored in multiple wavebands over a two-decade period, allowing the determination of a characteristic timescale by treating the observed quasar variability as a damped random walk.
Through an assessment of the Bayesian evidence, it was found that the hypothesis considering the expected \((1+z)\) cosmological dependence provides a significantly better description of the data than the case where there is no dependence on redshift. In considering the redshift dependence of quasar variability to be of the form \((1+z)^{n}\), where \(n\) is treated as a free parameter, the posterior distribution is found to be \(n=1.28^{+0.28}_{-0.29}\), again consistent with the expected cosmological expansion of space. This detection of the cosmic expansion directly imprinted onto the variability of quasars further demonstrates that their observed properties are consistent with them being luminous and variable sources at cosmological distances, and counters previous claims that quasar variability is not intrinsic, but instead is due to external influences or non-standard physics. This has an immediate impact on various claims, such as the presence of a cosmologically significant population of microlensing black holes (e.g. 18; 22) or more esoteric ideas about the framework of the universe [39], and is further evidence that we inhabit an expanding relativistic universe.
We do note that our result of \(n=1.28^{+0.28}_{-0.29}\) could be consistent with an offset from the expected cosmological value of \(n=1\) and could potentially indicate the presence of additional factors, such as an evolution of quasars over cosmic time, in addition to the time dilation due to cosmic expansion. Of course, we could imagine that quasar evolution over cosmic time could be responsible for the observed redshift dependence of the DRW time scale, but as we are considering similar quasars in terms of the bolometric luminosity and observed rest wavelength, it would be a curious coincidence for this evolution to result in a \((1+z)\) dependence to spoof cosmic expansion. Furthermore, if quasar evolution were solely responsible for the observed DRW properties, then the resulting lack of the expected cosmic time dilation would present a severe challenge to our cosmological model. However, it is important to note that there are some potential correlations of the DRW timescales with the inferred intrinsic properties of the quasars [e.g. 33], although these are not strong, and more extensive photometric datasets in terms of the number of quasars and the duration of their photometric lightcurves will be required to cleanly separate the influence of cosmic expansion from quasar evolution.
In closing, we note that the lack of detection of the time dilation of quasar variability in previous studies is potentially due to the relatively small sample size in terms of the number of quasars under consideration [26], or the cadence of data sampling and characterisation of the quasar variability [19; 20]. Building on the observations of Stone et al. (2022) [27], the present study has demonstrated that we are now in an epoch where we have observations of a sufficiently large number of quasars spanning a broad range in redshifts, observed over extended periods and with a cadence that overcomes their stochastic nature and results in an accurate characterisation of their variability, yielding a robust determination of the imprint of cosmological expansion on their light curves.
Furthermore, with upcoming programs such as the Vera Rubin Observatory Legacy Survey of Space and Time (LSST), the number of quasars observed at high temporal cadence will rapidly increase, and the measurement of cosmological time dilation, and potentially the influence of quasar evolution, will become readily observable [e.g. 32].
Data Availability. The source data for this project is available at https://zenodo.org/record/5842449#.YipOg-jMJPY, with the details of the available FITS tables presented in Stone et al. (2022) [27]. Note that a revised version of this catalogue was recently released due to an error in some rest frame quantities. This revision does not impact any of the research presented in this paper. The software for this project is available at https://github.com/eggplantbren/QuasarTimeDilation.
Code Availability. This project made use of several publicly available software packages, especially DNest4 [37] to undertake the exploration of the posterior probability space and calculate the Bayesian evidence by integrating across this space. Further software packages employed include matplotlib [40], numpy [41], and scipy [42]. Initial explorations of the posterior probability space were undertaken with emcee, with corner plots prepared with corner [43]. The software employed as part of this project will be made available on reasonable request to the corresponding authors.
Acknowledgements. We thank Stone et al. (2022) [27] for making their data and the results of their analysis publicly available. We also thank Scott Croom for his input and advice on quasar variability surveys. We further thank the teams responsible for creating and maintaining the various software packages, detailed above, that this study has employed. GFL would like to thank the Lowell Observatory for its hospitality, where the last stages of this work were completed during a period of isolation after contracting COVID-19.
Author Contribution Statement. The project was conceived by GFL, including an initial exploration of the data, the definition of the models and hypotheses considered, the likelihood function, and the sampling of the posterior space. BJB undertook the detailed sampling and the calculation of the Bayesian evidence using DNest4. Both authors discussed the results of the exploration in detail and determined the resulting conclusions. Both were responsible for the writing of the manuscript.
Competing Interests Statement. The authors declare no competing interests.
\begin{table}
\begin{tabular}{c r r r} \hline \hline Hypothesis & \(n\) & \(\log\mathcal{Z}\) & \(\mathcal{Z}/\mathcal{Z}_{max}\) \\ \hline \(\mathcal{H}_{0}\) & 0 & -366.12 & \(9.3\times 10^{-6}\) \\ \(\mathcal{H}_{1}\) & 1 & -354.53 & 1 \\ \(\mathcal{H}_{2}\) & Free & -356.52 & 0.14 \\ \(\mathcal{H}_{3}\) & -1 & -390.13 & \(3.5\times 10^{-16}\) \\ \(\mathcal{H}_{4}\) & 2 & -358.36 & \(2.2\times 10^{-2}\) \\ \hline \end{tabular}
\end{table}
Table 1: The marginal likelihoods for the various hypotheses considered in this paper. As described in more detail in the Methods (Section 3), these were calculated with the diffusive nested sampling approach DNest4[37].
\begin{table}
\begin{tabular}{c c c c c c} \hline Subsample & \(\Delta\lambda\) (Å) & \(\Delta\log(L_{Bol}/L_{\odot})\) & \(N_{qs}\) & \(\Delta z\) & \(\Delta\log_{10}(\tau_{DRW}/days)\) \\ \hline
1 & \(900\to 1900\) & \(46.7\to 47.2\) & 37 & \(1.60\to 4.15\) & \(2.68\to 4.03\) \\
2 & \(1900\to 2900\) & \(46.7\to 47.2\) & 27 & \(0.81\to 3.00\) & \(2.83\to 4.11\) \\
3 & \(900\to 1900\) & \(46.2\to 46.7\) & 74 & \(1.55\to 3.98\) & \(2.61\to 4.23\) \\
4 & \(1900\to 2900\) & \(46.2\to 46.7\) & 111 & \(1.11\to 3.01\) & \(2.49\to 4.42\) \\
5 & \(2900\to 3900\) & \(46.2\to 46.7\) & 22 & \(1.11\to 1.70\) & \(3.03\to 3.93\) \\
6 & \(900\to 1900\) & \(45.7\to 46.2\) & 30 & \(1.48\to 2.80\) & \(2.70\to 3.71\) \\
7 & \(1900\to 2900\) & \(45.7\to 46.2\) & 101 & \(0.68\to 2.80\) & \(2.55\to 3.92\) \\
8 & \(2900\to 3900\) & \(45.7\to 46.2\) & 58 & \(0.60\to 1.69\) & \(2.63\to 4.03\) \\
9 & \(3900\to 4900\) & \(45.7\to 46.2\) & 11 & \(0.60\to 1.00\) & \(2.66\to 3.84\) \\
10 & \(1900\to 2900\) & \(45.2\to 45.7\) & 27 & \(0.63\to 1.45\) & \(2.30\to 4.30\) \\
11 & \(2900\to 3900\) & \(45.2\to 45.7\) & 31 & \(0.47\to 1.45\) & \(2.27\to 4.30\) \\
12 & \(3900\to 4900\) & \(45.2\to 45.7\) & 20 & \(0.47\to 0.98\) & \(2.33\to 4.20\) \\ \end{tabular}
\end{table}
Table 2: The properties of the survey subsamples presented in Figure 2, with the boundaries of the subsamples given by \(\Delta\lambda\), rest wavelength, and \(\Delta\log_{10}(L_{Bol}/L_{\odot})\), bolometric luminosity. The remaining columns give the number of quasar light curves in each subsample, \(N_{qs}\), as well as the redshift range of those quasars, \(\Delta z\), and timescale for the observed variability as given by treating this as a damped random walk, \(\Delta\log_{10}(\tau_{DRW}/days)\).
Figure 2: The entire quasar sample under consideration as a function of rest wavelength and bolometric luminosity, colour-coded with the DRW timescale, \(\tau_{DRW}\). The underlying rectangles in salmon pink represent the boundaries of the subsamples employed in the analysis presented in this paper. The inset numbers label the subsamples.
Figure 3: The posterior distributions for the normalisation parameters, \(C_{k}\), for \(\mathcal{H}_{2}\) where \(n\) is treated as a free parameter. The bolometric luminosity range for each of the normalisation parameters is given in the upper left of each panel (see Table 2). As with Figure 1, these were the result of the sampling of the posterior probability distribution with DNest4. |
2304.12048 | New compound and hybrid binding energy sputter model for modeling
purposes in agreement with experimental data | Rocky planets and moons experiencing solar wind sputtering are continuously
supplying their enveloping exosphere with ejected neutral atoms. To understand
the quantity and properties of the ejecta, well established Binary Collision
Approximation Monte Carlo codes like TRIM with default settings are used
predominantly. Improved models such as SDTrimSP have come forward and together
with new experimental data the underlying assumptions have been challenged. We
introduce a hybrid model, combining the previous surface binding approach with
a new bulk binding model akin to Hofsäss & Stegmaier (2023). In addition, we
expand the model implementation by distinguishing between free and bound
components sourced from mineral compounds such as oxides or sulfides. The use
of oxides and sulfides also enables the correct setting of the mass densities
of minerals, which was previously limited to the manual setting of individual
atomic densities of elements. All of the energies and densities used are
thereby based on tabulated data, so that only minimal user input and no fitting
of parameters are required. We found unprecedented agreement between the newly
implemented hybrid model and previously published sputter yields for incidence
angles up to 45{\deg} from surface normal. Good agreement is found for the
angular distribution of mass sputtered from enstatite MgSiO$_3$ compared to
latest experimental data. Energy distributions recreate trends of experimental
data of oxidized metals. Similar trends are to be expected from future mineral
experimental data. The model thus serves its purpose of widespread
applicability and ease of use for modelers of rocky body exospheres. | Noah Jäggi, Andreas Mutzke, Herbert Biber, Johannes Brötzner, Paul Stefan Szabo, Friedrich Aumayr, Peter Wurz, André Galli | 2023-04-24T12:40:59Z | http://arxiv.org/abs/2304.12048v1 | # New compound and hybrid binding energy sputter model for modeling purposes
###### Abstract
Rocky planets and moons experiencing solar wind sputtering are continuously supplying their enveloping exosphere with ejected neutral atoms. To understand the quantity and properties of the ejecta, well established Binary Collision Approximation Monte Carlo codes like TRIM with default settings are used predominantly. Improved models such as SDTrimSP have come forward and together with new experimental data the underlying assumptions have been challenged. We introduce a hybrid model, combining the previous surface binding approach with a new bulk binding model akin to Hofsäss & Stegmaier (2022). In addition, we expand the model implementation by distinguishing between free and bound components sourced from mineral compounds such as oxides or sulfides. The use of oxides and sulfides also enables the correct setting of the mass densities of minerals, which was previously limited to the manual setting of individual atomic densities of elements. All of the energies and densities used are thereby based on tabulated data, so that only minimal user input and no fitting of parameters are required. We found unprecedented agreement between the newly implemented hybrid model and previously published sputter yields for incidence angles up to 45\({}^{\circ}\) from surface normal. Good agreement is found for the angular distribution of mass sputtered from enstatite MgSiO\({}_{3}\) compared to latest experimental data. Energy distributions recreate trends of experimental data of oxidized metals. Similar trends are to be expected from future mineral experimental data. The model thus serves its purpose of widespread applicability and ease of use for modelers of rocky body exospheres.
Solar wind, Exosphere, Mercury (planet), The Moon, Sputtering†
Footnote †: journal: The Planetary Science Journal
## 1 Introduction
In recent years there have been several efforts to better constrain the erosion of rocky planetary bodies exposed to highly energetic solar wind ions. These include investigating the effect of surface roughness (Biber et al., 2022) and porosity (Szabo et al., 2022), performing ion irradiation experiments with mass yield measurements (e.g., Hijazi et al., 2017; Szabo et al., 2018, 2020; Biber et al., 2022), as well as deriving new surface- and bulk-binding-energy models from theory (Hofsäss & Stegmaier, 2022; Morrissey et al., 2022). In this work, we discuss the parameter of density and its inclusion in SDTrimSP (Mutzke et al., 2019), as well as a new hybrid binding energy model that reliably recreates experimental sputter yields entirely without the requirement to adjust input parameters. The new approach will pose a valuable tool for modeling the ion sputtering contribution to exospheres (e.g., Pfleger et al., 2015; Suzuki et al., 2020; Killen et al., 2022; Kazakov et al., 2022).
### Space weathering of exposed rocky surfaces
Exposed bodies in space are subject to solar wind irradiation. The main constituents of the solar wind, H\({}^{+}\) and He\({}^{2+}\), bear kinetic energies of approximately 1 keV/amu--equivalent to about 440 km/s (Wurz, 2005; Gershman et al., 2012; Winslow et al., 2013; Baker et al., 2013). When hitting a surface, most ions are neutralized and enter the sample, with some fraction being reflected as either neutrals or even ions (Lue et al., 2011; Vorburger et al., 2013). The ions entering the sample initiate a cascade of collisions with a chance to eject particles from the near-surface at supra-thermal energies. This process is responsible for altering the surface composition and creating lattice defects, which leads to amorphization (Betz and Wien, 1994; Loeffler et al., 2009; Dukes et al., 2011; Domingue et al., 2014).
Ion sputtering releases atoms from the surface with typical velocities that are significantly lower than those of the impinging ions (e.g., Thompson, 1968), but large enough to form an extended exosphere with a significant fraction of atoms exceeding the escape velocity of any small body, including the Moon (2.4 km/s) and Mercury (4.3 km/s) (e.g., Wurz et al., 2007, 2010). Such exospheres can be detected by ground-based observatories and by space probe missions such as LADEE and LRO at the Moon (Paige et al., 2010; Elphic et al., 2014) and MESSENGER (Solomon et al., 2001; McNutt Jr et al., 2018) or the future BepiColombo (Benkhoff et al., 2010; Milillo et al., 2020; Orsini et al., 2020) at Mercury. These observations were used early on to self-consistently model Mercury's surface composition based on the four expected major processes contributing to the exosphere: solar wind ion sputtering, micro-meteoroid impact vaporization, photon-stimulated desorption, and thermal desorption (Madey et al., 2002; Mura et al., 2009; Wurz et al., 2010; Gamborino and Wurz, 2018; Wurz et al., 2022).
An important piece of information, which is necessary to distinguish the exospheric species sourced from the surface, is the process-specific energy distribution of the ejected material. For example, solar wind ion sputtering and micro-meteoroid impact vaporization compete in supplying Mercury's exospheric high-energy particle population with refractory species (e.g., Ca and Mg), whilst photon-stimulated desorption dominates the supply of energetic volatile and moderately volatile species (i.e., Na, K, and S) (Mangano et al., 2007; Cassidy et al., 2015; Schaible et al., 2020; Janches et al., 2021; Grava et al., 2021). Just as the fluxes, or precipitation rates, of the particles causing these processes are still being constrained (e.g., proton precipitation for solar wind sputtering at Mercury's cusps in Fatemi et al., 2020; Raines et al., 2022; Glass et al., 2022), the understanding of the underlying physics is still a work in progress. At the Moon, precipitation rates seem comparably trivial to compute, but the Moon travelling through the Earth's magnetotail as well as localized crustal fields add complexity to the system (e.g., Lue et al., 2011; Poppe et al., 2018; Nenon and Poppe, 2020).
### Sputter models
To efficiently model ion induced sputtering, Binary Collision Approximation (BCA) models are used. The BCA codes track particles as they travel through the sample and cause recoils, which are in turn tracked throughout the sample. There are many different models available; however, we will focus on the results of the Monte Carlo based, most widely used TRIM code (Biersack and Haggmark, 1980) in the SRIM package (Ziegler et al., 2010) as well as its successor SDTrimSP (Mutzke et al., 2019), a combined and improved version of the static TRIM.SP (Biersack and Eckstein, 1984) and the dynamic TRIDYN (Möller and Eckstein, 1984).
TRIM has been shown to overestimate the sputter yield compared to experimental yields for minerals (Szabo et al., 2018). Exosphere modelers need more accurate inputs which are in line with the latest understanding of sputtering. There have been several suggestions on how to best recreate experimental data. The major contributions below set the expectations as well as the limitations of current state-of-the-art sputter modeling.
* Schaible et al. (2017) varied O binding energies to better fit early experimental data for sputtering of Al\({}_{2}\)O\({}_{3}\) and SiO\({}_{2}\) (Roth et al., 1979; KenKnight and Wehner, 1967). Increasing the O-binding energy decreases the O yield, but not enough to significantly improve the agreement.
* Szabo et al. (2020) suggested that the best agreement between the mass yield of an irradiated sample and SDTrimSP is obtained by a) adjusting atomic densities to obtain an appropriate sample density, b) adjusting the surface binding energy (SBE) of O to 6.5 eV, and c) setting the SBEs of each element to the averaged SBE of all elements in the sample, resulting in an SBE which is highly dependent on the O concentration in the sample (Appendix A). Although we found these parameters to work reasonably well for all kinds of silicates, the universality of these modifications is questionable.
* Morrissey et al. (2022) determined surface binding energies using molecular dynamics and suggest lower sputter yields across all surface species due to an increase in the binding energies of the single components. However, the restricted availability of species-specific surface binding energies prevents the applicability of the results to a broad range of minerals. This is also caused by the limited availability of interatomic potentials for each mineral system of interest.
* Hofsäss & Stegmaier (2022) proposed completely neglecting surface binding energies and instead using only bulk binding energies from tabulated data. This way, particles leaving the sample do not have to overcome a surface potential and instead lose energy with each recoil. Although they solely use tabulated data to set the bulk binding energy and propose a sound physical constraint on the cutoff energy for the tracing of the particles, they still require an undisclosed level of implantation to find good agreement with experimental data.
* Biber et al. (2022) used the in-house built ray-tracing code SPRAY (Cupak et al., 2021) with data from SDTrimSP and atomic force microscope images to discuss the effect of surface roughness on the sputter yield of a powder pellet and a flat, glassy thin film. They found that a rough pressed pellet surface reduces the yield, especially at shallow incident angles (above 45\({}^{\circ}\) relative to the surface normal). The cause of this reduced yield was related to surface roughness leading to shallower local incident angles, shadowing, and re-deposition of material. For a detailed overview of rough surface sputter models see Küstner et al. (1998) and Arredondo et al. (2019).
All these models require varying degrees of parameter adjustments when it comes to density, binding energies, cut-off energies, or roughness. To adequately describe the sputtering process on realistic surfaces, roughness has to be taken into account. This effect is not considered in this work, as we focus on the fundamental sputter physics within the sample, which is agnostic to properties affecting the trajectories of impinging ions and ejecta. For this reason, we compare our results to experimental thin-film data, which are considered to be flat surfaces (Biber et al., 2022). We propose a new compound model for obtaining a realistic initial mineral density as well as a hybrid binding energy model to obtain increased binding energies based on tabulated data that can recreate experimental results.
## 2 Methods of computation
### Model parameters
Angular dependent sputter yields for various models were calculated with SDTrimSP to compare with a wide range of experimental data. To obtain good statistics in SDTrimSP, we modeled between \(7.7\times 10^{6}\) and \(31\times 10^{6}\) impactors for each of 19 incident angles between 0\({}^{\circ}\) and 89\({}^{\circ}\) relative to the surface normal (Mutzke et al., 2019). The step size was set to gradually decrease from an initial 10\({}^{\circ}\) for incidence close to the surface normal down to 2\({}^{\circ}\) for incidence angles of 80-88\({}^{\circ}\). We collected the information of up to 10\({}^{6}\) recoils leaving the sample and performed statistics based on the last 10\({}^{5}\) recoils. The data contain the species name, end energy, azimuth angle, and zenith angle. The fits of the data shown in the figures throughout this manuscript are described in Sec. 2.5. The inelastic loss model seven (inel = 7) is used in all SDTrimSP calculations, which determines the inelastic loss in the sample based on the Lindhard-Scharff stopping power model (Lindhard & Scharff, 1961) unless there are corrections available (e.g., tables for H and He in Ziegler & Biersack, 1985). For a detailed description of SDTrimSP, we encourage the reader to look into the accompanying literature (e.g., Mutzke et al., 2019).
The surface composition of irradiated samples shows a clear fluence dependence until an equilibrium is reached. This was shown by Baretzky et al. (1992) for the oxide Ta\({}_{2}\)O\({}_{5}\), and by Szabo et al. (2020) in the form of the fluence-dependence of experimental sputter data of minerals. Furthermore, the experimental sputter yields were best recreated using the dynamic mode of SDTrimSP (Szabo et al., 2020). For this reason, all computations in this manuscript were performed in the dynamic mode of SDTrimSP, and the results are for ejecta from a surface in equilibrium with the impinging ions. For irradiation with He, the fluence was set to 750 at./Å\({}^{2}\), whereas H irradiation required fluences of up to 3000 at./Å\({}^{2}\) (or \(3\times 10^{19}\) at./cm\({}^{2}\)) at normal incidence in some models. The dynamic mode allows the sample to change with the ion fluence and best simulates the sample composition reaching an equilibrium with the solar wind ions, reproducing the fluence-dependence of the experimental sputter yields. In detail, samples in SDTrimSP have an infinite lateral extent with a finite number of layers vertically. In our case, all layers have the same composition set initially and a thickness of 10 Å. After each fluence step, comprising about \(10^{5}\) impactors, the layers within the sample are updated according to the components that were either lost or gained within the last step.
Direct comparisons between SRIM and SDTrimSP calculations were performed for mass yield (amu ion\({}^{-1}\)). In SRIM (Ziegler et al., 2010) we modeled \(10^{5}\) impinging H and He ions for static sputter yield results to obtain good statistics. We used the 'Monolayer Collision Steps / Surface Sputtering' damage model. The mineral density was set to its default density, as calculated by SRIM from the element components atomic density parameters (comparable to \(\rho_{\rm atomic}\) from tabulated data in SDTrimSP given in Table 1).
We will now introduce a few select parameter settings that are required to model sputtering of minerals. These comprise the dynamic mode of SDTrimSP, the different ways of introducing binding energies, including our new addition, as well as a new way of correcting the sample density.
### Binding energy
The efficiency at which particles can be removed from a surface, the sputter yield, is in part a function of the total binding energy of the system. The two common binding energies provided to a BCA model are the surface binding energy (SBE) and the bulk binding energy (BBE). The former takes the shape of a surface potential that has to be overcome to leave the sample. The latter is an energy that is subtracted from each recoil and simulates the interaction between neighboring atoms in the otherwise mineral-lattice-agnostic model that is SDTrimSP. It is possible to obtain a constant yield whilst keeping the sum of the binding energies constant (Möller and Posselt, 2001). We now briefly introduce three different binding energy models, two of which are already established (pure SBE or BBE models) and one model that combines the two (SBE + BBE). The models are summarized in Table 2.
#### 2.2.1 SB: Surface binding model
The surface binding (SB) model is the default calculation model for TRIM and SDTrimSP. In this approach, a particle may leave the sample only if its kinetic energy exceeds the SBE. Energy loss within the sample occurs through elastic energy transfer during collisions and inelastic electronic losses.
Although the SBE is an energy determined by the attractive forces of neighboring atoms (Sigmund, 1969; Gades and Urbassek, 1992), it is common practice to approximate the SBE as the atomic enthalpy of sublimation (\(\Delta H_{S}\)). The exceptions are gases, whose surface binding energies are based on the enthalpy of dissociation. For example, pure O does not form a solid, and therefore the dissociation enthalpy of oxygen \(\Delta H_{diss}\)(O\({}_{2}\)) is used instead of the sublimation enthalpy. Hobler and Morrissey showed for Si and Na that the atomic enthalpy of sublimation can severely underestimate the energy necessary to remove an atom from their crystalline structure (Hobler, 2013; Morrissey et al., 2022). This was determined by means of molecular dynamics (MD) calculations, which take into account the bonds between atoms. The results have so far only been tentatively confirmed for nepheline (NaAlSiO\({}_{4}\), Martinez et al., 2017), where the sputtered secondary Na\({}^{+}\) ions express a peak in their energy distribution around 2.4 eV, which was attributed to a SBE of Na of 4.8 eV (Morrissey et al., 2022). This exceeds the tabulated value of 1.1 eV by a factor of 4.3. Interestingly, the secondary K\({}^{+}\) ion results of Martinez et al. (2017) would suggest K SBEs of 4 eV, also exceeding the tabulated value of 0.93 eV by the same factor. Morrissey et al. (2022) also found that within plagioclase--the primary Na bearing mineral on a planetary surface--the surface binding energy is increased to 7.9 eV in the Na end member albite (NaAlSi\({}_{3}\)O\({}_{8}\)), which would result in a reduction of the Na sputter yield from albite by a factor of 15. The MD results therefore show a positive correlation between SBE and Na coordination number (the number of neighboring atoms).
How the SBE of a damaged surface or, as outlined by Hofsäss and Stegmaier (2022), of a non-normal orientation of a mineral unit cell would differ from the ideal conditions chosen in MD simulations is unclear. Furthermore, the energy distributions of secondary ions do not necessarily represent their neutral counterparts, as neutralization of ejected particles is energy-dependent, which can cause a significant offset of the ion distribution towards lower energies (Benninghoven et al., 1987; Van der Heide, 2014). Another example that adds to the uncertainty of the link between neutral and ion energy distributions is from Betz (1987), who showed that ground state Ba sputtered from a continuously oxidized Ba surface coincides with metastable Ba (originating from the decay of short lived, excited state Ba) and Ba ions from a non-oxidized surface. Ground state Ba from a non-oxidized surface expresses a significantly lower peak energy, which can be related to the \(\Delta H_{S}\). The energy distributions of ions, metastable atoms, and ground state atoms coincide with each other and exceed \(\Delta H_{S}\). The larger energies of ions and metastable atoms are interpreted to be caused by matrix-dependent ionization processes (e.g., Dukes and Baragiola, 2015), whereas the increased energy of the sputtered ground state atoms from an oxidized sample is so far not well understood and depends on the procedure, including a single initial oxidation or, as in Betz (1987), a continuous oxidation. What is certain is that the displacement and removal of atoms, which leads to changes in bonds within the sample, alters coordination numbers and therefore the binding energy that has to be overcome for their removal. The interatomic potentials between the atoms in the sample would end up far from equilibrium, which is commonly neglected in MD simulations due to computational load (Behrisch and Eckstein, 2007). Lastly, Hobler (2013) compared MD and BCA results and concluded that the enthalpy of sublimation approximation works well in BCA to reproduce experimental data, even when the crystalline structure of the mineral is not taken into account. The reasoning behind this is that in MD simulations, an increase of yield is tied to an increase in defect creation, which ultimately negates the effect of the higher SBEs in the MD simulation. The increased SBEs suggested by MD models are to be taken with caution, but it is established that an overall increase in energy loss within the sample is necessary to best fit experimental data.
#### 2.2.2 BB: Bulk binding model
The bulk binding (BB) model was recently suggested by Hofsäss and Stegmaier (2022). It sets the SBE to zero, whilst setting a BBE for each component, which has to be overcome for a component to be freed from the sample and which is subtracted at each recoil. The authors used the enthalpy of sublimation (\(E_{s}\)) for single species samples (i.e., the tabulated values used as SBEs in the surface binding model). For binary compounds, such as oxides and sulfides in minerals, the enthalpy of formation (\(\Delta H_{f}\)) has to be overcome before the enthalpy of sublimation of each component, thereby increasing the energy loss in the sample (as suggested earlier by Dullni, 1984).
In SDTrimSP, the implementation of the BB model is similar but slightly different. The sublimation enthalpy of species that form gases under standard conditions is neglected when determining \(E_{bulk}\) (Table 2). This is based on the assumption that, e.g., O from breaking up SiO\({}_{2}\) will already be in its gaseous state and thus will not need to be sublimated, unlike Si. As an example, the \(E_{bulk}\) (or BBEs) for the elements in the binary compound SiO\({}_{2}\) are, as implemented in SDTrimSP,
\[\begin{split} E_{bulk}(\text{Si})&=E_{s}(\text{Si})+\frac{\Delta H_{f}(\text{SiO}_{2})}{m+n}\\ &=4.664\,\text{eV}+\frac{9.441\,\text{eV}}{3}=7.811\,\text{eV}\\ E_{bulk}(\text{O})&=\frac{\Delta H_{f}(\text{SiO}_{2})}{m+n}\\ &=\frac{9.441\,\text{eV}}{3}=3.147\,\text{eV},\end{split} \tag{1}\]
with \(m\) and \(n\) being the number of components Si and O in the compound (Si\({}_{m}\)O\({}_{n}\)). In SDTrimSP, this model is implemented as the surface-binding-model eight (isbv = 8), which is only available when using the new density model introduced in Sec. 2.4.2.
A side-effect of setting the SBE to zero and only using a bulk binding energy (BBE) is the lack of a planar attraction potential, and therefore no refraction of sputtered particles towards larger emission angles occurs (Roth et al., 1983; Gades and Urbassek, 1992; Jackson, 1975; Hofsäss and Stegmaier, 2022). When a surface potential has to be overcome, the extent of the refraction acting on a particle leaving the surface of a sample is proportional to the ratio of the energy of the particle in relation to the potential that has to be overcome (Thompson, 1968; Sigmund, 1969):
\[\sin(\theta_{1})=\sqrt{\frac{E_{0}}{E_{0}-E_{\text{sbe}}}}\sin(\theta_{0}), \tag{2}\]
with \(E_{0}\) the energy of the atom approaching the surface barrier, \(E_{\text{sbe}}\) the SBE, \(\theta_{1}\) the angle of the atom after crossing the surface barrier, and \(\theta_{0}\) its initial angle inside the sample. In the BB model, by contrast, any released particle inside the compound can travel freely through the surface, independent of its energy.
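A direct transcription of Eq. (2), including the escape condition it implies, could look as follows (a sketch; SDTrimSP's internal implementation may differ in detail):

```python
import numpy as np

def refract(E0, E_sbe, theta0):
    """Refraction at the planar surface barrier, Eq. (2).
    E0: energy of the atom reaching the surface (eV); theta0: its angle
    from the surface normal (rad). Returns the outside angle theta1, or
    None if the normal energy component E0*cos(theta0)**2 is below E_sbe
    and the atom is reflected back into the sample."""
    if E0 <= E_sbe:
        return None
    s = np.sqrt(E0 / (E0 - E_sbe)) * np.sin(theta0)
    return None if s > 1.0 else float(np.arcsin(s))
```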
In BCA computations, a cutoff energy (\(E_{\text{cutoff}}\)) for each species is set, which determines when a recoil is considered to be 'at rest' and no longer causes collisions. In the SB model, \(E_{\text{cutoff}}\) is chosen to be 0.1 eV below the lowest, non-zero \(E_{s}\) of all species within the sample. Choosing a lower \(E_{\text{cutoff}}\) would increase computation times due to the impactor travelling deeper into the sample before it is considered at rest. In the context of this work, longer impactor paths are irrelevant because recoils that are below \(E_{\text{cutoff}}\) do not contribute to the sputter yield. Any recoil from within the sample needs to exceed the SBE to leave the compound with an energy \(E_{\rm ejecta}\) of
\[E_{\rm ejecta}=E_{\rm recoil}-\rm{SBE}. \tag{3}\]
This explains why the \(E_{\rm cutoff}\) should not be chosen to exceed the SBE of any given component. A recoil of a relatively heavy species that is too slow to overcome the SBE is still capable of causing recoils of lighter species with kinetic energies exceeding their SBE.
For the BB model, however, the BBE is subtracted at each collision, after which recoils can leave the sample without further change of their energy. This energy can therefore be arbitrarily small and has to be limited by the cutoff energy for convergence. With the cutoff, \(E_{\rm ejecta}\) cannot be lower than the cutoff energy \(E_{\rm cutoff}\):
\[E_{\rm ejecta}\geq E_{\rm cutoff}. \tag{4}\]
The approach suggested by Hofsäss and Stegmaier (2022) to best reproduce experimental data is to set a cut-off energy (\(E_{\rm cutoff}\)) in the bulk binding model which lies between 1/2 and 1/8.5 of the atomic \(E_{s}\) (the authors thereby favour a factor of 1/3, which is also the default set for BB models in SDTrimSP). The effect of the absence of an SBE and the use of a BBE and \(E_{\rm cutoff}\) on the energy distribution of the sputtered particles is evident, as the low-energy tail of sputtered atoms is cut off at the given \(E_{\rm cutoff}\), and no Thompson distribution (Thompson, 1968) is seen (Fig. 1). For the example of SiO\({}_{2}\), we obtain
\[\begin{split} E_{\rm cutoff}(\rm O)&=\frac{E_{s}(\rm O)}{3}=0.861\,\rm{eV}\\ E_{\rm cutoff}(\rm Si)&=\frac{E_{s}(\rm Si)}{3}=1.555\,\rm{eV}.\end{split} \tag{5}\]
Figure 1: Model comparison for angular distributions of total sputtered mass yield (left) and energy distribution of sputtered O (right) from irradiated enstatite (MgSiO\({}_{3}\)) for impinging He ions at an incident angle of 45\({}^{\circ}\) and energy of 4 keV. The bulk binding model (BB, black line) is based on the pure bulk binding energy (BBE) assumption, where the lack of a surface binding energy (SBE) prevents scattering of the particles towards the surface, resulting in ejecta being preferentially emitted towards the surface normal. The energy distribution of the BB model does not express the characteristic Thompson distribution but instead shows a monotonically decreasing distribution, starting at the element-specific cutoff energy of \(\Delta H_{s}/3\). The surface binding model (SB, light blue line) shown for comparison is calculated with an SBE instead of the BBE. The experimental data are from thin-film irradiation (Biber et al., 2022) and normalized to \(y_{\rm max}=1\) with an error of one standard deviation.
### HB: New hybrid binding energy model
The planar potential on the surface is an issue, as its strength needs to exceed atomic enthalpies of sublimation to properly reproduce experimental data. The presence of such a surface potential is, however, supported by previous energy distribution measurements (Betz & Wien, 1994; Samartsev & Wucher, 2006; Martinez et al., 2017). Furthermore, metals covered by a layer of O\({}_{2}\) express energy peak broadening as well as a slight shift to larger energies (Dullni, 1984; Wucher & Oechsner, 1986, 1988). The energy distribution of the BB model is thus only appropriate for sputtering of binary metal compounds, where monotonically decreasing energy distributions were observed with peak energies close to zero (Szymonski, 1981). In oxide-bearing minerals we would thus expect a behavior where the energy distribution is affected proportionally to the amount of available O. Neither the SB nor the BB model is capable of taking this into account, which demands a new model.
We introduce a hybrid binding energy model (HB) that uses the element enthalpy of sublimation as SBE and the enthalpy of formation for compounds as BBE. The energies thus represent a surface potential which has to be overcome, as well as the bonds within the sample, which have to first be broken up before an atom is mobilized. The model is based purely on tabulated data, just like the bulk binding model of Hofsäss & Stegmaier (2022), but without the need of a specific \(E_{\rm cutoff}\) to best reproduce sputter yields and energy distributions. It therefore poses a promising alternative to the previous approaches for obtaining increased binding energies.
As an example, the SBE and BBE for the binary compound SiO\({}_{2}\) result in
\[\begin{split} E_{surf}(\rm Si)&=E_{s}(\rm Si)=4.664 \,\rm eV\\ E_{bulk}(\rm Si)&=\frac{\Delta H_{f}(\rm SiO_{2})} {m+n}\\ &=\frac{9.441\,\rm eV}{3}=3.147\,\rm eV\\ E_{surf}(\rm O)&=\frac{\Delta H_{diss}(\rm O_{2})}{2}=2.582\,\rm eV\\ E_{bulk}(\rm O)&=\frac{\Delta H_{f}(\rm SiO_{2})} {m+n}\\ &=\frac{9.441\,\rm eV}{3}=3.147\,\rm eV.\end{split} \tag{6}\]
The bulk binding energies determined from binary compounds only hold as long as we assume that each element remains bound over the course of irradiation. This is naturally not the case, which consequently led to the implementation of a more sophisticated compound model.
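The energy assignments of Eqs. (1) and (6) follow directly from tabulated enthalpies; the helper below is our own illustration (not part of SDTrimSP) for a binary compound A\({}_{m}\)B\({}_{n}\) and reproduces the SiO\({}_{2}\) numbers.

```python
def binding_energies(dHs_cation, dHs_anion_gas, dHf, m, n, model="HB"):
    """Per-atom (SBE, BBE) pairs for a binary compound A_m B_n, in eV.
    dHs_cation: sublimation enthalpy of the cation; dHs_anion_gas:
    dissociation-based energy of the gaseous anion; dHf: formation
    enthalpy of the compound."""
    cbe = dHf / (m + n)  # chemical binding energy per atom
    if model == "BB":    # Eq. (1): no surface potential, all energy in the bulk
        return {"cation": (0.0, dHs_cation + cbe), "anion": (0.0, cbe)}
    # Eq. (6), HB: sublimation/dissociation enthalpy as SBE, CBE as BBE
    return {"cation": (dHs_cation, cbe), "anion": (dHs_anion_gas, cbe)}

# SiO2: dHs(Si) = 4.664 eV, dHdiss(O2)/2 = 2.582 eV, dHf = 9.441 eV, m + n = 3
print(binding_energies(4.664, 2.582, 9.441, 1, 2, model="BB"))
print(binding_energies(4.664, 2.582, 9.441, 1, 2, model="HB"))
```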
### New compound model
We propose a simple model for sample compositions which serves two purposes: it allows discrimination between chemically bound atoms and 'free' (not chemically bound) atoms, and it uses data of compounds (i.e., oxides and sulfides) to adequately approximate realistic mass densities of minerals. Simulations using this compound model to differentiate between bound and un-bound atoms as well as density are labelled with '-C' (HB-C, for the combination of compound and hybrid model; Table 2).
#### 2.4.1 Discriminate between bound and free atoms
Instead of using single atoms, the starting condition considers each atom to be bound to its respective compound--for example, Si and O are bound in SiO\({}_{2}\). If a recoil occurs with sufficient energy to overcome the bulk binding energy, the bound atom is un-bound. The atomic species produced by breaking up compounds no longer have a chemical binding energy (BBE = 0, Table 2). If the remaining energy after the collision is large enough, the target atom can move through the sample. The atom then either comes to a halt and attempts to re-form a bond or is ejected. To prevent a major accumulation of atomic species, free atoms react to form the initially set compounds again whenever possible. In the current SDTrimSP implementation, the compound with the highest formation enthalpy is prioritized to re-form given the available O. This has the desired effect that oxygen is unlikely to ever exist as a free atom. In SDTrimSP, the compound hybrid model is implemented as the surface-binding-model four (isbv = 4). In the non-compound models BB and HB, each component within the sample has a fixed BBE, as the atomic model is not capable of differentiating bound from free components (Table 2). They therefore do not behave identically to their compound counterparts (BB-C and HB-C), which causes major differences especially between the HB and HB-C energy and angular distributions (Sec. 3).
#### 2.4.2 Set atomic density with compounds
It was found that the best fitting models to sputter yields for minerals not only require an increase in binding energy (as already hinted at in, e.g., Dullni 1984), but also an accurate model that reflects realistic material properties, including the atomic density (e.g., Szabo et al. 2020). The default way of determining densities in SDTrimSP and TRIM is by using tabulated data of atomic species. In Szabo et al. (2020), the authors follow Möller and Posselt (2001) and calculate a density for wollastonite (CaSiO\({}_{3}\)) based on tabulated atomic densities, which results in 0.0376 atoms Å\({}^{-3}\). Increasing the density of oxygen \(\rho_{\rm O}\) to 0.7 atoms Å\({}^{-3}\) (from an initial 0.04 Å\({}^{-3}\)) leads to a bulk density more akin to the wollastonite density of 0.07412 atoms Å\({}^{-3}\), corresponding to 2.86 g cm\({}^{-3}\). This value for \(\rho_{\rm O}\) exceeds the typical atomic density by over an order of magnitude. Therefore, in dynamical modeling, removal of oxygen causes disproportionate changes to the surface density of the compound compared to removing any other element. To prevent this, we propose calculating mineral densities based on the tabulated atomic densities of compounds, which are simplified building blocks of minerals.
In SDTrimSP, the density of each layer of the sample is calculated based on the density of its components with
\[\rho=\left(\sum_{1}^{n}\frac{X_{n}}{\rho_{n}}\right)^{-1}, \tag{7}\]
where \(\rho\) is the density of the sample, \(X_{n}\) the atomic fraction and \(\rho_{n}\) the density of the \(n^{\rm th}\) component.
The atomic densities and atomic fractions define the bulk density, and therefore the mean free path between two atoms in the sample. The mean free path \(\mu\) is formulated in SDTrimSP as
\[\mu=\rho^{-1/3}. \tag{8}\]
In BCA simulations such as SDTrimSP, an ion travelling through the sample will gradually lose its energy through nuclear and electronic interactions, which influence its motion (e.g., Eckstein 1991). After the impinging ion has travelled the distance \(\mu\), a collision occurs (Eckstein 1991; Mutzke et al. 2019). High density samples have small \(\mu\) and more energy is conserved between two collisions as the effect of electronic stopping is reduced.
Another effect of density is the distance between the atoms, and therefore it has an influence on the transferable energy during a collision. This energy is inversely proportional to the distance between the projectile and the center of the particle at rest. The furthest distance at which a collision occurs is the maximal impact parameter, where the energy transfer is at its minimum:
\[p_{\rm max}=\mu(2\pi)^{-1/2}. \tag{9}\]
With smaller \(\mu\), the minimum transferable energy becomes larger as the spacing between the atoms, and therefore the mean impact parameter, decreases. Higher densities therefore reduce the amount of low-energetic sputtered particles through recoils and lower the number of recoils as the energy is lost faster.
Mineral densities and calculated mean free paths of relevant rock-forming minerals are shown in Table 1. As an example, for enstatite (\(\rho_{\rm En}\sim 3.20\) g cm\({}^{-3}\)), the default atomic model would result in
\[\begin{split}\rho_{\rm En}&=\left(\frac{X_{\rm Mg}}{\rho_{\rm Mg}}+\frac{X_{\rm Si}}{\rho_{\rm Si}}+\frac{X_{\rm O}}{\rho_{\rm O}}\right)^{-1}\\ \rho_{\rm En}&=\left(\frac{0.2}{0.0431}+\frac{0.2}{0.0499}+\frac{0.6}{0.0429}\right)^{-1}\text{ at Å}^{-3}\\ &=\ 0.0442\text{ at Å}^{-3}\\ &=1.47\text{ g cm}^{-3},\end{split} \tag{10}\]
whereas the compound model, using tabulated data for compounds, results in
\[\begin{split}\rho_{\rm En}&=\left(\frac{X_{\rm MgO}}{\rho_{\rm MgO}}+\frac{X_{\rm SiO2}}{\rho_{\rm SiO2}}\right)^{-1}\\ \rho_{\rm En}&=\left(\frac{0.5}{0.1070}+\frac{0.5}{0.0797}\right)^{-1}\ \text{at Å}^{-3}\\ &=\ 0.0913\ \text{at Å}^{-3}\\ &=3.05\ \text{g cm}^{-3}.\end{split} \tag{11}\]
This example and the results in Table 1 demonstrate how using compound data recreates realistic mineral densities and, as a result, the mean free path within a sample. Table 1 also shows that densities can be approximated without any manual adjustments, in contrast to the default atomic model. Together with the hybrid binding energy model, this poses the first step in properly approximating oxides and oxide-derived minerals in Monte Carlo BCA codes such as SDTrimSP.
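The density correction of Eqs. (7) and (8) is simple to reproduce; the sketch below recomputes the enstatite example of Eqs. (10) and (11) from the quoted tabulated densities (a mean atomic mass of 100.39/5 amu is assumed for MgSiO\({}_{3}\)).

```python
import numpy as np

AMU_G = 1.6605e-24  # g per atomic mass unit

def mixture_density(fractions, densities):
    """Eq. (7): inverse-weighted mixture density in at/A^3."""
    return 1.0 / np.sum(np.asarray(fractions) / np.asarray(densities))

def mean_free_path(rho):
    """Eq. (8): mean free path in Angstrom."""
    return rho ** (-1.0 / 3.0)

# Enstatite MgSiO3 from atomic data (Eq. 10) and from compound data (Eq. 11)
rho_atomic = mixture_density([0.2, 0.2, 0.6], [0.0431, 0.0499, 0.0429])
rho_comp = mixture_density([0.5, 0.5], [0.1070, 0.0797])
mean_mass = 100.39 / 5  # amu per atom in MgSiO3
for label, rho in [("atomic", rho_atomic), ("compound", rho_comp)]:
    grams = rho * 1e24 * mean_mass * AMU_G  # convert at/A^3 to g/cm^3
    print(f"{label}: {rho:.4f} at/A^3 = {grams:.2f} g/cm^3, "
          f"mu = {mean_free_path(rho):.2f} A")
# atomic:   0.0442 at/A^3 = 1.47 g/cm^3
# compound: 0.0913 at/A^3 = 3.05 g/cm^3
```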
\begin{table}
\begin{tabular}{c l l c c c c c c c c} \hline \hline Group & Mineral & Formula & \multicolumn{2}{c}{\(\rho_{\rm ref}\)} & \multicolumn{2}{c}{\(\rho_{\rm compounds}\)} & \(\Delta\mu_{\rm compounds}\) & \multicolumn{2}{c}{\(\rho_{\rm atomic}\)} & \(\Delta\mu_{\rm atomic}\) \\
 & & & (g/cm\({}^{3}\)) & (at/Å\({}^{3}\)) & (g/cm\({}^{3}\)) & (at/Å\({}^{3}\)) & (1) & (g/cm\({}^{3}\)) & (at/Å\({}^{3}\)) & (1) \\ \hline
Plagioclase & Orthoclase & KAlSi\({}_{3}\)O\({}_{8}\) & 2.56 & 0.0723 & 2.67 & 0.0754 & -1\% & 1.36 & 0.0384 & 23\% \\
 & Albite & NaAlSi\({}_{3}\)O\({}_{8}\) & 2.62 & 0.0786 & 2.70 & 0.0808 & -1\% & 1.43 & 0.0429 & 22\% \\
 & Anorthite & CaAl\({}_{2}\)Si\({}_{2}\)O\({}_{8}\) & 2.73 & 0.0768 & 2.99 & 0.0840 & -3\% & 1.53 & 0.0429 & 21\% \\
 & Nepheline & NaAlSiO\({}_{4}\) & 2.59 & 0.0747 & 2.84 & 0.0820 & -3\% & 1.44 & 0.0414 & 22\% \\
Pyroxene & Wollastonite & CaSiO\({}_{3}\) & 2.93 & 0.0760 & 2.91 & 0.0755 & 0\% & 1.45 & 0.0375 & 26\% \\
 & Diopside & CaMgSi\({}_{2}\)O\({}_{6}\) & 3.40 & 0.0946 & 2.97 & 0.0827 & 5\% & 1.46 & 0.0405 & 33\% \\
 & Enstatite & Mg\({}_{2}\)Si\({}_{2}\)O\({}_{6}\) & 3.20 & 0.0960 & 3.05 & 0.0913 & 2\% & 1.47 & 0.0441 & 30\% \\
 & Ferrosilite & Fe\({}_{2}\)Si\({}_{2}\)O\({}_{6}\) & 3.95 & 0.0902 & 3.82 & 0.0872 & 1\% & 2.15 & 0.0491 & 22\% \\
Olivine & Forsterite & Mg\({}_{2}\)SiO\({}_{4}\) & 3.27 & 0.0980 & 3.21 & 0.0960 & 1\% & 1.46 & 0.0438 & 31\% \\
 & Fayalite & Fe\({}_{2}\)SiO\({}_{4}\) & 4.39 & 0.0908 & 4.64 & 0.0900 & 0\% & 2.48 & 0.0512 & 21\% \\
Oxides & Ilmenite & FeTiO\({}_{3}\) & 4.72 & 0.0937 & 4.83 & 0.0959 & -1\% & 2.54 & 0.0504 & 23\% \\
 & Quartz & SiO\({}_{2}\) & 2.65 & 0.0797 & 2.65 & 0.0797 & 0\% & 1.51 & 0.0454 & 21\% \\
Sulfides & Troilite & FeS & 4.61 & 0.0632 & 4.61 & 0.0632 & 0\% & 3.89 & 0.0533 & 6\% \\
 & Niningerite & MgS & 2.68 & 0.0573 & 2.68 & 0.0573 & 0\% & 1.91 & 0.0408 & 12\% \\
 & MnS & MnS & 3.99 & 0.0552 & 3.99 & 0.0552 & 0\% & 3.80 & 0.0526 & 2\% \\
 & CrS & CrS & 4.89 & 0.0701 & 4.89 & 0.0701 & 0\% & 3.70 & 0.0530 & 10\% \\
 & TiS & TiS & 3.85 & 0.0580 & 3.85 & 0.0580 & 0\% & 3.07 & 0.0462 & 8\% \\
 & CaS & CaS & 2.59 & 0.0432 & 2.59 & 0.0432 & 0\% & 1.74 & 0.0290 & 14\% \\
Accessories & Spinel & MgAl\({}_{2}\)O\({}_{4}\) & 3.64 & 0.1078 & 3.77 & 0.1115 & -1\% & 1.58 & 0.0468 & 32\% \\
 & Chromite & FeCr\({}_{2}\)O\({}_{4}\) & 4.79 & 0.0902 & 5.29 & 0.0996 & -3\% & 2.88 & 0.0543 & 18\% \\ \hline \end{tabular}
Note. – Differences in mean free path lengths (\(\mu=\rho^{-1/3}\)) are calculated as \(\Delta\mu=\mu/\mu_{\rm ref}-1\). The density short forms are: \(\rho_{\rm ref}\) – mass densities and atomic densities based on typical mineral densities found on webmineral (see also, e.g., Deer et al., 1992); \(\rho_{\rm compounds}\) – densities calculated from tabulated pure-compound (oxide and sulfide) data; \(\rho_{\rm atomic}\) – densities calculated from the atomic data included in the tables of SDTrimSP, which are based on mono-atomic solids.
\end{table}
Table 1: Major rock-forming (volcanic) minerals required to represent an unknown planetary surface
### Fitting the simulated data
The modeled sputter yield by element and mass is fitted using an Eckstein fit based on the Yamamura et al. (1983) formula (Eckstein & Preuss, 2003):
\[\begin{split} Y(\upalpha)=& Y(0)\left\{\cos\left[ \left(\frac{\upalpha}{\upalpha_{0}}\frac{\pi}{2}\right)^{c}\right]\right\}^{-f} \times\\ &\exp\left\{b\left(1-1\middle/\!\cos\left[\left(\frac{\upalpha}{ \upalpha_{0}}\frac{\pi}{2}\right)^{c}\right]\right)\right\},\end{split} \tag{12}\]
with the fitting parameters \(b\), \(c\), and \(f\) and the angle of incidence \(\upalpha\). The value for \(\upalpha_{0}\) is chosen as \(\pi/2\) instead of being calculated by
\[\upalpha_{0}=\pi-\arccos\sqrt{\frac{1}{1+E_{0}/E_{sp}}}\geq\frac{\pi}{2}, \tag{13}\]
because the projectile binding energy \(E_{sp}\) would otherwise have to be known or assumed; for typical solar wind energies \(E_{0}\) in the keV range and \(E_{sp}\) in the eV range, this choice causes only minor deviations from \(\upalpha_{0}=\pi/2\).
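As a sketch of how Eq. (12) is used in practice, the following Python function implements the fit model with \(\upalpha_{0}\) fixed to \(\pi/2\); the parameter handling (and the suggested use of scipy.optimize.curve_fit) is our own illustration, not the actual fitting code.

```python
import numpy as np

def eckstein_yield(alpha, y0, b, c, f):
    """Angular dependence of the sputter yield, Eq. (12), with alpha0 = pi/2
    (Eckstein & Preuss 2003; Yamamura et al. 1983). alpha in radians."""
    alpha0 = np.pi / 2
    t = np.cos((alpha / alpha0 * np.pi / 2) ** c)
    return y0 * t ** (-f) * np.exp(b * (1.0 - 1.0 / t))

# Example: fit modeled yields Y at incidence angles `ang` (both assumed given):
# from scipy.optimize import curve_fit
# popt, _ = curve_fit(eckstein_yield, ang, Y, p0=(Y[0], 1.0, 1.0, 1.0))
```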
For the angular distribution of the sputtered particles, the data are fitted using an adapted cosine fit function after Hofsäss & Stegmaier (2022) to take the non-symmetrical nature of the sputtered particles into account. The system of equations is as follows:
\[f(\phi)=\begin{cases}A\,\cos^{m}\left(\frac{\pi}{2}\left(\frac{\pi+2\phi}{\pi+ 2\phi_{\rm tilt}}-1\right)\right)&\phi\leq\phi_{\rm tilt}\\ A\,\cos^{n}\left(\frac{\pi}{2}\left(1-\frac{\pi-2\phi}{\pi-2\phi_{\rm tilt}} \right)\right)&\phi\geq\phi_{\rm tilt},\end{cases} \tag{14}\]
with the scaling factor \(A\), the tilt angle \(\phi_{\rm tilt}\), the exponents \(m\) and \(n\), and the angle \(\phi\).
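A direct transcription of Eq. (14) into Python (our own sketch, with angles in radians) makes the piecewise structure and the continuity at \(\phi_{\rm tilt}\) explicit:

```python
import numpy as np

def tilted_cosine(phi, A, phi_tilt, m, n):
    """Asymmetric cosine fit of the polar angular distribution, Eq. (14),
    after Hofsäss & Stegmaier (2022); phi, phi_tilt in radians, |phi| < pi/2."""
    phi = np.asarray(phi, dtype=float)
    left = A * np.cos(np.pi / 2 * ((np.pi + 2 * phi) / (np.pi + 2 * phi_tilt) - 1)) ** m
    right = A * np.cos(np.pi / 2 * (1 - (np.pi - 2 * phi) / (np.pi - 2 * phi_tilt))) ** n
    return np.where(phi <= phi_tilt, left, right)

# Both branches reach the maximum A at phi = phi_tilt, so f is continuous there.
```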
The energy distribution data are fitted using a Thompson distribution (Thompson, 1968),
\[f(E)=S\frac{E}{(E+E_{0})^{3}}, \tag{15}\]
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{5}{c}{SDTrimSP model presets} & \multicolumn{2}{c}{manually set models} \\ \cline{2-8} & SB & SB-C & BB & BB-C & HB-C & BB\({}_{0}^{*}\) & HB\({}^{*}\) \\ \hline \(SBE\) & \(\Delta H_{\rm sub}\) & \(\Delta H_{\rm sub}\) & 0 & 0 & \(\Delta H_{\rm sub}\) & 0 & \(\Delta H_{\rm sub}\) \\ \(BBE_{\rm f}^{\dagger}\) & 0 & 0 & \(\Delta H_{\rm sub}\) & 0 & 0 & \(\Delta H_{\rm sub}\) + CBE & CBE \\ \(BBE_{\rm b}^{\dagger}\) & - & 0 & - & \(\Delta H_{\rm sub}\) + CBE & CBE & - & - \\ \(\rho_{\rm f}\) & atomic & atomic & atomic & atomic & atomic & atomic & atomic \\ \(\rho_{\rm b}\) & - & compound & - & compound & compound & - & - \\ \(E_{\rm cutoff}\) & \(<\)\(\Delta H_{\rm sub}\) & \(<\)\(\Delta H_{\rm sub}\) & \(\Delta H_{\rm sub}/3\) & \(\Delta H_{\rm sub}/3\) & \(<\)\(\Delta H_{\rm sub}\) & \(\Delta H_{\rm sub}/3\) & \(<\)\(\Delta H_{\rm sub}\) \\ isbv & 1 & 1 & 8 & 8 & 4 & 1 & 1 \\ \hline \end{tabular} Note. – Short forms: SBE – surface binding energy; BBE – bulk binding energy; f – ‘free’, un-bound atom; b – compound-bound atom; CBE – chemical binding energy: \(\Delta H_{f}/(m+n)\), where m and n are the numbers of cations and anions in a compound; \(E_{\rm cutoff}\) – cutoff energy; \(\Delta H_{\rm sub}\) – enthalpy of sublimation; \(\Delta H_{f}\) – enthalpy of formation of the binary compound; isbv – model number in SDTrimSP input files.
\({}^{*}\)Each component is considered un-bound with regard to its density and bound with regard to the BBE (CBE assigned). The BB\({}_{0}\) model is the original Hofsäss & Stegmaier (2022) model. The HB model is only used to demonstrate the effect of density independent of the hybrid binding energy model.
\({}^{\dagger}\)For O, \(\Delta H_{\rm sub}\) is neglected and only CBE is used as a BBE, if any.
\end{table}
Table 2: The different energy and density models and their parameters
with a scaling factor \(S\), the energy \(E_{0}\) removed from the sputtered atom before it escapes the surface (approximately the SBE when considering a pure SB model), and the energy \(E\) of the sputtered atom. The energy peak is located at \(E\approx E_{0}/2\).
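The peak position quoted above follows directly from Eq. (15); the short check below (our own illustration) confirms it numerically:

```python
import numpy as np

def thompson(E, S, E0):
    """Thompson energy distribution, Eq. (15); maximum at E = E0/2."""
    return S * E / (E + E0) ** 3

E = np.linspace(0.01, 50.0, 50000)
E0 = 2.0
E_peak = E[np.argmax(thompson(E, 1.0, E0))]
assert abs(E_peak - E0 / 2) < 1e-2   # d/dE [E/(E+E0)^3] = 0  =>  E = E0/2
```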
## 3 Results
The validity of any newly suggested model can ultimately only be verified through experimental data focusing on the speciation of the sputtered material as well as on its angular and energy distributions. For now, we can only compare experimental sputter yield data in mass per impinging ion (amu/ion) and their angular distribution with the model outputs. The composition of the modeled yield is stoichiometric. Lighter species are initially sputtered over-stoichiometrically; with increasing fluence and decreasing abundance of the light species, the sputter yield composition approaches the initial sample stoichiometry, which evidently no longer corresponds to the surface composition in equilibrium. The laboratory data correspond to fluences where this irradiation equilibrium is reached. For the scope of this work, we assume that the laboratory yield composition is indeed stoichiometric.
### HB-C model and experimental data
We first present the capabilities of the newly implemented hybrid binding energy model including the compound model (HB-C). The Szabo et al. (2020) approach and the HB-C model yield largely identical mass yields and reproduce the experimental data reasonably well (Fig. 2). The largest discrepancies lie in the angular and energy distributions. A high SBE increases the refraction occurring at the surface and therefore broadens the angular distribution. We show this behavior in Fig. 3, where the Szabo et al. (2020) approach--with the highest SBEs of all model results shown in this work--leads to the largest tilt angle (\(27^{\circ}\) at an angle of incidence of \(45^{\circ}\)) and the broadest angular distribution of all models (exponents \(m=4.9\) and \(n=1.4\) for He\({}^{+}\) on wollastonite). The homogeneous, atom-insensitive energy distribution of the Szabo et al. (2020) approach is the consequence of using an identical SBE for each species (Fig. 3).
### All model comparison
In Fig. 4 we compare the HB-C model with the other models against the experimental sputter yield data of wollastonite and enstatite. The experimental data lie between the HB-C and the HB model; the latter does not differentiate between bound and unbound species in the sample. Most relevant is that the HB-C model reproduces the experimental data at normal incidence and close to normal incidence (\(<45^{\circ}\)).
#### 3.2.1 Angular distributions
We compare the experimental angular distributions of Biber et al. (2022) with the modeled data for enstatite in Fig. 5. The best agreement with the experimental data is obtained with the HB model, which expresses the strongest degree of forward sputtering (largest tilt angle) due to the high binding energy of each species in the sample. The cases with lower or no BBE--this includes the unbound species within the HB-C model--clearly show a drastically reduced degree of forward sputtering compared to the HB model. Angular distribution data of TRIM are not shown, as they are even narrower than those of the BB model (Fig. 1 in Hofsäss & Stegmaier, 2022).
#### 3.2.2 Energy distributions
Although no experimental data exist for the irradiated enstatite, we present the modeled energy distributions of the sputter ejecta in Fig. 6. The SB and SB-C models show nearly identical energy distributions, whilst the HB and HB-C models produce fewer low-energy particles and thus broader peaks. The more prominent high-energy tail of sputtered particles in the HB model is due to the species experiencing large BBEs at any degree of applied fluence. In comparison, the compound model (HB-C) can build up free Mg, which is subsequently sputtered without having to overcome a BBE. This in turn increases the number of low-energy Mg atoms in the energy distribution, which lies closer to that of the SB-C model. This manifests in the Mg energy distribution peaking at 0.9 eV in the HB-C model, compared to 0.6 eV in the SB models and 1.8 eV in the HB model.
## 4 Discussion
We were able to confirm that it is of utmost importance to properly set the density of the irradiated sample. It is evident in Fig. 4 that, under normal incidence, the HB-C model--which reproduces the mineral density--fits the experimental data best for both the H\({}^{+}\) and He\({}^{+}\) irradiation results.
The experimental data of the H-irradiated wollastonite thin film deviate significantly from the SDTrimSP predictions for the flat-surface sputter behavior; this has so far not been explained (Szabo et al., 2018). Nevertheless, all the experimental data in Fig. 4 show good agreement with the HB-C model close to normal incidence and up to at least 45\({}^{\circ}\). This is relevant for approximating the irradiation of realistic, rough surfaces, because yield enhancements between a flat and a rough surface are generally small for incidence angles below 45\({}^{\circ}\) (Küstner et al., 1998; Biber et al., 2022). This is not because impacts realistically occur at normal incidence in nature, but because surface roughness leads to locally reduced incidence angles for shallowly impinging ions and therefore to flattened mass yield distributions. This is discussed in Biber et al. (2022) for enstatite irradiation experiments and was previously shown for rough Bo and Be surfaces (Gauthier et al., 1990; Roth et al., 1991; Küstner et al., 1999).
### Angular distribution
We observed that no model can completely recreate the large polar tilt angle seen in the experimental data (Fig. 5). The model that comes closest is the HB model, whose large BBEs lead to a rapid loss of energy with each recoil. The increased binding energy thus dampens long collision cascades and gives primary knock-on collisions (i.e., Fig. 2.6 in Behrisch & Wittmaack, 1991) a higher significance in the angular distribution of the sputtered material. The more random ejecta from long collision cascades, which would lead to ejecta distributions close to the surface normal, are reduced. As a consequence, the tilt of the angular distribution increases. This behavior has also been observed for binary alloys, both experimentally and through MD simulations. There, atoms sputtered from the second atomic layer form angular distributions towards the surface normal, whereas first-layer-emitted atoms have a broad distribution (Schwebel et al., 1987; Whitaker et al., 1993; Gnaser, 1999). In all but the HB and HB-C models, components with low BBEs (if any) exist at the irradiation equilibrium. Energy loss within the sample is
Figure 2: The agreement of the initial approach used to fit the experimental data (Szabo et al., 2020) with the HB-C model is shown, including TRIM model results (Biersack & Eckstein, 1984). The abbreviations are: HB – surface binding energy (SBE) based on the heat of sublimation and bulk binding energy based on the enthalpy of formation; C – densities calculated based on compound densities with differentiation between unbound and bound species. Szabo et al. (2020) used an averaged SBE of all components after increasing the O\({}_{\rm{SBE}}\) to 6.5 eV. To reach the proper wollastonite density, they increased the O atomic density accordingly.
therefore less significant, which reduces the contribution of first-layer-emitted atoms and causes a near-circular plume of ejecta closer to the surface normal.
The width of the angular distribution, quantified by the cosine fit exponents (\(m\) and \(n\), Fig. 5), is also tied to the surface binding energy. In all modeling approaches but those of Szabo et al. (2020) and Hofsäss & Stegmaier (2022), the SBEs used are identical and the exponents are therefore comparable. The BB model is the narrowest (no surface potential, no refraction) and results in the lowest tilt angle, with a visible forward-sputter contribution that is not able to significantly affect the tilt of the distribution. Both the HB-C and especially the HB model lead to a larger tilt by preventing randomly distributed, low-energy particles from leaving the surface and thus favoring forward-facing ejecta, which is observed as a peak around \(-60^{\circ}\). Towards increasing incidence angles relative to the surface normal (\(>45^{\circ}\), not shown), the number of single knock-on recoils increases independently of the chosen model, enhancing the
Figure 3: Modeled angular distribution of the total sputter yield (data in grey, fit in orange) and energy distributions of the sputter ejecta from wollastonite irradiated by 4 keV He\({}^{+}\). The energy in the legend corresponds to the peak energy of the Thompson fit function. Szabo et al. (2020) increased the O surface binding energy (SBE) to 6.5 eV, averaged the SBEs of all elements, and increased the O density to reach the initial wollastonite density. The large surface binding energy causes a high degree of surface scattering of the ejected particles, whereas the averaging of the binding energies leads to an identical energy distribution for all species. The HB-C model uses both the SBE and the bulk binding energy to achieve an increase in binding energy whilst reliably reproducing mineral densities based on oxide compound data and differentiating between compound-bound and un-bound atoms.
Figure 4: SDTrimSP model results compared to TRIM model results (red dash-dotted line; Biersack & Eckstein, 1984) and experimental data by Szabo et al. (2018) (H\({}^{+}\) on wollastonite), Szabo et al. (2020a) (He\({}^{+}\) on wollastonite) and Biber et al. (2022) (He\({}^{+}\) on enstatite). Near-ideal mineral densities are obtained in models taking compounds (-C) into account, whereas the atomic cases represent lower densities, about a factor of two below the compound-derived densities. Abbreviations and line styles: SB – dashed lines, tabulated enthalpy of sublimation as element surface binding energies; BB – dotted lines, tabulated enthalpy of sublimation as element bulk binding energies; HB – solid lines, tabulated enthalpy of formation as bulk binding energy and enthalpy of sublimation as surface binding energies; C – densities calculated based on compound densities and differentiation between compound-bound and un-bound atoms.
peak size of the forward-aligned ejecta. Local shallow incidence angles are unlikely to contribute to the sputtering of a realistic, rough and/or porous sample. This is motivated by the strong sputter yield decrease observed at shallow incidence, which is related to processes of shadowing and re-deposition (Küstner et al., 1999; Cupak et al., 2021; Biber et al., 2022; Szabo et al., 2022). For this reason, the forward-facing peak at shallow incidence angles is not expected to be present for the sputtering of regolith. Furthermore, its contribution to the total number of sputtered particles is negligible for non-shallow incidence angles.
Sample roughness could, in theory, be another cause of the discrepancy between the model and the experimental data. The surface of the enstatite glassy thin film was analyzed using an atomic force microscope and its roughness was deemed negligible (Biber et al., 2022). Furthermore, when compared to the angular distribution of a rougher surface,
Figure 5: Polar angular distributions of the total sputter yields from enstatite irradiated with 4 keV He\({}^{+}\) at an angle of 45\({}^{\circ}\), based on different model assumptions. The larger density prescribed by the compound model leads to a slightly narrower angular distribution—seen in the smaller \(m\) fit exponents of 2.9 and 3.9 of the cosine fit—when compared to the atomic-model \(m\) exponents of 3.1 and 4.3, respectively. If elements become un-bound with irradiation (HB-C model), the effect of a bulk binding energy (BBE) on the tilt angle is small compared to the SB model (\(+2.3^{\circ}\)). If elements remain bound and experience a constant BBE and surface binding energy (HB model), forward sputtering is more prominent (SB model tilt \(+6.2^{\circ}\)). Abbreviations: SB – tabulated enthalpy of sublimation as element surface binding energies; HB – tabulated enthalpy of formation as bulk binding energy and enthalpy of sublimation as surface binding energies; C – densities calculated based on compound densities and differentiation between compound-bound and un-bound atoms. Experimental data from thin-film irradiation (Biber et al., 2022) normalized to \(y_{\rm max}=1\) with an error of one standard deviation.
the thin-film angular distribution is nearly identical when normalized (Figs. 2 and 3 in Biber et al., 2022). Roughness is therefore unlikely to account for the discrepancy seen in Figure 5.
### Energy distribution
Energy distributions of particles from the SB models follow Thompson distributions with peak energies close to 1/2 of the SBEs used. The HB model, however, reaches peak energies that are approximately equal to the SBEs used (\(E_{s}(\rm Mg)=1.5\) eV, \(E_{s}(\rm O)=2.6\) eV, and \(E_{s}(\rm Si)=4.7\) eV), and the HB-C model shows elevated energies which are closer to SBE/2. At constant SBEs, the peaks of the energy distribution widen with increasing bulk binding energies (Fig. 6). Models which include a BBE show a shift towards larger energies with a broadening of the energy distribution, as low-energy particles are not reflected back into the sample. This behavior follows the O\({}_{2}\)-covered metal
Figure 6: Energy distributions of sputtered elements from enstatite irradiated with 4 keV He\({}^{+}\) at an angle of 45\({}^{\circ}\), based on different model assumptions. The energy in the legend corresponds to the peak energy of the Thompson fit function. Abbreviations: SB – tabulated enthalpy of sublimation as element surface binding energies; HB – tabulated enthalpy of formation as bulk binding energy and enthalpy of sublimation as surface binding energies; C – densities calculated based on compound densities and differentiation between compound-bound and un-bound atoms.
irradiation experiments performed by Dullni (1984) and Wucher & Oechsner (1986, 1988). The peak energies of the energy distributions, fitted by Thompson distributions, therefore do not correspond to the enthalpy of sublimation \(\Delta H_{s}\) of the atomic species but rather to the combination of the enthalpy of formation \(\Delta H_{f}\) of the oxide present with \(\Delta H_{s}\) (Fig. 3 in Dullni, 1984). The energy distribution broadening expected in a system where O\({}_{2}\) is present is thus recreated by both the HB and the HB-C model with the same underlying assumptions, making them a valuable addition to the SB and BB models, which cannot reproduce it. The results are also reminiscent of the broadening observed with increasing SBEs as in Morrissey et al. (2022), and the conclusion is the same: larger total binding energies lead to a larger high-energy fraction of the sputtered particles whilst reducing the number of ejected particles. In exospheres above solar-wind-exposed surfaces, the less abundant but more energetic particles would then be detectable farther from the surface.
#### 4.3.1 Inclusion of intermediary compounds
It becomes evident from Figure 6 that larger peak energies can be achieved if the atomic species remain in a bound condition. Within the scope of this work we did not explore the formation of possible intermediates. The current implementation will always break up the compound, and one of the products will continue to travel through the sample. If enough free elements are available, only the original oxide can form, and the model--for the example of SiO\({}_{2}\)--is therefore limited to:
\[\text{SiO}_{2}\xleftarrow{}\text{Si + O + O} \tag{16}\]
A more sophisticated model would need to include the following reactions:
\[\text{SiO}_{2}\xleftarrow{}\text{Si + O}_{2}\tag{17}\]
\[\text{SiO}_{2}\xleftarrow{}\text{SiO + O}\tag{18}\]
\[\text{SiO}\xleftarrow{}\text{Si + O}\tag{19}\]
\[\text{O}_{2}\xleftarrow{}\text{O + O}\tag{20}\]
\[\text{Si}_{2}\xleftarrow{}\text{Si + Si}\tag{21}\]
which would reduce the number of un-bound atoms in the sample. The resulting energy distribution would thus lie closer to the hybrid model (HB) where atomic species are considered to remain bound in their compounds. To fully simulate the process of amorphization we would need to know what drives the stability of the different products within a mineral in irradiation equilibrium.
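To make the bookkeeping limitation explicit, here is a deliberately simplified toy model of the compound inventory (our own illustration, not SDTrimSP source code): break-up always fully dissociates SiO\({}_{2}\) (Eq. 16), and re-formation can only restore the original oxide, never the intermediates of Eqs. (17)-(21).

```python
# Toy inventory of bound/free species under the current compound model.
# Not SDTrimSP code; it only illustrates why intermediates never appear.
inventory = {"SiO2": 100, "Si": 0, "O": 0}

def break_up(inv):
    """A recoil fully dissociates one SiO2 unit into free Si + 2 free O."""
    if inv["SiO2"] > 0:
        inv["SiO2"] -= 1
        inv["Si"] += 1
        inv["O"] += 2

def reform(inv):
    """Only Si + O + O -> SiO2 is allowed; SiO, O2 and Si2 are not tracked."""
    if inv["Si"] >= 1 and inv["O"] >= 2:
        inv["Si"] -= 1
        inv["O"] -= 2
        inv["SiO2"] += 1

break_up(inventory); break_up(inventory); reform(inventory)
print(inventory)   # {'SiO2': 99, 'Si': 1, 'O': 2}
```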
### Effect of increased SBE
To demonstrate the effect of an increased SBE, we compared the standard SB model and the newly implemented HB-C model with the results of Morrissey et al. (2022). As of now, SBEs are only available for Na in Na silicates with increasing coordination numbers (the number of O atoms neighboring Na). We therefore only compare results for albite (NaAlSi\({}_{3}\)O\({}_{8}\)) irradiated by 1 keV H\({}^{+}\) (Table 3). For a static computation of albite in SDTrimSP with an increased Na binding energy of \(E_{s}(\text{Na})=7.9\) eV, Morrissey et al. reported a yield of \(4.12\times 10^{-4}\) Na/ion at normal incidence. If SDTrimSP is run in dynamic mode, the yield at the irradiation equilibrium is increased by a factor of two, to \(7.90\times 10^{-4}\) Na/ion. Compared to the yields of the SB model (\(1.08\times 10^{-3}\) Na/ion) and the HB-C model (\(1.10\times 10^{-3}\) Na/ion), the dynamic Na yields with \(E_{s}(\text{Na})=7.9\) eV differ by 30%. The similarity of the SB and HB-C equilibrium yields arises because free Na atoms in the HB-C model behave identically to Na in the SB model. Since Na\({}_{2}\)O has the lowest enthalpy of formation, bound Na in the HB-C model is not prioritized in forming bonds with free O, causing an accumulation of free Na in the surface layer at irradiation equilibrium. The increase in density and BBE imbued in the HB-C model therefore does not apply to Na at the irradiation equilibrium, as no surface Na\({}_{2}\)O exists. The energy peak of the Morrissey approach (\(E_{s}(\text{Na})=7.9\) eV) is, as expected, around 4 eV (approximately SBE/2 = 7.9/2), with the tilt angle exceeding the results of both the SB and the HB-C model by a factor of two and a wide distribution, as given by the large fit exponents (\(m\) and \(n\)). In conclusion, the effect of increasing the SBE of Na is apparent not only in the actual yields (-30%) but also in the angular and energy distributions.
Both the angular and the energy distributions of sputtered minerals depend on the chosen surface and bulk binding energies. Extensive experiments that properly discriminate between the different sputtered species and measure their energy distributions would be highly valuable for constraining surface and bulk binding energies. Obtaining energy distributions would give needed insight into the energy-peak broadening occurring on minerals. If these were available, further restrictions on realistic binding energies could be enforced, whereby SBEs define the energy peak position and width and BBEs act as a 'broadening agent' that further enhances the peak widths. As a side effect, increasing and/or shifting binding energy between the SBE and BBE could achieve the desired forward tilt of the sputtered material whilst not degrading the agreement in total mass yields.
It would be pleasing, although unlikely, if experimental energy and angular distributions could be recreated based solely on tabulated thermodynamic data. Nevertheless, we expect SBEs to be larger than tabulated, as demonstrated for an ideal, intact crystal lattice in MD by Morrissey et al. (2022). Using one single SBE might, however, not be appropriate to describe an altered sample. SBEs at various degrees of alteration would be necessary to understand the evolution of the SBE with an increasing level of amorphization. The correlation of SBE with coordination number shown by Morrissey is reminiscent of the SBE dependence on the degree of amorphization, and a similar behavior is expected for the surfaces of irradiated samples (Loeffler et al., 2009; Biber et al., 2022). One should, however, refrain from adjusting the SBE like a fit parameter to best reproduce experimental data. For now, we propose the use of the HB-C model for recreating experimental mass changes, with the enthalpy of sublimation as the SBE and the enthalpy of formation of the mineral-forming compounds as the BBE.
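The proposed recipe can be summarized in a small helper function (a sketch under our own naming; the \(\Delta H\) values must come from tabulated thermodynamic data, cf. Table 2):

```python
def hbc_energies(dH_sub, dH_f, n_cations, n_anions, bound=True):
    """HB-C energy assignment (Table 2): SBE from the enthalpy of sublimation,
    BBE of a compound-bound atom from the chemical binding energy
    CBE = dH_f / (m + n); free atoms carry no bulk binding energy.
    (Per Table 2, O is a special case: only the CBE enters its BBE.)"""
    sbe = dH_sub                            # surface binding energy (eV)
    cbe = dH_f / (n_cations + n_anions)     # chemical binding energy (eV)
    bbe = cbe if bound else 0.0             # bulk binding energy (eV)
    return sbe, bbe
```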
## 5 Conclusions
We introduced a hybrid binding energy model in the binary collision approximation (BCA) code SDTrimSP with an underlying compound model that combines tabulated data for surface binding energies (SBE), bulk binding energies (BBE), and densities of mineral samples whilst differentiating between free and compound-bound components. Compared with previous modeling approaches, we offer an alternative that further minimizes the number of free parameters and reproduces experimental data well. The new compound hybrid model (HB-C) merges the pure surface binding (SB) and bulk binding (BB) models while reproducing mineral properties. This includes proper mineral densities through tabulated compound data, but also combined surface and bulk binding energies, which lead to increased energy loss within the collision cascade, causing the energy-peak broadening expected in an O-dominated system (e.g., Dullni, 1984).
Although the differences between the SB and the HB-C model seem minor, the model infrastructure allows for further inclusions that are reasonable in terms of mineralogy and physics. Furthermore, comparisons with experimental
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \(E_{s}\)(Na) & \(Y_{\rm Na}\) & \(\phi_{\rm tilt}(45^{\circ})\) & \(m\), \(n\) \\ & [eV] & [\(10^{-3}\frac{\rm at}{\rm ion}\)] & [\({}^{\circ}\)] & [1] \\ \hline M22\({}^{a}\) & 7.9 & 0.41 & - & - \\ SB & 7.9 & 0.79 & 34.4 & 5.1, 1.5 \\ SB & 1.1 & 1.08 & 16.1 & 3.0, 2.0 \\ HB-C & 1.1 & 1.10 & 18.9 & 3.9, 2.3 \\ \hline \end{tabular} Note. –\({}^{a}\)Computed in static mode; \(Y_{\rm Na}\) – sodium sputter yield; \(E_{s}\)(Na) – surface binding energy of sodium; \(\phi_{\rm tilt}(45^{\circ})\) – angular distribution tilt angle at an ion incidence angle of \(45^{\circ}\) relative to surface normal; \(m\), \(n\) – cosine fit exponents
\end{table}
Table 3: Effect of an increased sodium surface binding energy on total yield and angular distribution from simulating 1 keV H\({}^{+}\) irradiation on albite (NaAlSi\({}_{3}\)O\({}_{8}\))
sputter yields result in unprecedented agreement between 0\({}^{\circ}\) (normal incidence) and 45\({}^{\circ}\), a range which is of special interest for modelers who require sputter yields as inputs. The HB-C model is thus convincing on the following points: 1) good agreement with existing experimental data in the parameter spaces relevant to exosphere modelers; 2) it corrects the underestimation of the default sample density computation based on atomic densities by using tabulated densities of compounds instead; 3) it sets surface binding energies and bulk binding energies based on the tabulated enthalpy of sublimation and the enthalpy of formation of compounds, respectively, which allows for a universal application to minerals; 4) it does not require setting parameters such as the SBE, BBE, density, and cut-off energy (surface-binding-model four, isbv = 4, in SDTrimSP), therefore greatly increasing the ease of use. For the time being, the HB-C model does an exemplary job in recreating experimental sputter data whilst producing reasonable energy and angular distributions of the ejecta.
Financial support has been provided by the Swiss National Science Foundation Fund (200021L_182771/1) as well as the Austrian Science Fund FWF (Project No. I 4101-N36) and by KKKÖ (Commission for the Coordination of Fusion Research in Austria at the Austrian Academy of Sciences ÖAW). The authors gratefully acknowledge support from NASA's Solar System Exploration Research Virtual Institute (SSERVI) via the LEADER team, Grant #80NSSC20M0060.
SDTrimSP (Mutzke et al., 2019), TRIM (in SRIM package) (Biersack & Eckstein, 1984; Ziegler et al., 2010)
## Appendix A Averaging the surface binding energies
If we assume, as in Szabo et al. (2020), that the binding energy that has to be overcome depends solely on the number of bonds with O, called the coordination number, the SBE of any component would be a function of the O content of the sample. A way to simulate this effect of the coordination number of the atoms is to assume an averaged binding energy, i.e., a concentration-weighted mean over all species present in the compound. In SDTrimSP, this is implemented as surface-binding-model two (isbv = 2, Mutzke et al., 2019):
\[SBE=\sum q_{i}Es_{i},\] (A1)
where \(q_{i}\) is the concentration and \(Es_{i}\) the SBE of component \(i\). This results in a single SBE for all components and therefore for the compound. This was applied in Szabo et al. (2020), in addition to the density correction, to best fit the wollastonite (CaSiO\({}_{3}\)) data. To illustrate the effect, let us assume an increased \(Es_{O}\) of 6.5 eV (Szabo et al., 2020) and compare it to the default \(Es_{O}\) of 2.58247 eV. For nepheline (NaAlSiO\({}_{4}\)), this would result in an average \(Es\) of 5.03 eV for all species instead of 2.79 eV, with
\[\begin{split} q_{Na}&=q_{Al}=q_{Si}=1/7\\ q_{O}&=4/7\\ Es_{Na}&=1.11\,\mathrm{eV}\\ Es_{Al}&=3.41\,\mathrm{eV}\\ Es_{Si}&=4.66\,\mathrm{eV}\\ Es_{O}&=2.58\,\mathrm{eV}\Rightarrow Es_{avg}=2.79\,\mathrm{eV}\\ Es_{O}&=6.50\,\mathrm{eV}\Rightarrow Es_{avg}=5.03\,\mathrm{eV}\end{split}\] (A2)
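These averages are straightforward to verify; a minimal check in Python (our own illustration of Eq. (A1)):

```python
# Concentration-weighted SBE of nepheline, NaAlSiO4 (Eq. A1, isbv = 2).
Es = {"Na": 1.11, "Al": 3.41, "Si": 4.66, "O": 2.58}     # eV
q = {"Na": 1 / 7, "Al": 1 / 7, "Si": 1 / 7, "O": 4 / 7}

avg_default = sum(q[el] * Es[el] for el in Es)           # ~2.79 eV
Es["O"] = 6.50                                           # raised O SBE
avg_raised = sum(q[el] * Es[el] for el in Es)            # ~5.03 eV
print(f"{avg_default:.2f} eV, {avg_raised:.2f} eV")
```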
At first glance, this seems to work, as the suggested SBE for Na in a pristine, crystalline mineral is about 4.8 eV based on molecular dynamics (MD) simulations (Morrissey et al., 2022). In the case of the major rock-forming mineral albite (NaAlSi\({}_{3}\)O\({}_{8}\); \(Es_{Na}=8.4\) eV, Morrissey et al., 2022), however, the isbv = 2 approximation with \(Es_{O}=6.5\) eV nets an average SBE of 5.4 eV, which does not reproduce the high Na binding energies suggested by MD. This suggests that adjusting SBEs based on a single component has its limits when it comes to simulating the bond strengths of complex mineral structures. |
2310.19082 | Valence band electronic structure of Nb2Pd1.2Se5 and Nb2Pd0.95S5
superconductors | We present a comparative study of our valence band photoemission results on
Nb2Pd1.2Se5 and Nb2Pd0.95S5 superconductors which is supported by our DFT based
electronic structure calculations. We observe that the VB spectra of both the
compounds are qualitatively similar, except slight difference in the binding
energy position of all features between the two compounds which could be the
result of different electronegativity of Se and S atom. The calculated density
of states reveal that the VB features are mainly composed of Pd Se S hybridized
states. The nature of DOS originating from the distinctly coordinated Pd atoms
is different. Further, the involvement of the various Pd 4d and Nb 4d states in
crossing of Fermi level signifies the multiband character of these compounds.
In addition, we find a temperature dependent pseudogap in Nb2Pd0.95S5 which is
absent in Nb2Pd1.2Se5. | H. Lohani, P. Mishra, R. Goyal, V. P. S. Awana, B. R. Sekhar | 2023-10-29T17:06:17Z | http://arxiv.org/abs/2310.19082v1 | Valence Band Electronic Structure of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) Superconductors
###### Abstract
We present a comparative study of our valence band photoemission results on Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) superconductors, supported by our DFT-based electronic structure calculations. We observe that the VB spectra of both compounds are qualitatively similar, except for a slight difference in the binding energy positions of all features between the two compounds, which could result from the different electronegativities of the Se and S atoms. The calculated density of states (DOS) reveals that the VB features are mainly composed of Pd-Se/S hybridized states. The nature of the DOS originating from the distinctly coordinated Pd atoms is different. Further, the involvement of various Pd-4d and Nb-4d states in the crossing of the Fermi level (E\({}_{f}\)) signifies the multiband character of these compounds. In addition, we find a temperature-dependent pseudogap in Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) which is absent in Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\).
keywords: UV photoelectron spectroscopy, ternary superconductors, electronic structure calculation. PACS: 74.25.Jb, 74.70.Dd, 71.20.Be
## 1 Introduction
The superconducting state of matter remains an exciting and extensively studied topic. Recently discovered superconductors (SCs), such as the Fe-pnictides[1], Fe-chalcogenides[2], MgB\({}_{2}\)[3], SrRuO\({}_{4}\)[4], and organic SCs[5; 6], have added new concepts, such as multiband effects, admixtures of spin-singlet and spin-triplet pairing, and spin-orbit coupling (SOC) effects, to the understanding of superconducting phenomena. Among them, the Pd-based ternary chalcogenides, such as Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\)[7; 8], Nb\({}_{2}\)PdSe\({}_{5}\)[9], Ta\({}_{2}\)PdS\({}_{5}\)[10], Ta\({}_{2}\)Pd\({}_{0.97}\)S\({}_{6}\)[11], Ta\({}_{2}\)Pd\({}_{0.97}\)Te\({}_{6}\)[12] and Ta\({}_{4}\)Pd\({}_{3}\)Te\({}_{16}\)[13], show peculiar behaviors arising from their low-dimensional structure and strong SOC.
Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) are isomorphic and have a superconducting critical temperature T\({}_{c}\)\(\sim\) 6 K[7; 9]. Although Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) shows Fermi liquid behavior at low temperatures, the values of the Sommerfeld coefficient estimated for Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\), 15.7 and 32 mJ/mol-K\({}^{2}\) respectively, indicate moderately and strongly coupled electronic interactions in them, respectively. Heat capacity measurements have shown signatures of multiband superconducting behavior in both compounds, which has been well described by the two-band \(\alpha\) model. The calculated Fermi surfaces (FS) of both compounds exhibit sheets of electron and hole character of different dimensions. Nesting between these 1-D-like FS sheets is thought to give rise to various density-wave orderings, such as charge density waves (CDW) and spin density waves (SDW), in these systems[23; 8]. These interesting behaviors could originate from the complex atomic packing, as the hybridization strength among different atomic orbitals mainly depends on the structural geometry.
Theoretical investigations of the electronic structure of some of these superconducting transition metal chalcogenides, such as Nb\({}_{2}\)PdSe\({}_{5}\)[9], Nb\({}_{2}\)PdS\({}_{5}\)[8], Ta\({}_{2}\)PdS\({}_{5}\)[17], and Ta\({}_{4}\)Pd\({}_{3}\)Te\({}_{16}\)[23], confirm the multiband nature of these compounds. However, to date no direct experimental studies, such as photoemission, have been reported on the valence band (VB) electronic structure of this class of SCs. In this paper, we present a comparative study of our angle-integrated ultraviolet photoemission results, supported by our DFT-based calculations, on Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\). Our measurements show that the VB spectra of both compounds are qualitatively similar, with a slight difference in the binding energy positions of all features between the two compounds, which could originate from the different electronegativities of the Se and S atoms. The calculated density of states (DOS) shows that the VB features are mainly composed of Pd-Se/S hybridized states. The different nature of the DOS originating from the differently coordinated atoms signifies an important role of the complex structural geometry and of multiband effects in the electronic structure of these compounds.
## 2 Experimental and calculation details
Polycrystalline samples of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) were synthesized via a solid state reaction route. Their structural and physical properties were studied and reported earlier[7; 9]. Photoelectron spectroscopic studies were performed in angle-integrated mode using a hemispherical SCIENTA-R3000 analyzer and a monochromatized He source (SCIENTA-VUV5040). The photon flux was of the order of \(10^{16}\) photons/s/steradian with a beam spot of 2 mm in diameter. The Fermi energy (E\({}_{f}\)) for all the spectra was calibrated by measuring the E\({}_{f}\) of a freshly evaporated Ag film on the sample holder. The total energy resolution, estimated from the width of the Fermi edge, was about 27 meV for the He I excitation. All photoemission measurements were performed inside the analysis chamber under a base vacuum of \(\sim 3.0\times 10^{-10}\) mbar. The polycrystalline samples were repeatedly scraped using a diamond file inside the preparation chamber, with a base vacuum of \(\sim 5.0\times 10^{-10}\) mbar, and the spectra were taken within 1 hour so as to avoid any surface degradation. All measurements were repeated many times to ensure the reproducibility of the spectra. For the temperature-dependent measurements, the samples were cooled by pumping liquid nitrogen through the sample manipulator fitted with a cryostat. Sample temperatures were measured using a silicon diode sensor touching the bottom of the stainless steel sample plate. The low-temperature photoemission measurements at 77 K were performed immediately after cleaning the sample surfaces, following the room-temperature measurements.
First-principles calculations were performed using the plane wave basis set of Quantum Espresso (QE)[24]. The many-electron exchange-correlation energy was approximated by the Perdew-Burke-Ernzerhof (PBE) functional [25; 26; 27], employing scalar-relativistic, ultrasoft pseudopotentials[28]. A fine mesh of k-points with Gaussian smearing of the order of 0.0001 Ry was used for the Brillouin zone integration. The kinetic energy and charge density cut-offs were set to 180 Ry and 900 Ry, respectively. Experimental lattice parameters and atomic coordinates of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\)[9] and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\)[7], relaxed under damped (Beeman) dynamics with respect to both the ionic coordinates and the lattice vectors, were used in the calculations.
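For readers wishing to reproduce such a setup, a hedged sketch using ASE's Quantum ESPRESSO interface is shown below; only the functional class, cutoffs, and smearing are taken from the text, while the pseudopotential file names and the k-point grid are our own assumptions.

```python
# Sketch of the reported QE settings via ASE (not the authors' actual input):
# PBE ultrasoft pseudopotentials, 180/900 Ry cutoffs, Gaussian smearing
# of order 1e-4 Ry. File names and the k-grid below are assumptions.
from ase.calculators.espresso import Espresso

input_data = {
    "control": {"calculation": "scf"},
    "system": {
        "ecutwfc": 180,            # kinetic energy cutoff (Ry)
        "ecutrho": 900,            # charge density cutoff (Ry)
        "occupations": "smearing",
        "smearing": "gaussian",
        "degauss": 1.0e-4,         # Ry
    },
}
calc = Espresso(
    input_data=input_data,
    pseudopotentials={"Nb": "Nb.pbe-rrkjus.UPF",   # hypothetical file names
                      "Pd": "Pd.pbe-rrkjus.UPF",
                      "S": "S.pbe-rrkjus.UPF"},
    kpts=(8, 8, 4),                # assumed "fine mesh" of k-points
)
```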
## 3 Results and discussion
Fig. 1 shows the crystal structure of Nb\({}_{2}\)Pd(Se/S)\({}_{5}\), which crystallizes in a centrosymmetric structure with space group symmetry C\({}_{2/m}\) (#12). The lattice parameters of Nb\({}_{2}\)Pd(Se/S)\({}_{5}\) are a = 12.788/12.134 Å, b = 3.406/3.277 Å, c = 15.567/15.023 Å and \(\beta\) = 101.63\({}^{\circ}\)/103.23\({}^{\circ}\). Further details of the crystal structures regarding the Wyckoff positions, site symmetries, and fractional occupancies can be obtained from Refs. [9]/[7] for Nb\({}_{2}\)Pd(Se/S)\({}_{5}\). Both structures mainly comprise chains of Pd and Nb polyhedra with Se/S atoms. The Pd and Nb atoms are placed at two inequivalent sites with different coordination environments, denoted Pd1, Pd2 and Nb1, Nb2. The atoms denoted Pd1 are arranged in columns facing their square-planarly coordinated atoms and in octahedral coordination, whereas the Pd2 site has a distorted rhombohedral prismatic environment[29]. The Nb atoms form edge-sharing and face-sharing columns at the Nb1 and Nb2 sites, respectively, and exist in a trigonal prismatic coordination. In the packing of these Pd- and Nb-centered polyhedra, several octahedral and tetrahedral vacant sites are created. These vacancies lead to site-dependent variations in the interorbital hybridization of the transition metal atoms with their nearest-neighbour (nn) chalcogen atoms, owing to their different coordinations and bond lengths, which is consequently reflected in the electronic structure of both compounds.
Fig. 2(a) shows the valence band (VB) spectra of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) taken at the He I (21.2 eV) photon energy. In Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\), seven features are observed at binding energies (BE) E\({}_{b}\) = -0.27, -1.25, -1.99, -2.53, -3.27, -4.03 and -5.08 eV, marked A to G respectively. The features D, F and G are more intense than E and C, while the near-E\({}_{f}\) feature A appears as a hump-like structure. The VB spectrum of Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) is similar
to that of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\), except that all features occur at slightly higher BE, probably due to its higher resistivity, which could originate from the larger electronegativity of the S anion in comparison to the Se anion. In Fig. 2(b) the VB spectra of the two compounds taken at the He II (40.8 eV) excitation energy are presented. The He II spectra show a sharp enhancement of features A, D and E due to the change in the photoemission matrix elements with photon energy. The atomic photoionization cross section (\(\sigma\)) of Pd-4d relative to Se-4p (S-3p) is \(\sim\) 8 (14) and 146 (134) for He I and He II respectively[30]. This indicates that features A, D and E are mainly dominated by Pd-4d states in both compounds. For both He I and He II, the near-E\({}_{f}\) spectral weight is smaller in Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) than in Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\). This result is again consistent with the lower conductivity of Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\)[7] in comparison to Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\)[9].
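This feature assignment rests on weighting partial DOS by photoionization cross sections (I \(\propto\) \(\sigma\) \(\times\) DOS); the toy estimate below uses only the cross-section ratios quoted above, with placeholder DOS values of our own choosing:

```python
# Toy Gelius-type intensity estimate using the quoted ratios
# sigma(Pd-4d)/sigma(Se-4p) ~ 8 (He I) and ~146 (He II).
# The partial-DOS values are arbitrary placeholders, not calculated data.
sigma_ratio = {"He I": 8.0, "He II": 146.0}
dos = {"Pd4d": 1.0, "Se4p": 1.0}

for line, r in sigma_ratio.items():
    pd_fraction = r * dos["Pd4d"] / (r * dos["Pd4d"] + dos["Se4p"])
    print(f"{line}: Pd-4d fraction of intensity ~ {pd_fraction:.2f}")
# He I: ~0.89, He II: ~0.99 -> Pd-4d-derived features strengthen under He II.
```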
First-principles calculations have been performed on systems with comparable compositions (Nb\({}_{2}\)PdSe\({}_{5}\) and Nb\({}_{2}\)PdS\({}_{5}\)) in order to guide the interpretation of the experimental results. Fig. 3(a) and (b) show the calculated DOS along with the different atomic contributions for Nb\({}_{2}\)PdSe\({}_{5}\) and Nb\({}_{2}\)PdS\({}_{5}\), respectively. The strongly hybridized states of Pd-4d with their nn Se(S)-p dominate the VB in the BE range E\({}_{b}\) = -7.0 to -1.0 eV, whereas substantial states originating from Nb-4d and Se(S)-p hybridization are present in the near-E\({}_{f}\) region of the DOS. These Nb-4d and Se(S)-p hybridized states are also the major component in the conduction band (CB) region of the DOS of Nb\({}_{2}\)PdSe\({}_{5}\) (Nb\({}_{2}\)PdS\({}_{5}\)). The lower BE of the calculated peaks in Nb\({}_{2}\)PdSe\({}_{5}\), in comparison to those of Nb\({}_{2}\)PdS\({}_{5}\), agrees with the trend observed in the experimental VB spectra. The DOS of both compounds consists of various peak-like structures which are not well resolved in the experimental VB spectra; a possible reason is inelastic scattering of the photoelectrons. The calculated peaks which are close to the experimentally observed VB features are labeled with the same notation as used in the experimental VB spectra (Fig. 2). These results show that features E and D are mainly derived from Pd-4d states, which is consistent with the significant enhancement of these features due to the variation of \(\sigma\) of Pd-4d in the He II spectra.
As is clear from the DOS results above, the Pd-4d and Nb-4d states are strongly hybridized with the Se/S states. The hybridization can differ at the two distinct coordination sites Pd1 and Pd2 owing to their different structural environments. To provide more insight into the effects of the distinct coordination environments, the partial DOS of Pd is presented in Fig. 4. Fig. 4(a) and (b) show the Pd-4d states along with the different d-orbital contributions at the Pd1 and Pd2 sites, respectively, in Nb\({}_{2}\)PdSe\({}_{5}\). Fig. 4(c) and (d) show the same states in Nb\({}_{2}\)PdS\({}_{5}\). The large band width of the Pd-4d states seen in both compounds indicates significant hybridization between the Pd-4d and the nn anion orbitals. In Nb\({}_{2}\)PdSe\({}_{5}\), at the Pd1 site, the 4d\({}_{x^{2}-y^{2}}\) and 4d\({}_{xy}\) orbitals are directed towards the square-planarly coordinated nn Se atoms and show the largest separation between bonding (-4.81 eV and -4.28 eV) and antibonding (1.51 eV) states. The 4d\({}_{3z^{2}-r^{2}}\) and 4d\({}_{zx}\) orbitals are oriented towards the nn Pd and Nb atoms and show a comparatively smaller separation between bonding (-3.81 eV) and antibonding (0.27 eV) states. This indicates a weak hybridization, possibly due to the larger interatomic distances Pd1-Pd1 (3.406 Å) and Pd1-Nb1 (3.102 Å) in comparison to Pd1-Se (2.392 Å). On the other hand,
Figure 1: (a) Crystal structure of Nb\({}_{2}\)Pd(Se/S)\({}_{5}\), Pd and Nb atoms at two inequivalent sites are marked.
at the Pd2 site, the square-planar coordination of the nn Se atoms is aligned in the YZ plane (Fig. 1). Therefore, the 4d\({}_{yz}\) orbital is strongly hybridized with the nn Se, and the antibonding states of this orbital are predominant near E\({}_{f}\). States originating from the 4d\({}_{3z^{2}-r^{2}}\), 4d\({}_{xy}\) and 4d\({}_{x^{2}-y^{2}}\) orbitals are moderately hybridized, while the 4d\({}_{zx}\) states show an atomic-like character. Apart from the differences in the DOS of the individual 4d orbitals originating from the distinctly coordinated Pd atoms (Pd1 and Pd2), the largest peak of the Pd1-4d DOS lies at higher BE relative to the same peak in the Pd2-4d DOS. These differences could be related to different interorbital hybridization strengths due to the smaller distance of the nn Se atoms at the Pd1 site (2.392 Å) as compared to the Pd2 site (2.598 Å). In the case of Nb\({}_{2}\)PdS\({}_{5}\), the orbital characters of the different Pd 4d orbitals at both sites are similar to those of Nb\({}_{2}\)PdSe\({}_{5}\). However, the DOS structures of the various 4d orbitals of both the Pd1 and Pd2 atoms are slightly different from Nb\({}_{2}\)PdSe\({}_{5}\). This difference could be associated with a change in the hybridization strength due to the variation of the bond lengths between the two compounds. Comparing these results with the experimental findings shows that the higher-BE VB features G and F (Fig. 2) fall within the bonding states of Pd1 (4d\({}_{x^{2}-y^{2}}\) and 4d\({}_{xy}\)), while the 4d\({}_{3z^{2}-r^{2}}\) (-1.65 eV) and 4d\({}_{x^{2}-y^{2}}\) (-1.35 eV) states of Pd2 are close to the experimentally observed features C and B, respectively (Fig. 2), in Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\). Likewise, the energy positions of the differently coordinated Pd1 (4d\({}_{x^{2}-y^{2}}\) and 4d\({}_{xy}\)) and Pd2 (4d\({}_{xy}\)) states at BE -5.12 eV and -2.09 eV match closely with the experimental features G and D, respectively, in the VB spectra of Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) (Fig. 2). Thus, the qualitative similarity of the experimental data with the calculated DOS establishes the presence of differently coordinated Pd atoms in both compounds, which has also been predicted theoretically in previous reports[8; 17; 9].
In both compounds the Nb atoms also occupy two inequivalent sites, Nb1 and Nb2, with distinct coordination environments, like the Pd atoms. We investigate the role of the coordination geometry in the DOS of the Nb atoms. Fig. 5 depicts the Nb-4d DOS with the different d-orbital contributions at the Nb1 and Nb2 sites for Nb\({}_{2}\)PdSe\({}_{5}\) (Fig. 5(a) and (b)) and Nb\({}_{2}\)PdS\({}_{5}\) (Fig. 5(c) and (d)). In both cases, the DOS of Nb1 and Nb2 do not differ appreciably, unlike the Pd DOS. This could be the result of the nearly identical interatomic distances of the Nb atoms to their nn Se/S atoms at both coordination sites (Nb1 and Nb2). However, a slight difference is visible between the DOS of Nb1 and Nb2 in the vicinity of E\({}_{f}\), which could be an important factor in the electronic structure of these compounds. Further, the
Figure 2: (a) and (b) valence band spectra of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\)(Red) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\)(Black) obtained by using HeI and HeII excitation energy at 300 K respectively.
orbital character of the states predominantly involved in the E\({}_{f}\) crossing is 4d\({}_{3z^{2}-r^{2}}\) and 4d\({}_{zx}\) for Nb1 and 4d\({}_{xy}\) for Nb2. This result, as well as the similar E\({}_{f}\)-crossing behavior of the orbital-resolved Pd-4d DOS (see Fig. 4), is a signature of the multiband nature of these compounds, consistent with the multigap behavior observed in heat capacity measurements on these compounds[7; 9]. These calculated results are also in good agreement with previous reports[17; 8; 9].
Fig. 6(a) and (b) show a comparison between the VB spectra of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) at 300 K and 77 K using the He I and He II excitation energies; the differences between the normalized spectra collected at 300 K and 77 K are also plotted in each graph to highlight the temperature-dependent changes. In the case of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\), the near-E\({}_{f}\) states remain unchanged upon lowering the temperature. On the other hand, the Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) spectra show a depletion of spectral weight near E\({}_{f}\), which is more pronounced in the case of He II. This suppression of the DOS is the signature of a pseudogap. In addition, an earlier Hall-measurement study found a crossover of the transport carriers from electron to hole at 100 K in this compound[7]. These two observations are consistent with a previous report describing a pseudogap-driven sign reversal of the Hall coefficient[32]. On the other hand, in Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) the energy scale T \(\sim\) 50 K, below which a crossover in the electronic ground state has been identified in transport measurements[9], is lower than our measurement temperature of 77 K. This could be the reason why no pseudogap feature is observed at 77 K in Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\).
## 4 Conclusion
We have presented a comparative study of the VB electronic structure of Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\) and Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) in conjunction with DFT-based calculations. We find that the VB spectra of both compounds are qualitatively similar, though all features are observed at slightly higher BE in Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) relative to Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\), which could result from the larger electronegativity of the S anion in comparison to the Se anion. The calculated DOS shows that the VB features are mainly composed of Pd-Se/S hybridized states. The different nature of the DOS originating from the differently coordinated atoms signifies an important role of the complex structural geometry in the electronic structure of these compounds. In addition, in our calculated DOS, the states crossing E\({}_{f}\) are dominated by different Pd-4d and Nb-4d orbitals, ensuring a significant role of multiband effects in these compounds. Furthermore, the Nb\({}_{2}\)Pd\({}_{0.95}\)S\({}_{5}\) spectra exhibit a depletion of spectral weight near E\({}_{f}\) upon lowering the temperature to 77 K, which results in a pseudogap feature. This observation is consistent with the previously reported sign reversal of the Hall coefficient below 100 K. On the other hand, the temperature-dependent pseudogap is absent in Nb\({}_{2}\)Pd\({}_{1.2}\)Se\({}_{5}\), which could be because the energy scale T \(\sim\) 50 K of the crossover in the electronic ground state is lower than our measurement temperature of 77 K. Our comprehensive valence band electronic structure study of these compounds may open ways for further experimental and theoretical investigations in this field.
Figure 3: (a) and (b) calculated total DOS with the different atomic contributions of Nb\({}_{2}\)PdSe\({}_{5}\) and Nb\({}_{2}\)PdS\({}_{5}\), respectively.
Figure 4: (a) and (b) DOS originating from different orbitals of Pd-4d at the Pd1 and Pd2 sites, respectively, in Nb\({}_{2}\)PdSe\({}_{5}\). (c) and (d) show the same DOS at the Pd1 and Pd2 sites, respectively, in Nb\({}_{2}\)PdS\({}_{5}\).
Figure 5: DOS originating from different orbitals of Nb-4d at the Nb1 and Nb2 sites are shown in (a) and (b), respectively, for Nb\({}_{2}\)PdSe\({}_{5}\). (c) and (d) show the same DOS at the Nb1 and Nb2 sites, respectively, for Nb\({}_{2}\)PdS\({}_{5}\). |
2308.05839 | An Anisotropic Density Turbulence Model from the Sun to 1 au Derived
From Radio Observations | Solar radio bursts are strongly affected by radio-wave scattering on density
inhomogeneities, changing their observed time characteristics, sizes, and
positions. The same turbulence causes angular broadening and scintillation of
galactic and extra-galactic compact radio sources observed through the solar
atmosphere. Using large-scale simulations of radio-wave transport, the
characteristics of anisotropic density turbulence from $0.1 \, R_\odot$ to $1$
au are explored. For the first time, a profile of heliospheric density
fluctuations is deduced that accounts for the properties of extra-solar radio
sources, solar radio bursts, and in-situ density fluctuation measurements in
the solar wind at $1$ au. The radial profile of the spectrum-weighted mean
wavenumber of density fluctuations (a quantity proportional to the scattering
rate of radio-waves) is found to have a broad maximum at around $(4-7) \,
R_\odot$, where the slow solar wind becomes supersonic. The level of density
fluctuations at the inner scale (which is consistent with the proton resonance
scale) decreases with heliocentric distance as $\langle\delta{n_i}^2 \rangle
(r) \simeq 2 \times 10^7 \, (r/R_\odot-1)^{-3.7}$ cm$^{-6}$. Due to scattering,
the apparent positions of solar burst sources observed at frequencies between
$0.1$ and $300$ MHz are computed to be essentially cospatial and to have
comparable sizes, for both fundamental and harmonic emission. Anisotropic
scattering is found to account for the shortest solar radio burst decay times
observed, and the required wavenumber anisotropy is $q_\parallel/q_\perp
=0.25-0.4$, depending on whether fundamental or harmonic emission is involved.
The deduced radio-wave scattering rate paves the way to quantify intrinsic
solar radio burst characteristics. | Eduard P. Kontar, A. Gordon Emslie, Daniel L. Clarkson, Xingyao Chen, Nicolina Chrysaphi, Francesco Azzollini, Natasha L. S. Jeffrey, Mykola Gordovskyy | 2023-08-10T19:37:41Z | http://arxiv.org/abs/2308.05839v2 | # An Anisotropic Density Turbulence Model from the Sun to 1 au Derived From Radio Observations
###### Abstract
Solar radio bursts are strongly affected by radio-wave scattering on density inhomogeneities, changing their observed time characteristics, sizes, and positions. The same turbulence causes angular broadening and scintillation of galactic and extra-galactic compact radio sources observed through the solar atmosphere. Using large-scale simulations of radio-wave transport, the characteristics of anisotropic density turbulence from \(0.1\,R_{\odot}\) to 1 au are explored. For the first time, a profile of heliospheric density fluctuations is deduced that accounts for the properties of extra-solar radio sources, solar radio bursts, and in-situ density fluctuation measurements in the solar wind at 1 au. The radial profile of the spectrum-weighted mean wavenumber of density fluctuations (a quantity proportional to the scattering rate of radio-waves) is found to have a broad maximum at around \((4-7)\,R_{\odot}\), where the slow solar wind becomes supersonic. The level of density fluctuations at the inner scale (which is consistent with the proton resonance scale) decreases with heliocentric distance as \(\langle\delta{n_{i}}^{2}\rangle(r)\simeq 2\times 10^{7}\,(r/R_{\odot}-1)^{-3.7}\) cm\({}^{-6}\). Due to scattering, the apparent positions of solar burst sources observed at frequencies between 0.1 and 300 MHz are computed to be essentially cospatial and to have comparable sizes, for both fundamental and harmonic emission. Anisotropic scattering is found to account for the shortest solar radio burst decay times observed, and the required wavenumber anisotropy is \(q_{\parallel}/q_{\perp}=0.25-0.4\), depending on whether fundamental or harmonic emission is involved. The deduced radio-wave scattering rate paves the way to quantify intrinsic solar radio burst characteristics.
interplanetary scintillation (828), interplanetary turbulence (830), radio bursts (1339), solar corona (1483), solar wind (1534)
## 1 Introduction
Radio-wave scattering affects the temporal characteristics, sizes, and positions of both solar radio bursts and extra-solar sources observed through the turbulent solar atmosphere. Solar radio bursts (such as Type I, Type II, Type III, etc.) are produced predominantly via plasma mechanisms at frequencies that are close to either the local plasma frequency or its double (harmonic), and are thus particularly strongly affected by scattering (e.g., Fokker, 1965; Steinberg, 1972; Riddle, 1974; Bastian, 1994; Arzner & Magun, 1999; Thejappa & MacDowall, 2008; Kontar et al., 2017; Chrysaphi et al., 2018; Krupar et al., 2018; McCauley et al., 2018; Gordovskyy et al., 2019; Kontar et al., 2019; Chrysaphi et al., 2020; Krupar et al., 2020; Murphy et al., 2021; Maguire et al., 2021; Mohan, 2021; Musset et al., 2021; Sharma & Oberoi, 2021). Similarly, extra-solar point sources observed through the solar atmosphere and solar wind are noticeably broadened (e.g., Machin & Smith, 1952; Hewish, 1958; Dennison & Blesing, 1972; Anantharamaiah et al., 1994; Sasikumar Raja et al., 2017) and scintillate (Hewish et al., 1964; Cohen et al., 1967; Woo, 1977; Coles, 1978; Armand et al., 1987; Manoharan, 1993a; Breen et al., 1999; Miyamoto et al., 2014; Sasikumar Raja et al., 2016; Chhetri et al., 2018; Tyul'bashev et al., 2023). Overall, various observations provide a qualitatively consistent picture of turbulence in the heliosphere. These observations also strongly suggest that the density fluctuations are anisotropic, with parallel wavenumbers \(q_{\parallel}\) that are smaller than the perpendicular wavenumbers \(q_{\perp}\) (e.g., Dennison & Blesing, 1972; Coles & Harmon, 1989; Armstrong et al., 1990), with stronger anisotropy sug |
2305.19234 | Grammar Prompting for Domain-Specific Language Generation with Large
Language Models | Large language models (LLMs) can learn to perform a wide range of natural
language tasks from just a handful of in-context examples. However, for
generating strings from highly structured languages (e.g., semantic parsing to
complex domain-specific languages), it is challenging for the LLM to generalize
from just a few exemplars. We propose \emph{grammar prompting}, a simple
approach to enable LLMs to use external knowledge and domain-specific
constraints, expressed through a grammar in Backus--Naur Form (BNF), during
in-context learning. Grammar prompting augments each demonstration example with
a specialized grammar that is minimally sufficient for generating the
particular output example, where the specialized grammar is a subset of the
full DSL grammar. For inference, the LLM first predicts a BNF grammar given a
test input, and then generates the output according to the rules of the
grammar. Experiments demonstrate that grammar prompting can enable LLMs to
perform competitively on a diverse set of DSL generation tasks, including
semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and
SMILES-based molecule generation. | Bailin Wang, Zi Wang, Xuezhi Wang, Yuan Cao, Rif A. Saurous, Yoon Kim | 2023-05-30T17:26:01Z | http://arxiv.org/abs/2305.19234v3 | # Grammar Prompting for Domain-Specific Language Generation with Large Language Models
###### Abstract
Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples. However, for generating strings from highly structured languages (e.g., semantic parsing to complex domain-specific languages), it is challenging for the LLM to generalize from just a few exemplars. We explore _grammar prompting_ as a simple approach for enabling LLMs to use external knowledge and domain-specific constraints, expressed through a grammar in Backus-Naur Form (BNF), during in-context learning. Grammar prompting augments each demonstration example with a specialized grammar that is minimally sufficient for generating the particular output example, where the specialized grammar is a subset of the full DSL grammar. For inference, the LLM first predicts a BNF grammar given a test input, and then generates the output according to the rules of the grammar. Experiments demonstrate that grammar prompting can enable LLMs to perform competitively on a diverse set of DSL generation tasks, including semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and even molecule generation (SMILES).
## 1 Introduction
Prompting large language models (LLMs) with demonstrations and/or natural language instructions has been shown to be an effective approach for surfacing their myriad capabilities acquired through pretraining [9]. This approach is however inadequate for applications where the task specifications cannot be fully delineated through just a handful of exemplars, for example in semantic parsing where an LLM must translate a natural language utterance to an executable program in a domain-specific language (DSL). DSLs often incorporate domain-specific abstractions and semantics that are difficult to characterize via just a few demonstrations. And unlike general-purpose programming languages, DSLs are by definition specialized and thus unlikely to have been encountered often enough (or at all) during pretraining for the LLM to acquire its full syntax.
How can we draw on the few-shot learning capabilities of LLMs for applications where it must generate strings in structured output space that is substantially different from those seen during pretraining? This work explores _grammar prompting_ as a simple approach for data-efficient generation of structured languages where an output string in the language can be derived through a series of symbolic manipulations. We exploit the fact that constraints over a structured output space can often be succinctly described by a context-free grammar in Backus-Naur Form (BNF), a commonly-used metalanguage for defining a language's syntax. Grammar prompting augments each in-context example \((\mathbf{x},\mathbf{y})\) with a _specialized_ BNF grammar \(G[\mathbf{y}]\) that is minimally sufficient for generating \(\mathbf{y}\). Given a new input, the LLM first predicts the BNF grammar and then generates the answer conditioned on the grammar.
Grammar prompting follows the recent line of work which enhances the few-shot reasoning capabilities of LLMs by interleaving intermediate "reasoning" steps between each in-context input and output [49; 24; 80; 76; 70]. The key difference in our approach is that the intermediate variable is in
the form of a formal grammar rather than in natural language, which focuses on eliciting the symbolic manipulation capabilities of LLMs. The use of a formal grammar moreover makes it possible to impose constraints during incremental decoding such that syntactic validity is guaranteed. Finally, unlike chain-of-thought-style prompts [80] which typically require manual verbalization of the intermediate reasoning steps, in our approach the specialized grammar \(G[\mathbf{y}]\) can be derived automatically by parsing the output \(\mathbf{y}\) with the full (unspecialized) DSL grammar.
We apply grammar prompting to various domain specific languages for semantic parsing (SMCalFlow, Overnight, GeoQuery), AI planning (PDDL), and molecule generation (SMILES), and find that it can meaningfully improve upon standard prompting baselines in the few-shot setting.
## 2 Background and Motivation
### Domain-Specific Languages
Let \(\Sigma^{*}\) be the set of all finite strings over an alphabet \(\Sigma\), and further let \(D\subseteq\Sigma^{*}\) be a domain-specific language (DSL) for an application of interest. Given an input \(\mathbf{x}\) (e.g., a natural language command) we are interested in generating \(\mathbf{y}\in D\) (e.g., a program in a DSL fulfilling the command), as shown by the following calendar assistant example from SMCalFlow [6]:
\[\mathbf{x}:\text{{Add meeting with Jean's manager on Wednesday at 3PM}}.\] \[\mathbf{y}:\text{{CreateEvent}}(\&\text{{(start\_? Wednesday NumberPM(3))}}\ (\texttt{attendee\_? FindManager(Jean)}))\]
DSLs are crafted by experts who use their domain-specific knowledge to incorporate higher-level abstractions than are typically found in general-purpose programming languages. We assume access to an expert-defined grammar \(G\) that fully specifies the DSL's syntax. As is the case with many DSLs, we further assume that \(G\) is a context-free grammar in Backus-Naur Form (BNF). See Figure 1 for a simple example adapted from SMCalFlow [6]. Letting \(L(G)\) be the language generated by \(G\), we have \(D\subseteq L(G)\subseteq\Sigma^{*}\) (not all syntactically valid programs are semantically valid).
### Few-shot Learning with Large Language Models
In-context learning with large language models (LLMs) has been shown to be an effective approach for few-shot learning [9]. Under this approach, a pretrained LLM is conditioned on \(N\) demonstration examples \((\mathbf{x}^{(i)},\mathbf{y}^{(i)})_{i=1}^{N}\) followed by a test example \(\mathbf{x}\), and the output is given by decoding from the prompted LLM, i.e., \(P_{\texttt{LLM}}(\mathbf{y}\,|\,\mathbf{x},(\mathbf{x}^{(i)},\mathbf{y}^{(i)})_{i=1}^{N})\). The demonstration examples can be optionally preceded by natural language instructions to further improve performance or even enable zero-shot learning [79; 60]. Recent work has additionally shown that interleaving natural language verbalizations of intermediate reasoning steps between each \(\mathbf{x}^{(i)}\) and \(\mathbf{y}^{(i)}\) can greatly improve few-shot performance on complex reasoning tasks [49; 80; 76; 70; 15].
The effectiveness of few-shot in-context learning depends on how useful the implicit knowledge acquired through pretraining is for the task, and also on how effectively the task specifications can be conveyed through the demonstrations. Few-shot generation of strings in a DSL, where the structured nature of the combinatorial output space (i.e., the DSL grammar \(G\)) cannot be adequately captured through just a handful of demonstrations, remains challenging for LLMs.
## 3 Grammar Prompting
Grammar prompting exploits the fact that while the actual strings of a DSL may not have been encountered frequently enough (or at all) during pretraining for the LLM to implicitly acquire its syntax, the LLM will likely have encountered many instances of _metalanguages_--i.e., languages used to describe other languages. BNF grammars are a standard metalanguage for specifying a language's syntax, and are expected to occur in the LLM training corpus with some frequency (e.g., in computer science textbooks). We thus focus on using BNF grammars for few-shot DSL generation.
Figure 1: A simple BNF grammar for a calendar DSL.
Let \(G=\bigcup_{j=1}^{M}\{r_{j}\}\) be an extended BNF grammar where each rule \(r_{j}\) is of the form
<symbol> ::= <expr1> | <expr2> |...
Here <symbol> is a nonterminal symbol and each <exprN> is a sequence of nonterminal and terminal symbols.1 A straightforward approach for incorporating a BNF grammar during in-context learning is to simply prepend the string representation of the full grammar \(G\) to the demonstration examples. However, in preliminary experiments we found that simply prepending the grammar to standard prompts along with an instruction to use the grammar did not yield any improvements.2
Footnote 1: For brevity we forgo the formal tuple-based definition of \(G\) and instead define \(G\) to be equivalent to its context-free rules. We also freely go back and forth between this set definition of \(G\) and its string representation.
Footnote 2: However, when combined with specialized grammars we did observe small improvements by appending the full DSL grammar to the instructions. Hence, for all experiments where \(G\) is small enough (GeoQuery, Overnight-B, SMILES), we include \(G\) as part of the instruction. See Figure 2.
### Specialized Grammars
We instead propose to use _specialized grammars_ to enable LLMs to make better use of domain-specific knowledge and constraints. A specialized grammar \(G^{\prime}\subseteq G\) is a grammar obtained by taking a subset of the rules of the full grammar \(G\). We further define a _minimal specialized grammar_\(G[\mathbf{y}]\) of \(\mathbf{y}\) to be the smallest BNF grammar (as measured by the number of rules) such that \(\mathbf{y}\in L(G[\mathbf{y}])\) (i.e., \(\forall r\in G[\mathbf{y}],\ \mathbf{y}\not\in L(G[\mathbf{y}]\setminus\{r\})\)). Note that the minimal specialized grammar may not be unique due to the potential instantiation of abstract BNF rules (i.e., extended BNF rules).3 In most applications we consider, the rules of the minimal specialized grammar will be concrete. See appendix A.1 for further details.
Figure 2: Example of grammar prompting for a calendar DSL. We interleave the minimal specialized grammar \(G[\mathbf{y}^{(i)}]\) between the demonstrations \(\mathbf{x}^{(i)}\) and \(\mathbf{y}^{(i)}\). During decoding, the LLM first predicts the specialized grammar \(\widehat{G}\), and then predicts the program \(\widehat{\mathbf{y}}\) conditioned on \(\widehat{G}\). The blue portion is not part of the actual prompt and only shown for illustrative purposes.
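As a concrete illustration of how the minimal specialized grammar can be derived automatically, the sketch below parses an output \(\mathbf{y}\) with the full DSL grammar and keeps only the rules that fire in its derivation. This is our own minimal sketch, not the authors' implementation; it assumes the DSL grammar has been rendered in the syntax of the Lark parsing library, and the function name is hypothetical.

```python
from lark import Lark

def minimal_specialized_grammar(full_grammar: str, y: str, start: str):
    # Parse y with the full grammar (Earley handles general CFGs), then
    # collect one "lhs ::= chosen expansion" entry per rule application.
    parser = Lark(full_grammar, start=start, parser="earley", keep_all_tokens=True)
    tree = parser.parse(y)
    used = set()
    for subtree in tree.iter_subtrees():
        # subtree.data is the nonterminal whose rule fired; its children
        # (subtrees or tokens) record which alternative was chosen.
        expansion = " ".join(
            c.data if hasattr(c, "data") else repr(str(c))
            for c in subtree.children
        )
        used.add(f"{subtree.data} ::= {expansion}")
    return sorted(used)
```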
Grammar prompting feeds a sequence of \((\mathbf{x}^{(i)},G[\mathbf{y}^{(i)}],\mathbf{y}^{(i)})_{i=1}^{N}\) along with \(\mathbf{x}\) as a prompt to an LLM. For inference we first obtain the specialized grammar with an (approximate) \(\arg\max\) decoding
\[\widehat{G}=\operatorname*{arg\,max}_{G^{\prime}\subseteq G}\ P_{ \text{LLM}}(G^{\prime}\,|\,\mathbf{x},(\mathbf{x}^{(i)},G[\mathbf{y}^{(i)}],\mathbf{y}^{(i)})_{ i=1}^{N}).\]
We then obtain the program conditioned on \(\widehat{G}\),
\[\widehat{\mathbf{y}}=\operatorname*{arg\,max}_{\mathbf{y}\in L(\widehat{G})}\ P_{ \text{LLM}}(\mathbf{y}\,|\,\widehat{G},\mathbf{x},(\mathbf{x}^{(i)},G[\mathbf{y}^{(i)}],\mathbf{y}^ {(i)})_{i=1}^{N}).\]
We discuss how to perform constrained decoding with \(\widehat{G}\subseteq G\) and \(\mathbf{y}\in L(\widehat{G})\) in the next section. Grammar prompting views DSL program generation as a _grammar specialization_ process where, given a natural language specification \(\mathbf{x}\), a set of production rules is selected from \(G\), and then a program \(\mathbf{y}\) is deduced according to the selected rules. Grammar prompting can also be viewed as an instance of chain-of-thought prompting [49; 80] where the intermediate thought is in the form of a formal grammar. However, unlike typical chain-of-thought prompting where the answer is (usually) deterministic given the intermediate reasoning steps, in our case there is still some uncertainty with respect to \(\widehat{\mathbf{y}}\) given \(\widehat{G}\) (e.g., \(L(\widehat{G})\) could still be infinite).
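The two-stage inference above can be sketched in a few lines, assuming only a black-box completion function `llm` (prompt in, text out); the exact prompt layout and stop sequences in Figure 2 are more elaborate, and greedy decoding is used here as an approximation to both \(\arg\max\) operations.

```python
def grammar_prompt_inference(llm, exemplars, x):
    # exemplars: (x_i, G[y_i], y_i) triples; x: the test input.
    prompt = ""
    for x_i, g_i, y_i in exemplars:
        prompt += f"Input: {x_i}\nGrammar:\n{g_i}\nProgram: {y_i}\n\n"
    prompt += f"Input: {x}\nGrammar:\n"
    g_hat = llm(prompt)                           # predict G-hat (subset of G)
    y_hat = llm(prompt + g_hat + "\nProgram: ")   # predict y-hat given G-hat
    return g_hat, y_hat
```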
### Constrained Decoding
The use of a formal grammar as an intermediate variable makes it possible to enforce grammatical constraints during autoregressive LLM decoding. We first discuss how we enforce the constraint \(\mathbf{y}\in L(\widehat{G})\). One approach to constrained decoding is to use \(\widehat{G}\) to obtain a left-to-right Earley parser [17] and only decode from valid continuations at each decoding step. However, this simple strategy poses several practical challenges when working with API-only LLMs. For one, a valid terminal continuation in \(\widehat{G}\) may consist of multiple BPE tokens. More seriously, while we can sample a valid continuation at each time step by disallowing invalid tokens,4 since the set of valid continuations changes at each time step, this strategy would require calling the LLM API at each time step with the full prompt and the prefix decoded so far, which is prohibitively expensive.5
Footnote 4: For example by using the logit_bias argument.
Footnote 5: These costs might be mitigated in the future if LLM APIs allow for cheaper use of cached prompts.
While there are many methods for grammar-constrained LM decoding [65; 62; 26], we present a simple strategy which speculatively decodes from the LLM to look ahead for multiple tokens. The pseudocode is shown in Algorithm 1. At each prediction step, we ask the LLM to speculatively decode the full program conditioned on the current prefix (lines 4-5). If the resulting continuation leads to a valid program, we return it (lines 6-7). Otherwise, we consult an Earley parser to extract the longest valid prefix from the current prediction (\(\mathbf{y}_{\text{prefix}}\)), along with a set of valid terminals that can follow the prefix (\(\Sigma[\mathbf{y}_{\text{prefix}}]\)). Finally, we rely on the LLM's probabilities to decide which terminal to use, with which a new partial program can be constructed (lines 10-11).6 Figure 3 illustrates one prediction step where the predicted program is corrected into a new valid partial program. Note that \(\mathbf{w}\) can consist of multiple BPE tokens, e.g., "FindManager (" in Figure 3. By
scoring over multi-token terminals, the search procedure is implicitly augmented by looking ahead for a few BPE tokens.
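Since Algorithm 1 itself is not reproduced here, the following is a hedged sketch of the speculative loop it describes. The grammar machinery is passed in as callables (assumed interfaces, not real library calls): `is_valid(text)` checks membership in \(L(\widehat{G})\), `longest_valid_prefix(text)` is the Earley-parser query returning \(\mathbf{y}_{\text{prefix}}\) and \(\Sigma[\mathbf{y}_{\text{prefix}}]\), and `score(prefix, terminal)` returns the LLM's log-probability of a (possibly multi-BPE-token) terminal.

```python
def constrained_decode(llm, prompt, is_valid, longest_valid_prefix, score,
                       max_rounds=20):
    prefix = ""
    for _ in range(max_rounds):
        # Speculatively decode a full program from the current prefix.
        candidate = prefix + llm(prompt + prefix)
        if is_valid(candidate):
            return candidate
        # Keep the longest parseable prefix and the terminals that may follow.
        prefix, terminals = longest_valid_prefix(candidate)
        # Let the LLM's probabilities pick the next terminal (lines 10-11).
        prefix += max(terminals, key=lambda w: score(prefix, w))
    return prefix
```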
We use a similar procedure to operationalize the constraint \(G^{\prime}\subseteq G\), except that \(\mathrm{EarleyParse}\) is constructed with a _metagrammar_ (i.e., the grammar of \(G\)) for grammar prediction. See appendix A.1 for more details. In our ablation study we find that while these constraints are helpful insofar as they guarantee syntactic validity, grammar prompting still meaningfully improves upon standard prompting even with simple unconstrained decoding.
## 4 Experiments
We apply grammar prompting to diverse domains: DSLs for semantic parsing (SMCalFlow, Overnight, GeoQuery), an action DSL (PDDL planning), and a molecule generation DSL (SMILES). These experiments are not necessarily intended to improve upon the state-of-the-art on these benchmarks but rather intended to assess whether LLMs can improve upon standard prompting for few-shot DSL generation by learning to predict and use grammars during in-context learning.
### Semantic Parsing for Tool Usage
Software tools are typically accompanied by a collection of human-interpretable APIs which provide a platform for developers to interact programmatically with the tools. These APIs constitute a DSL, where each production rule of the grammar specifies the input and output types for a specific API call (see Figure 1 for an example). These tools demonstrate a broad spectrum in terms of DSL complexity, ranging from single-function tools such as Google(user_query), Translate(sentence, language) to more complex tools such as the entirety of the Wolfram language.7 Enabling LLMs to use external tools via APIs is an important step towards enhancing their capabilities [61; 54; 51; 69; 45].
Footnote 7: [https://www.wolfram.com/language/](https://www.wolfram.com/language/)
We test our approach on standard semantic parsing benchmarks involving complex DSLs: SMCalFlow [6], which features human-generated utterances about calendar management (see Figure 2); GeoQuery [94], which features queries against a US Geography database; and Overnight-Blocks [77], which features queries about blocks in a synthetic block model. See appendix B for examples of input-output pairs along with the specialized grammars. The original benchmarks target the training of conventional semantic parsers and thus contain hundreds/thousands of training examples. We instead focus on the arguably more practical few-shot setting. Much existing LLM-based work on these benchmarks relies on retrieval-based in-context learning, which first retrieves \(m\) exemplars from a large training set of \(n\) examples (\(n\gg m\)) based on some similarity measure (e.g., BM-25), and then performs in-context learning with the retrieved exemplars [55; 90; 65; 44]. In contrast, we target the few-shot setting where we only assume access to 16-32 demonstration examples.
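For reference, the retrieval step used by such retrieval-based approaches can be sketched as follows; this uses the `rank_bm25` package with whitespace tokenization as illustrative assumptions, not the exact setup of the cited works.

```python
from rank_bm25 import BM25Okapi

def retrieve_exemplars(train_pairs, x, m=32):
    # train_pairs: list of (utterance, program); x: test utterance.
    corpus = [utt.lower().split() for utt, _ in train_pairs]
    bm25 = BM25Okapi(corpus)
    scores = bm25.get_scores(x.lower().split())
    top = sorted(range(len(train_pairs)), key=lambda i: scores[i],
                 reverse=True)[:m]
    return [train_pairs[i] for i in top]
```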
Our baselines here include: (1) standard prompting, (2) standard prompting with constrained decoding based on the full DSL grammar \(G\) [65; 62], and (3) a derivation tree-based prompting baseline which adds more structural information to the exemplars by feeding in the linearized derivation tree instead of the surface-form program.8 We use Codex [12] as the base LLM for these main experiments. We assess methods according to program accuracy (the predicted and reference programs match exactly) as well as execution accuracy (the predicted and reference programs execute to the same result) when possible.
Figure 3: Illustration of how a predicted program is corrected in our proposed Earley-based constrained decoding. The final partial program will be subsequently fed into the LLM for continuation.
**Few-Shot results.** The main results are shown in Table 1. We find that grammar prompting can meaningfully improve upon the standard prompting baseline even without constrained decoding. Interestingly, grammar prompting outperforms derivation tree prompting which actually provides _more_ information than the minimal specialized grammar \(G[\mathbf{y}]\) (since the derivation tree explicitly shows how the rules are actually applied to obtain the program). This potentially indicates that having the LLM "plan out" the program by forcing it to predict the specialized grammar \(\widehat{G}\) first is an effective strategy. We also analyze the effect of constrained decoding on the number of LLM API calls in Table 7 of appendix A.1, where we observe that constrained decoding requires roughly three times more API calls than unconstrained decoding. Despite the promising performance of grammar prompting, there is a large gap between using the predicted grammar vs. using the oracle grammar (i.e., setting \(\widehat{G}=G[\mathbf{y}]\)), indicating opportunities for further work in this area.
**Retrieval-based in-context learning.** While our core target application is few-shot semantic parsing, we also apply grammar prompting for retrieval-based in-context learning to test whether it can still improve performance in the data-rich regime and also to compare against the prior work on these benchmarks. Results in Table 2 (left) show that grammar prompting can improve results even in this setting, although the improvements are less pronounced than in the few-shot setting.
**Out-of-distribution generalization.** We experiment to see whether grammar prompting can improve compositional generalization on GeoQuery. Specifically, we test grammar prompting on the compositional splits of GeoQuery from Shaw et al. [64]. These splits feature structural divergence between training and test examples, e.g., programs have different templates or lengths. Results in Table 2 (right) show that grammar prompting can improve upon standard prompting across all splits (Template, TMCD, Length).
We next assess whether grammar prompting can enable LLMs to make zero-shot use of _unseen functions_ (NewFunc) that are not even part of the retrieval set. We set aside 8 functions (smallest, shortest, most, highest, sum, population_1, count, major) and remove them from the retrieval set, simulating a scenario where new functions are supported in the backend yet no NL-program paired data is available for adapting a semantic parser. Note that for GeoQuery (and Overnight-Blk), we always prepend the full DSL grammar \(G\)--which includes the held-out functions--before the in-context exemplars. We observe that with standard prompting LLMs are still capable of guessing correct function names. However, grammar-prompted LLMs achieve significantly better performance than standard prompting, suggesting that the explicit prediction of specialized grammars elicits understanding and reasoning at the grammar level, thereby
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Retrieval-based ICL**} & \multicolumn{4}{c}{**GeoQuery Out-of-Distribution**} \\ Model & GeoQuery & SMCalFlow & Overnight-Blk & Template & TMCD & Length & NewFunc \\ (\# IC. examples / \# retrieval set) & (32/560) & (16/128) & (32/1,436) & (32/441) & (32/440) & (32/448) & \\ \hline Previous Work & 86.1\({}^{\clubsuit}\) & 60.7\({}^{\clubsuit}\) & 65.2\({}^{\lozenge}\) & – & – & – & – \\ \hline Standard Prompting & 96.8 & 60.0 & 69.4 & 93.2 & 77.1 & 86.4 & 63.3 \\ Grammar Prompting & 97.9 & 62.8 & 70.2 & 95.7 & 86.6 & 88.6 & 90.8 \\ _w. oracle grammar_ & 98.6 & 88.9 & 97.2 & 97.9 & 95.0 & 95.7 & 96.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on retrieval-based in-context learning (left) and compositional generalization (right) with Codex. GeoQuery and Overnight-Blk show execution accuracy while SMCalFlow shows program accuracy. The numbers with \({}^{\clubsuit}\) and \({}^{\diamondsuit}\) are taken from Ye et al. [90] and Cao et al. [10], respectively.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{**GeoQuery**} & \multicolumn{1}{c}{**SMCalFlow**} & \multicolumn{2}{c}{**Overnight-Blk**} \\ Approach & Prog. & Exec. & Prog. & Prog. & Exec. \\ \hline Standard Prompting (unconstrained decoding) & 60.7 & 81.5 & 46.4 & 29.3 & 54.7 \\ _w. constrained decoding (\(\widehat{\mathbf{y}}\in L(G)\))_ & 61.1 & 81.8 & 49.2 & 29.3 & 54.7 \\ Linearized Derivation Tree Prompting & 58.6 & 77.5 & 50.0 & 27.3 & 56.4 \\ \hline Grammar Prompting (unconstrained decoding) & 67.1 & 87.5 & 50.8 & 34.8 & 57.4 \\ _w. grammar constraint (\(\widehat{G}\subseteq G\))_ & 67.9 & 88.6 & 51.3 & 37.1 & 60.4 \\ _w. grammar and program constraint (\(\widehat{\mathbf{y}}\in L(\widehat{G})\))_ & 69.6 & 88.9 & 52.4 & 37.6 & 60.9 \\ _w. oracle grammar (\(\widehat{G}=G[\mathbf{y}]\))_ & 95.7 & 96.1 & 80.0 & 73.9 & 94.2 \\ _w. oracle grammar + program constraint_ & 95.7 & 96.8 & 83.6 & 74.4 & 96.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on few-shot semantic parsing with Codex with various decoding strategies. GeoQuery and Overnight-Blk use 32 in-context examples, and SMCalFlow uses 16 examples. We show both program (Prog.) and execution (Exec.) accuracy when possible.
enabling generalization to unseen functions. Finally, we found that without constrained generation, LLMs were often able to guess functions that did not exist but were nonetheless sensible. It would thus be interesting to explore whether LLMs can tackle DSL-open benchmarks such as LARC [1].
**Different base LLMs.** We finally experiment with grammar prompting across different base LLMs. Since GPT-3.5's 4K token limit is smaller than Codex's (8K) and GPT-4's (8K) limits, we use fewer exemplars in these experiments than before (24/8/16 exemplars for GeoQuery/SMCalFlow/Overnight-B, respectively). Due to API cost, we limit our experiments to a smaller subset of 100 test examples instead of the full test set. In general we observe that grammar prompting can consistently improve upon standard prompting (Table 3), except on SMCalFlow with GPT-3.5, where we observed both methods to perform poorly.
### Class-Specific Molecule Generation
We next demonstrate that our approach works effectively beyond language parsing problems. We consider the molecule generation task, a highly domain-specific problem for which even LLMs may not have enough intrinsic expertise. Existing methods for molecule generation typically focus on training specialized neural models using large training sets [43, 36, 14, 2, 75, 59, 18]. We instead follow Guo et al. [29] and explore a few-shot setting where the task is to generate class-specific molecules given a small number of exemplars of that class. Formally, given a small set of molecules \(\{\mathbf{y}_{c}^{(i)}\}_{i=1}^{N}\) belonging to a particular molecule class \(c\in\{\texttt{Acrylates},\texttt{Chain Extenders},\texttt{Isocyanates}\}\), our goal is to generate novel molecules \(\mathbf{y}_{c}\) of the same class that can be synthesized using existing molecules. Since the in-context examples consist only of molecules of the same class, the "input" \(\mathbf{x}_{c}^{(i)}\) is the empty string. The data contains \(32\) Acrylates, \(11\) Chain Extenders, and \(11\) Isocyanates (see appendix G of Guo et al. [29]).
While molecules can be more faithfully represented with 3D graph structure, the SMILES string representation [81] remains popular due to its ease of use.9 The specialized grammars \(G[\mathbf{y}_{c}]\) (which are specialized from the SMILES grammar) encode various structural properties of the molecule that are specific to the molecule class. Figure 4 shows an example of a specialized grammar and the corresponding molecule in SMILES format. In this example, ring_closure ::= "1" specifies the number of rings, and branch ::= "(" smiles ")" specifies whether there is a branch.
Footnote 9: However SMILES does not guarantee that the generated string corresponds to a valid molecule. Using our approach on more advanced string representations such as SELFIES [42] (which guarantee validity) remains an interesting avenue for future work.
We test our approach by generating 100 molecules for each class and assessing the quality of the generated molecules. In addition to the standard prompting baseline, we also run the graph grammar
\begin{table}
[Table 3 body not recoverable from the source.]
\end{table}
Table 3: Results with different base LLMs on a subset of 100 examples sampled from the original test set. GeoQuery and Overnight-Blk show execution accuracy, while SMCalFlow shows program accuracy.

\(G[\mathbf{y}]\):
    smiles         ::= atom chain branch chain chain | atom chain
    atom           ::= organic_symbol
    organic_symbol ::= "C" | "N" | "O"
    chain          ::= atom ring_closure bond atom | bond atom | bond atom bond atom
    ring_closure   ::= "1"
    bond           ::= "="
    branch         ::= "(" smiles ")"
\(\mathbf{y}\): CC(=C)C(=O)OCCC1=CC=CC=C1

Figure 4: Example of a specialized grammar for generating a molecule from the Acrylates class.
baseline from Guo et al. [29], which learns a hypergraph grammar [37] from the given molecules. We use four metrics: _Validity (V)_, the percentage of chemically valid molecules; _Diversity (D)_, the average pairwise Tanimoto distance over Morgan fingerprints [58]; _Retrosynthesis score (R)_, the percentage of molecules that are synthesizable from existing molecules, computed approximately via the Retro* model [11]; and _Membership (M)_, the percentage of molecules that belong to the desired monomer class. We use GPT-3.5 as the base LLM and sample from the LLM without constrained decoding, as constrained decoding was found to decrease the diversity of samples. See appendix A.2 for the full experimental setup.
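The first two metrics can be computed directly from the SMILES strings; a minimal sketch using RDKit is given below (the fingerprint radius and bit size are assumed settings, and the Retro score and Membership metrics are omitted since they require external models).

```python
from itertools import combinations
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

def validity_and_diversity(smiles_list):
    # Validity: fraction of strings RDKit can parse into molecules.
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    valid = [m for m in mols if m is not None]
    validity = len(valid) / len(smiles_list)
    # Diversity: mean pairwise Tanimoto *distance* over Morgan fingerprints.
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in valid]
    dists = [1.0 - DataStructs.TanimotoSimilarity(a, b)
             for a, b in combinations(fps, 2)]
    diversity = sum(dists) / len(dists) if dists else 0.0
    return validity, diversity
```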
Results. The results are presented in Table 4. We observe that grammar prompting significantly improves the synthesis of Acrylates and Chain Extenders across all metrics, while yielding mixed results for Isocyanates. Notably, both prompting-based methods outperform the graph grammar baseline in terms of Retro score, potentially due to the fact that LLMs have been exposed to a certain number of existing molecules during pretraining, which proves advantageous for generating synthesizable molecules. In contrast, the baseline method cannot incorporate any external knowledge beyond the 11 or 32 molecules provided. While these results are preliminary, they indicate that LLMs can potentially serve as a useful tool for generating string representations of chemical structures (and other biological/chemical structures), which remains underexplored.
### Action Subset Selection for Efficient PDDL Planning
Our final experiments show how grammar prompting can improve the efficiency of classical AI planners. Classical planning is the problem of finding a sequence of actions (i.e., a plan) that goes from an initial state \(\mathfrak{s}_{0}\) to a goal state \(\mathfrak{s}_{g}\). An action is represented by a ground operator (e.g., unstack(block1, block2), which consists of the operator unstack along with two object arguments). We additionally consider _macro-operators_, which can potentially speed up planning [8].10 Planning tasks, along with actions, are represented in the Planning Domain Definition Language (PDDL) [27]. We explore how grammar-prompted LLMs can help guide classical planning algorithms.
Footnote 10: For example, pickup-and-stack(A, B) is a combination of pickup(A) and stack(A, B).
We design specialized grammars to provide guidance to the classical greedy best-first search (GBFS) algorithm [5] by selecting a set of relevant actions. Figure 5 illustrates an example of such a specialized grammar, which captures all the necessary actions for the final plan \(\mathbf{y}\) that solves the given task. The process of the guided planning consists of the following steps: (1) given a task, predict a specialized grammar \(G[\mathbf{y}]\); (2) use the specialized grammar \(G[\mathbf{y}]\) to subsequently generate a plan within the restricted action space derived from \(G[\mathbf{y}]\); (3) initialize GBFS's priority queue with the LLM-generated plan; and (4) search for the final plan in the restricted action space. Our setup builds upon the idea of using an LLM-generated plan to initialize GBFS from Silver et al. [66], which has a simpler two-step process: (1) given a task, predict a plan via standard prompting, and (2) utilize this plan to guide GBFS. We use their method as our baseline.
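The four-step pipeline can be summarized by the following sketch; all callables are assumed interfaces standing in for grammar-prompted LLM calls and a Pyperplan-style search, not actual library functions.

```python
def grammar_guided_plan(task, predict_grammar, actions_of, predict_plan, gbfs):
    g_hat = predict_grammar(task)          # step 1: specialized grammar G[y]
    restricted = actions_of(g_hat)         # step 2: restricted action space
    seed_plan = predict_plan(task, g_hat)  # LLM plan generated within L(G-hat)
    # Steps 3-4: GBFS with its priority queue initialized from seed_plan,
    # searching only over the restricted actions.
    return gbfs(task, actions=restricted, seed_plan=seed_plan)
```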
Following Silver et al. [66], we create a similar few-shot setting for LLM planning, using 5 tasks as in-context examples and 10 tasks for evaluation from Pyperplan [5]. We test our approach on 3 classic PDDL planning domains: Blocks, Depot, and Satellite. For the action space, we use either a set of primitive actions (Prim) or an augmented set with macro actions (Macro). In addition to standard prompting, we add two more baselines: (1) No LLM: planning with the entire set of actions; (2) Min Macro: we construct a minimal set of macro actions for each domain by selecting actions from existing plans for the training tasks. The Min Macro baseline is a domain-specific method to reduce the action space; by comparing to Min Macro, we can verify the effectiveness of instance-specific vs. domain-specific action selection. See appendix A.3 for more details.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Acrylates**} & \multicolumn{4}{c}{**Chain Extenders**} & \multicolumn{4}{c}{**Isocyanates**} \\ Model & V & D & R & M & V & D & R & M & V & D & R & M \\ \hline Graph Grammar [29] & 100 & 0.83 & 79.0 & 30.3 & 100 & 0.86 & 72.7 & 98.3 & 100 & 0.93 & 52.2 & 82.7 \\ Standard Prompting & 87.7 & 0.73 & 80.0 & 76.7 & 60.3 & 0.89 & 72.7 & 55.7 & 94.7 & 0.82 & 78.0 & 92.2 \\ Grammar Prompting & 98.0 & 0.74 & 91.0 & 93.3 & 96.3 & 0.90 & 86.7 & 94.0 & 97.7 & 0.79 & 78.0 & 96.3 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for few-shot molecule generation with GPT-3.5. The metrics are validity (V), diversity (D), retrosynthesis score (R) and membership (M). Higher is better for all metrics.
Results. We evaluate the efficiency of planning in terms of the number of search nodes created/expanded, as well as the success rate. Table 5 shows the promising performance of LLM-guided planning via grammar prompting. In Blocks, grammar prompting significantly improves efficiency while maintaining a 100% success rate. In Depot, grammar prompting with macro actions increased the success rate by 20 points over the best competing baseline. In Satellite, using primitive actions yields the best performance, with a 100% success rate and a 57% reduction in expanded nodes compared to the No LLM baseline. While our experiments are not intended to compete with state-of-the-art algorithms for fast planning [20; 21; 22; 25; 78], they indicate the promise of LLMs for improving existing planning algorithms.
## 5 Discussion and Limitations
We discuss several limitations of our approach, including some negative results. Grammar prompting did not yield any improvements for DSLs that were likely to have been frequently encountered during pretraining (e.g., regular expressions, SQL). Moreover, constrained generation based on specialized grammars led to increased API calls, and was not always beneficial for tasks beyond semantic parsing. For instance, in molecule generation we discovered that enforcing constraints can sometimes result in worse performance, suggesting that while the intermediate grammar may aid reasoning, its predictions remain imperfect. Additionally, in PDDL planning we observed that the constraints applied to prune objects can sometimes negatively impact performance, suggesting that relevant object selection is still very challenging for LLMs. It may be interesting to explore whether finetuning moderately-sized LLMs using specialized grammars can lead to better grammar-based models for DSL generation.
On the positive front, our work demonstrates that LLMs have the capacity to understand and generate metalanguages. Working in this "metalanguage space" can be combined with chain-of-thought-style [80] prompts by, for example, manually providing natural language comments on the rules of the specialized grammars. We found this to improve results slightly on semantic parsing (see Figure 6 of appendix A.1). Many scientific problems can be formally approached by representing hypotheses as DSL programs [68]. Similarly, DSLs can enable easier encoding of human prior knowledge and scientific principles, providing a foundation for scientific discovery. Recent work shows that state-of-the-art LLMs can follow previously unseen formal systems [72]. Techniques like grammar prompting can widen the scope of scientific problems to which LLMs can be effectively applied by more explicitly accounting for external knowledge and constraints.
Figure 5: Example of a specialized grammar for PDDL planning in the Blocks domain. Given an input \(\mathbf{x}=(\mathbf{s}_{0},\mathbf{s}_{g})\), the specialized grammar \(G[\mathbf{y}]\) only includes necessary actions for solving this task.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Blocks**} & \multicolumn{3}{c}{**Depot**} & \multicolumn{3}{c}{**Satellite**} \\ Approach & Created & Expanded & Success & Created & Expanded & Success & Created & Expanded & Success \\ \hline GBFS + Prim. (No LLM) & 360 & 188 & 1.0 & 18047 & 3870 & 0.4 & 8205 & 150 & 1.0 \\ \hline Standard + Prim. & 348 & 180 & 1.0 & 17597 & 4039 & 0.4 & 6686 & 78 & 1.0 \\ Grammar + Prim. & 251 & 124 & 1.0 & 15033 & 3641 & 0.4 & 5162 & 64 & 1.0 \\ \hline Standard + Macro. & 850 & 16 & 1.0 & 1460 & 56 & 0.4 & 4003 & 27 & 0.3 \\ Grammar + Macro. & 170 & 9 & 1.0 & 2917 & 127 & 0.8 & 3665 & 46 & 0.9 \\ \hline Standard + Min Macro. & 228 & 8 & 1.0 & 1903 & 65 & 0.6 & 3483 & 35 & 0.8 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on PDDL planning. Created/Expanded refer to the number of created/expanded nodes during planning (lower is better). Success refers to success rate (higher is better). Numbers are averaged over three runs using GPT-3.5.
## 6 Related Work
Chain-of-thought prompting. Grammar prompting extends the recent line of work on improving the reasoning capabilities of LLMs by providing explicit reasoning steps as part of the prompt [49; 24; 80; 76; 13; 89]. Our approach is closely related to much concurrent work on employing symbolic variables as part of the prompt [30; 48; 32; 92; 50], though we are not aware of any prior work that uses formal grammars as the intermediate reasoning step.
LLMs for program generation and semantic parsing. Generating programs from natural language specifications, a task often referred to as semantic parsing in the NLP community, is a sub-problem of program synthesis where specifications can come in various forms, including input-output examples; for surveys, see Kamath and Das [38] and Gulwani et al. [28]. Recent works [7; 83] have explored using LLMs for generating code in general-purpose programming languages (e.g., Python) given natural language descriptions. Our work further extends this line by examining whether LLMs can generate DSL programs, which are intrinsically scarce in pretraining data. Moreover, DSLs evolve much more rapidly, with API functions being frequently added, changed, or deleted according to tool development or user requirements. Grammar prompting provides a straightforward strategy for LLMs to improve DSL program generation even when only a limited set of examples is available.
Recent studies have begun investigating the use of LLMs for DSL generation. For tools with simple DSLs, such as a search function on Google/Bing, a calculator, or a to-do list, the current ChatGPT Plugin11 is hypothesized to operate as a zero-shot/few-shot semantic parser. Toolformer [61] goes a step further by training an LLM to learn the appropriate context and usage for these simple-DSL tools. More recent work has explored few-shot prompting along with API descriptions for a broader array of tools [54], including for multimodal reasoning [82; 69; 88]. In contrast, our study focuses on domains that necessitate a more complex DSL and consequently require a greater reasoning capacity from the LLM. Other works focus on investigating how model scale [55] and retrievers [91; 44] affect in-context learning. There has also been work on grammar-constrained decoding with LLMs for semantic parsing [62; 65; 53], which serve as baselines in our empirical study.
Footnote 11: [https://platform.openai.com/docs/plugins/examples](https://platform.openai.com/docs/plugins/examples)
Neural grammars. Grammar prompting can also be seen as a "fully LLM" instantiation of the line of work on neural parameterizations of symbolic grammars [34; 16; 41; 40; 35; 95; 86; 87]. Indeed, our approach to semantic parsing essentially uses prompt-based learning to define a quasi-synchronous grammar [67; 74] whose rules dynamically depend on the source sentence. Concretely, in contrast to recent works which embed learnable neural components within synchronous grammars [39; 23; 73], grammar prompting relies on the implicit in-context learning capabilities of LLMs for the learning component.
Grammar-based molecule generation.Grammar-based methods have gained significant interest in the realm of molecule generation, offering advantages in interpretability, data-efficiency, and controllability. One line of research involves integrating generic SMILES grammars with neural networks to generate syntactically meaningful molecules [43; 14]. Another approach centers on data-driven induction of grammars for generation [29; 37]. Our work aligns with the former, viewing grammar prompting as a straightforward method of integrating grammar into an LLM without the need for additional training.
LLMs for planning.Recently, LLMs have increasingly been utilized for planning in embodied agents, given their potential for extensive world knowledge and strong reasoning abilities. When given goals expressed in natural language in household environments, earlier work [3; 63; 33; 46] directly prompted LLMs to sequence executable actions. However, in PDDL domains, where the desired action sequences are much longer, recent work [66; 71] has found LLMs to underperform classical planners in PDDL planning. Grammar prompting represents a promising strategy for augmenting existing planners with LLMs. Future work could further integrate LLMs with planning by exploiting more reasoning capacities from LLMs in addition to existing efforts such as translating between problems and PDDL models [47], corrective re-prompting [56]. This might involve inducing action models [4], macro-actions [8], sub-tasks [19] or generalized planning [84].
## 7 Conclusion
We propose grammar prompting as a simple approach for improving the few-shot DSL generation with large language models. Experiments across a range of structured languages including DSLs for semantic parsing (SMCalFlow, GeoQuery, Overnight), PDDL planning (action DSL), and molecule generation (SMILES), show that grammar prompting can improve upon standard prompting baselines. The encouraging results in semantic parsing indicate its potential to assist LLMs with tool usage, and the promising results in other domains indicate that grammar prompting can enable application of LLMs in domains that intrinsically depend on DSLs.
## Acknowledgments
We thank Jacob Andreas, Gabriel Grand, Linlu Qiu, Tom Silver, and Hunter Lang for helpful discussion and feedback. This study was supported by funds from the Google-MIT research collaborations program and the GIST-MIT joint research program.
|
2304.10887 | Positive Solutions for Fractional p- Laplace Semipositone Problem with
Superlinear Growth | We consider a semipositone problem involving the fractional $p$ Laplace
operator of the form \begin{equation*} \begin{aligned} (-\Delta)_p^s u &=\mu(
u^{r}-1) \text{ in } \Omega,\\ u &>0 \text{ in }\Omega,\\ u &=0 \text{ on
}\Omega^{c}, \end{aligned} \end{equation*} where $\Omega$ is a smooth bounded
convex domain in $\mathbb{R}^N$, $p-1<r<p^{*}_{s}-1$, where
$p_s^{*}:=\frac{Np}{N-ps}$, and $\mu$ is a positive parameter. We study the
behaviour of the barrier function under the fractional $p$-Laplacian and use
this information to prove the existence of a positive solution for small $\mu$
using degree theory. Additionally, the paper explores the existence of a ground
state positive solution for a multiparameter semipositone problem with critical
growth using variational arguments. | R. Dhanya, Ritabrata Jana, Uttam Kumar, Sweta Tiwari | 2023-04-21T11:03:20Z | http://arxiv.org/abs/2304.10887v1 | # Positive solutions for fractional p- Laplace semipositone problem with superlinear growth
###### Abstract.
We consider a semipositone problem involving the fractional \(p\) Laplace operator of the form
\[(-\Delta)_{p}^{s}u =\mu(u^{r}-1)\text{ in }\Omega,\] \[u >0 \text{ in }\Omega,\] \[u =0 \text{ on }\Omega^{c},\]
where \(\Omega\) is a smooth bounded convex domain in \(\mathbb{R}^{N}\), \(p-1<r<p_{s}^{*}-1\), where \(p_{s}^{*}:=\frac{Np}{N-ps}\), and \(\mu\) is a positive parameter. We study the behaviour of the barrier function under the fractional \(p\)-Laplacian and use this information to prove the existence of a positive solution for small \(\mu\) using degree theory. Additionally, the paper explores the existence of a ground state positive solution for a multiparameter semipositone problem with critical growth using variational arguments.
\({}^{1}\)Indian Institute of Science Education and Research, Thiruvananthapuram 695551, India
\({}^{2}\)Department of Mathematics, Indian Institute of Technology, Guwahati-781039, India
\({}^{\mathbf{A}}\) e-mail: [email protected]. \({}^{\mathbf{B}}\) e-mail: [email protected]. \({}^{\mathbf{C}}\) e-mail: [email protected]. \({}^{\mathbf{D}}\) e-mail: [email protected].
**Acknowledgments:** R. Dhanya was supported by INSPIRE faculty fellowship with grant number DST/INSPIRE/04/2015/003221 when the work was being carried out. Ritabrata Jana is currently supported by the Prime Minister Research Fellowship during the execution of this research.
## 1. Introduction
The purpose of this paper is to study the behavior of a barrier function near the boundary of \(\Omega\) when acted upon by the fractional \(p\)-Laplacian. The novelty of this work lies in overcoming the difficulties, arising from the global integration as well as the nonlinearity of the operator, in obtaining this pointwise bound. We anticipate that this estimate may have additional applications, such as providing an effective upper bound for solutions by utilizing relevant comparison principles and thereby establishing regularity results. We next state our main theorem, which is proved in Section 5 of this paper.
**Theorem 1.1**.: _For \(p\geq 2\), there exists \(\mu_{0}>0\) such that the problem \((P^{\mu})\) admits a positive solution for \(\mu\in(0,\mu_{0})\) and \(p-1<r<p_{s}^{*}-1\)._
The use of a priori estimates for nonlinear equations is known to be a vital tool in proving the existence of solutions to nonlinear problems. Such an estimate is often used to verify the assumptions of Leray-Schauder degree theory about the existence of a fixed point. Chen et al. [10] demonstrated the effectiveness of this approach in proving the existence of solutions to the fractional Laplace equation. Barrios et al. [2] examined more general nonlocal operators and considered viscosity solutions of the problem in the presence of a gradient term. Both proofs rely on constructing a barrier function and analyzing its behavior near the boundary. In this article, we use the idea of the Gidas-Spruck translated function to obtain the uniform \(L^{\infty}\) estimate for viscosity solutions by studying a suitable barrier function under the fractional \(p\)-Laplacian. It is well known that a Gidas-Spruck type estimate reduces the question of a uniform \(L^{\infty}\) bound to that of a Liouville type result. In order to obtain the results outlined in Theorem 1.1, we must rely on a nonexistence assumption, denoted \((\mathcal{IIA})\), which is similar to the nonexistence assumption made by Brasco et al. in [6]. We consider a sub-critical exponent problem with \(p-1<q<p_{s}^{*}-1\) either when \(\mathcal{H}\) is the entire space \(\mathbb{R}^{N}\) or a half-space in \(\mathbb{R}^{N}\), given by
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}u&=u^{q}\ \mbox{in}\ \mathcal{H},\\ u&>0\ \ \mbox{in}\ \mathcal{H};\\ u&=0\ \ \mbox{on}\ \mathbb{R}^{N}\setminus\mathcal{H}.\end{array}\right. \tag{1.3}\]
* \((\mathcal{IIA})\): Problem (1.3) does not admit a nontrivial viscosity solution in \(C^{\alpha}(\mathcal{H})\).
Although to the best of our knowledge this result has not been proven yet, we have a strong basis for believing that it is a plausible assumption. Specifically, we note that analogous results have been established both for the fractional Laplacian (i.e., for \(0<s<1\) and \(p=2\)) in Theorems 1.1 and 1.2 of Quaas and Xia [31], and for the local case (\(-\Delta_{p}\), \(1<p<\infty\)) in Theorem 1.1 and Lemma 2.8 of Zou [34]. Therefore, since nonexistence results are already known for these special cases, it is reasonable to assume \((\mathcal{IIA})\).
In a recent preprint, Lopera et al. [27] investigated a semipositone problem with superlinear nonlinearity that is similar to \((P^{\mu})\). They utilized the variational method to establish the existence of a nonnegative solution when \(r\) belongs to the interval \((p-1,p_{s}^{*}-1)\) and \(\mu\) is sufficiently small. However, their findings only guarantee the positivity of the solution for specific values of \(r\) and small \(\mu\). In contrast, our solution to problem \((P^{\mu})\) does not impose any such restrictions and guarantees the positivity of the solution for all values of \(r\).
In the last section of this article, we consider a multiparameter semipositone problem involving critical Sobolev exponent given as below:
\[\left.\begin{array}{rl}(-\Delta)_{p}^{s}u&=\lambda u^{p-1}+\mu(u^{p_{s}^{*}- 1}-1)\ \mbox{in}\ \Omega,\\ u&>0\
When \(\lambda=0\) and \(\mu>0\), the problem (1.4) reduces to (1.1) for \(r=p_{s}^{*}-1\). We observe that the scaling \(u\mapsto\mu^{\frac{1}{p_{s}^{*}-p}}u\) transforms the equation into the following critical semipositone nonlocal problem, which we call \((P_{\lambda}^{\mu})\):
\[(P_{\lambda}^{\mu})\left\{\begin{array}{rcl}(-\Delta)_{p}^{s}u&=\lambda u^{p- 1}+u^{p_{s}^{*}-1}-\mu&\text{in }\Omega,\\ u&>0&\text{in }\Omega,\\ u&=0&\text{on }\Omega^{c}.\end{array}\right. \tag{1.5}\]
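For the reader's convenience, here is a short check of this scaling (our computation, not part of the original text): substituting \(u=\mu^{-\frac{1}{p_{s}^{*}-p}}w\) into (1.4) and dividing through by \(\mu^{-\frac{p-1}{p_{s}^{*}-p}}\) gives

\[(-\Delta)_{p}^{s}w=\lambda w^{p-1}+w^{p_{s}^{*}-1}-\mu^{\frac{p_{s}^{*}-1}{p_{s}^{*}-p}}\quad\text{in }\Omega,\]

so that \(w\) solves (1.5) with the parameter \(\mu^{\frac{p_{s}^{*}-1}{p_{s}^{*}-p}}\), which is then relabeled as \(\mu\).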
In (1.5), \(\lambda,\mu>0\) are parameters. For a given \(\lambda>0\), the solutions of problem (1.5) are the critical points of the energy functional \(E_{\mu}:D_{0}^{s,p}(\Omega)\to\mathbb{R}\) defined by
\[E_{\mu}(u)=\frac{1}{p}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u( y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy-\frac{\lambda}{p}\int_{\Omega}u^{p}\,dx-\frac{1}{p _{s}^{*}}\int_{\Omega}u^{p_{s}^{*}}\,dx+\int_{\Omega}\mu u\,dx.\]
All the weak solutions of problem (1.5) lie on the set
\[\eta_{\mu}=\left\{u\in D_{0}^{s,p}(\Omega):u>0\text{ in }\Omega\ \text{ and }\ \int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy=\int_{\Omega}(\lambda u^{p}+u^{p_{s}^{*}}-\mu u)\,dx\right\}\]
A weak solution that minimizes \(E_{\mu}\) on \(\eta_{\mu}\) is a ground state solution for (1.5). Let
\[\lambda_{1}=\inf_{u\in D_{0}^{s,p}(\Omega)\setminus\{0\}}\frac{\int_{\mathbb{ R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy}{ \int_{\Omega}|u(x)|^{p}\,dx} \tag{1.6}\]
is the first Dirichlet eigenvalue of the fractional \(p\)-Laplacian, which is positive. We prove the following.
**Theorem 1.2**.: _For \(p\geq 2\), \(N\geq sp^{2}\) and \(\lambda\in(0,\lambda_{1})\), there exists \(\mu^{*}>0\) such that \(\mu\in(0,\mu^{*})\), problem (1.5) has a ground state solution \(u_{\mu}\in C_{d}^{0,\alpha}(\overline{\Omega})\) for some \(\alpha\in(0,s]\)._
## 2. Preliminaries
### Function spaces
To begin, we will revisit the definitions of several Sobolev spaces with fractional orders that are utilized in this article. For a smooth, bounded domain \(\Omega\subset\mathbb{R}^{N}\) (where \(N\geq 2\)) we denote the standard \(L^{p}(\Omega)\) norm by \(|\cdot|_{L^{p}(\Omega)}\), where \(p\in[1,\infty]\). For a measurable function \(u:\mathbb{R}^{N}\to\mathbb{R}\) and for \(p\in(1,\infty)\) and \(s\in(0,1)\), let
\[[u]_{W^{s,p}(\mathbb{R}^{N})}:=\left(\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy\right)^{1/p}\]
be the Gagliardo seminorm. For \(sp<N\), we consider the space
\[W^{s,p}(\mathbb{R}^{N}):=\left\{u\in L^{p}(\mathbb{R}^{N}):[u]_{W^{s,p}( \mathbb{R}^{N})}<\infty\right\}.\]
Then \(W^{s,p}(\mathbb{R}^{N})\) is a Banach space with respect to the norm
\[\|u\|_{W^{s,p}(\mathbb{R}^{N})}=\left(\|u\|_{L^{p}(\mathbb{R}^{N})}^{p}+[u]_{W^{ s,p}(\mathbb{R}^{N})}^{p}\right)^{\frac{1}{p}}.\]
To address the Dirichlet boundary condition, we consider the space
\[W^{s,p}_{0}(\Omega):=\left\{u\in W^{s,p}(\mathbb{R}^{N}):u=0\text{ in }\mathbb{R}^{N} \setminus\Omega\right\},\]
which is a Banach space endowed with the norm \(\|\cdot\|=\|\cdot\|_{W^{s,p}(\mathbb{R}^{N})}\). Moreover the embedding \(W^{s,p}_{0}(\Omega)\hookrightarrow L^{r}(\Omega)\) is continuous for \(1\leq r\leq p_{s}^{*}\) and compact for \(1\leq r<p_{s}^{*}\). Due to continuous embedding of \(W^{s,p}_{0}(\Omega)\hookrightarrow L^{r}(\Omega)\) for \(1\leq r\leq p_{s}^{*}\), we define the equivalent norm on \(W^{s,p}_{0}(\Omega)\) as
\[\|u\|_{W^{s,p}_{0}}:=\left(\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u( x)-u(y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy\right)^{1/p}.\]
In order to handle the critical growth in problem (1.4), we will work with the following function spaces
\[D^{s,p}(\mathbb{R}^{N}):=\left\{u\in L^{p_{s}^{*}}(\mathbb{R}^{N}):[u]_{W^{s, p}(\mathbb{R}^{N})}<\infty\right\},\]
\[D^{s,p}_{0}(\Omega):=\left\{u\in D^{s,p}(\mathbb{R}^{N}):u=0\text{ in }\mathbb{R}^{N}\setminus\Omega\right\}.\]
Note that for the bounded domain \(\Omega\), \(D^{s,p}_{0}(\Omega)=W^{s,p}_{0}(\Omega).\) Next, we recall some weighted Hölder spaces. Let the distance function \(d:\overline{\Omega}\to\mathbb{R}_{+}\) be defined by
\[d(x):=\text{dist}(x,\partial\Omega),\ x\in\overline{\Omega}.\]
The weighted Hölder type spaces are defined as follows:
\[C^{0}_{d}(\overline{\Omega}):=\left\{u\in C^{0}(\overline{\Omega}):\frac{u}{d ^{s}}\text{ admits a continuous extension to }\overline{\Omega}\right\},\]
\[C^{0,\alpha}_{d}(\overline{\Omega}):=\left\{u\in C^{0}(\overline{\Omega}): \frac{u}{d^{s}}\text{ admits a }\alpha\text{ -H\"{o}lder continuous extension to }\overline{\Omega}\right\}\]
equipped with the norms
\[\|u\|_{C^{0}_{d}(\overline{\Omega})}:=\|\frac{u}{d^{s}}\|_{L^{ \infty}(\Omega)},\] \[\|u\|_{C^{0,\alpha}_{d}(\overline{\Omega})}:=\|u\|_{C^{0}_{d}( \overline{\Omega})}+\sup_{x,y\in\overline{\Omega},\,x\neq y}\frac{|u(x)/d^{s}( x)-u(y)/d^{s}(y)|}{|x-y|^{\alpha}},\]
respectively. The embedding \(C^{0,\alpha}_{d}(\overline{\Omega})\hookrightarrow C^{0}_{d}(\overline{\Omega})\) is compact, for all \(\alpha\in(0,1).\)
### Notions of solutions
Next, we specify the two notions of the solutions, viz. weak and the viscosity solutions of the following boundary value problem:
\[\left\{\begin{array}{rl}(-\Delta)^{s}_{p}u(x)&=h(x,u)&\text{ in }\Omega,\\ u(x)&=0&\text{ on }\Omega^{c}.\end{array}\right. \tag{2.7}\]
We also consider the equivalence between these two notions of solution and refer to [24] and [3] for the details. Let \(p\in(1,\infty)\) and let \(h:\Omega\times\mathbb{R}\to\mathbb{R}\) be a continuous function which satisfies the growth condition
\[|h(x,t)|\leq\gamma(|t|)+\phi(x), \tag{2.8}\]
where \(\gamma\geq 0\) is continuous and \(\phi\in L^{\infty}_{loc}(\Omega).\) We define,
\[L^{p-1}_{sp}(\mathbb{R}^{N}):=\left\{u\in L^{p-1}_{loc}(\mathbb{R}^{N}):\int_ {\mathbb{R}^{N}}\frac{|u(x)|^{p-1}}{(1+|x|)^{N+sp}}\,dx<\infty\right\}.\]
**Definition 2.1**.: _A function \(u\in W^{s,p}(\mathbb{R}^{N})\cap L^{p-1}_{sp}(\mathbb{R}^{N})\) is a weak super solution (sub solution) of (2.7) if_
\[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))( \varphi(x)-\varphi(y))}{|x-y|^{N+sp}}\,dx\,dy\geq(\leq)\int_{\Omega}h(x,u) \varphi\,dx.\]
_for every \(\varphi\in W^{s,p}_{0}(\Omega)\) and \(\varphi\geq 0.\)_
We say that \(u\) is a weak solution of (2.7) if it is both a weak supersolution and subsolution to the problem.
Next, we define a viscosity sub and super solution of (2.7) as given in [3]. Let \(D\subset\Omega\) be an open set; we define
\[C^{2}_{\beta}(D):=\left\{u\in C^{2}(D):\sup_{x\in D}\left(\frac{\min\{d_{u}(x),1\}^{\beta-1}}{|\nabla u(x)|}+\frac{|D^{2}u(x)|}{d_{u}(x)^{\beta-2}}\right)< \infty\right\},\]
where \(d_{u}\) is the distance from the set of critical points of \(u\) denoted as \(N_{u}\), that is
\[d_{u}(x):=dist(x,N_{u}),\qquad N_{u}:=\{x\in\Omega:\nabla u(x)=0\}.\]
Motivation for the definition of the space \(C^{2}_{\beta}(D)\) comes from [24].
**Definition 2.2**.: _[_3_, Definition 2.2]_ _A function \(u\) is a viscosity super-solution (subsolution) of (2.7) if_
1. \(u<+\infty\ (u>-\infty)\) _a.e. in_ \(\mathbb{R}^{N}\)_,_ \(u>-\infty\ (u<+\infty)\) _a.e. in_ \(\Omega\)_._
2. \(u\) _is lower (upper) semicontinuous in_ \(\Omega\)_._
3. _If_ \(\phi\in C^{2}(B(x_{0},r))\cap L^{p-1}_{sp}(\mathbb{R}^{N})\) _for some_ \(B(x_{0},r)\subset\Omega\) _such that_ \(\phi(x_{0})=u(x_{0})\)_,_ \(\phi\leq u\) _(_\(\phi\geq u\)_) in_ \(B(x_{0},r)\)_,_ \(\phi=u\) _in_ \(\Omega\setminus B(x_{0},r)\)_, and one of the following conditions holds:_ 1. \(p>\frac{2}{2-s}\) _or_ \(\nabla\phi(x_{0})\neq 0\)_,_ 2. \(1<p\leq\frac{2}{2-s}\)_,_ \(\nabla\phi(x_{0})=0\) _such that_ \(x_{0}\) _is an isolated critical point of_ \(\phi\)_, and_ \(\phi\in C^{2}_{\beta}(B(x_{0},r))\) _for some_ \(\beta>\frac{sp}{p-1}\)_,_ _then_ \[(-\Delta)^{s}_{p}\phi(x_{0})\geq(\leq)\,h(x_{0},\phi(x_{0})).\]
4. \(u_{-}:=\max\{-u,0\}\ (u_{+}:=\max\{u,0\})\) _belongs to_ \(L^{p-1}_{sp}(\mathbb{R}^{N})\)_._
_We say that \(u\) is a viscosity solution of (2.7) if it is both a viscosity supersolution and subsolution to the problem._
**Notations:** The paper uses various notations, which we will clarify here. Sections 3 and 4 consider the case where \(p>\frac{2}{2-s}\). However, for sections 5 and 6, the paper restricts \(p\) to the interval \((2,\infty)\) due to the lack of improved weighted regularity results. Unless stated otherwise, \(C\), \(C_{i}\), \(C_{K,u}\), and other similar notations denote positive generic constants, which may vary in value within the same line. For simplicity, the notation \(a^{p-1}\) denotes \(a^{p-1}=|a|^{p-2}a\) for any real number \(a\).
## 3. Pointwise estimates for a barrier function under the \(s\)-fractional \(p\)-Laplace operator
In this section, we prove some important estimates which are used in subsequent sections to prove the main results on a nonlocal superlinear semipositone problem with subcritical growth.
We begin this section with the important observation that if \(u\) is smooth enough, then the principal value \(P.V.\) in the definition of the fractional \(p\)-Laplacian can be replaced by an integral over \(\mathbb{R}^{N}\) when \(p>\frac{2}{2-s}.\) We can also define
\[(-\Delta)^{s}_{p}u(x)=P.V.\int_{\mathbb{R}^{N}}\frac{(u(x)-u(x+z))^{p-1}+(u(x)- u(x-z))^{p-1}}{|z|^{N+sp}}\,dz. \tag{3.9}\]
The equivalence of the definitions (1.2) and (3.9) can be proved using a change of variable.
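For the reader's convenience, here is a sketch of that change of variable (up to the normalization constant chosen in (1.2)): substituting \(y=x+z\) and then symmetrizing via \(z\mapsto-z\), which leaves \(|z|^{N+sp}\) invariant, one obtains
\[2\,P.V.\int_{\mathbb{R}^{N}}\frac{(u(x)-u(x+z))^{p-1}}{|z|^{N+sp}}\,dz=P.V.\int_{\mathbb{R}^{N}}\frac{(u(x)-u(x+z))^{p-1}+(u(x)-u(x-z))^{p-1}}{|z|^{N+sp}}\,dz.\]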
**Lemma 3.1**.: _For \(\frac{2}{2-s}<p<\infty,\) suppose that \(u\in L^{\infty}(\mathbb{R}^{N})\cap C^{1,1}_{loc}(\Omega),\) then for \(x\in\Omega,\)_
\[(-\Delta)^{s}_{p}u(x)=\int_{\mathbb{R}^{N}}\frac{(u(x)-u(x+z))^{p-1}+(u(x)-u( x-z))^{p-1}}{|z|^{N+sp}}\,dz.\]
Proof.: Let \(p\geq 2\) and let \(x\) belong to a compact subset \(K\) of \(\Omega.\) Using [21, Lemma 2.11], for a proper choice of \(\gamma\in(0,1),\) we have
\[\begin{split}\int_{B(0,R_{K})}\frac{|(u(x)-u(x+z))^{p-1}+(u(x)-u(x-z))^{p-1}|}{|z|^{N+sp}}\,dz\\ \leq C_{K,u}\int_{B(0,R_{K})}|z|^{\gamma+p-1-N-sp}\,dz<\infty.\end{split} \tag{3.10}\]
Since \(u\in L^{\infty}(\mathbb{R}^{N}),\) it is easy to see that
\[\int_{B^{c}(0,R_{K})}\frac{|(u(x)-u(x+z))^{p-1}+(u(x)-u(x-z))^{p-1}|}{|z|^{N+sp}}\,dz<\infty.\]
Therefore \(\frac{|(u(x)-u(x+z))^{p-1}+(u(x)-u(x-z))^{p-1}|}{|z|^{N+sp}}\in L^{1}(\mathbb{R}^{N}),\) and hence by the dominated convergence theorem we can write
\[\lim_{\epsilon\to 0}\int_{z\in B^{c}(0,\epsilon)}\frac{(u(x)-u(x+z))^{p-1}+(u(x) -u(x-z))^{p-1}}{|z|^{N+sp}}dz=\] \[\int_{\mathbb{R}^{N}}\frac{(u(x)-u(x+z))^{p-1}+(u(x)-u(x-z))^{p-1} }{|z|^{N+sp}}dz.\]
Now if \(p\in(\frac{2}{2-s},2)\) then again using the second pointwise estimate in [21, Lemma 2.11], and arguing as in the above case, for a suitable choice of \(\gamma\), we can prove that
\[\frac{|(u(x)-u(x+z))^{p-1}+(u(x)-u(x-z))^{p-1}|}{|z|^{N+ps}}\in L^{1}(\mathbb{R }^{N})\]
and thus the conclusion follows as before.
### Barrier function under the fractional \(p\)-Laplacian
Our aim in this section is to construct a barrier function and then study the behavior of the barrier function under the fractional \(p\)-Laplacian. Later this result is used to obtain the a priori \(L^{\infty}\) estimate in Theorem 4.2. To construct the barrier function, we start by considering a smooth perturbation of the distance function \(d(x):=\text{dist}(x,\partial\Omega)\), where \(x\in\Omega\) is a point in a smooth, bounded domain \(\Omega\subset\mathbb{R}^{N}\) with \(N\geq 2\). We define the set \(\Omega_{\delta}=\{x\in\Omega:d(x,\partial\Omega)<\delta\}\) for some small \(\delta>0\) such that \(d(x)\) is well-defined and \(C^{2}\) in \(\Omega_{\delta}\). The barrier function \(\xi(x)\) is then constructed as follows:
\[\xi(x)=\begin{cases}d^{\beta}(x)&\text{if }x\in\Omega_{\delta}\\ r(x)&\text{if }x\in\Omega\setminus\Omega_{\delta}\\ 0&\text{if }x\in\Omega^{c}\end{cases} \tag{3.11}\]
for \(\beta>0\) and a function \(r\) such that \(\xi\) is positive and \(C^{2}\) in \(\Omega\) with
\[r(x)\geq d^{\beta}(x)\text{ for }x\in\Omega\setminus\Omega_{\delta}. \tag{3.12}\]
The proof of the next lemma is inspired by Section 3 of [17], where an estimate of a similar type is proved for the fractional Laplacian. However, since the operator we deal with is nonlinear, we estimate the integrals by a different technique tailored to the nonlinear setting.
**Lemma 3.2**.: _Let \(\frac{2}{2-s}<p<\infty\) then there exist \(\delta>0\), \(C>0\) and \(0<\beta_{0}<\frac{sp}{p-1}<\infty\) such that for all \(\beta\in(0,\beta_{0})\)_
\[-(-\Delta)^{s}_{p}\xi(x)\leq-Cd(x)^{\beta(p-1)-sp}\quad\text{in}\quad\Omega_{ \delta}.\]
Proof.: Throughout the proof, we assume that \(x\in\Omega_{\delta}\). For simplicity in notation, we write fractional \(p\)-Laplacian as
\[-(-\Delta)^{s}_{p}\xi(x)=\int_{\mathbb{R}^{N}}\frac{\varrho_{+}(\xi,x,y)+\varrho_ {-}(\xi,x,y)}{|y|^{N+sp}}\,dy,\quad x\in\mathbb{R}^{N}. \tag{3.13}\]
where
\[\varrho_{+}(\xi,x,y)=(\xi(x+y)-\xi(x))^{p-1}\]
and
\[\varrho_{-}(\xi,x,y)=(\xi(x-y)-\xi(x))^{p-1}.\]
Also
\[-(-\Delta)^{s}_{p}\xi(x)=\int_{B(0,\delta)} \frac{\varrho_{+}(\xi,x,y)+\varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy\] \[+\int_{B(0,\delta)^{c}}\frac{\varrho_{+}(\xi,x,y)+\varrho_{-}( \xi,x,y)}{|y|^{N+sp}}\,dy\]
Since \(\xi\) is an \(L^{\infty}\) function, it is easy to show that there is a constant \(C_{1}\) such that
\[\int_{y\in B^{c}(0,\delta)}\frac{|\varrho_{+}(\xi,x,y)+\varrho_{-}(\xi,x,y)|} {|y|^{N+sp}}\,dy\leq C_{1}. \tag{3.14}\]
Now we will estimate the integral over \(B(0,\delta)\). We write
\[\int_{B(0,\delta)}\frac{\varrho_{+}(\xi,x,y)+\varrho_{-}(\xi,x,y)}{|y|^{N+sp} }\,dy=I_{1}(x)+I_{2_{+}}(x)+I_{2_{-}}(x)+I_{3}(x), \tag{3.15}\]
where
\[\begin{split} I_{1}(x)=&\int_{D_{1}}\frac{-2\xi(x)^ {p-1}}{|y|^{N+sp}}\,dy,\\ I_{2_{+}}(x)=&\int_{D_{2_{+}}}\frac{\varrho_{+}(\xi,x,y)-\xi(x)^{p-1}}{|y|^{N+sp}}\,dy,\\ I_{2_{-}}(x)=&\int_{D_{2_{-}}}\frac{\varrho_{-}(\xi,x,y)-\xi(x)^{p-1}}{|y|^{N+sp}}\,dy,\\ I_{3}(x)=&\int_{D_{3}}\frac{\varrho_{+}(\xi,x,y)+ \varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy\end{split} \tag{3.16}\]
with the following domains of integration.
\[\begin{split} D_{1}=&\{y\in B(0,\delta):x+y\notin \Omega\quad\text{and}\quad x-y\notin\Omega\},\\ D_{2_{\pm}}=&\{y\in B(0,\delta):x\pm y\in\Omega \quad\text{and}\quad x\mp y\notin\Omega\},\\ D_{3}=&\{y\in B(0,\delta):x+y\in\Omega\quad\text{and} \quad x-y\in\Omega\}.\end{split} \tag{3.17}\]
For notational simplicity we denote \(d=d(x)\), whenever there is no confusion. We shall show that each of the integrals \(I_{i}\) is bounded above by \(-Cd^{\beta(p-1)-sp}.\) The first integral \(I_{1}(x)\) can be estimated similar to [17, Lemma 3.2] and we obtain
\[I_{1}(x)\leq-C_{2}d^{\beta(p-1)-sp}. \tag{3.18}\]
We will estimate \(I_{2_{+}}(x)\) in detail; \(I_{2_{-}}(x)\) follows in a similar way. We start the estimation of \(I_{2_{+}}(x)\) by making the important observation that \(dist(d^{-1}x+z,d^{-1}\partial\Omega)\leq d^{-1}(|z|+\inf_{y\in\partial\Omega}|x-y|).\) This implies for \(0<\beta\leq\beta_{0}<\frac{sp}{p-1},\)
\[dist(d^{-1}x+z,d^{-1}\partial\Omega)^{\beta}-1\leq|z|^{\beta_{0}}.\]
The constant \(\beta_{0}\) appearing in the above expression will be chosen later. We split the domain \(d^{-1}D_{2_{+}}\) into two parts,
\[d^{-1}D_{2_{+}}=(d^{-1}D_{2_{+}}\cap B(0,R))\cup(d^{-1}D_{2_{+}}\cap B(0,R)^{c })=:B_{1,R}\cup B_{2,R}.\]
Then we can write \(I_{2_{+}}(x)\) as
\[I_{2_{+}}(x) =d^{\beta(p-1)-sp}\int_{B_{1,R}}\frac{(dist(d^{-1}x+z,d^{-1}\partial\Omega)^{\beta}-1)^{p-1}}{|z|^{N+sp}}\,dz \tag{3.19}\] \[+d^{\beta(p-1)-sp}\int_{B_{2,R}}\frac{(dist(d^{-1}x+z,d^{-1}\partial\Omega)^{\beta}-1)^{p-1}}{|z|^{N+sp}}\,dz\] \[-d^{\beta(p-1)-sp}\int_{d^{-1}D_{2_{+}}}\frac{1}{|z|^{N+sp}}\,dz.\]
The last term is easily bounded above by \(-C_{3}d^{\beta(p-1)-sp}\) for some \(C_{3}>0.\) We will find proper upper bounds for the first two integrals in terms of \(C_{3}\). Observe that
\[\int_{B_{2,R}}\frac{(dist(d^{-1}x+z,d^{-1}\partial\Omega)^{\beta}-1)^{p-1}}{|z|^{N+sp}}\,dz\leq\int_{B_{2,R}}\frac{|z|^{\beta_{0}(p-1)}}{|z|^{N+sp}}\,dz\leq\alpha_{N}R^{\beta_{0}(p-1)-sp},\]
where \(\alpha_{N}\) denotes the surface measure of the unit sphere in \(\mathbb{R}^{N}\). Now we choose \(R\) large enough so that \(\alpha_{N}R^{\beta_{0}(p-1)-sp}=\frac{C_{3}}{4}.\) We also have
\[\lim_{\beta\to 0}\int_{B_{1,R}}\frac{(dist(d^{-1}x+z,d^{-1}\partial\Omega)^{\beta}-1)^{p-1}}{|z|^{N+sp}}\,dz=0.\]
Now we shall choose \(\beta_{0}\) small enough so that
\[\int_{B_{1,R}}\frac{(dist(d^{-1}x+z,d^{-1}\partial\Omega)^{\beta}-1)^{p-1}}{|z|^{N+sp}}\,dz<\frac{C_{3}}{4}\text{ for all }0<\beta<\beta_{0}.\]
Combining all the above arguments, there exist \(\beta_{0}\ll 1\) and \(C_{4}>0\) such that
\[I_{2_{+}}(x)\leq-C_{4}d^{\beta(p-1)-sp}\text{ for }\beta\in(0,\beta_{0}) \tag{3.20}\]
Finally, we will estimate \(I_{3}(x)\) by splitting the domain of its integration
\[D_{3}=B(0,\epsilon d)\cup(D_{3}\setminus B(0,\epsilon d))=:B_{1}\cup B_{2}.\]
We rewrite \(I_{3}(x)\) as
\[\begin{split} I_{3}(x)=\int_{B_{2}}\frac{\varrho_{+}(\xi,x,y)+ \varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy+\int_{B_{1}}&\frac{ \varrho_{+}(\xi,x,y)}{|y|^{N+sp}}\,dy\\ &+\int_{B_{1}}\frac{\varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy\end{split} \tag{3.21}\]
On \(B_{2}\) the estimation of \(I_{3}(x)\) is similar to that of \(I_{2_{+}}(x)\); hence
\[\int_{B_{2}}\frac{\varrho_{+}(\xi,x,y)+\varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy \leq-C_{4}d^{\beta(p-1)-sp} \tag{3.22}\]
We only estimate \(\int_{B_{1}}\frac{\varrho_{+}(\xi,x,y)}{|y|^{N+sp}}\,dy\), as the estimation of \(\int_{B_{1}}\frac{\varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy\) follows by similar arguments.
Since \(\xi\) is a \(C^{2}\) function, using Taylor's expansion, we have
\[\xi(x+y)=\xi(x)+\nabla\xi(x)\cdot y+y^{\top}\cdot D^{2}\xi(\alpha)\cdot y\quad \text{for some}\quad\alpha=x+\theta y,\ \theta\in(0,1).\]
We denote
\[l(x+y)=\xi(x)+\nabla\xi(x)\cdot y,\]
so that we have
\[\xi(x+y)-l(x+y)=y^{\top}\cdot D^{2}\xi(\alpha)\cdot y.\]
Similarly, we have
\[\begin{split}\xi(x)=\xi(x+y)+\nabla\xi(x+y)\cdot(-y)+(-y)^{\top} \cdot& D^{2}\xi(\gamma)\cdot(-y)\\ &\text{for some}\ \ \gamma=x+y-\theta y,\theta\in(0,1).\end{split}\]
We denote
\[l(x)=\xi(x+y)+\nabla\xi(x+y)\cdot(-y),\]
so we have
\[\xi(x)-l(x)=(-y)^{\top}\cdot D^{2}\xi(\gamma)\cdot(-y)\]
and
\[l(x+y)-l(x)=\nabla\xi(x)\cdot y.\]
Let \(g(t):=|t|^{p-2}t\). Using [24, Lemmas 3.1 and 3.4], we have
\[\begin{split}&\left|\int_{B(0,\epsilon d)}\frac{(\xi(x+y)-\xi(x))^{p- 1}}{|y|^{N+sp}}\,dy\right|\\ &\leq\int_{B(0,\epsilon d)}\frac{|g(\xi(x+y)-\xi(x))-g(l(x+y)-l( x))|}{|y|^{N+sp}}\,dy\\ &\leq C_{5}\int_{B(0,\epsilon d)}\frac{(|l(x+y)-l(x)|+|\xi(x)-l( x)|)^{p-2}|\xi(x)-l(x)|}{|y|^{N+sp}}\,dy\\ &\leq C_{5}\int_{B(0,\epsilon d)}\frac{(|\nabla\xi(x)\cdot y|+|(- y)^{\top}\cdot D^{2}\xi(\gamma)\cdot(-y)|)^{p-2}|(-y)^{\top}\cdot D^{2}\xi( \gamma)\cdot(-y)|}{|y|^{N+sp}}\,dy.\end{split} \tag{3.23}\]
Since \(x\in\Omega_{\delta}\), \(\xi(x)=d^{\beta}(x)\). Then
\[\nabla\xi(x)=\beta d^{\beta-1}\nabla d(x)\text{ and }\frac{\partial^{2}\xi(x) }{\partial x_{i}\partial x_{j}}=\beta(\beta-1)d^{\beta-2}A_{ij}(x)\text{ for }1\leq i,j\leq N,\]
where
\[A_{ij}(x)=\frac{\partial d(x)}{\partial x_{i}}\frac{\partial d(x)}{\partial x _{j}}+\frac{d(x)}{\beta-1}\frac{\partial^{2}d(x)}{\partial x_{i}\partial x_{ j}}.\]
Then for \(\gamma=x+y-\theta y,\,\theta\in(0,1)\) and \(x\in\Omega_{\delta},y\in B(0,\epsilon d),\) we denote
\[M_{i,j}:=\sup_{\gamma}\beta(\beta-1)A_{ij}(\gamma),\]
Since \(|y|<\epsilon d(x)\) gives \(d(\gamma)\geq(1-\epsilon)d(x)\), we thus have \(|D^{2}\xi(\gamma)|\leq Md^{\beta-2}\) for some \(M>0\). Hence from (3.23), and with the change of variable \(y=d(x)z\), we have
\[\begin{split}&\left|\int_{B(0,\epsilon d)}\frac{(\xi(x+y)-\xi(x))^{p-1}}{|y|^{N+sp}}\,dy\right|\\ &\leq C_{6}d^{\beta(p-1)-sp}\int_{B(0,\epsilon)}\frac{(|\beta\nabla d(x)\cdot z|+M|z|^{2})^{p-2}M|z|^{2}}{|z|^{N+sp}}\,dz\\ &\leq C_{6}d^{\beta(p-1)-sp}\int_{0}^{\epsilon}\int_{S^{N-1}}\left(|\beta\nabla d(x)\cdot\omega|r+Mr^{2}\right)^{p-2}Mr^{1-sp}\,d\omega\,dr\\ &\leq C_{6}d^{\beta(p-1)-sp}\int_{0}^{\epsilon}\int_{S^{N-1}}\left(\frac{|\beta\nabla d(x)\cdot\omega|}{|\beta\nabla d(x)|}+\frac{Mr}{|\beta\nabla d(x)|}\right)^{p-2}Mr^{p-1-sp}|\beta\nabla d(x)|^{p-2}\,d\omega\,dr\\ &\leq C_{6}Md^{\beta(p-1)-sp}\int_{0}^{\epsilon}\left(1+\frac{Mr}{|\beta\nabla d(x)|}\right)^{p-2}r^{p-1-sp}|\beta\nabla d(x)|^{p-2}\,dr\end{split}\]
The last inequality here is obtained by applying [24, Lemma 3.5]. We can now directly use the estimates exactly as in [24, Lemma 3.6] and for the case \(p\geq 2\) we get
\[\bigg{|}\int_{B(0,\epsilon d)}\frac{(\xi(x+y)-\xi(x))^{p-1}}{|y|^{N +sp}}\,dy\bigg{|} \tag{3.24}\] \[\leq C_{5}Md^{\beta(p-1)-sp}\left(|\beta\nabla d(x)|^{p-2} \epsilon^{p-sp}+M^{p-2}\epsilon^{p-2+p(1-s)}\right)\] \[\leq C_{6}d^{\beta(p-1)-sp}\epsilon^{p-sp}\]
and for \(\frac{2}{2-s}<p<2\),
\[\left|\int_{B(0,\epsilon d)}\frac{(\xi(x+y)-\xi(x))^{p-1}}{|y|^{N+sp}}\,dy \right|\leq C_{6}d^{\beta(p-1)-sp}\epsilon^{p-2+p(1-s)}.\]
In both cases, choosing \(\epsilon\) small enough, we have
\[\int_{B_{1}}\frac{\varrho_{+}(\xi,x,y)}{|y|^{N+sp}}\,dy+\int_{B_{1}}\frac{ \varrho_{-}(\xi,x,y)}{|y|^{N+sp}}\,dy\leq\frac{C_{4}}{2}d^{\beta(p-1)-sp} \tag{3.25}\]
Using (3.21), (3.22) and (3.25), we conclude that
\[I_{3}(x)\leq-C_{7}d^{\beta(p-1)-sp} \tag{3.26}\]
This concludes the proof of the Lemma.
We define for \(x_{0}\in\partial\Omega,\ \tau>0\)
\[\Omega_{x_{0}}^{\tau}:=\left\{x\in\mathbb{R}^{N}:x_{0}+\tau x\in\Omega\right\}.\]
Also for the function \(\xi\) defined in (3.11), we set \(\xi_{x_{0}}^{\tau}(x):=\xi(x_{0}+\tau x)\) and define
\[d_{\tau}(x):=dist(x,\partial\Omega_{x_{0}}^{\tau})=\tau^{-1}dist(x_{0}+\tau x, \partial\Omega). \tag{3.27}\]
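The last equality in (3.27) follows by rescaling: since \(y\in\partial\Omega^{\tau}_{x_{0}}\) if and only if \(x_{0}+\tau y\in\partial\Omega\), writing \(z=x_{0}+\tau y\) gives
\[dist(x,\partial\Omega^{\tau}_{x_{0}})=\inf_{y\in\partial\Omega^{\tau}_{x_{0}}}|x-y|=\tau^{-1}\inf_{z\in\partial\Omega}|(x_{0}+\tau x)-z|=\tau^{-1}dist(x_{0}+\tau x,\partial\Omega).\]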
**Lemma 3.3**.: _Let \(s\in(0,1)\) and \(\frac{2}{2-s}<p<\infty\). Also let \(\beta=sp-\theta\) in (3.11) for \(\theta\in(sp-\beta_{0},sp)\) for some \(\beta_{0}\in(0,\frac{sp}{p-1})\). Then there exist \(c_{0},\delta>0\) such that_
\[(-\Delta)_{p}^{s}\xi_{x_{0}}^{\tau}\geq c_{0}d_{\tau}^{sp^{2}-2sp-\theta(p-1) }\quad\text{ in }\ (\Omega_{x_{0}}^{\tau})_{\delta},\ 0<\tau<1. \tag{3.28}\]
_Moreover, if \(u\in L^{\infty}(\mathbb{R}^{N})\) satisfies the following in the viscosity sense for some \(c_{1}>0\)_
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}u&\leq c_{1}d_{\tau}^{sp^{2}-2sp- \theta(p-1)}&\text{ in }\Omega_{x_{0}}^{\tau},\\ u&=0&\text{ on }(\Omega_{x_{0}}^{\tau})^{c}.\end{array}\right. \tag{3.29}\]
_then_
\[u(x)\leq c_{2}\left(c_{1}+\|u\|_{L^{\infty}(\Omega_{x_{0}}^{\tau})}\right)d_{ \tau}^{sp-\theta},\ x\in(\Omega_{x_{0}}^{\tau})_{\delta} \tag{3.30}\]
_for some \(c_{2}>0\) depending only on \(s,\delta,\theta,p\) and \(c_{0}\)._
Proof.: Observe that for \(x\in(\Omega^{\tau}_{x_{0}})_{\delta}\) we have \(x_{0}+\tau x\in(\Omega)_{\tau\delta}\subset(\Omega)_{\delta}\) as \(\tau<1\). The rest of the argument for establishing (3.28) follows from calculations similar to those in the proof of Lemma 3.2, taking \(\beta=sp-\theta\) and translating \(\xi(x)\) to \(\xi^{\tau}_{x_{0}}(x)\) and \((\Omega)_{\delta}\) to \((\Omega^{\tau}_{x_{0}})_{\delta}\).
Next, to obtain (3.30), we define \(v=R\xi^{\tau}_{x_{0}}\) where
\[R=\left(\frac{c_{1}}{c_{0}}\right)^{1/(p-1)}+(\tau\delta)^{\theta-sp}\|u\|_{L^ {\infty}(\Omega^{\tau}_{x_{0}})}.\]
and observe that
\[\left\{\begin{array}{rl}(-\Delta)^{s}_{p}v&\geq(-\Delta)^{s}_{p}u&\text{ in }(\Omega^{\tau}_{x_{0}})_{\delta},\\ u=v&=0&\text{ on }(\Omega^{\tau}_{x_{0}})^{c}\\ v&\geq u&\text{ in }\Omega^{\tau}_{x_{0}}\setminus(\Omega^{\tau}_{x_{0}})_{ \delta}.\end{array}\right. \tag{3.31}\]
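Here the first inequality in (3.31) follows, in the viscosity sense, from the \((p-1)\)-homogeneity of the operator together with (3.28) and (3.29): in \((\Omega^{\tau}_{x_{0}})_{\delta},\)
\[(-\Delta)^{s}_{p}v=R^{p-1}(-\Delta)^{s}_{p}\xi^{\tau}_{x_{0}}\geq R^{p-1}c_{0}\,d_{\tau}^{sp^{2}-2sp-\theta(p-1)}\geq c_{1}\,d_{\tau}^{sp^{2}-2sp-\theta(p-1)}\geq(-\Delta)^{s}_{p}u,\]
since the choice of \(R\) guarantees \(R^{p-1}\geq c_{1}/c_{0}.\)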
Using the comparison principle we can get
\[u(x)\leq c_{2}\left((c_{1})^{1/(p-1)}+\|u\|_{L^{\infty}(\Omega^{\tau}_{x_{0}}) }\right)d^{sp-\theta}_{\tau}\quad\text{ in }\;(\Omega^{\tau}_{x_{0}})_{\delta}\]
where \(c_{2}=\left((\tau\delta)^{\theta-sp}+(c_{0})^{-1/(p-1)}\right)\).
## 4. An a priori uniform boundedness estimate for the subcritical problem
In this section we give the uniform \(L^{\infty}\) estimates for the solutions of the semipositone subcritical problem. We recall the local H\"{o}lder estimates for viscosity solutions from [26, Theorem 1].
**Theorem 4.1**.: _For \(1<p<\infty\), assume \(f\in C(B_{2}(0))\cap L^{\infty}(B_{2}(0))\) and let \(u\in L^{\infty}(\mathbb{R}^{N})\) be viscosity solution of_
\[(-\Delta)^{s}_{p}u=f\quad\text{ in }B_{2}(0).\]
_Then \(u\) is H\"{o}lder continuous in \(B_{1}(0)\), and in particular there exist \(\alpha\in(0,1)\) and \(c\) depending on \(s,p\) such that_
\[\|u\|_{C^{\alpha}(B_{1}(0))}\leq c\left(\|u\|_{L^{\infty}(\mathbb{R}^{N})}+\| f\|_{L^{\infty}(B_{2}(0))}^{\frac{1}{p-1}}\right). \tag{4.32}\]
**Theorem 4.2**.: _Assume that \(0<s<1\), \(\frac{2}{2-s}<p<\infty\), \(p-1<q<p_{s}^{*}-1\), and that \(g\in C(\overline{\Omega}\times\mathbb{R})\) satisfies \(|g(x,z)|\leq c|z|^{r}\) for \(x\in\Omega,\)\(z\in\mathbb{R},\) where \(c>0\) and \(0<r<q\). Let \(u\) be a positive viscosity solution to the problem_
\[\left\{\begin{array}{rl}(-\Delta)^{s}_{p}u(x)&=u^{q}+g(x,u)&\text{ in }\Omega,\\ u(x)&=0&\text{ on }\Omega^{c}.\end{array}\right. \tag{4.33}\]
_Then there exists a constant \(C>0\) such that_
\[\|u\|_{L^{\infty}(\Omega)}\leq C.\]
Proof.: We begin our proof by assuming the existence of a sequence of positive solutions \(\{u_{k}\}\) of problem (4.33) such that \(M_{k}=\|u_{k}\|_{L^{\infty}(\Omega)}\to\infty\). For \(x_{k}\in\Omega\) such that \(u_{k}(x_{k})=M_{k}\), we define the functions
\[v_{k}(y)=\frac{u_{k}(x_{k}+\mu_{k}y)}{M_{k}},\ y\in\Omega^{k},\]
where
\[\Omega^{k}:=\left\{y\in\mathbb{R}^{N}:x_{k}+\mu_{k}y\in\Omega\right\},\]
\[\mu_{k}=M_{k}^{-\frac{q-(p-1)}{sp}}\longrightarrow 0,\]
and the functions \(v_{k}\) satisfy \(0<v_{k}\leq 1\) and \(v_{k}(0)=1\). Next we have
\[(-\Delta)_{p}^{s}v_{k}(y)=v_{k}^{q}+h_{k}\quad\text{in}\quad\Omega^{k}. \tag{4.34}\]
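Indeed, (4.34) follows from the scaling law \((-\Delta)^{s}_{p}\big[u(x_{k}+\mu_{k}\,\cdot)\big](y)=\mu_{k}^{sp}\big[(-\Delta)^{s}_{p}u\big](x_{k}+\mu_{k}y)\) and the \((p-1)\)-homogeneity of the operator: since \(\mu_{k}^{sp}M_{k}^{-(p-1)}=M_{k}^{-q},\)
\[(-\Delta)^{s}_{p}v_{k}(y)=\frac{\mu_{k}^{sp}}{M_{k}^{p-1}}\big[(-\Delta)^{s}_{p}u_{k}\big](x_{k}+\mu_{k}y)=M_{k}^{-q}\left(u_{k}^{q}+g(\cdot,u_{k})\right)(x_{k}+\mu_{k}y)=v_{k}^{q}(y)+h_{k}(y)\]
with \(h_{k}(y)=M_{k}^{-q}\,g(x_{k}+\mu_{k}y,M_{k}v_{k}(y)).\)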
In particular, \(h_{k}\in C(\Omega^{k})\) and, by the growth assumption on \(g\), \(|h_{k}|\leq CM_{k}^{r-q}\). Next, passing to a subsequence, there are two cases: either \(d(x_{k})\mu_{k}^{-1}\to\infty\) or \(d(x_{k})\mu_{k}^{-1}\to\tilde{d}\) for some non-negative constant \(\tilde{d}\). The first case implies \(\Omega^{k}\to\mathbb{R}^{N}\). The right-hand side in (4.34) is uniformly bounded; now, as in [2], using the estimates in (4.32) with an application of the Arzelà-Ascoli theorem and a diagonal argument, we get \(v_{k}\to v\) locally uniformly in \(\mathbb{R}^{N}\) up to a subsequence. Thanks to [7, Corollary 4.7], the limiting function \(v\) is indeed a nontrivial positive viscosity solution to the problem \((-\Delta)_{p}^{s}v=v^{q}\) in \(\mathbb{R}^{N}\). Then by Theorem 4.1, we get \(v\in C^{\alpha}(\mathbb{R}^{N})\) for some \(\alpha\in(0,1)\), which contradicts the nonexistence hypothesis \((\mathcal{IIA})\).
Next we consider the case \(d(x_{k})\mu_{k}^{-1}\to\tilde{d}\geq 0\), which implies \(x_{k}\to x_{0}\) for some \(x_{0}\in\partial\Omega\). Without loss of generality, we assume that the outward normal at \(x_{0}\) is \(\nu(x_{0})=-e_{N}\). Define
\[w_{k}(y)=\frac{u_{k}(\zeta_{k}+\mu_{k}y)}{M_{k}}\quad y\in D^{k},\]
where \(\zeta_{k}\in\partial\Omega\) is the projection of \(x_{k}\) on \(\partial\Omega\) and
\[D^{k}:=\{y\in\mathbb{R}^{N}:\zeta_{k}+\mu_{k}y\in\Omega\}.\]
Observe that
\[0\in\partial D^{k}, \tag{4.35}\]
and
\[D^{k}\to\mathbb{R}^{N}_{+}=\{y\in\mathbb{R}^{N}:y_{N}>0\}\ \text{as}\ k\to+\infty.\]
It is easy to show that \(w_{k}\) satisfies (4.34) in \(D^{k}\) with a different function \(h_{k}\), but with the same bounds. Setting
\[y_{k}:=\frac{x_{k}-\zeta_{k}}{\mu_{k}},\]
so that \(|y_{k}|=d(x_{k})\mu_{k}^{-1}\), we observe that \(w_{k}(y_{k})=1\). Since \(|y_{k}|\to\tilde{d},\) by passing to a further subsequence \(y_{k}\to y_{0}\) with \(|y_{0}|=\tilde{d}\geq 0\). Next we claim the following:
Claim: \(y_{0}\) is in the interior of the half-space \(\mathbb{R}^{N}_{+}\). For this it is sufficient to show that
\[\tilde{d}=\lim_{k\to\infty}d(x_{k})\mu_{k}^{-1}>0. \tag{4.36}\]
Now our estimate, namely Lemma 3.3, plays a crucial role in establishing this claim. We observe that by (4.34), and since \(r<q\), we get
\[(-\Delta)_{p}^{s}w_{k}\leq C\leq C_{1}d_{k}^{sp^{2}-2sp-\theta(p-1)}\text{ in }D^{k},\]
where \(d_{k}(y)=dist(y,\partial D^{k})\) and \(\theta\in(sp-\beta_{0},sp)\) is fixed as given in Lemma 3.3. If \(d_{k}(y)<\delta\), by (3.30) we have \(w_{k}(y)\leq C_{0}d_{k}(y)^{sp-\theta}\) for some \(C_{0}>0.\) Now from (4.35) and the definition of \(d_{k},\) clearly \(|y_{k}|\geq d_{k}(y_{k}).\) Combining all these, if \(d_{k}(y_{k})<\delta\) we get
\[1=w_{k}(y_{k})\leq C_{0}d_{k}(y_{k})^{sp-\theta}\leq C_{0}|y_{k}|^{sp-\theta}.\]
This asserts that \(|y_{k}|\) is bounded below by a positive constant, thus the claim follows.
Next, similar to [2, Page 208], we can use the estimates in (4.32) with an application of the Arzelà-Ascoli theorem and a diagonal argument to conclude that \(w_{k}\to w\) uniformly on compact subsets of \(\mathbb{R}^{N}_{+}\), with \(0\leq w\leq 1.\) Since \(y_{k}\to y_{0}\) for some \(y_{0}\) in the interior of \(\mathbb{R}^{N}_{+},\) the uniform convergence of \(w_{k}\) on compact subsets of \(\mathbb{R}^{N}_{+}\) implies that \(w(y_{0})=1\) and \(w(y)\leq Cy_{N}^{sp-\theta}\) for \(y_{N}<\delta.\) Thus \(w\in C(\mathbb{R}^{N}_{+})\) is a non-negative, bounded solution of
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}w&=w^{q}&\text{ in }\mathbb{R}^{N}_{+},\\ w&=0&\text{ in }\mathbb{R}^{N}\setminus\mathbb{R}^{N}_{+}.\end{array}\right.\]
Again using Theorem 4.1 we get \(w\in C^{\alpha}(\mathbb{R}^{N})\) for some \(\alpha\in(0,1)\). Since \(w(y_{0})=1\), the strong maximum principle implies \(w>0\). So we conclude the theorem by contradiction with the nonexistence assumption \((\mathcal{IA})\).
**Remark 4.3**.: _Results discussed in section 3 and section 4 do not use the convexity of the domain \(\Omega\)._
## 5. Nonlocal superlinear semipositone problem with subcritical growth
In this section we look for a positive solution to the following Dirichlet boundary value problem
\[(P^{\mu})\left\{\begin{array}{rl}(-\Delta)_{p}^{s}u&=\mu(u^{r}-1)\text{ in }\Omega,\\ u&>0\qquad\text{ in }\Omega,\\ u&=0\qquad\quad\text{ on }\Omega^{c},\end{array}\right. \tag{5.37}\]
where \(p-1<r<p_{s}^{*}-1\). Using the substitution \(w=\gamma u\) where \(\gamma^{r+1-p}=\mu\), we see that (5.37) is equivalent to the nonlocal problem
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}w&=w^{r}-\gamma^{r}&\mbox{in } \Omega,\\ w&>0&\mbox{in }\Omega,\\ w&=0&\mbox{on }\Omega^{c}.\end{array}\right. \tag{5.38}\]
We use this observation to study the equivalent problem (5.38).
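For the reader's convenience, the equivalence is a one-line computation using the \((p-1)\)-homogeneity of \((-\Delta)^{s}_{p}\) together with \(\mu=\gamma^{r+1-p}\): if \(w=\gamma u,\) then
\[(-\Delta)^{s}_{p}w=\gamma^{p-1}(-\Delta)^{s}_{p}u=\gamma^{p-1}\mu\left(\gamma^{-r}w^{r}-1\right)=w^{r}-\gamma^{r}.\]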
**Definition 5.1**.: _We define the operator \(K:C(\overline{\Omega})\to C(\overline{\Omega})\cap W_{0}^{s,p}(\Omega)\) as \(K(f)=u\) where \(u\) is the unique weak solution of \((-\Delta)_{p}^{s}\,u=f\) in \(\Omega\) and \(u=0\) on \(\Omega^{c}\)._
The weak solution \(u\) can be obtained as the minimizer of the associated functional in \(W_{0}^{s,p}(\Omega)\). Using [21, Theorem 1.1], \(u\in C^{\alpha}(\overline{\Omega})\) for some \(\alpha\in(0,s]\), and thus the map \(K\) is well defined. From [3, Theorem 1.3], we also infer that the weak solution \(u\) is in fact a viscosity solution. Now, if we set
\[F(\mu,u)=\mu(|u|^{r}-1) \tag{5.39}\]
then finding a weak solution to the nonlinear problem
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}u&=F(\mu,u)\ \ \mbox{in }\Omega,\\ u&>0&\mbox{in }\Omega,\\ u&=0&\mbox{on }\Omega^{c}.\end{array}\right. \tag{5.40}\]
is equivalent to finding a fixed point of the map \(K(F(\mu,u)).\) By the regularity results for the subcritical problem (see [20]), the weak solution of (5.40) is in fact a viscosity solution. Using the rescaling argument once again, we see that \(u=K(F(\mu,u))\) if and only if \(w=K(\tilde{F}(\gamma,w))\) where \(\tilde{F}(\gamma,w)=|w|^{r}-\gamma^{r},\) i.e.
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}w&=|w|^{r}-\gamma^{r}\ \mbox{in } \Omega,\\ w&=0&\mbox{on }\Omega^{c}.\end{array}\right. \tag{5.41}\]
We shall denote
\[S(\gamma,w)=w-K(\tilde{F}(\gamma,w))\ \mbox{for}\ 0\leq\gamma<\infty. \tag{5.42}\]
For \(\gamma=0,\) the map \(S(0,w)\) is denoted by \(S_{0}(w).\) The solutions of \(S_{0}(w)=0\) are nothing but the solutions of
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}w&=|w|^{r}\ \mbox{in }\Omega,\\ w&=0&\mbox{on }\Omega^{c}.\end{array}\right. \tag{5.43}\]
Next we state [13, Proposition 2.1 and Remark 2.1], which will be used in the proof of Lemma 5.3.
**Proposition 5.2**.: _Let \(C\) be a cone in a Banach space \(X\) and \(\Phi:C\to C\) be a compact map such that \(\Phi(0)=0.\) Assume there exists \(0<R_{1}<R_{2}\) such that_
* \(x\neq t\Phi(x)\) _for all_ \(t\in[0,1]\) _and_ \(\|x\|_{X}=R_{1}\)_._
* _There exists a map_ \(T:\overline{B}_{R_{2}}\times[0,\infty)\to C\) _such that_ \(T(x,0)=\Phi(x)\) _for all_ \(\|x\|_{X}=R_{2},T(x,t)\neq x\) _for_ \(\|x\|_{X}=R_{2}\) _and_ \(0\leq t<\infty\) _and_ \(T(x,t)=x\) _has no solution for_ \(x\in\overline{B}_{R_{2}},\,t\geq t_{0}.\)__
_If we denote \(U=\{x\in C:R_{1}<\|x\|_{X}<R_{2}\}\) and \(B_{\rho}=\{x\in C:\|x\|_{X}<\rho\}\), then \(deg(I-\Phi,B_{R_{2}},0)=0,deg(I-\Phi,B_{R_{1}},0)=1\) and \(deg(I-\Phi,U,0)=-1.\)_
**Lemma 5.3**.: _There exists \(0<R_{1}<R_{2}\) such that \(S_{0}(w)\neq 0\) for all \(w\) belonging to the set \(\{w:\|w\|_{\infty}=R_{1}\text{ or }R_{2}\}\) and \(deg(S_{0},B_{R_{2}}\setminus\overline{B}_{R_{1}},0)=-1.\)_
Proof.: Let us consider \(X=C(\overline{\Omega})\) and \(C=\{u\in X:u(x)\geq 0\}\). We define a map \(\Phi:C\to C\) by setting \(\Phi(\cdot):=K(\tilde{F}(0,\cdot))\). It is straightforward to show that \(\Phi\) is a compact map. Suppose that \(t\Phi(u)=u\) for some \(t\in[0,1]\); then
\[\left\{\begin{array}{rl}(-\Delta)_{p}^{s}u&=t^{p-1}|u|^{r}&\text{in }\Omega,\\ u&=0&\text{on }\Omega^{c}.\end{array}\right. \tag{5.44}\]
Using the variational characterization of the principal eigenvalue and the \(L^{\infty}\) regularity of the weak solution of (5.44), we have
\[\lambda_{1}\int_{\Omega}|u|^{p}\leq\|u\|^{p}_{W^{s,p}_{0}(\Omega)}=t^{p-1}\int_{\Omega}|u|^{r+1}\leq\|u\|_{L^{\infty}(\Omega)}^{r+1-p}\int_{\Omega}|u|^{p},\]
which implies \(\|u\|_{L^{\infty}(\Omega)}\geq c(s,p,r)\). Now, by choosing \(R_{1}\) small enough, we obtain condition \((i)\) of Proposition 5.2.
Next we define \(T(u,t)=K(\tilde{F}(0,(|u|+t))).\) Then \(T(u,0)=\Phi(u)\) and we will verify two more conditions of the map \(T\), viz.,
* \(T(u,t)\neq u\) for all \(\|u\|_{L^{\infty}(\Omega)}=R_{2}\) and \(0\leq t<\infty.\)
* \(T(u,t)=u\) has no solution for \(u\in\overline{B}_{R_{2}}\) and \(t\geq t_{0}.\)
To verify (b), we in fact claim that \(T(u,t)=u\) has no solution if \(t\geq t_{0}.\) Suppose that for an arbitrary \(t\) there exists a solution \(u_{t}\in C(\overline{\Omega})\) of \(T(u_{t},t)=u_{t}.\) Taking \(\frac{\varphi_{1}^{p}}{u_{t}^{p-1}}\) as the test function and using [5, Proposition 4.2], we have
\[\int_{\Omega}\frac{(u_{t}+t)^{r}\varphi_{1}^{p}}{u_{t}^{p-1}} =\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{t}(x)-u_{t}(y)|^{p-2}(u_{t}(x)-u_{t}(y))}{|x-y|^{N+sp}}\left(\frac{\varphi_{1}(x)^{p}}{u_{t}(x)^{p-1}}-\frac{\varphi_{1}(y)^{p}}{u_{t}(y)^{p-1}}\right)\] \[\leq\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|\varphi_{1}(x)-\varphi_{1}(y)|^{p}}{|x-y|^{N+sp}}=\lambda_{1}\int_{\Omega}|\varphi_{1}|^{p},\]
where \(\varphi_{1}\) is the first eigenfunction of \((-\Delta)_{p}^{s}.\) Using the strict convexity of the domain and the monotonicity of the solutions \(u_{t}\) as given in [12, Theorem 1.1] we appeal to [13, eqn. 8] and obtain for any \(x\in\Omega_{\epsilon}\)
\[\gamma(\inf_{\Omega\setminus\Omega_{\frac{\epsilon}{2}}}\varphi_{ 1}^{p})u_{t}(x)^{r-p+1}\leq\int_{I_{x}}u_{t}^{r-p+1}(\xi)\varphi_{1}^{p}(\xi) d\xi\leq\int_{\Omega}u_{t}^{r-p+1}(\xi)\varphi_{1}^{p}(\xi)dx\] \[\leq\int_{\Omega}\frac{(u_{t}+t)^{r}\varphi_{1}^{p}}{u_{t}^{p-1}} \ \leq\ \lambda_{1}\int_{\Omega}|\varphi_{1}|^{p}.\]
where \(I_{x}\) is a measurable set as defined in [13]. Thus we get that \(\|u_{t}\|_{L^{\infty}(\Omega_{\epsilon})}\leq C_{\epsilon}\) for all \(t.\) Now, for \(t\) large enough, using the above estimates we have
\[t\int_{\Omega_{\epsilon}}\frac{\varphi_{1}^{p}}{C_{\epsilon}^{p-1}}\leq t\int _{\Omega_{\epsilon}}\frac{\varphi_{1}^{p}}{u_{t}^{p-1}}\leq t\int_{\Omega} \frac{\varphi_{1}^{p}}{u_{t}^{p-1}}\leq\lambda_{1}\int_{\Omega}|\varphi_{1}|^ {p}. \tag{5.45}\]
Thus, from (5.45), we infer that \(T(u,t)=u\) has no solution for \(t\geq t_{0}.\) This proves (b).
We will next show that if \(u\) solves \(T(u,t)=u\) for \(t\in[0,\infty),\) then \(\|u\|_{\infty}\leq M\) (independent of \(t\)) and this verifies condition \((a)\). We proceed as in [18]. On the contrary let us assume that there exists \(t_{k}\in[0,\infty)\) such that for the corresponding solutions \(u_{k}\), \(\|u_{k}\|_{L^{\infty}(\Omega)}\to\infty\). Denote \(M_{k}=\|u_{k}\|_{L^{\infty}(\Omega)}\to\infty\) and let \(x_{k}\in\Omega\) be points with \(M_{k}=u_{k}(x_{k})\). First we claim that, up to a subsequence,
\[\frac{t_{k}}{\|u_{k}\|_{L^{\infty}(\Omega)}}\to 0\ \text{as}\ k\to\infty.\]
Indeed, without loss of generality, we assume that \(t_{k}>0\) for all \(k\) and \(t_{k}\to\infty\). Define \(w_{k}:=\frac{u_{k}}{t_{k}}\) and \(\lambda_{k}:=t_{k}^{r-p+1}\). Then it is easy to check that \((w_{k},\lambda_{k})\) satisfies, in the weak sense,
\[(-\Delta)_{p}^{s}w_{k}=\lambda_{k}(w_{k}+1)^{r}\]
Then from the comparison principle, we have \(w_{k}\geq\bar{w}_{k}\) where
\[(-\Delta)_{p}^{s}\bar{w}_{k}=\lambda_{k}\ \text{in}\ \Omega\quad\text{and}\quad\bar{w}_{k}=0\ \text{in}\ \Omega^{c} \tag{5.46}\]
Suppose \(\sup_{k}\|\bar{w}_{k}\|_{L^{\infty}(\Omega)}\) is bounded by \(C\); then, using the weak formulation of (5.46), we get
\[\|\bar{w}_{k}\|_{W^{s,p}_{0}(\Omega)}^{p}\leq C\lambda_{k}|\Omega|. \tag{5.47}\]
Let \(\phi\geq 0\) be a nontrivial function in \(C^{\infty}_{c}(\Omega)\); then, using Hölder's inequality, we get
\[0<\int_{\Omega}\phi\leq\frac{1}{\lambda_{k}}\int_{\mathbb{R}^{N} \times\mathbb{R}^{N}}\frac{|\bar{w}_{k}(x)-\bar{w}_{k}(y)|^{p-2}(\bar{w}_{k}(x )-\bar{w}_{k}(y))(\phi(x)-\phi(y))}{|x-y|^{N+sp}}\] \[\qquad\qquad\leq\lambda_{k}^{-1}\|\bar{w}_{k}\|_{W^{s,p}_{0}( \Omega)}^{p-1}\|\phi\|_{W^{s,p}_{0}(\Omega)}\ \leq C_{1}\lambda_{k}^{-1/p}\to 0, \tag{5.48}\]
which is a contradiction. Thus \(\sup_{k}\|\bar{w}_{k}\|_{L^{\infty}(\Omega)}\) must be unbounded. Therefore we have
\[\sup_{k}\|w_{k}\|_{L^{\infty}(\Omega)}\geq\sup_{k}\|\bar{w}_{k}\|_{L^{\infty}( \Omega)}=\infty.\]
This verifies the claim, since \(\|w_{k}\|_{\infty}=\frac{\|u_{k}\|_{\infty}}{t_{k}}.\) Now we introduce the Gidas-Spruck translated function
\[v_{k}(y)=\frac{u_{k}(x_{k}+\mu_{k}y)}{M_{k}},\quad y\in\Omega^{k}\]
where
\[\Omega^{k}=\left\{y\in\mathbb{R}^{N}:x_{k}+\mu_{k}y\in\Omega\right\}\text{ and }\mu_{k}=M_{k}^{-\frac{r-(p-1)}{sp}}.\]
A straightforward calculation yields
\[(-\Delta)_{p}^{s}v_{k}(x)=\left(v_{k}(x)+\frac{t_{k}}{M_{k}}\right)^{r}. \tag{5.49}\]
Since the \(v_{k}\) are also viscosity solutions of (5.49) and \(\frac{t_{k}}{M_{k}}\to 0,\) arguing as in the proof of Theorem 4.2 we can pass to the limit and obtain a contradiction with the nonexistence results for the subcritical problem (1.3). Hence, if \(T(u,t)=u\), then \(\|u\|_{L^{\infty}(\Omega)}\) is bounded independently of \(t\). Finally, we use Proposition 5.2 with the proper choice of \(R_{1}\) and \(R_{2}\) and conclude that \(deg(S_{0},B_{R_{2}}\setminus\overline{B}_{R_{1}},0)=-1.\)
Next we prove the main result of our paper, for which we proceed as in [11]. We determine the degree of \(S(\gamma,\cdot)\) by connecting \(S(\gamma,\cdot)\) and \(S(0,\cdot)\) using the homotopy invariance of the degree with respect to \(\gamma\). Since \(deg(S_{0},B_{R_{2}}\setminus\overline{B}_{R_{1}},0)\neq 0,\) this will imply in particular that \(S(\gamma,w)=0\) has a solution \(w\) satisfying \(R_{1}<\|w\|_{L^{\infty}(\Omega)}<R_{2}\). Finally, we will show that the solution thus obtained for (5.41) is in fact positive for \(\gamma\) small enough.
**Theorem 5.4**.: _For \(p\geq 2\), the problem (5.41) admits a positive solution for \(\gamma\in[0,\gamma_{0}].\)_
Proof.: We prove the theorem in two steps:
**STEP I:** There exists a \(\gamma_{0}>0\) such that \(deg(S(\gamma,\cdot),B_{R_{2}}\setminus\overline{B}_{R_{1}},0)=-1\) for all \(\gamma\in[0,\gamma_{0}]\).
If \(S(\gamma,w)\neq 0\) for \(\|w\|_{\infty}\in\{R_{1},R_{2}\}\), then using Lemma 5.3 and the homotopy invariance of the degree we know that \(deg(S(\gamma,\cdot),B_{R_{2}}\setminus\overline{B}_{R_{1}},0)=-1.\) Suppose that \(S(\gamma_{n},w_{n})=0\) for some sequence \(\gamma_{n}\to 0\) and \(\|w_{n}\|\in\{R_{1},R_{2}\}.\) Since \(K\tilde{F}(\gamma,\cdot):C(\overline{\Omega})\to C(\overline{\Omega})\) is compact, we can find a function \(w_{0}\in C(\overline{\Omega})\) with \(\|w_{0}\|\in\{R_{1},R_{2}\}\) and \(S(0,w_{0})=0\). This contradicts Lemma 5.3, and hence Step I is proved.
Clearly, Step I implies that the set of solutions of \(S(\gamma,w)=0\) is nonempty for \(\gamma\) small enough. If \(w\) is a positive function solving the equation \(S(\gamma,w)=0,\) then \(w\) solves the PDE (5.41). With this observation, the proof of Theorem 5.4 is complete if we prove Step II given below.
**STEP II:** For \(\gamma\) small enough, \(S(\gamma,w)=0\) for some \(w\) in \(B_{R_{2}}\setminus\overline{B}_{R_{1}}\) implies \(w>0.\)
Let \(S(\gamma_{n},w_{n})=0\) for \(\gamma_{n}\to 0\) and \(w_{n}\in B_{R_{2}}\setminus\overline{B}_{R_{1}}\). Since \(\|w_{n}\|_{C(\overline{\Omega})}\) is bounded, using [21, Theorem 1.1] and [22, Theorem 1.1], we have
\[\|w_{n}\|_{C^{\alpha}(\overline{\Omega})}\leq C\text{ and }\left\|\frac{w_{n}}{d ^{s}(x)}\right\|_{C^{\alpha}(\overline{\Omega})}\leq C.\]
Thus, by the Arzelà-Ascoli theorem, up to a subsequence
\[w_{n}\to w_{0}\text{ in }C^{0}(\overline{\Omega})\text{ \ \ and \ \ }\frac{w_{n}}{d^{s}(x)}\to\frac{w_{0}}{d^{s}(x)}\text{ in }C^{0}(\overline{\Omega}).\]
We also know that \(w_{0}\) is a non-zero solution of \(S(0,w_{0})=0.\) By the regularity results, \(w_{0}\in C^{0}_{d}(\overline{\Omega}),\) and by the Hopf type lemma [14], \(\inf_{x\in\Omega}\frac{w_{0}(x)}{d^{s}(x)}>0.\) Finally, the positivity of \(w_{n}\) follows from the above uniform convergence and the fact that \(\frac{w_{0}(x)}{d^{s}(x)}>0\) in \(\overline{\Omega}.\) Hence we conclude that (5.41) admits a positive solution for \(\gamma\in[0,\gamma_{0}].\)
**Proof of Theorem 1.1:** As per the discussion presented at the beginning of Section 5, the existence of positive solutions for \((P^{\mu})\) is equivalent to the existence of positive solutions for (5.41). Since Theorem 5.4 has been established, it follows that problem \((P^{\mu})\) has a positive solution for all \(\mu\in(0,\mu_{0}).\)
**Remark 5.5**.: _We remark that, with slight modifications, the above theorem can be proved for more general Dirichlet problems like \((-\Delta)_{p}^{s}\,u=\lambda f(u)\) in \(\Omega\) and \(u=0\) in \(\mathbb{R}^{N}\setminus\Omega\), where \(f:[0,\infty)\to\mathbb{R}\) is such that \(f(0)<0,\)\(f(s)\geq 0\) for \(s\gg 1\), with the additional growth assumption \(\lim_{s\to\infty}\frac{f(s)}{s^{q-1}}=b\) for some \(b>0\) and \(q\in(p-1,p_{s}^{*}-1).\) The details are exactly as discussed in [11]._
**Remark 5.6**.: _Notably, even in the local case of the \(p\)-Laplacian with \(p\geq 2\), Theorem 1.1 represents a new contribution, as it employs degree theory, which has not been utilized previously to establish similar results. Prior research in [11] could only treat the case \(p\in(1,2]\) due to the absence of a uniform \(L^{\infty}\) bound. By employing the ideas discussed above in the local setting, we can now extend the results of [11] to the \(p\)-Laplacian with \(p>2.\)_
## 6. Semipositone fractional \(p\)-Laplace problem with critical growth
Semipositone problems with critical exponents have been extensively studied in the literature. It is well established that, by using the Pohozaev identity for the \(p\)-Laplacian, one can prove the non-existence of positive solutions of the equation \(-\Delta_{p}u=\mu(u^{p^{*}-1}-1)\) subject to zero Dirichlet boundary conditions in any star-shaped domain \(\Omega\). For the fractional Laplace equation, where the Pohozaev identity is available (as shown in [33]), similar non-existence results can be proven for the semipositone critical exponent problem. However, for the fractional \(p\)-Laplacian, a Pohozaev identity in a bounded domain has not been established, which makes it difficult to confirm this fact completely. Nonetheless, recent results in [1] have made progress towards understanding the Pohozaev identity for the fractional \(p\)-Laplacian in \(\mathbb{R}^{N}\).
Despite this uncertainty, we can still draw motivation from the work of [30] and consider a perturbed multiparameter problem in the spirit of the Brezis-Nirenberg problem, as described in \((P_{\lambda}^{\mu})\). In [30], for the local case, a positive solution to this perturbed problem was shown to exist. In this section, we extend this study to a similar problem involving the fractional \(p\)-Laplace operator.
The main objective of this section is to establish the existence of a positive solution to the \(p\)-superlinear semipositone problem with critical growth involving the fractional \(p\)-Laplace operator, as presented in \((P_{\lambda}^{\mu})\), under the conditions \(p\geq 2\), \(N\geq sp^{2}\), and \(\lambda\in(0,\lambda_{1})\). From now on in this section we assume that these conditions hold. To prove Theorem 1.2, we follow an approach similar to that of [30], utilizing variational tools as the background framework for the mountain pass lemma. A significant challenge we face is demonstrating the positivity of the solution; this issue is resolved through Hopf's lemma, discussed in [20], which applies when \(p\geq 2\). We now consider the modified problem for \(\lambda,\mu>0\)
\[\left.\begin{array}{rl}(-\Delta)_{p}^{s}u&=\lambda u_{+}^{p-1}+u_{+}^{p_{s} -1}-\mu f(u)\text{ in }\Omega,\\ u&=0\qquad\qquad\qquad\qquad\qquad\text{ on }\Omega^{c},\end{array}\right\} \tag{6.50}\]
where \(u_{+}(x)=\max\{u(x),0\}\) and
\[f(t)=\begin{cases}1&t\geq 0,\\ 1-|t|^{p-1}&-1<t<0,\\ 0&t\leq-1.\end{cases}\]
We are interested in finding the critical points of the \(C^{1}\)-functional
\[E_{\mu}(u)= \frac{1}{p}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u (y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy+\int_{\Omega}\left(-\frac{\lambda u_{+}^{p}}{ p}-\frac{u_{+}^{p_{s}^{*}}}{p_{s}^{*}}\right)\,dx\] \[+\mu\left[\int_{\{-1<u<0\}}\left(u-\frac{u|u|^{p-1}}{p}\right)\, dx-\left(1-\frac{1}{p}\right)|\{u\leq-1\}|\right]\] \[+\mu\int_{\{u\geq 0\}}u\,dx\]
where \(|.|\) denotes the Lebesgue measure in \(\mathbb{R}^{N}\). For all \(v\in D^{s,p}_{0}(\Omega)\), we also have
\[\langle DE_{\mu}(u),v\rangle= \int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+sp}}\,dx\,dy\] \[+\mu\left[\int_{\{u\geq 0\}}v\,dx+\int_{\{-1<u<0\}}\left(1-|u|^{p-1}\right)v\,dx\right]\] \[-\int_{\Omega}\left(\lambda u_{+}^{p-1}+u_{+}^{p_{s}^{*}-1}\right)v\,dx.\]
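As a consistency check, the \(\mu\)-terms in \(E_{\mu}\) are precisely \(\mu\int_{\Omega}F(u)\,dx,\) where \(F(t)=\int_{0}^{t}f(\tau)\,d\tau\) is computed directly from the definition of \(f\):
\[F(t)=\begin{cases}t&t\geq 0,\\ t-\dfrac{t|t|^{p-1}}{p}&-1<t<0,\\ -\left(1-\dfrac{1}{p}\right)&t\leq-1,\end{cases}\]
so that \(\langle DE_{\mu}(u),v\rangle\) carries the term \(\mu\int_{\Omega}f(u)v\,dx,\) as displayed above.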
We now define \(S_{s,p}\), the best constant in the Sobolev inequality, by
\[S_{s,p}=\inf_{u\in D^{s,p}_{0}(\Omega)\setminus\{0\}}\frac{\int_{\mathbb{R}^{ N}\times\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy}{\left( \int_{\Omega}|u(x)|^{p_{s}^{*}}\,dx\right)^{p/p_{s}^{*}}} \tag{6.51}\]
We refer the reader to [28] for more results regarding the minimization problem (6.51). We also recall the definition of the fractional gradient.
**Definition 6.1**.: _[_4_]_ _The \((s,p)\)-gradient of a function \(v\in D^{s,p}_{0}(\mathbb{R}^{N})\) is defined as_
\[|D^{s}v(x)|^{p}=\int_{\mathbb{R}^{N}}\frac{|v(x+h)-v(x)|^{p}}{|h|^{N+sp}}\,dh.\]
We note that the \((s,p)\)-gradient is well defined a.e. in \(\mathbb{R}^{N}\) and \(|D^{s}v|\in L^{p}(\mathbb{R}^{N})\). Next we recall the concentration compactness theorem of [4].
**Theorem 6.2**.: _[_4_, Theorem 1.1]_ _Let \((u_{n})\subset D^{s,p}_{0}(\mathbb{R}^{N})\) be a weakly convergent sequence with weak limit \(u\). Then there exist two bounded measures \(\kappa\) and \(\nu\), an at most countable set of indices \(I\), and positive real numbers \(\kappa_{i},\nu_{i}\) and points \(x_{i}\in\overline{\Omega},i\in I\), such that the following convergences hold weakly\({}^{*}\) in the sense of measures._
\[|D^{s}u_{n}|^{p}dx\rightharpoonup\kappa\geq|D^{s}u|^{p}dx+\sum_{i \in I}\kappa_{i}\delta_{x_{i}}\] \[|u_{n}|^{p_{s}^{*}}dx\rightharpoonup\nu=|u|^{p_{s}^{*}}dx+\sum_{i \in I}\nu_{i}\delta_{x_{i}}\] \[S^{1/p}_{s,p}\nu_{i}^{1/p_{s}^{*}}\leq\kappa_{i}^{1/p}\quad\text {for all}\quad i\in I\]
_where \(S_{s,p}\) as in (6.51)._
We begin by identifying the levels at which the \((PS)_{c}\) condition holds.
**Lemma 6.3**.: _For any fixed \(\lambda,\mu>0\), \(E_{\mu}\) satisfies the \((PS)_{c}\) condition for all_
\[c<\frac{s}{N}S^{N/sp}_{s,p}-\left(1-\frac{1}{p}\right)\mu|\Omega|. \tag{6.52}\]
Proof.: Let \((u_{n})\) be a \((PS)_{c}\) sequence. Following steps similar to those of [30, Lemma 2.1], one can show that the sequence \((u_{n})\) is bounded in \(D_{0}^{s,p}(\Omega)\). Since \((u_{n})\) is bounded, so is \((u_{n+})\); a renamed subsequence converges to some \(u_{+}\geq 0\) weakly in \(D_{0}^{s,p}(\Omega)\), strongly in \(L^{q}(\Omega)\) for all \(q\in[1,p_{s}^{*})\), a.e. in \(\Omega\), and
\[|D^{s}u_{n+}|^{p}dx\rightharpoonup\kappa,\quad u_{n+}^{p_{s}^{*}}dx\rightharpoonup\nu \tag{6.53}\]
The convergences hold weakly\({}^{*}\) in the sense of measures, where \(\kappa\) and \(\nu\) are bounded measures. By the concentration compactness Theorem 6.2, there exist a countable index set \(I\), positive real numbers \(\kappa_{i},\nu_{i}\), and points \(x_{i}\in\overline{\Omega},\,i\in I,\) such that
\[\kappa\geq|D^{s}u_{+}|^{p}dx+\sum_{i\in I}\kappa_{i}\delta_{x_{i}},\quad\nu=u _{+}^{p_{s}^{*}}dx+\sum_{i\in I}\nu_{i}\delta_{x_{i}} \tag{6.54}\]
and \(S_{s,p}^{1/p}\nu_{i}^{1/p_{s}^{*}}\leq\kappa_{i}^{1/p}\) for all \(i\in I\). Our aim is to prove that \(I=\emptyset\). Suppose, by contradiction, that we can fix a concentration point \(x_{i}\). We define a smooth function \(\varphi:\mathbb{R}^{N}\to[0,1]\) such that \(\varphi(x)=1\) for \(|x|\leq 1\) and \(\varphi(x)=0\) for \(|x|\geq 2\), and for \(i\in I\) and \(\rho>0\) set
\[\varphi_{i,\rho}(x)=\varphi\left(\frac{x-x_{i}}{\rho}\right),\;x\in\mathbb{R} ^{N}.\]
Clearly \(\varphi_{i,\rho}:\mathbb{R}^{N}\to[0,1]\) is a smooth function. We note that the sequence \((\varphi_{i,\rho}u_{n+})\) is bounded in \(D_{0}^{s,p}(\Omega)\). Taking \(v=\varphi_{i,\rho}u_{n+}\) in \(\langle DE_{\mu}(u_{n}),v\rangle\), we get
\[\begin{split}&\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{n}(x )-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\rho}u_{n+}(x)-\varphi_{i,\rho }u_{n+}(y))}{|x-y|^{N+sp}}\,dx\,dy\\ &=\int_{\Omega}\left(\lambda u_{n+}^{p-1}\varphi_{i,\rho}u_{n+}+u _{n+}^{p_{s}^{*}-1}\varphi_{i,\rho}u_{n+}\right)\,dx-\mu\int_{\Omega}\varphi_ {i,\rho}u_{n+}\,dx+o_{n}(1)\|u_{n+}\|.\end{split} \tag{6.55}\]
The term on the left-hand side of (6.55) can be estimated as
\[\begin{split}&\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{n} (x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\rho}u_{n+}(x)-\varphi_{i, \rho}u_{n+}(y))}{|x-y|^{N+sp}}\,dx\,dy\\ &\geq\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{n}(x)-u_{ n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\rho}(x)-\varphi_{i,\rho}(y))}{|x-y|^{N+ sp}}u_{n+}(y)\,dx\,dy\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\int_{\mathbb{R }^{N}\times\mathbb{R}^{N}}\frac{|u_{n+}(x)-u_{n+}(y)|^{p}}{|x-y|^{N+sp}} \varphi_{i,\rho}(x)\,dx\,dy.\end{split} \tag{6.56}\]
For the second term on the right-hand side of (6.56), for fixed \(x\), applying the transformation \(y-x=h\) and using Definition 6.1 and (6.53), we have
\[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{n+}(x)-u_{n+}(y)|^{p}}{|x-y |^{N+sp}}\varphi_{i,\rho}(x)\,dx\,dy\to\int_{\mathbb{R}^{N}}\varphi_{i,\rho} \,d\kappa. \tag{6.57}\]
For the first term on the right-hand side of (6.56), using the definition of the \((s,p)\)-gradient and Hölder's inequality, we have
\[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{n}(x)-u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\rho}(x)-\varphi_{i,\rho}(y))}{|x-y|^{N+sp}}u_{n+}(y)\,dx\,dy \tag{6.58}\] \[\leq C\left(\int_{\mathbb{R}^{N}}|u_{n+}|^{p}|D^{s}\varphi_{i,\rho}|^{p}\,dx\right)^{\frac{1}{p}}.\]
Now we consider the first term on the right-hand side of (6.55), i.e.,
\[\int_{\Omega}\left(\lambda u_{n+}^{p}\varphi_{i,\rho}+u_{n+}^{p^{*}_{s}} \varphi_{i,\rho}-\mu\varphi_{i,\rho}u_{n+}\right)\,dx.\]
Using (6.53) we have
\[\int_{\Omega}\varphi_{i,\rho}u_{n+}^{p^{*}_{s}}\,dx\to\int_{\Omega}\varphi_{i,\rho}\,d\nu. \tag{6.59}\]
Now we observe that
\[\bigg{|}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{n}(x)- u_{n}(y)|^{p-2}(u_{n}(x)-u_{n}(y))(\varphi_{i,\rho}(x)-\varphi_{i,\rho}(y))}{|x-y|^ {N+sp}}u_{n+}(y)\,dx\,dy\] \[+\int_{\Omega}\left(-\lambda u_{n+}^{p}\varphi_{i,\rho}+\mu \varphi_{i,\rho}u_{n+}\right)\,dx\bigg{|}\] \[\leq C\left[\left(\int_{\mathbb{R}^{N}}|u_{n+}|^{p}|D^{s}\varphi_ {i,\rho}|^{p}\,dx\right)^{\frac{1}{p}}+\int_{\Omega\cap B_{2\rho}(x_{i})}u_{n +}^{p}\,dx+\mu\int_{\Omega\cap B_{2\rho}(x_{i})}u_{n+}\,dx\right].\]
We also have
\[\int_{\Omega\cap B_{2\rho}(x_{i})}u_{n+}^{p}\,dx\to\int_{\Omega\cap B_{2\rho}( x_{i})}u_{+}^{p}\,dx. \tag{6.60}\]
Next passing to the limit in (6.55) and using (6.56)-(6.60), we get
\[\int_{\mathbb{R}^{N}}\varphi_{i,\rho}\,d\kappa-\int_{\Omega} \varphi_{i,\rho}\,d\nu\] \[\leq C\left[\left(\int_{\mathbb{R}^{N}}|u_{n+}|^{p}|D^{s}\varphi_ {i,\rho}|^{p}\,dx\right)^{\frac{1}{p}}+\int_{\Omega\cap B_{2\rho}(x_{i})}u_{+} ^{p}\,dx+\mu\int_{\Omega\cap B_{2\rho}(x_{i})}u_{+}\,dx\right].\]
Now, letting \(\rho\to 0\), the right-hand side of the above inequality goes to \(0\). This implies \(\kappa_{i}\leq\nu_{i}\), which together with \(\nu_{i}>0\) and \(S_{s,p}^{1/p}\nu_{i}^{1/p_{s}^{*}}\leq\kappa_{i}^{1/p}\) gives \(S_{s,p}\nu_{i}^{p/p_{s}^{*}}\leq\nu_{i}\), and hence \(\nu_{i}\geq S_{s,p}^{p_{s}^{*}/(p_{s}^{*}-p)}=S_{s,p}^{N/sp}\). On the other hand, similar to [30, Lemma 2.1] and using (6.53) and (6.54), we get
\[\nu_{i}\leq\frac{N}{s}\left[\left(1-\frac{1}{p}\right)\mu|\Omega|+c\right]<S_{ s,p}^{N/sp}\]
a contradiction. Hence \(I=\emptyset\) and
\[\int_{\Omega}u_{n+}^{p^{*}_{s}}\,dx\to\int_{\Omega}u_{+}^{p^{*}_{s}}\,dx. \tag{6.61}\]
Next, passing to a further subsequence, \(u_{n}\rightharpoonup u\) in \(D_{0}^{s,p}(\Omega)\), \(u_{n}\to u\) strongly in \(L^{r}(\Omega)\) for \(r\in[1,p_{s}^{*})\), and a.e. in \(\Omega\).
We note that \(u\) weakly solves the problem (6.50), and using a Brezis-Lieb type lemma and (6.61) we can show that \(u_{n}\to u\) in \(D_{0}^{s,p}(\Omega)\). This completes the proof of Lemma 6.3.
Next, we state some results regarding the minimization problem (6.51), which are employed to establish the mountain pass results, and introduce \(u_{\epsilon,\delta}\) in the same way as in [28, Equation (2.16)]. For the sake of completeness, we discuss the definition here. Let \(U\) be a normalized radially symmetric nonnegative decreasing minimizer of \(S_{s,p}\) (see [28, Proposition 2.1]) which satisfies
\[(-\Delta)_{p}^{s}U=U^{p_{s}^{*}-1},\text{ and }\|U\|^{p}=\|U\|_{L_{p_{s}^{*}}( \mathbb{R}^{N})}^{p_{s}^{*}}=S_{s,p}^{N/sp}. \tag{6.62}\]
For any \(\epsilon>0\),
\[U_{\epsilon}(x)=\frac{1}{\epsilon^{(N-sp)/p}}U\left(\frac{|x|}{\epsilon}\right) \tag{6.63}\]
denotes the associated family of minimizers for \(S_{s,p}\), which also satisfy (6.62); this follows from a short scaling computation, sketched after Lemma 6.4 below. Due to the absence of an explicit formula for \(U\), we have the following asymptotic estimates.
**Lemma 6.4**.: _There exists \(c_{1},c_{2}>0\) and \(\theta>1\) such that for all \(r\geq 1\),_
\[\frac{c_{1}}{r^{(N-sp)/(p-1)}}\leq U(r)\leq\frac{c_{2}}{r^{(N-sp)/(p-1)}}\]
_and_
\[\frac{U(\theta r)}{U(r)}\leq\frac{c_{2}}{c_{1}}\frac{1}{\theta^{(N-sp)/(p-1)}}.\]
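As mentioned after (6.63), each \(U_{\epsilon}\) satisfies the equation in (6.62); here is the scaling computation (a sketch). Using the scaling law for the fractional \(p\)-Laplacian,
\[(-\Delta)^{s}_{p}U_{\epsilon}(x)=\epsilon^{-sp-\frac{(N-sp)(p-1)}{p}}\big[(-\Delta)^{s}_{p}U\big]\left(\frac{x}{\epsilon}\right)=\epsilon^{-\frac{Np-(N-sp)}{p}}\,U^{p_{s}^{*}-1}\left(\frac{x}{\epsilon}\right)=U_{\epsilon}^{p_{s}^{*}-1}(x),\]
since \((N-sp)(p_{s}^{*}-1)=Np-(N-sp)\); the norm identities in (6.62) are likewise preserved, both quantities being invariant under this scaling.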
We have some auxiliary estimates from [28]. For \(\epsilon,\delta>0\), and \(\theta\) as in Lemma 6.4, set
\[m_{\epsilon,\delta}=\frac{U_{\epsilon}(\delta)}{U_{\epsilon}(\delta)-U_{ \epsilon}(\theta\delta)}\]
and
\[g_{\epsilon,\delta}(t)=\begin{cases}0&\text{ if }0\leq t\leq U_{\epsilon}(\theta \delta)\\ m_{\epsilon,\delta}^{p}(t-U_{\epsilon}(\theta\delta))&\text{ if }U_{\epsilon}( \theta\delta)\leq t\leq U_{\epsilon}(\delta)\\ t+U_{\epsilon}(\delta)(m_{\epsilon,\delta}^{p-1}-1)&\text{ if }t\geq U_{\epsilon}( \delta),\end{cases}\]
and let
\[G_{\epsilon,\delta}(t)=\int_{0}^{t}g_{\epsilon,\delta}^{\prime}(\tau)^{\frac {1}{p}}\,d\tau=\begin{cases}0&\text{ if }0\leq t\leq U_{\epsilon}(\theta\delta)\\ m_{\epsilon,\delta}(t-U_{\epsilon}(\theta\delta))&\text{ if }U_{\epsilon}( \theta\delta)\leq t\leq U_{\epsilon}(\delta)\\ t&\text{ if }t\geq U_{\epsilon}(\delta).\end{cases} \tag{6.64}\]
The functions \(g_{\epsilon,\delta}\) and \(G_{\epsilon,\delta}\) are nondecreasing and absolutely continuous. Consider the radially symmetric nonincreasing function
\[u_{\epsilon,\delta}(r)=G_{\epsilon,\delta}(U_{\epsilon}(r)),\]
which satisfies
\[u_{\epsilon,\delta}(r)=\begin{cases}U_{\epsilon}(r)&\text{if }r\leq\delta\\ 0&\text{if }r\geq\theta\delta.\end{cases}\]
Now, similar to [30, Lemma 3.1] and thanks to \(N\geq sp^{2}\), we can show that for all sufficiently small \(\mu>0\), \(E_{\mu}\) has a uniformly positive mountain pass level below the threshold for compactness given in Lemma 6.3.
**Lemma 6.5**.: _There exist positive \(\mu_{0},\rho,c_{0}>0,R>\rho\) and \(\beta<\frac{s}{N}S_{s,p}^{N/sp}\) such that the following hold for all \(\mu\in(0,\mu_{0})\) and \(\lambda\in(0,\lambda_{1})\):_
* \(E_{\mu}(0)=0\) _and_ \(E_{\mu}(u)\geq c_{0}\) _for all_ \(u\) _such that_ \(\|u\|=\rho\)_,_
* \(E_{\mu}(tu_{\epsilon,\delta})\leq 0\) _for all_ \(t\geq R\) _and_ \(\epsilon\leq\delta/2\) _and_ \(\delta\in(0,1]\)_,_
* _denoting by_ \(\Gamma=\{\gamma\in C([0,1],D_{0}^{s,p}(\Omega)):\gamma(0)=0,\gamma(1)=Ru_{ \epsilon,\delta}\}\) _the class of paths joining the origin to_ \(Ru_{\epsilon,\delta}\)__ \[c_{0}\leq c_{\mu}:=\inf_{\gamma\in\Gamma}\max_{0\leq t\leq 1}E_{\mu}(\gamma(t)) \leq\beta-\left(1-\frac{1}{p}\right)\mu|\Omega|\] (6.65) _for all sufficiently small_ \(\epsilon>0\)_,_
* \(E_{\mu}\) _has a critical point_ \(u_{\mu}\) _at the level_ \(c_{\mu}\)_._
From the above lemma we conclude that there exists \(u_{\mu}\) which is a weak solution of (6.50). Now we shall prove some more properties of \(u_{\mu}\).
**Lemma 6.6**.: _There exists \(\mu_{*}\in(0,\mu_{0}]\) such that the following hold for all \(\mu\in(0,\mu_{*})\):_
* \(u_{\mu}\) _is uniformly bounded in_ \(D_{0}^{s,p}(\Omega)\)_,_
* \(\int_{E}|u_{\mu}|^{p_{s}^{*}}\,dx\to 0\) _as_ \(|E|\to 0,\) _uniformly in_ \(\mu\)_,_
* \(u_{\mu}\) _is uniformly bounded in_ \(C_{d}^{0,\alpha}(\overline{\Omega})\) _for some_ \(\alpha\in(0,s]\)_._
Proof.: As mentioned in the proof of Lemma 6.3, the sequence \(\{u_{\mu}\}\) is uniformly bounded in \(D_{0}^{s,p}(\Omega)\) for \(\mu\in(0,\mu_{0})\), which proves \((i)\).
Next we prove \((ii).\) On the contrary, suppose that for some sequences \(\mu_{j}\to 0\) and \(|E_{j}|\to 0\) we have \(\liminf_{j\to\infty}\int_{E_{j}}|u_{\mu_{j}}|^{p_{s}^{*}}>0.\) Since \((u_{\mu_{j}})\) is bounded, a renamed subsequence \((u_{\mu_{j}+})\) converges to some \(u_{+}\geq 0\) weakly in \(D_{0}^{s,p}(\Omega)\). Following the argument in Lemma 6.3 we get
\[\int_{\Omega}u_{\mu_{j}+}^{p_{s}^{*}}\,dx\to\int_{\Omega}u_{+}^{p_{s}^{*}}\,dx. \tag{6.66}\]
This implies that \((u_{\mu_{j}})\) converges to \(u\) in \(D_{0}^{s,p}(\Omega)\), and hence also in \(L^{p_{s}^{*}}(\Omega)\). Then
\[\|u_{\mu_{j}}\|_{L^{p_{s}^{*}}(E_{j})}\leq\|u_{\mu_{j}}-u\|_{L^{p_{s}^{*}}(\Omega)}+\|u\|_{L^{p_{s}^{*}}(E_{j})}\to 0,\]
which is a contradiction and hence (ii).
Now Theorem A.3 is applicable and \(\|u_{\mu_{j}}\|_{\infty}\) is uniformly bounded. Since \(p\geq 2\), [22, Theorem 1.1] is applicable and \(u_{\mu}\) is uniformly bounded in \(C_{d}^{0,\alpha}(\overline{\Omega})\).
**Proof of Theorem 1.2:** We claim that \(u_{\mu}\) is positive in \(\Omega\) and hence a solution of (1.5). It is sufficient to show that for every sequence \(\mu_{j}\to 0\), a subsequence \(u_{\mu_{j}}\) is positive in \(\Omega\). By Lemma 6.5, we have
\[E_{\mu_{j}}(u_{\mu_{j}})=c_{\mu_{j}}\geq c_{0}\]
and
\[\langle DE_{\mu_{j}}(u_{\mu_{j}}),v\rangle=0\quad\forall\,v\in D_{0}^{s,p}( \Omega).\]
From Lemma 6.6, we have, up to a subsequence, \(u_{\mu_{j}}\rightharpoonup u\) in \(D_{0}^{s,p}(\Omega)\). Now, with the help of the ideas of [9, Theorem 2.15] and a Brezis-Lieb type lemma, we can conclude that, for \(\mu_{j}\to 0\),
\[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u_{\mu_{j}}(x)-u_{\mu_{j}}(y)| ^{p}}{|x-y|^{N+sp}}\,dx\,dy\to\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{ |u(x)-u(y)|^{p}}{|x-y|^{N+sp}}\,dx\,dy.\]
We also have \(u_{\mu_{j}}\to u\) in \(C^{\alpha}(\overline{\Omega})\) for \(\mu_{j}\to 0\). So we have
\[\frac{1}{p}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p}}{|x -y|^{N+sp}}\,dx\,dy+\int_{\Omega}\left(-\frac{\lambda u_{+}^{p}}{p}-\frac{u_{ +}^{p_{*}^{*}}}{p_{*}^{*}}\right)\,dx\geq c_{0} \tag{6.67}\]
and \(u\) weakly satisfies (6.50) with \(\mu=0\). From the mountain pass Lemma 6.5 we have \(c_{0}>0.\) Since \(u\) satisfies (6.67), \(u\) cannot be identically zero. For each \(x\in\Omega\), \(\lambda u_{+}^{p-1}(x)+u_{+}^{p_{s}^{*}-1}(x)\geq 0\), and using the strong minimum principle and Hopf's lemma [14], we have
\[u(x)\geq cd^{s}(x)>0\text{ in }\Omega.\]
Now, up to a subsequence, \(u_{\mu_{j}}\to u\) in \(C_{d}^{0,\alpha}(\overline{\Omega})\) for some \(\alpha\in(0,s).\) This implies that \(u_{\mu_{j}}>0\) in \(\Omega\) for sufficiently large \(j\). So we conclude that for small \(\mu\), \(u_{\mu}>0\) is a solution of the problem (1.5) with \(DE_{\mu}(u_{\mu})=0\) and \(E_{\mu}(u_{\mu})=c_{\mu}\), where \(c_{\mu}\) is as given in Lemma 6.5, viz.,
\[c_{\mu}:=\inf_{\gamma\in\Gamma}\max_{0\leq t\leq 1}E_{\mu}(\gamma(t))\]
where \(\Gamma=\{\gamma\in C([0,1],D_{0}^{s,p}(\Omega)):\gamma(0)=0,\gamma(1)=Ru_{\epsilon,\delta}\}\). Now we can show that \(u_{\mu}\) is a ground state solution by following the ideas of [30, Theorem 1.1]. This completes the proof of Theorem 1.2.
## Appendix A

**Auxiliary results**

We first state some of the well-known auxiliary results.
**Lemma A.1**.: _Let \(1<p<\infty\) and \(\beta\geq 1.\) For every \(a,b,t\geq 0,\) it holds that_
\[|a-b|^{p-2}(a-b)(a_{t}^{\beta}-b_{t}^{\beta})\geq\frac{\beta p^{p}}{(\beta+p-1)^ {p}}\left|a_{t}^{\frac{\beta+p-1}{p}}-b_{t}^{\frac{\beta+p-1}{p}}\right|^{p},\]
_where \(a_{t}=\min\{a,t\}\) and \(b_{t}=\min\{b,t\}.\)_
**Lemma A.2**.: _Let \(1<p<\infty\) and \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) be a differentiable convex function. Then_
\[|a-b|^{p-2}(a-b)\left[c\;|\phi^{\prime}(a)|^{p-2}\phi^{\prime}(a) -t\;|\phi^{\prime}(b)|^{p-2}\phi^{\prime}(b)\right]\] \[\geq|\phi(a)-\phi(b)|^{p-2}(\phi(a)-\phi(b))(c-t),\]
_for every \(a,b\in\mathbb{R}\) and every \(c,t\geq 0.\)_
Now, we state the regularity result and prove it.
**Theorem A.3**.: _Let \(\Omega\subset\mathbb{R}^{N},N\geq 2\) be a bounded domain and \(u\in D_{0}^{s,p}(\Omega)\) weakly solve \((-\Delta)_{p}^{s}u=f_{\mu}(x,u)\) in \(\Omega\) for a parameter \(\mu>0\) and \(f_{\mu}\) satisfying the following growth condition_
\[|f_{\mu}(x,t)|\leq C_{1}\left(1+|t|^{p_{s}^{*}-1}\right), \tag{1.68}\]
_where \(C_{1}>0\) is a constant. If \(u\) is uniformly bounded in \(D_{0}^{s,p}(\Omega)\) and \(\int_{E}|u|^{p_{s}^{*}}\,dx\to 0\) as \(|E|\to 0,\) uniformly in \(\mu\), then there exist two positive constants \(C_{*}\) and \(C^{*}\), depending upon \(s,p,N,\Omega\), such that_
\[\|u\|_{L^{\infty}(\Omega)}\leq(C_{*})^{\frac{1}{p_{s}^{*}-p}}\;(C^{*})^{\frac{p-1}{\sqrt{p}(\sqrt{p_{s}^{*}}-\sqrt{p})}}\;\|u\|_{L^{p_{s}^{*}}(\Omega)},\]
_that is, the norms \(\|u\|_{L^{\infty}(\Omega)}\) are uniformly bounded._
Proof.: For every \(0<\epsilon\ll 1\), we define the smooth convex Lipschitz function
\[h_{\epsilon}(t)=(\epsilon^{2}+t^{2})^{\frac{1}{2}}\]
and take \(\psi|h_{\epsilon}^{\prime}(u)|^{p-2}h_{\epsilon}^{\prime}(u)\) as the test function in weak formulation, where \(\psi\in C_{c}^{\infty}(\Omega),\psi>0.\) By Lemma A.2, we obtain
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|h_{\epsilon}(u(x) )-h_{\epsilon}(u(y))|^{p-2}(h_{\epsilon}(u(x))-h_{\epsilon}(u(y)))(\psi(x)- \psi(y))}{|x-y|^{N+sp}}dxdy\] \[\leq\int_{\Omega}|f(x,u)|\;|h_{\epsilon}^{\prime}(u(x))|^{p-1} \psi(x)\;dx. \tag{1.69}\]
Since \(h_{\epsilon}(t)\) converges to \(h(t)=|t|\) as \(\epsilon\to 0^{+}\) and \(|h_{\epsilon}^{\prime}(t)|\leq 1\), we note that, using Young's inequality and the Lipschitz continuity of \(h_{\epsilon}\),
\[\frac{|h_{\epsilon}(u(x))-h_{\epsilon}(u(y))|^{p-2}(h_{\epsilon}(u(x))-h_{\epsilon}(u(y)))(\psi(x)-\psi(y))}{|x-y|^{N+sp}}\] \[\geq-\frac{|h_{\epsilon}(u(x))-h_{\epsilon}(u(y))|^{p-1}|\psi(x)-\psi(y)|}{|x-y|^{N+sp}}\] \[\geq-\frac{(p-1)}{p}\frac{|h_{\epsilon}(u(x))-h_{\epsilon}(u(y))|^{p}}{|x-y|^{N+sp}}-\frac{1}{p}\frac{|\psi(x)-\psi(y)|^{p}}{|x-y|^{N+sp}}\] \[\geq-\frac{(p-1)}{p}\frac{|u(x)-u(y)|^{p}}{|x-y|^{N+sp}}-\frac{1}{p}\frac{|\psi(x)-\psi(y)|^{p}}{|x-y|^{N+sp}}.\]
By using generalized Fatou's lemma in (1.69), we get
\[\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{\Big{|}|u(x)|-|u(y )|\Big{|}^{p-2}\Big{(}|u(x)|-|u(y)|\Big{)}\Big{(}\psi(x)-\psi(y)\Big{)}}{|x-y| ^{N+sp}}dxdy\] \[\leq\int_{\Omega}\ |f(x,u)|\psi(x)\ dx. \tag{1.70}\]
Since \(C_{c}^{\infty}(\Omega)\) is dense in \(D_{0}^{s,p}(\Omega)\), (1.70) holds true for \(0\leq\psi\in D_{0}^{s,p}(\Omega)\). Next, for \(l\in\mathbb{N}\), we define \(u_{l}=\min\{l,|u(x)|\}\). Clearly \(u_{l}\in D_{0}^{s,p}(\Omega)\). For \(k\geq 1\), let us set \(\beta:=kp-p+1\), so that \(\beta\geq 1\). Choosing \(\psi=u_{l}^{\beta}\) in (1.70) and using Lemma A.1, we obtain
\[\frac{\beta p^{p}}{(\beta+p-1)^{p}}\int_{\mathbb{R}^{N}}\int_{ \mathbb{R}^{N}}\frac{\Big{|}(u_{l}(x))^{\frac{\beta+p-1}{p}}-(u_{l}(y))^{\frac {\beta+p-1}{p}}\Big{|}^{p}}{|x-y|^{N+sp}}dxdy\] \[\leq\int_{\Omega}\ |f(x,u)|(u_{l}(x))^{\beta}\ dx. \tag{1.71}\]
By observing that
\[\frac{1}{\beta}\left(\frac{\beta+p-1}{p}\right)^{p}\leq\left(\frac{\beta+p-1}{p}\right)^{p-1},\ \ \text{for }\beta\geq 1\]
(dividing both sides by \(\left(\frac{\beta+p-1}{p}\right)^{p-1}\), the inequality reduces to \(\beta+p-1\leq p\beta\), i.e., \(\beta\geq 1\)), and using the relation \(k=\frac{\beta+p-1}{p}\) along with the continuous embedding \(D_{0}^{s,p}(\Omega)\hookrightarrow L^{p_{s}^{*}}(\Omega)\), from (1.71) we get
\[\big{\|}u_{l}^{k}\big{\|}_{L^{p_{s}^{*}}(\Omega)}^{p}\leq\frac{(k)^{p-1}}{S_{s,p}^{p}}\int_{\Omega}\ |f(x,u)|(u_{l}(x))^{\beta}\ dx, \tag{1.72}\]
where \(S_{s,p}\) denotes the best Sobolev constant. Now we will estimate the right-hand side in (1.72). Using the fact \(u_{l}\leq|u|,\) we deduce
\[\int_{\Omega}\ |f(x,u)|(u_{l}(x))^{\beta}\ dx\leq C_{1}\left(\int_{\Omega}|u_{l }|^{\beta}\,dx+\int_{\Omega}|u|^{p_{s}^{*}-1}|u_{l}^{\beta}|\,dx\right)\]
\[=C_{1}\Bigg{[}\int_{\Omega\cap\{|u|<\Lambda\}}|u_{l}|^{\beta}dx+\int_ {\Omega\cap\{|u|\geq\Lambda\}}|u_{l}|^{\beta}dx+\int_{\Omega\cap\{|u|<\Lambda\}} |u|^{p_{s}^{*}-1}|u_{l}^{\beta}|\,dx\] \[+\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}-1}|u_{l}^{ \beta}|\,dx\Bigg{]}\] \[\leq C_{1}\Bigg{[}\int_{\Omega\cap\{|u|<\Lambda\}}|u|^{\beta}\,dx+ \int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}+\beta-1}\,dx+\int_{\Omega \cap\{|u|<\Lambda\}}|u|^{p_{s}^{*}+\beta-1}\,dx\] \[+\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}+\beta-1}\,dx\Bigg{]}\] \[\leq C_{1}\Bigg{[}\Lambda^{1-p}\int_{\Omega\cap\{|u|<\Lambda\}}|u |^{p+\beta-1}\,dx+\Lambda^{p_{s}^{*}-p}\int_{\Omega\cap\{|u|<\Lambda\}}|u|^{p +\beta-1}\,dx\] \[+2\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}-p}\;|u|^{p+ \beta-1}\,dx\Bigg{]}\] \[\leq C_{1}\Big{[}(\Lambda^{1-p}+\Lambda^{p_{s}^{*}-p})\;\|u\|_{L^ {kp}(\Omega)}^{kp}+2\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}-p}\;|u|^ {p+\beta-1}\,dx\Big{]}, \tag{1.73}\]
where \(\Lambda>1\) will be chosen later and \(C_{1}>0\) is a constant. By plugging (1.73) into (1.72) and applying Fatou's lemma, we get
\[\|u\|_{L^{kp_{s}^{*}}(\Omega)}^{kp}\leq C_{1}\frac{k^{p-1}}{(S_{s,p})^{p}}\Bigg{[}(\Lambda^{1-p}+\Lambda^{p_{s}^{*}-p})\;\|u\|_{L^{kp}(\Omega)}^ {kp}\] \[+2\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}-p}\;|u|^{p+ \beta-1}\,dx\Bigg{]}. \tag{1.74}\]
Now we estimate the second term on the right-hand side in (1.74). For this, using Hölder's inequality for constant exponents, we obtain
\[\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}-p}\;|u|^{kp}\,dx\leq C_{2}\left(\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}}dx \right)^{\frac{p_{s}^{*}-p}{p_{s}^{*}}}\left(\int_{\Omega}|u|^{kp_{s}^{*}}dx \right)^{\frac{p}{p_{s}^{*}}}\] \[=C(\Lambda,u)\|u\|_{L^{kp_{s}^{*}}(\Omega)}^{kp}, \tag{1.75}\]
where \(C_{2}>0\) is a constant and \(C(\Lambda,u)=C_{2}\left(\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}}dx\right)^{\frac{p_{s}^{*}-p}{p_{s}^{*}}}\). By the uniform boundedness and uniform integrability of \(u\), for large \(\Lambda\) the quantity \(C(\Lambda,u)\) can be bounded independently of \(u\); we denote this bound by \(C(\Lambda)\) and get
\[\int_{\Omega\cap\{|u|\geq\Lambda\}}|u|^{p_{s}^{*}-p}\;|u|^{kp}\,dx\leq C( \Lambda)\|u\|_{L^{kp_{s}^{*}}(\Omega)}^{kp} \tag{1.76}\]
Combining (1.74) and (1.76), we have
\[\|u\|_{L^{kp^{*}_{s}}(\Omega)}^{kp}\leq C_{1}\frac{k^{p-1}}{(S_{s,p})^{p}}\left[( \Lambda^{1-p}+\Lambda^{p^{*}_{s}-p})\|u\|_{L^{kp}(\Omega)}^{kp}+2C(\Lambda)\|u \|_{L^{kp^{*}_{s}}(\Omega)}^{kp}\right]. \tag{1.77}\]
Now, by applying the Lebesgue dominated convergence theorem in (1.76), we can choose \(\Lambda>1\) large enough such that \(C(\Lambda)<\frac{(S_{s,p})^{p}}{4C_{1}\,k^{p-1}}\) and hence, from (1.77), it follows that
\[\|u\|_{L^{kp^{*}_{s}}(\Omega)}\leq\left(C_{*}^{\frac{1}{k}}\right)^{\frac{1}{p }}(k^{\frac{1}{k}})^{\frac{p-1}{p}}\|u\|_{L^{kp}(\Omega)}, \tag{1.78}\]
where \(C_{*}=\frac{2C_{1}\,(\Lambda^{1-p}+\Lambda^{p_{s}^{*}-p})}{(S_{s,p})^{p}}>1.\) Next, we start a bootstrap argument on (1.78).
Choose \(k=k_{1}:=\frac{p^{*}_{s}}{p}>1\) as the first iteration. Thus, (1.78) yields that
\[\|u\|_{L^{k_{1}p^{*}_{s}}(\Omega)}\leq\left(C_{*}^{\frac{1}{k_{1}}}\right)^{ \frac{1}{p}}(k_{1}^{\frac{1}{k_{1}}})^{\frac{p-1}{p}}\|u\|_{L^{p^{*}_{s}}( \Omega)}. \tag{1.79}\]
Again by taking \(k=k_{2}:=k_{1}\frac{p^{*}_{s}}{p}\) as the second iteration in (1.78) and then inserting (1.79) in (1.78), we get
\[\|u\|_{L^{k_{2}p^{*}_{s}}(\Omega)}\leq\left(C_{*}^{\frac{1}{k_{2}}}\right)^{\frac{1}{p}}\left[(k_{2})^{\frac{1}{k_{2}}}\right]^{\frac{p-1}{p}}\|u\|_{L^{k_{1}p^{*}_{s}}(\Omega)}\leq\left(C_{*}^{\frac{1}{k_{1}}+\frac{1}{k_{2}}}\right)^{\frac{1}{p}}\left[(k_{1})^{\frac{1}{k_{1}}}\cdot(k_{2})^{\frac{1}{k_{2}}}\right]^{\frac{p-1}{p}}\|u\|_{L^{p^{*}_{s}}(\Omega)}. \tag{1.80}\]
In this fashion, taking \(k=k_{n}:=k_{n-1}\frac{p^{*}_{s}}{p}\) as the \(n^{th}\) iteration and iterating \(n\) times, we obtain
\[\|u\|_{L^{k_{n}p^{*}_{s}}(\Omega)}\leq\left(C_{*}^{\frac{1}{k_{n}}}\right)^{\frac{1}{p}}\left[(k_{n})^{\frac{1}{k_{n}}}\right]^{\frac{p-1}{p}}\|u\|_{L^{k_{n-1}p^{*}_{s}}(\Omega)}\leq\left(C_{*}^{\frac{1}{k_{1}}+\frac{1}{k_{2}}+\cdots+\frac{1}{k_{n}}}\right)^{\frac{1}{p}}\left[(k_{1})^{\frac{1}{k_{1}}}\cdot(k_{2})^{\frac{1}{k_{2}}}\cdots(k_{n})^{\frac{1}{k_{n}}}\right]^{\frac{p-1}{p}}\|u\|_{L^{p^{*}_{s}}(\Omega)}=\left(C_{*}^{\sum_{j=1}^{n}\frac{1}{k_{j}}}\right)^{\frac{1}{p}}\left(\prod_{j=1}^{n}\left(k_{j}^{\sqrt{1/k_{j}}}\right)^{\sqrt{1/k_{j}}}\right)^{\frac{p-1}{p}}\|u\|_{L^{p^{*}_{s}}(\Omega)}, \tag{1.81}\]
where \(k_{j}=\left(\frac{p_{s}^{*}}{p}\right)^{j}.\) Since \(\frac{p_{s}^{*}}{p}>1,\) we have \(k_{j}^{\sqrt{1/k_{j}}}>1,\) for all \(j\in\mathbb{N}\) and
\[\lim_{j\to+\infty}k_{j}^{\sqrt{1/k_{j}}}=1.\]
Hence, it follows that there exists a constant \(C^{*}>1\), independent of \(j,n\in\mathbb{N}\), such that \(k_{j}^{\sqrt{1/k_{j}}}<C^{*}\) and thus, (1.81) gives
\[\|u\|_{L^{k_{n}p_{s}^{*}}(\Omega)}\leq\left(C_{*}^{\sum_{j=1}^{n}\frac{1}{k_{j}}}\right)^{\frac{1}{p}}\left((C^{*})^{\sum_{j=1}^{n}\sqrt{1/k_{j}}}\right)^{\frac{p-1}{p}}\|u\|_{L^{p_{s}^{*}}(\Omega)}. \tag{1.82}\]
Observe that
\[\sum_{j=1}^{\infty}\frac{1}{k_{j}}=\sum_{j=1}^{\infty}\left(\frac{p}{p_{s}^{*}}\right)^{j}=\frac{p/p_{s}^{*}}{1-p/p_{s}^{*}}=\frac{p}{p_{s}^{*}-p}\]
and
\[\sum_{j=1}^{\infty}\frac{1}{\sqrt{k_{j}}}=\sum_{j=1}^{\infty}\left(\sqrt{\frac{p}{p_{s}^{*}}}\right)^{j}=\frac{\sqrt{p}}{\sqrt{p_{s}^{*}}-\sqrt{p}},\]
from (1.82), we get that
\[\|u\|_{L^{\nu_{n}}(\Omega)}\leq(C_{*})^{\frac{1}{p_{s}^{*}-p}}\;(C^{*})^{\frac{p-1}{\sqrt{p}(\sqrt{p_{s}^{*}}-\sqrt{p})}}\;\|u\|_{L^{p_{s}^{*}}(\Omega)}, \tag{1.83}\]
where \(\nu_{n}:=k_{n}p_{s}^{*}.\) Note that, \(\nu_{n}\to+\infty\) as \(n\to+\infty.\) Therefore, we claim that
\[\|u\|_{L^{\infty}(\Omega)}\leq(C_{*})^{\frac{1}{p_{s}^{*}-p}}\;(C^{*})^{\frac{p-1}{\sqrt{p}(\sqrt{p_{s}^{*}}-\sqrt{p})}}\;\|u\|_{L^{p_{s}^{*}}(\Omega)}. \tag{1.84}\]
Indeed, if not, let us assume \(\|u\|_{L^{\infty}(\Omega)}>C_{3}\|u\|_{L^{p_{s}^{*}}(\Omega)},\) where
\[C_{3}=(C_{*})^{\frac{1}{p_{s}^{*}-p}}\;(C^{*})^{\frac{p-1}{\sqrt{p}(\sqrt{p_{s}^{*}}-\sqrt{p})}}.\]
Then there exists \(C_{4}>0\) and a subset \(\mathcal{S}\) of \(\Omega\) with \(|\mathcal{S}|>0\) such that
\[u(x)>C_{3}\|u\|_{L^{p_{s}^{*}}(\Omega)}+C_{4},\ \ \text{for}\ x\in\mathcal{S}.\]
The above implies
\[\liminf_{\nu_{n}\to+\infty}\left(\int_{\Omega}|u(x)|^{\nu_{n}}dx\right)^{ \frac{1}{\nu_{n}}}\geq\liminf_{\nu_{n}\to+\infty}\left(\int_{\mathcal{S}}|u(x )|^{\nu_{n}}dx\right)^{\frac{1}{\nu_{n}}}\]
\[\geq\liminf_{\nu_{n}\to+\infty}\left(C_{3}\|u\|_{L^{p_{s}^{*}}(\Omega)}+C_{4}\right)\left(|\mathcal{S}|\right)^{\frac{1}{\nu_{n}}}=C_{3}\|u\|_{L^{p_{s}^{*}}(\Omega)}+C_{4},\]
a contradiction to (1.83). Therefore, (1.84) holds and hence, \(u\in L^{\infty}(\Omega)\).
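For the reader's convenience, we record how the exponents in (1.83) and (1.84) arise from the two series above combined with (1.82):
\[\left(C_{*}^{\sum_{j=1}^{\infty}\frac{1}{k_{j}}}\right)^{\frac{1}{p}}=(C_{*})^{\frac{1}{p}\cdot\frac{p}{p_{s}^{*}-p}}=(C_{*})^{\frac{1}{p_{s}^{*}-p}},\qquad\left((C^{*})^{\sum_{j=1}^{\infty}\frac{1}{\sqrt{k_{j}}}}\right)^{\frac{p-1}{p}}=(C^{*})^{\frac{p-1}{p}\cdot\frac{\sqrt{p}}{\sqrt{p_{s}^{*}}-\sqrt{p}}}=(C^{*})^{\frac{p-1}{\sqrt{p}(\sqrt{p_{s}^{*}}-\sqrt{p})}}.\]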
2307.09886 | A reinforcement learning approach for VQA validation: an application to diabetic macular edema grading | Recent advances in machine learning models have greatly increased the performance of automated methods in medical image analysis. However, the internal functioning of such models is largely hidden, which hinders their integration in clinical practice. Explainability and trust are viewed as important aspects of modern methods, for the latter's widespread use in clinical communities. As such, validation of machine learning models represents an important aspect and yet, most methods are only validated in a limited way. In this work, we focus on providing a richer and more appropriate validation approach for highly powerful Visual Question Answering (VQA) algorithms. To better understand the performance of these methods, which answer arbitrary questions related to images, this work focuses on an automatic visual Turing test (VTT). That is, we propose an automatic adaptive questioning method that aims to expose the reasoning behavior of a VQA algorithm. Specifically, we introduce a reinforcement learning (RL) agent that observes the history of previously asked questions, and uses it to select the next question to pose. We demonstrate our approach in the context of evaluating algorithms that automatically answer questions related to diabetic macular edema (DME) grading. The experiments show that such an agent behaves similarly to a clinician, asking questions that are relevant to key clinical concepts. | Tatiana Fountoukidou, Raphael Sznitman | 2023-07-19T10:31:35Z | http://arxiv.org/abs/2307.09886v1 |

# A reinforcement learning approach for VQA validation: an application to diabetic macular edema grading
###### Abstract
Recent advances in machine learning models have greatly increased the performance of automated methods in medical image analysis. However, the internal functioning of such models is largely hidden, which hinders their integration in clinical practice. Explainability and trust are viewed as important aspects of modern methods, for the latter's widespread use in clinical communities. As such, validation of machine learning models represents an important aspect and yet, most methods are only validated in a limited way. In this work, we focus on providing a richer and more appropriate validation approach for highly powerful Visual Question Answering (VQA) algorithms. To better understand the performance of these methods, which answer arbitrary questions related to images, this work focuses on an automatic visual Turing test (VTT). That is, we propose an automatic adaptive questioning method that aims to expose the reasoning behavior of a VQA algorithm. Specifically, we introduce a reinforcement learning (RL) agent that observes the history of previously asked questions, and uses it to select the next question to pose. We demonstrate our approach in the context of evaluating algorithms that automatically answer questions related to diabetic macular edema (DME) grading. The experiments show that such an agent behaves similarly to a clinician, asking questions that are relevant to key clinical concepts.
keywords: visual Turing test, visual question answering validation, VQA, interpretability, retinal image analysis, reinforcement learning
## 1 Introduction
Recent advances in computer vision and medical image analysis have shown remarkable performances for numerous diagnostic and intervention applications. With the emergence of large neural networks, or deep learning (DL), tasks that were long considered extremely challenging are now performed with human-level skill.
At the same time, comprehensive validation is increasingly critical as these methods move on towards translation into clinical practice. Yet, as methods have become increasingly powerful, the overall methodology to validate them has largely remained intact. For instance, many challenge competitions (Sun et al., 2020; Allan et al., 2019) compare different methods on a common dataset, using metrics most often borrowed from the computer vision literature. Such competitions have been criticized, as final rankings and outcomes are very often highly skewed to the dataset or metrics used (Maier-Hein et al., 2019).
More generally, good performances, expressed by high metric values, are desirable but not enough to trust a system in healthcare. There are a number of other qualitative factors hidden from strictly quantitative metrics, such as when a system would fail and what are its limitations (e. g., exposed by adversarial attacks, Papernot et al. (2016)), what evidence is used to infer decisions, or how well does the method really understand its input (i. e., interpretability and explainability). Given the growing complexity of DL methods, the answers to the above are especially difficult, if not impossible, to determine. This is one reason why clinical adoption of new machine learning based methodology remains challenging.
For this reason, this work focuses on _evaluating and assessing_ how a trained machine learning (ML) model (in particular, one trained to answer questions related to images) makes its decisions. Specifically, we consider an ML model that has been trained to answer questions related to images for a specific medical task, and we assume that we do not know anything about its internal structure, how it was trained or what data was used to train it. Our goal is to design a method that can assess how this trained model, or _method under evaluation (MuE)_, is able to reason when evaluating unseen test data. Here, we refer to "reasoning" as the ability of the MuE to correctly answer questions relevant to the specified medical task. In this context, we propose here an evaluation method that provides insight that goes beyond common evaluation metrics (e. g., accuracy, precision,
etc.). As such, our method would not directly be used to tackle a clinical task but rather help choose which methods should be used in clinical practice.
### Related works
Enhancing explanatory power has gathered strong recent interest from the medical imaging community. Visual question answering (VQA) (Antol et al., 2015; Hasan et al., 2018; Lau et al., 2018; Wu et al., 2017) is one category of models that provide enhanced explainability. In the typical VQA setting, a model takes as input both a test image and a question in text form and must predict the correct answer to the question posed (Fig. 1a). Questions could be open-ended (i. e., demanding a free-text answer), or closed-ended (i. e., requiring a "Yes/No" answer). The assumption is that a model's capability to answer questions with respect to an image or set of images can make it easier to reveal its inner functioning. This makes VQA particularly attractive for medical applications, since explainability is a necessary asset for an ML model to be integrated in healthcare. There are several remarkable attempts to tackle the task of VQA for medical datasets, proposing different combinations and merging techniques for image and text processing (Wu et al., 2020; Ren and Zhou, 2020; Luhua et al., 2019; Gupta et al., 2021; Zhan et al., 2020; Lin et al., 2021; Tascon-Morales et al., 2022). Here again, however, current metrics used to evaluate VQA methods remain inadequate (i. e., accuracy of correctly answered questions or BLEU scores). It is such VQA models that play the role of MuEs in this work, and it is their trustworthiness that we aim to assess.
In the early 1950s, Alan Turing devised a test to assess if a machine exhibits human-like behavior (Turing, 1950). In a "Turing test", an interrogator questions a responder by asking sequential questions, in order to discern if the responder is human or a machine. Turing tests have been used in medical imaging to evaluate the quality of adversarial attacks, by seeing if an expert can distinguish between a real and an adversarial example (Chuquicusma et al., 2018; Schlegl et al., 2019). Another use of the Turing test paradigm was to assess the interpretability of methods that infer semantic information from images (i. e., classification, segmentation etc.). In Geman et al. (2015), an automated visual Turing test (VTT) algorithm adaptively selects images and questions to pose to a model such that the answers can not be predicted from the history of answers. While this approach has increased explanatory power, it is limited to manually fabricated story lines to guide questioning. This makes it ill-suited for medical applications where such story lines are hard to formalize. In Fountoukidou and Sznitman (2019), questions are posed to a MuE, with the aim to examine whether a concept of interest exists in an entire image. The answers for all images are then used to update a Gaussian process (GP) that reveals the biases of both the answers and the dataset. The GP is consequently used to indicate the subset of questions from which the next question should be sampled. While this method helps reduce uncertainty with fewer questions, it does not guarantee that the chosen questions are appropriate to assess the MuE's reasoning. This is because the questions are designed to reduce the uncertainty over the entire dataset, rather than exposing the different elements of an image that play a role in a MuE output. Since the question selection criteria are not related to any specific medical task, the MuE's reasoning over complex medical decisions cannot be assessed. In this work we opt for a finer level of detail, selecting questions related to each specific image in order to expose the MuE's reasoning.
### Contributions
Figure 1: (a) Example of MuE’s inputs and outputs. (b) Conceptual illustration of a visual Turing test (VTT) for fundus screening. The VTT selects the questions to pose (green arrows) from all possible questions. Answers provided by the MuE are shown in orange. Questioning continues until a clinical decision (e.g., diagnosis) can be made. A MuE that correctly answers questions leading to a clinical decision resembles a trained clinician and is therefore more trustworthy.

Following the VTT of Geman et al. (2015), the present work proposes an automatic, learned interrogator that sequentially selects questions to pose to a black-box MuE. We achieve this by training a reinforcement learning (RL) agent to act as the interrogator that selects questions (see Fig. 1b for an illustration). That is, we do not focus on how to train or optimize a specific MuE that responds to questions, but rather on training an evaluation method that can effectively assess whether the answers produced by the MuE reveal a desirable behavior. Once trained, our interrogator does not simply indicate a static sequence of questions, but adapts and dynamically chooses from a pool of questions, based on the task context. The RL agent is therefore able to produce an arbitrary number of question sequences, depending on its interaction with the MuE.
In this work, we focus on a specific diagnostic application for our VTT: diabetic macular edema (DME) grading. Using prior knowledge of this task, we construct a set of questions that are clinically relevant and train our RL agent to select questions that are necessary and adequate to perform this medical task. The question selection is dynamic, as it depends on both the image and the answers from the MuE. We show in our experiments that the trained RL agent questions a MuE in much the same way as a clinician would for this task. To the best of our knowledge this work is the first to learn such an interrogator with the objective of focusing on core concepts related to a medical task. The contributions of this work are thus the following:
1. The proposal of a pipeline to train a questioning strategy for method validation, that is able to simulate the decision process of an expert and thus enhance a model's evaluation of explainability.
2. Development of novel and appropriate evaluation metrics to assess questioning strategies.
It should be noted that this problem is very hard to solve in a universal way, since the very notion of understanding a topic cannot be disconnected from its particularities. The specific solution we propose here may need to be adjusted for a different clinical application1, but the general idea, training pipeline, and evaluation metrics for a VTT questioning strategy hold.
Footnote 1: In particular, the set of questions should be designed for the medical task, and depending on the latter’s complexity the question selection network might need to be altered.
The remainder of the paper is organized as follows: in the next section we provide a detailed description of our method and the RL method we propose. In Sec. 3, we detail our experimental setup and report results in Sec. 4. We conclude with final remarks in Sec. 5.
## 2 Automated visual Turing test
Posing the right questions is an important step in learning and understanding a given topic. This is one reason why VQA models can link understanding to the questions and associated answers they evaluate at test time. However, the development of a VQA model focuses primarily on the answering part, while the questions are fixed and given beforehand. We claim that the question posing component, however, plays a tremendous role in the VQA evaluation. In order to define whether a responder (in our case a MuE) correctly understands a topic, it does not suffice that they correctly answer some questions. Which questions they answer correctly is critical, as there are questions that are more appropriate than others to expose a "cheating" or incompetent responder. Our hypothesis is that a set of appropriate questions has more discriminative power than a larger set of all possible questions, a hypothesis that is confirmed by our experiments and results.
To this end, this work focuses not on how to answer questions, but on _how to choose which questions to ask_. We do not devise a VQA model that answers questions; we propose a dynamic way to evaluate such models, treated as black-box responders, in an insightful way. In a clinical scenario, a medical task is usually linked to questions that serve as intermediate steps to solve the specific task. That is, clinicians look at an image, and consecutively look for elements in it that allow them to make a diagnosis. Depending on what elements they observe, they adjust their focus on what to look for next (they do not perform an exhaustive search of all possible elements a medical image contains). For instance, to grade DME severity, a number of elements must be observed in a patient's fundus photograph. Thus, the ability to answer the clinical question of what is the DME grade of an eye depends directly on answering questions about these visible elements. Such elements represent concepts that clinicians are experts in interpreting. VQA algorithms can therefore be compared in more detail to a clinician. However, not all questions are equally important for any given medical case. Also, the complete number of all possible questions could be overwhelming, thus hindering an insightful VQA evaluation. Trained clinicians adjust their reasoning process to the pathology they are dealing with, but also to the medical history of the patient, the imaging modality they are observing, what they have already observed in the image before, etc.
In this work, we assume that the questions the experts choose to pose are the most relevant for a clinical task, and therefore the most appropriate to judge if such a task is well understood. We therefore aim to devise a _questioning strategy (QS)_ that would "simulate" an expert, such that the QS can be used to select which questions should be posed to a MuE, in order to expose if the MuE is reliable. For example, a MuE that can correctly infer the DME grade from fundus images, but fails to answer correctly regarding the aforementioned concepts, would not be considered trustworthy, despite its potentially high performance according to standard metrics.
Assuming a VQA model trained on answering questions about fundus images, our QS selects which question will be asked first. The VQA, or MuE, provides a response that is fed back to the QS, which selects the next question to be posed, and so on. The questioning stops when the sequence of question-response pairs for a specific image provides enough information for this image to be graded. The same process can be repeated for several images in the same way a clinician screens images from several patients. We formalize our approach below.
### Problem formulation
We first define the set of all possible text questions relevant to a clinical task, \(\mathcal{A}\)2, of size \(N\). We specify the subset \(\mathcal{A}_{\text{asked},t}\subseteq\mathcal{A}\) as the set of asked questions at time \(t\). We assume that \(\mathcal{A}\) contains closed-ended ("Yes/No") questions indicated by an expert or explicitly defined by a medical textbook. Note that we limit \(\mathcal{A}\) to closed-ended questions so that datasets and methods that are not specifically designed for question answering can be evaluated too. Also, most open-ended questions can be reformulated into a series of "Yes/No" questions.
For each available test image, we further specify a set of \(N_{L}\) locations, or image regions \(\mathcal{L}=\{l_{i}\mid i=0,\ldots,N_{L}-1\}\), and a set of \(N_{C}\) clinically relevant concepts \(\mathcal{C}=\{c_{i}\mid i=0,\ldots,N_{C}-1\}\). Let
\[\mathcal{A}=\mathcal{L}\times\mathcal{C}=\{a_{c}^{l}\mid c\in\mathcal{C},l\in\mathcal{L}\}, \tag{1}\]
be the set of all possible questions, where \(a_{c}^{l}\) is of the form _"Is concept \(c\) present in region \(l\)?"_.
We denote the response of the black box MuE (i. e., the VQA model) to question \(a_{c}^{l}\) as \(r_{c}^{l}\in\) ["N/A", "No", "Yes"], and let \(z:\) ["N/A", "No", "Yes"]\(\rightarrow\{0,0.5,1\}\) be a mapping of the form,
Footnote 2: Note that we choose to use the symbol \(\mathcal{A}\) for the question set, as the questions will serve as actions, following the reinforcement learning convention.
\[z(r)=\begin{cases}0,&r=\text{``N/A" (the question is not asked)}\\ 0.5,&r=\text{``No" (the response is ``No")}\\ 1,&r=\text{``Yes" (the response is ``Yes")}\end{cases}. \tag{2}\]
For clarity we refer to \(a_{c}^{l}\) and \(r_{c}^{l}\) at time \(t\) as \(a_{t}\) and \(r_{t}\) respectively. We then let \((a_{i},r_{i})\) be a question-response pair and
\[\mathbf{H}_{t}=\{(a_{0},r_{0}),(a_{1},r_{1}),\ldots,(a_{t},r_{t})\}\,,\quad\mathbf{H}_ {t}\in\mathcal{H}, \tag{3}\]
be the history of question-response pairs at time \(t\), where \(\mathcal{H}\) is the space of all possible history sequences. We then relate the question set \(\mathcal{A}\) to the \(z\) mapping by defining a transformation \(\phi:\mathcal{H}\rightarrow\{0,0.5,1\}^{N_{C}\times N_{L}}\), where
\[\phi(\mathbf{H}_{t})=\begin{bmatrix}z(r_{0}^{0})&\ldots&z(r_{0}^{N_{L}})\\ \vdots&\ddots&\vdots\\ z(r_{N_{C}}^{0})&\ldots&z(r_{N_{C}}^{N_{L}})\end{bmatrix}. \tag{4}\]
Note that the order of questions in \(\mathbf{H}_{t}\) is not preserved in \(\phi(\mathbf{H}_{t})\) and that we assume that an expert can confirm if a specific instance of \(\mathcal{H}\) is adequate to assess the clinical task (i. e., enough information for a clinical decision to be made).
We are interested in generating questioning strategies that use a minimum number of questions necessary to ascertain a clinically relevant task. We thus define a questioning strategy as a function \(f_{\text{QS}}:\mathcal{H}\rightarrow\mathcal{A}\), that given a history, selects which question should be posed next. An overview of the questioning process is presented in Fig. 2. In the next section, we describe our reinforcement learning method and how we train an agent to select questions with the aim of maximizing a scalar reward.
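To make the interaction in Fig. 2 concrete, the following minimal Python sketch implements the questioning loop for a single image. The interfaces (`select_question`, `answer`, `is_terminal`) are hypothetical placeholders for the QS and the black-box MuE, not APIs from our implementation.

```python
import numpy as np

N_C, N_L = 3, 5  # number of concepts and image regions (cf. Sec. 3.2)
Z = {"N/A": 0.0, "No": 0.5, "Yes": 1.0}  # the mapping z of Eq. (2)

def run_episode(qs, mue, image, max_questions=20):
    """Question a black-box MuE about one image until a terminal state."""
    state = np.zeros((N_C, N_L))  # phi(H_t): unasked questions map to z("N/A") = 0
    history = []
    for _ in range(max_questions):
        c, l = qs.select_question(state)    # a_t: "Is concept c present in region l?"
        response = mue.answer(image, c, l)  # MuE reply: "Yes" or "No"
        state[c, l] = Z[response]           # update phi(H_t) with the new pair
        history.append(((c, l), response))
        if qs.is_terminal(state):           # enough information for a clinical decision
            break
    return history, state
```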
### Questioning strategy generation
Our goal is to learn a function that can adaptively select questions to pose to a black-box MuE, and we aim to do so such that our method yields questions that are "reasonable" with respect to the task the MuE is attempting to perform. To this end, we use a reinforcement learning (RL) approach (Sutton and Barto, 2018), as our aim is to outline a decision process that maximizes a reward which we define below.
RL is a branch of ML, with the particularity of _learning from interactions_. An _agent_ has to learn what _actions_ to take, by interacting with an _environment_ and observing how different actions affect it. Specifically, we propose to utilize RL to construct an agent to model the questioning function \(f_{\text{QS}}\). That is, we treat the QS as an _agent_ that selects _actions_ (i. e., questions to pose to the MuE) at every step of the MuE evaluation. The _environment_ is the way the MuE perceives and understands the medical image. Based on the history \(\mathbf{H}_{t}\), the agent selects the question that is likely to lead to a clinical decision sooner (i. e., via fewer questions), thereby simulating a doctor's reasoning process. The agent-environment interaction is represented by the question posing, the observation of the MuE response and a produced _reward_ signal. This observation helps update the agent's view of the environment, meaning the agent's state, and therefore affects the selection of the next action (i. e., question). In our setup, we establish episodic tasks that are completed in a finite number of steps. Terminal states are those where sufficient questions are asked for a clinical diagnosis to be established. Such states are specific to the medical task, and can be inferred by published medical criteria (as is the case in our DME application), or indicated by experts. Since our goal is to ask the questions that are relevant for diagnosis, the reward is defined so as to encourage questions in that direction. The RL elements that are used in our approach are described below.
#### 2.2.1 State, actions and observations
We specify the state that the agent observes at timestep \(t\) to be \(s_{t}=\phi(\mathbf{H}_{t})\). That is, the state reflects the history of the questions posed by the QS (interrogator) and the responses given by the MuE (responder) for a given image. The action corresponds to the question \(a_{t}\) that is posed to the MuE. The agent's observation is the answer, \(r_{t}\), that the MuE provides, which is treated as stochastic in nature. The agent then uses this observation to update its internal state, by updating all the values related to the given question-response pair.
Figure 2: Overview of questioning process for a single image.
#### 2.2.2 Reward
We define the immediate reward after a transition from state \(s\) to \(s^{\prime}\) following action \(a\) as
\[R^{a}_{s,s^{\prime}}=\begin{cases}0&\text{if $s^{\prime}$ is not terminal and $a$ has never been chosen}\\ 1&\text{if $s^{\prime}$ is terminal and $a$ has never been chosen}\\ -1&\text{if $a$ has been chosen before}\end{cases} \tag{5}\]
We then compute the discounted reward for an episode as
\[G_{\text{episode}}=\sum_{t=0}^{t=T-1}R^{a}_{s_{t},s_{t+1}}\cdot\gamma^{t}, \tag{6}\]
where \(T\) is the length of the episode and \(\gamma\in(0,1)\) is a discount factor, such that higher values of \(\gamma\) emphasize future reward.
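As a small illustration, the reward of Eq. (5) and the discounted return of Eq. (6) can be written as follows (a sketch; the value \(\gamma=0.8\) matches the setting reported in Sec. 3.4):

```python
def step_reward(next_is_terminal, already_asked):
    """Immediate reward of Eq. (5) for one posed question (action)."""
    if already_asked:
        return -1.0  # repeated questions are penalized
    return 1.0 if next_is_terminal else 0.0

def episode_return(rewards, gamma=0.8):
    """Discounted return G = sum_t R_t * gamma^t of Eq. (6)."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))
```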
#### 2.2.3 Action-value function approximation
As in most RL settings, the action-value function, \(Q(s,a)\), estimates the discounted future reward when being in state \(s\), after taking action \(a\), and the policy \(\pi(s,a)=p(a_{t}\!\!=\!\!a|s_{t}\!\!=\!\!s)\) is a probability function that maps state-action pairs to probabilities. Our goal then is to learn an optimal policy, which maximizes \(Q(s,a)\),
\[Q_{*}(s,a)\dot{=}\max_{\pi}Q_{\pi}(s,a),\quad\forall s\in\mathcal{S},a\in \mathcal{A}. \tag{7}\]
An agent following an optimal policy \(\pi_{*}\) is our desired QS. For a given \(Q(s,a)\), a greedy policy is defined as
\[\pi(s,a)=\begin{cases}1&\text{if $a=\operatorname*{arg\,max}_{a}Q(s,a)$}\\ 0&\text{if $a\neq\operatorname*{arg\,max}_{a}Q(s,a)$}\end{cases}, \tag{8}\]
and a policy that is greedy with respect to an optimal value function \(Q_{*}(s,a)\) is an optimal policy. Our task therefore consists in finding \(Q_{*}(s,a)\).
As in recent trends, we approximate \(Q(s,a)\) with a parameterized function \(\tilde{Q}(s,a,\mathbf{\theta})\), which we model using a neural network (NN) \(\tilde{Q}(\cdot)\). Fig. 3 depicts the architecture of our proposed network.
#### 2.2.4 Masked policy
Intuitively, one can assume that there is no value in asking the same question twice. However, this creates a complexity in the learning process of the agent, since exactly the same action can yield very different rewards (i. e., high if it has not been chosen before and assists diagnosis, and low if it has been chosen before, regardless of its importance for diagnosis, since no new information is added to the agent's state). We tackle this issue explicitly by imposing a very low return on actions that have already affected the agent's state, thereby forcing the policy to assign them zero probability. Formally, we impose \(\pi(s,a_{t})=0\) if \(a_{t}\in\mathcal{A}_{\text{asked},t-1}\) as in Kucur et al. (2019).
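A minimal sketch of this masking: already-asked actions receive an effectively infinite penalty before the argmax, so they can never be selected again (illustrative code, not our exact implementation):

```python
import numpy as np

def masked_greedy_action(q_values, asked_mask):
    """Greedy action over Q-values, excluding already-asked questions.

    q_values:   1-D array of Q(s, a) estimates for all N questions.
    asked_mask: boolean array, True where the question was already asked.
    """
    masked = np.where(asked_mask, -np.inf, q_values)  # impose a very low return
    return int(np.argmax(masked))
```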
#### 2.2.5 Training
To learn an optimal approximation of \(Q_{*}\), we utilize the mean squared error (MSE) to measure the distance between \(\tilde{Q}(s,a,\mathbf{\theta})\) and \(Q_{*}(s,a)\),
\[\mathcal{L}_{\mathbf{\theta}}=\mathbb{E}_{a\sim\mathcal{A}_{s},s\sim\mathcal{S}} \left[\left(\tilde{Q}(s,a,\mathbf{\theta})-Q_{*}(s,a)\right)^{2}\right]. \tag{9}\]
In addition, we propose two different training schemes in this work: (1) Monte Carlo (MC) learning and (2) Q-learning (Watkins and Dayan, 1992). We show the performance and behavior of both training schemes in our results section.
In MC learning, a batch update is performed after an entire episode is finished. The update sample batch consists of the state-action pair at each episode step, and the target for each such pair is the discounted reward from that step onward. The MSE loss is computed as
\[\mathcal{L}_{\text{MC}}=\mathbb{E}_{\mathcal{E}}\left[\mathcal{L}_{\text{MC}}^{\mathcal{E}}\right]=\mathbb{E}_{\mathcal{E}}\left[\frac{1}{T}\sum_{(s_{t},a_{t})\in\mathcal{E}}\left(G_{t}-\tilde{Q}(s_{t},a_{t},\mathbf{\theta})\right)^{2}\right], \tag{10}\]
where \(\mathcal{E}\) is an episode of length \(T\), and \(\mathcal{L}_{\text{MC}}^{\mathcal{E}}\) is the average loss calculated for a single episode.
Conversely, in Q-learning an update is performed after every episode step (in our case, after every question), with the update target being the sum of the observed immediate reward \(R^{a}_{s,s^{\prime}}\) and the maximum predicted discounted reward from then on. We also use _experience replay_ with a replay memory (Lin, 1993; Mnih et al., 2013), where we store the agent's experiences at every step in a replay memory \(\mathcal{M}=\{e_{0},\dots,e_{N_{\text{replay}}-1}\}\) of size \(N_{\text{replay}}\), with \(e_{i}=(s_{t},a_{t},R^{a_{t}}_{s_{t},s_{t+1}},s_{t+1})\) being an experience tuple. When the time for the update comes, instead of updating with the step that was just taken, we sample a minibatch of size \(N_{b}\) from the replay memory, \((e_{i,0},\dots,e_{i,N_{b}-1})\sim\mathcal{M}\). This way, the update batch has fewer chances to include samples with strong correlations. The lack of such strong correlations is desirable because it reduces the variance of the updates. In addition, each experience tuple can be used in several updates, therefore making better use of the dataset. The MSE loss for this case is
\[\mathcal{L}_{\text{Q}}=\mathbb{E}_{(s,a,R,s^{\prime})\sim\mathcal{M}}\left[\left(R+\gamma\max_{a^{\prime}}\tilde{Q}(s^{\prime},a^{\prime},\mathbf{\theta})-\tilde{Q}(s,a,\mathbf{\theta})\right)^{2}\right], \tag{11}\]
and we make use of an \(\epsilon\)-greedy policy to generate episodes,
\[\pi(s_{t},a_{t})=\begin{cases}\frac{\epsilon}{|\mathcal{A}|}+1-\epsilon&\text{if $a_{t}=\operatorname*{arg\,max}_{a}Q(s_{t},a)$}\\ \frac{\epsilon}{|\mathcal{A}|}&\text{if $a_{t}\neq\operatorname*{arg\,max}_{a}Q(s_{t},a)$}\end{cases}. \tag{12}\]
Figure 3: Function approximation network. The numbers in parentheses refer to the layer size.
It can be easily noticed that the greedy policy is a special case of \(\epsilon\)-greedy with \(\epsilon=0\). We use an \(\epsilon\) decay scheme, whereby training starts with \(\epsilon=1\) and \(\epsilon\) is progressively reduced by a factor of \(\epsilon_{\text{decay}}\). This scheme allows for early exploration when the agent is unaware of the environment, and progressively shifts toward exploiting high-reward state-action pairs as the agent's confidence increases. The questioning stops when a terminal state is reached, or a maximum number of questions is asked (e. g., 20 questions in our experiments).
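For concreteness, one Q-learning update with experience replay (Eq. (11)) could look as follows. This is a sketch assuming a PyTorch Q-network `q_net` and experience tuples extended with a terminal flag; it is not the exact implementation of Algorithms A.1 and A.2.

```python
import random
import torch
import torch.nn.functional as F

def q_learning_update(q_net, optimizer, memory, batch_size=8, gamma=0.8):
    """One minibatch update of Eq. (11), sampled from the replay memory."""
    batch = random.sample(memory, batch_size)  # breaks sample correlations
    states = torch.stack([torch.as_tensor(e[0], dtype=torch.float32) for e in batch])
    actions = torch.tensor([e[1] for e in batch])
    rewards = torch.tensor([e[2] for e in batch], dtype=torch.float32)
    next_states = torch.stack([torch.as_tensor(e[3], dtype=torch.float32) for e in batch])
    terminal = torch.tensor([e[4] for e in batch], dtype=torch.float32)

    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrap target is held fixed during the update
        target = rewards + gamma * (1.0 - terminal) * q_net(next_states).max(dim=1).values
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```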
Algorithms A.1 and A.2 in the Supplementary Material (Appendix A) outline the MC learning and Q-learning with function approximation, respectively.
### Questioning strategy evaluation
In this setting, quantifying how well questions are posed is not obvious and we propose a number of ways to do so. In general, we are interested in how quickly a QS can reach a clinically adequate state and how well a QS can differentiate MuEs exhibiting different behaviors.
**Episode rewards.** First, we consider the achieved episode reward on a number of testing images as an indication of the QS ability to ask proper questions.
**MuE separation.** We also wish to see how questioning strategies can be used to differentiate MuEs. Specifically, we are interested in examining whether a good QS can differentiate between MuEs with the same overall performance. That is, we wish to promote appropriate QSs that can ask relevant questions, instead of just asking all possible questions. Several MuEs may have the same accuracy over all the questions, but some of those MuEs are more appropriate than others for clinical integration. This is yet another reason why asking all possible questions might not be enlightening enough when evaluating a MuE. To identify the more reliable MuEs, we look at the rate of correct answers for each QS. The overall performance of a MuE is based on the correctly answered questions on the entire question set \(\mathcal{A}\). Note however that the MuE performance induced by a QS depends on the questions \(\mathcal{A}_{\text{asked}}\) that the strategy posed. Since the different MuEs represent different reasoning behaviors, we would like to identify a QS that can separate MuEs despite a similar average accuracy.
**MuE accuracy approximation with beta distribution.** To examine a questioning strategy's ability to distinguish between MuEs, we treat every question-response pair produced by a QS-MuE pair as a Bernoulli trial. Over a test set then, the probability mass function of these trials can be computed as,
\[P_{\text{Bernoulli}}(k,p)=p^{k}(1-p)^{1-k},\quad\text{for }k\in\{0,1\},\ p\in[0,1], \tag{13}\]
where \(k=1\) if the MuE answers correctly, and \(k=0\) otherwise.
We then approximate the accuracy achieved by each MuE with a beta distribution, which we update through Bayesian inference. This involves computing,
\[p_{\text{beta}}(x,\alpha,\beta)=\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}, \tag{14}\]
where \(\alpha\) and \(\beta\) are the beta distribution parameters and \(B(\cdot,\cdot)\) is a normalizing factor. Note that we can interpret the numbers \(\alpha-1\) and \(\beta-1\) as the number of successes and failures of an experiment, respectively. Given that the beta distribution is the conjugate prior of the Bernoulli, and that the beta distribution describes a distribution over probabilities, we can model the MuE performance as perceived by a QS by computing,
\[p_{\text{qs}}^{u}(x,\alpha_{\text{qs}}^{u},\beta_{\text{qs}}^{ u})=\frac{x^{\alpha_{\text{qs}}^{u}-1}(1-x)^{\beta_{\text{qs}}^{u}-1}}{B( \alpha_{\text{qs}}^{u},\beta_{\text{qs}}^{u})}\] \[\forall\text{qs}\in\mathcal{I}\text{ and }\forall u\in\mathcal{U}, \tag{15}\]
where \(\mathcal{I}\) is the set of all questioning strategies (interrogators), and \(\mathcal{U}\) is the set of all MuEs. We initialize each one of the \(p_{\text{qs}}^{u}\) with an uninformative prior (i. e., \(\alpha=\beta=1\)), which gives a uniform distribution in the interval \([0,1]\), and we update the parameters after every observed question-response. After the questioning is over, we end up with one beta distribution per QS per MuE, that characterizes the performance of each MuE as perceived by each QS.
Note that because each QS asks a different number of questions, we can anticipate lower variance beta distributions for the strategies that ask more questions, as these will undergo a greater number of Bayesian updates. To avoid this bias, we set a limit \(N_{u}\) (for each MuE) in the number of questions that are used to define parameters \(\alpha\) and \(\beta\). We then ask \(N_{u}\) questions from each QS, and we update the corresponding beta distributions. \(N_{u}\) is set to the number of questions of the most effective QS.
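A minimal sketch of this Bayesian update: starting from the uninformative prior \(\alpha=\beta=1\), each question-response pair (a Bernoulli trial) increments one of the two parameters.

```python
def update_beta(alpha, beta, correct):
    """One Bernoulli observation updates the conjugate beta posterior."""
    return (alpha + 1, beta) if correct else (alpha, beta + 1)

# e.g., a MuE answering 7 of N_u = 10 posed questions correctly:
alpha, beta = 1, 1  # uninformative prior
for k in [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]:  # k = 1 iff the answer was correct
    alpha, beta = update_beta(alpha, beta, k == 1)
# posterior mean alpha / (alpha + beta) = 8 / 12, i.e., about 0.67
```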
**Information radius.** To quantify the ability of a QS to differentiate between responding MuEs, we use the information radius measure proposed in Sibson (1969). The information radius is a symmetric measure of separation, or dissimilarity coefficient, between distributions. It is inspired by the Kullback-Leibler divergence (KL divergence), but is symmetric and generalizes to more than two distributions. Assuming \(N\) distributions with probabilities \(p_{i},\forall i\in\{1,\ldots,N\}\), defined in the same probability space \(\mathcal{X}\), the information radius is calculated as,
\[R=\frac{1}{N}\sum_{i=1}^{N}D_{\text{KL}}\bigg{(}p_{i}\;\bigg{\|}\;\frac{\sum_{j=1}^{N}p_{j}}{N}\bigg{)}. \tag{16}\]
It is therefore the average KL divergence of each distribution from the mean distribution. Note that it is always finite and bounded by \(N\log K\), where \(\frac{1}{K}\) is the probability at each point in \(\mathcal{X}\) if \(p\) is uniform. Combining Eq. (15) and Eq. (16), we thus
compute the information radius for a questioning strategy given a set of MuEs \(\mathcal{U}\) as
\[R_{\text{qs}}=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}D_{KL}\bigg{(}p_{ \text{qs}}^{u}\;\Bigg{\|}\;\frac{\sum_{u\in\mathcal{U}}p_{\text{qs}}^{u}}{| \mathcal{U}|}\bigg{)},\quad\forall\text{qs}\in\mathcal{I}. \tag{17}\]
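Numerically, Eq. (17) can be approximated by discretizing each beta posterior on a grid and averaging the KL divergences to the mean distribution, as in the following sketch (the grid-based KL is an approximation; `scipy` is assumed to be available):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def information_radius(beta_params, grid_size=1000):
    """Approximate information radius of Eq. (17).

    beta_params: list of (alpha, beta) pairs, one posterior per MuE,
                 as perceived by a single questioning strategy.
    """
    x = np.linspace(1e-4, 1.0 - 1e-4, grid_size)  # interior grid of [0, 1]
    ps = np.array([beta_dist.pdf(x, a, b) for a, b in beta_params])
    ps /= ps.sum(axis=1, keepdims=True)  # normalize each density on the grid
    mean = ps.mean(axis=0)               # mean (mixture) distribution
    kl = (ps * np.log(ps / mean)).sum(axis=1)  # KL(p_i || mean) per MuE
    return float(kl.mean())
```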
## 3 Experimental setup
We describe below an overview of the experimental setup we use to validate our approach. Specifically, we propose to validate our method for the task of DME grading, where we first describe the data we use in our experiments and then detail a number of comparison methods.
### DME grading and datasets
DME is the build-up of fluid in the macula of the retina. This fluid increase leads to blurry or wavy vision, near or in the center of the visual field (Bandello et al., 2017). Color fundus photography (see Fig. 4) plays a key role in diagnosing and assessing the risk levels associated with the condition. One important type of retinal lesions visible in color fundus images is called _hard exudate_ and is well known to be linked to the disease. To assess the risk level of DME, the following guidelines of the ETDRS grading scale (Group et al., 1991) are typically used:
**Grade 0**: No apparent hard exudates,
**Grade 1**: Presence of hard exudates outside the radius of one disc diameter from the macula center (fovea),
**Grade 2**: Presence of hard exudates within the radius of one disc diameter from the macula center (fovea).
Despite the apparent simplicity of this task, any automated method that grades for DME must implicitly be able to: (1) classify and identify hard exudate lesions, (2) localize the fovea, (3) segment the optic disk and (4) compare relative distances and sizes between the fovea and hard exudates. We hence consider a MuE that can correctly answer questions on the aforementioned points to be more explainable and trustworthy.
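To illustrate how points (1)-(4) combine into a grade, the following hedged sketch encodes the ETDRS rules above, assuming the relevant structures have already been detected; the inputs (exudate centers, fovea center and disc diameter, in pixels) are illustrative rather than our actual interface.

```python
import math

def dme_grade(exudate_centers, fovea, disc_diameter):
    """ETDRS-style DME grade from detected structures (illustrative).

    exudate_centers: list of (x, y) hard-exudate centers; fovea: (x, y).
    All distances are in pixels.
    """
    if not exudate_centers:
        return 0  # grade 0: no apparent hard exudates
    nearest = min(math.dist(e, fovea) for e in exudate_centers)
    # grade 2 if an exudate lies within one disc diameter of the fovea,
    # grade 1 otherwise
    return 2 if nearest <= disc_diameter else 1
```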
Given that our goal is to train a questioning strategy that can assess if black-box trained MuEs are performing DME grading according to the correct clinical reasoning, we propose to build a dedicated VQA dataset for our experiments. We do so because, despite the existence of a few medical VQA datasets (Lau et al., 2018; He et al., 2020), they are not appropriate, as we require questions and responses that relate to a specific medical task, where the medical outcome is also known. For this reason, we use datasets that were not initially designed for VQA, but contain all necessary annotations for our purpose.
To this end, we make use of two different color fundus photograph datasets4 and summarize the number of images per grade in Table 1:
Footnote 4: For each dataset, we use a subset of images that contain annotations relating to hard exudates and this precludes us from using fundus datasets with only DME annotations, such as MESSIDOR (Decencière et al., 2014).
**Indian Diabetic Retinopathy image Dataset (IDRiD) (Prasanna et al., 2018):** 148 color fundus images from both healthy and diabetic retinopathy subjects, with optic disc and hard exudate segmentation masks. Fovea localization was manually performed. The dataset is split into a 60%-10%-30% training, validation and test set, respectively.

**eOphtha Dataset (Decencière et al., 2013):** 62 color fundus images from both healthy and diabetic retinopathy subjects, with hard exudate segmentation masks. Optic disc segmentation and fovea localization were manually performed. The dataset is split into a 60%-10%-30% training, validation and test set, respectively.
The training, validation, and test sets of the two datasets are respectively concatenated, and used in the QS generation and evaluation.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **IDRiD** & **e-Ophtha** & **Total** \\ \hline
**DME grade 0** & 71 (48\%) & 23 (37\%) & 93 (44\%) \\
**DME grade 1** & 3 (2\%) & 10 (16\%) & 13 (6\%) \\
**DME grade 2** & 74 (50\%) & 29 (47\%) & 105 (50\%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Number and percentage of images per grade in the datasets.

Figure 4: Examples of fundus images with groundtruths of lesions, anatomical structures and DME risk gradings.

### Question set

We now specify the set of questions \(\mathcal{A}\) as defined in Eq. (1) for the task of DME grading. We let,

\[\mathcal{L}=\{\text{whole image, 1st quadrant, 2nd quadrant, 3rd quadrant, 4th quadrant}\},\]

such that a fundus image is divided into 4 non-overlapping quadrants. This division is made so that different image regions can be questioned separately, allowing not only the presence but also the localization of a concept to be determined. That is, we treat the quadrant division as a proxy for the localization of a structure in a closed-ended questioning setup5. We also define the concepts to test by the set of clinically relevant structures,
Footnote 5: We chose to divide the image into quadrants because they are relevant for DME grading and fundus image inspection. Extending to finer or different grids for applications that require it is trivial.
\[\mathcal{C}=\{\text{hard exudate, fovea, optic disc}\}.\]
Thus the entire question set is \(\mathcal{A}=\mathcal{L}\times\mathcal{C}\), and we depict examples of the regions and concepts in Fig. 5.
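For illustration, the resulting question set can be enumerated directly (a sketch; the exact question phrasing is ours):

```python
from itertools import product

REGIONS = ["whole image", "1st quadrant", "2nd quadrant",
           "3rd quadrant", "4th quadrant"]
CONCEPTS = ["hard exudate", "fovea", "optic disc"]

# A = L x C of Eq. (1): 5 regions x 3 concepts = 15 closed-ended questions
QUESTIONS = [f"Is {c} present in the {l}?" for l, c in product(REGIONS, CONCEPTS)]
assert len(QUESTIONS) == 15
```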
### Generating clinically relevant question streams
Using our question set \(\mathcal{A}\), we are now able to generate sets of closed-ended questions for any image by querying different image locations and structures of interest. That is, we can simulate streams of questions, with some being appropriate for diagnosis inference (i. e., corresponding to terminal states). In Sec. 3.4, we also outline different questioning strategies that produce different such streams. Specifically, we can simulate questioning strategies that pose similar questions to those of an expert.
To do so, we make the following assumptions: (1) if the optic disc is localized, its size is assumed to be known, (2) localization of the optic disc is a necessary condition for diagnosis if exudates are present, (3) if the fovea is in the same quadrant as hard exudates, their distance is assumed to be lower than one optic disc diameter and (4) the fovea and the optic disc need to be localized. Although (3) does not hold for the entire dataset, we confirmed that it is valid for 96% of the images. While (4) is not strictly necessary for diagnosis, we add it as an extension, since a MuE that properly understands a fundus image should be able to identify the optic disc and the fovea.
We henceforth refer to the use of assumptions (1-3) as **simple-A**, and to the use of assumptions (1-4) as **extra-U-A** (from extra-Understanding). In the Supplementary Material (B) we provide an illustration for the DME grading decision process described in Sec. 3.1, for both **simple-A** and **extra-U-A** in the form of decision trees, as well as give detailed statistics and examples as to the validity of these assumptions.
### Questioning strategies
We now describe the different questioning strategies we will compare in our experiments. These include:
**Random QS:** The next question posed is randomly chosen from the set of _not asked questions_ \(\mathcal{A}\setminus\mathcal{A}_{\text{asked}}\). Note that this reflects the most common form of MuE evaluation found in the literature.
**Textbook QS:** Considered our gold standard, as it follows clinical reasoning and approximates clinical thinking, as described in Sec. 3.1. Fig. 10 in the Supplementary Material shows two such questioning strategies.
**Decision Tree QS, Random Budget (DT-RB):** This strategy is generated by traversing a classification tree that is trained to perform the DME grading task. Specifically, we train the classification tree to correctly assess the grade of an image using a limited budget of randomly selected history sequences (see Eq. (3)). Each node of the trained tree corresponds to a question, and each edge to a response. We then use the tree splits to select the next question that should be posed (see Supplementary Material, C for more details).
**Decision Tree QS, Textbook Budget (DT-TB):** Similar to the above, but where the classification tree is trained on a budget of history sequences that are selected for each image according to the textbook criteria described in Sec. 3.1.
**Reinforcement learning QS (RL QS):** Our proposed strategy, as described in Sec. 2.2. We show results when our method is trained using MC learning, denoted **(MC)**, and when trained with Q-learning, denoted as **(Q)**.
We train our RL methods with an Adam optimizer (Kingma and Ba, 2014). The discount factor was set to \(\gamma=0.8\), so as to emphasize the final reward but to achieve it as quickly as possible. We start by generating one episode per training image with a random policy (\(\epsilon\)-greedy with \(\epsilon=1\)), and we reduce the value of \(\epsilon\) by a factor of \(\epsilon_{\text{decay}}=0.9\) after every epoch. We run one episode per training image for each epoch. For Q-learning, we make use of a replay memory of size 500 and we update with minibatches of size 8. We train for 50 epochs and we keep the model with the best validation reward after the 15th epoch.
## 4 Results
We show the results of our experiments in this section and we provide additional results in the Supplementary Material.
### Preliminaries
Figure 5: Regions and concepts used in questioning.

To first establish the difficulty in inferring coherent DME grades, we evaluate the DME classification performance using standard tree classifiers. For the scope of this work, we train classification trees as an intermediate step to generate a questioning strategy, and to show that good classification results do not necessarily go hand in hand with relevant clinical criteria. As the DME grading task is not the focus of this work, we do not use a powerful or complex training scheme, but one that can easily be used to generate a QS.
To this end, we train 50 trees to predict DME grades from history instances, \(H\) (Eq. (3)), and report the classification accuracy on the training, validation and test sets in Table 2. We see that such tree-based classifiers perform well when looking at usual classification metrics. However, as we show below, this is not an indication that they can successfully be used to generate a good QS.
To support this statement, we present in Fig. 6 the average reward during training of the RL based QSs, on the validation set, as compared to the baselines. The training is performed 120 times and we show here the mean and standard deviation of the reward. From these, we clearly see that the proposed RL agents improve their respective policies and achieve higher rewards than the QSs that were generated from the classification trees.
### Comparison of questioning strategies
We provide in Fig. 7 a visual representation of each of the QS methods in the form of binary trees, for the **simple-A** version (corresponding trees for the **extra-U-A** version are available in the Supplementary Material, D). In addition, we provide an example of the question streams produced by each QS for a DME grade 2 sample in Fig. 8 (with more question streams available in the Supplementary Material, E). These illustrate a potential use-case for clinicians to gain insight into a MuE's responses. Examples of question-response streams for different MuEs can be presented to experts, and the QS depiction as a decision tree can help them choose a QS that they consider reliable as a validation tool. Subsequently, this insight can then be used to select appropriate MuEs.
In Fig. 7, we see that some strategies are better than others at reaching terminal states with few questions. For instance, the random strategy is represented by a wide tree whose paths reflect no reasoning. Conversely, we can see that the RL based questioning strategies are similar to the textbook strategy, indicating that these strategies resemble more closely the clinical decision process.
An interesting observation regarding **DT-RB QS** and **DT-TB QS** is that even though the accuracy of the classification trees is very high (see Table 2), the generated questioning strategies do not reflect a clinical decision process. This can be explained by the fact that the statistical properties of the dataset are exploited when learning tree splits. However, although an attribute, or combination of attributes, may be common among the samples of the same class, this does not necessarily correspond to an important clinical criterion. For example, samples of the DME class 0 (healthy), may happen to have a value of 0 for the question _"Is there optic disc in the whole image?"_, since if there is no exudate, a diagnosis can be achieved without asking about or locating the optic disc. This however is by no means an indication for a diagnosis. A sequence of questions and responses may therefore be considered by the trained tree as sufficient to reach diagnosis, while this sequence might be far from a clinician's reasoning. Such a scenario clearly does not enhance the explainability of the MuE.
One can also observe that **DT-RB** generates a better questioning strategy than **DT-TB**. This is expected as random episodes contain more feature variability and therefore increase the likelihood of identifying important features that participate in the diagnosis. To illustrate this, the **Textbook QS** will never ask about the optic disc for a healthy sample. This can fool the classification tree to consider the absence of the optic disc a sign of health. For the **Random QS** however, it could be that questions about the optic disc are asked for some of the healthy samples, and for others not. This way, the classifier can more easily infer that the presence of hard exudates is more important.
#### 4.2.1 Rewards on test set
Using Eq. (6) to compute rewards after running an episode per QS per test image, we show the average reward for the entire test set and for the different DME grades in Table 3.
Here we can see that both RL based questioning strategies perform well and in some cases outperform the gold standard. This occurs because both those QSs are trained to exploit the dataset properties so as to quickly attain terminal states and reduce the number of questions they need to pose. This hence increases the episode reward. For example, if the optic disc is observed in a particular quadrant in most images, it is likely that this quadrant will be queried first. This is similar to the use of experience in order to "know where to look first", which a real clinician may have, but our implementation of the **Textbook QS** does not. Note that for DME grade 1, the available cases are so few that the trained QS methods do not outperform the **Textbook QS** in terms of reward.
| | **Training set** | **Validation set** | **Testing set** |
| --- | --- | --- | --- |
| _simple-A_ | | | |
| **DT-RB** | \(0.99\pm 0.003\) | \(0.92\pm 0.024\) | \(0.91\pm 0.030\) |
| **DT-TB** | \(0.99\pm 0.005\) | \(0.93\pm 0.015\) | \(0.92\pm 0.029\) |
| _extra-U-A_ | | | |
| **DT-RB** | \(0.99\pm 0.003\) | \(0.92\pm 0.022\) | \(0.90\pm 0.034\) |
| **DT-TB** | \(0.99\pm 0.006\) | \(0.93\pm 0.015\) | \(0.92\pm 0.030\) |

Table 2: Mean classification accuracy of the decision tree classifiers trained with a random budget (**DT-RB**) and a textbook budget (**DT-TB**) of history sequences. Results are averaged over 50 tries and reported as \(\mu\pm\sigma\).

#### 4.2.2 Rewards on different MuEs

To see how different QS methods can differentiate between different MuEs, we generate several synthetic MuEs. Specifically, each generated MuE has the same accuracy performance in terms of the percentage of correctly answered questions over the entire question set \(\mathcal{A}\):
**Random MuE:**: _accuracy_\(\times\) 100% of randomly selected questions are answered correctly.
**Reasonable MuE:**: Questions that are relevant for diagnosis are answered correctly 95% of the time, while the rest are answered correctly \(x\)% of the time (so that the total accuracy is still _accuracy_\(\times\) 100%).
**Unreasonable MuE:**: Questions that are irrelevant for diagnosis are answered correctly 95% of the time, while the rest are answered correctly \(x\)% of the time (so that the total accuracy is still _accuracy_\(\times\) 100%).
Note that none of the above MuEs are trained. They are fabricated to intentionally exhibit distinct behaviors, while having a common rate of correct answers. Hence, the aim of this experiment is not to design an optimal MuE to answer questions, but to explore whether a QS can see beyond the common accuracy. This experiment justifies the need for a QS in the first place, by exposing that the rate of correct answers over all possible questions is not an adequate quality criterion, and can hide differences in the reasoning behavior of the MuEs.
Table 4 shows the average test set rewards for these MuEs when they have a common 70% accuracy of correct answers over the entire question set \(\mathcal{A}\) (see Supplementary Material, Appendix F for performances on each DME grade, for MuEs with a common accuracy of 60%, 70% and 90%). It should be noted that in all cases, the QSs are trained with the groundtruth answers and not with the MuE's ones, such that the trained QSs are exactly the same in all columns of Table 4. In general, we expect the rewards for a reasonable MuE to be closer to the ones of a "perfect responder" MuE, both for the entire test set and for the separate grades, an expectation that is confirmed by the results. From these results, we observe that the RL based questioning strategies are consistently better at asking important questions, even if the MuE responder is not perfect. These results confirm the value of selecting which questions to pose to a MuE, instead of posing all possible questions: if we posed all questions, the average accuracy would be the same for the three MuEs, and the one with the more desirable behavior would not stand out.

| **QS** | **Grade 0** | **Grade 1** | **Grade 2** | **Total** |
| --- | --- | --- | --- | --- |
| _simple-A_ | | | | |
| **Textbook (gold standard)** | 1 [1] | 0.21 [8.1] | 0.26 [7.6] | 0.58 [4.7] |
| **Random** | 0.36 [6.9] | 0.10 [11.6] | 0.15 [10.4] | 0.24 [8.9] |
| **DT-RB** | **1 [1]** | 0.09 [12.2] | 0.14 [10.7] | 0.51 [6.6] |
| **DT-TB** | 0.61 [4.9] | 0.08 [12.7] | 0.11 [11.5] | 0.33 [8.7] |
| **RL (MC)** | **1 [1]** | 0.10 [11.3] | 0.31 [7.1] | 0.60 [4.6] |
| **RL (Q)** | **1 [1]** | **0.14 [10]** | **0.34 [6.5]** | **0.62 [4.3]** |
| _extra-U-A_ | | | | |
| **Textbook (gold standard)** | 0.40 [5.4] | 0.21 [8.1] | 0.27 [7.5] | 0.32 [6.6] |
| **Random** | 0.17 [9.6] | 0.11 [11.2] | 0.15 [10.1] | 0.16 [10.1] |
| **DT-RB** | 0.28 [7.3] | 0.09 [12.0] | 0.14 [10.8] | 0.20 [9.2] |
| **DT-TB** | 0.28 [7.3] | 0.08 [12.6] | 0.11 [11.4] | 0.18 [9.7] |
| **RL (MC)** | **0.41 [5.2]** | 0.11 [11.7] | 0.29 [7.1] | 0.34 [6.5] |
| **RL (Q)** | **0.47 [4.6]** | **0.12 [10.7]** | **0.34 [6.3]** | **0.40 [5.7]** |

Table 3: Average reward over the test set (values in [ ] show the average number of questions needed to achieve a diagnosis). Here the **groundtruth** answer is provided to every question.

Figure 6: Average validation reward during training. Solid lines show the mean \(\mu\) over training iterations, and shaded regions represent \(\pm\sigma\). When terminal states are used, the tuple \((s_{\text{term}},a,R^{a}_{\text{num}})\), where \(s_{\text{term}}\) is a terminal state, \(a\) a random action, and \(R^{a}_{\text{num}}=0\), is also used in the updates.

Figure 7: Decision trees for the questioning strategies under the simple-A questioning assumptions (shown in two parts). For clarity, only the part of each tree up to depth 6 is shown. Notation in the tree nodes is as follows: **EX:** hard exudate, **OD:** optic disc, **FOV:** fovea. The region is specified in the second line. The **left** child of each node corresponds to answer "No" to the parent node question, while the **right** child corresponds to answer "Yes". Circles correspond to terminal states, with the number indicating the DME grade. Orange dashed lines on **(b)** and **(c)** correspond to the point that the baseline considers adequate for classification, and therefore random questions are chosen from then on.
#### 4.2.3 MuE separation
In the next experiment, we used the **random**, **reasonable** and **unreasonable** MuEs, all with a 70% average accuracy over the entire question set \(\mathcal{A}\), to infer the distribution of responses as described in Sec. 2.3. This allows us to compute a beta distribution per MuE per QS.
We present the information radius \(R_{\text{qs}}\) in Table 5, and the final state of the beta distributions in Fig. 9. It can be seen that the **Textbook QS**, as well as both versions of the **RL QS**, lead to distinguishable distributions between the MuEs, something that the **Random QS** fails to do. This is an indication that certain questioning strategies can see beyond the common accuracy, and distinguish between MuEs with different behaviors, by asking appropriate questions.
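To make the procedure concrete, the sketch below updates a conjugate beta posterior over a MuE's per-question correctness and evaluates a numerical information radius (a generalised Jensen-Shannon divergence) between the resulting densities; since the precise definitions are given in Sec. 2.3, the Bernoulli-conjugate update and the grid-based divergence shown here should be read as our assumptions, not as a verbatim reproduction.

```python
import numpy as np
from scipy.special import rel_entr
from scipy.stats import beta as beta_dist


def update(alpha, b, correct):
    """Beta(alpha, b) posterior update after one answered question
    (assumed conjugate Bernoulli update)."""
    return (alpha + 1, b) if correct else (alpha, b + 1)


def information_radius(params, grid=np.linspace(1e-4, 1 - 1e-4, 2000)):
    """Numerical information radius between beta densities with the
    given (alpha, b) parameters, one pair per MuE."""
    pdfs = np.array([beta_dist.pdf(grid, a, b) for a, b in params])
    pdfs /= pdfs.sum(axis=1, keepdims=True)   # normalise on the grid
    mean = pdfs.mean(axis=0)
    return np.mean([rel_entr(p, mean).sum() for p in pdfs])
```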
| **QS** | **Random MuE** | **Reasonable MuE** | **Unreasonable MuE** | **Groundtruth MuE** |
| --- | --- | --- | --- | --- |
| _simple-A_ | | | | |
| **Textbook (gold standard)** | 0.61 [4.4] | 0.59 [4.4] | 0.55 [5] | 0.58 [4.7] |
| **Random** | 0.41 [0.4] | 0.22 [0.1] | 0.40 [12.6] | 0.24 [8.9] |
| **DT-RB** | 0.35 [3.6] | 0.48 [7.1] | 0.39 [8.5] | 0.51 [6.6] |
| **DT-TB** | 0.51 [7.3] | 0.49 [6.9] | 0.40 [8.3] | 0.33 [8.7] |
| **RL (MC)** | 0.59 [5.4] | 0.57 [5.2] | 0.50 [6.5] | 0.60 [4.6] |
| **RL (Q)** | **0.60 [4.9]** | **0.60 [4.6]** | **0.51 [6.1]** | **0.62 [4.3]** |
| _extra-U-A_ | | | | |
| **Textbook (gold standard)** | 0.23 [7.7] | 0.35 [6] | 0.11 [8.9] | 0.32 [6.6] |
| **Random** | 0.03 [12.4] | 0.15 [10.1] | -0.03 [13.6] | 0.16 [10.1] |
| **DT-RB** | 0.04 [12.3] | 0.21 [9] | -0.01 [13] | 0.20 [9.2] |
| **DT-TB** | -0.02 [13.1] | 0.19 [9.5] | -0.06 [13.6] | 0.18 [9.7] |
| **RL (MC)** | 0.20 [9.5] | 0.35 [6.5] | 0.15 [10.2] | 0.34 [6.5] |
| **RL (Q)** | **0.22 [9.3]** | **0.36 [6.5]** | **0.16 [10.1]** | **0.40 [5.7]** |

Table 4: Average reward over the test set (values in [ ] show the average number of questions needed for diagnosis), for different MuEs with total accuracy 70%, compared to a groundtruth MuE (always answering correctly).
Figure 8: Example of question stream from all questioning strategies, for a sample with DME grade 2.
## 5 Conclusions and future work
In this work, we focused on determining if asking the right questions impacts the evaluation of a VQA method. To do so, we devised a trainable VTT method that adaptively poses closed-ended questions to a VQA method, with the intention of exposing its reasoning behavior. We use a reinforcement learning scheme to train an agent to act as the interrogator. We evaluated our framework in the context of DME grading and showed that our approach is able to generate question streams adequate for diagnosis in a small number of steps, closely resembling the reasoning process of a clinician. We also propose the use of a beta distribution, progressively updated after each question is asked, to characterize the performance of a MuE as perceived by the interrogator (QS). The results show that the beta distributions produced by the proposed QS are better at distinguishing between a reasonable and an unreasonable responder (MuE), even if the two have the same average performance. Careful and dynamic question selection therefore proves to be a useful evaluation tool, since it quantitatively reveals differences between responder behaviors that simply asking all the questions does not. These results are consistent on both a simpler and a more complex set of clinical criteria, as illustrated by the performances on the **simple-A** and the **extra-U-A** cases. That is, in both cases, our approach learns what is important to ask.

Figure 9: Distributions of accuracy on asked questions for each QS and MuE, using the proposed beta distribution approximation. All MuEs have an accuracy of 70% over the entire dataset.
Here we use the application of DME grading to demonstrate our approach, since it is both tractable and yet requires different medical image analysis subtasks to be solved. Naturally, in the future we plan to investigate how to apply this framework to other clinical applications. In particular, the generation of a questioning strategy for method validation is challenging to solve in a universal way, since the very notion of understanding a topic cannot be disconnected from its specificities (e.g., the question set should be adjusted to the medical task).
As future work, generating more VQA datasets that refer to particular medical tasks, in close collaboration with clinicians, would be of very high value. Also, adding a richer representation of the image to the state is of high interest and would broaden the framework's applicability. For example, a feature extractor could be used to extract low-level features of an image, which could subsequently constitute part of the state. Another direction would be the expansion of the method to open-ended questions, or the selection of the region from a continuous instead of a discrete image space. In addition, exploring the addition of noise to the state or the question would most likely help determine the MuE's ability to provide consistent responses. Likewise, there are a number of logical inconsistencies that could be revealed through questioning (e.g., the answer regarding the presence of a structure in the entire image being "Yes", but all answers regarding subregions being "No"). Such inconsistencies could be exploited to further enhance the responder's explainability. Another type of inconsistency could be exposed by enhancing the question set with the final question "_What is the DME grade?_", and checking whether the MuE's direct response and the grade inferred from the intermediate responses agree. Finally, in a next step, an approach that integrates the QS into the MuE's training could help improve its interpretability, a direction we did not explore in this work, where the MuE was treated as a black box and the emphasis was placed on QS generation.
## Acknowledgments
This work was partly funded by the Swiss National Science Foundation grant No. 325230-141189 and the University of Bern. Calculations were performed on UBELIX ([http://www.id.unibe.ch/hpc](http://www.id.unibe.ch/hpc)), the HPC cluster at the University of Bern.
|
2303.15780 | Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion | We propose a high-quality 3D-to-3D conversion method, Instruct 3D-to-3D. Our
method is designed for a novel task, which is to convert a given 3D scene to
another scene according to text instructions. Instruct 3D-to-3D applies
pretrained Image-to-Image diffusion models for 3D-to-3D conversion. This
enables the likelihood maximization of each viewpoint image and high-quality 3D
generation. In addition, our proposed method explicitly inputs the source 3D
scene as a condition, which enhances 3D consistency and controllability of how
much of the source 3D scene structure is reflected. We also propose dynamic
scaling, which allows the intensity of the geometry transformation to be
adjusted. We performed quantitative and qualitative evaluations and showed that
our proposed method achieves higher quality 3D-to-3D conversions than baseline
methods. | Hiromichi Kamata, Yuiko Sakuma, Akio Hayakawa, Masato Ishii, Takuya Narihira | 2023-03-28T07:50:45Z | http://arxiv.org/abs/2303.15780v1 | # Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion
###### Abstract
We propose a high-quality 3D-to-3D conversion method, Instruct 3D-to-3D. Our method is designed for a novel task, which is to convert a given 3D scene to another scene according to text instructions. Instruct 3D-to-3D applies pre-trained Image-to-Image diffusion models for 3D-to-3D conversion. This enables the likelihood maximization of each viewpoint image and high-quality 3D generation. In addition, our proposed method explicitly inputs the source 3D scene as a condition, which enhances 3D consistency and controllability of how much of the source 3D scene structure is reflected. We also propose dynamic scaling, which allows the intensity of the geometry transformation to be adjusted. We performed quantitative and qualitative evaluations and showed that our proposed method achieves higher quality 3D-to-3D conversions than baseline methods. [https://sony.github.io/Instruct3Dto3D-doc/](https://sony.github.io/Instruct3Dto3D-doc/)
## 1 Introduction
In recent years, generative modeling based on diffusion models has been rapidly developed and applied in a wide range of domains, including image [28, 30, 29], music [12, 24], video [31, 10], and 3D [26, 16]. Especially for Text-to-Image (T2I) tasks, diffusion models have received much attention for their ability to generate high-quality images that align with input texts. Such amazing T2I models are often realized by large-scale training on a huge amount of image-text pair data [28, 30, 29].
The pretrained T2I models can also be utilized to solve various tasks other than T2I. In particular, the application of diffusion models to the field of 3D has been studied extensively [26, 16, 19, 37]. These works use pretrained T2I models to optimize NeRF [21] so that the content of images rendered from NeRF always matches the input texts. Through this optimization, they successfully generate NeRF representations of the 3D scenes described by the input texts.
Nowadays, it has become much easier to obtain NeRF representations of real-world scenes [21, 2]. If these 3D scenes could be edited with text instructions, it would also become substantially easier to create high-quality 3D content. Prior works [35, 36] have shown that it is possible to stylize a 3D scene based on a given text that describes the target scene to be obtained after the editing. However, in their methods, we cannot directly instruct what should be changed in the scene, which considerably reduces the controllability of the editing. In addition, we cannot accurately control how strongly the structure of the original scene is preserved after the editing.
In this paper, we tackle a novel and challenging problem, which is to convert a given 3D scene to another scene according to text instructions. To solve this problem, we propose **Instruct 3D-to-3D**, which performs 3D-to-3D conversions using pretrained diffusion models.
Our Instruct 3D-to-3D achieves better 3D consistency, quality, and controllability simultaneously compared to baseline methods. For better 3D consistency, we directly use the source 3D scene as a condition to keep its semantic and structural information. Furthermore, to achieve better quality, we apply a pretrained Image-to-Image (I2I) diffusion model that allows us to maximize the likelihood of each viewpoint image. In addition, we propose dynamic scaling to enable the users to control the strength of the geometry transformation. We use voxel grid-based implicit 3D representation [33, 7] as a 3D model. Our dynamic scaling gradually decreases and increases the 3D resolution of the voxel grid to achieve controllable and smooth 3D geometry conversions.
The main contributions of this study are summarized as follows.
* We propose Instruct 3D-to-3D, a method for transforming a 3D scene based on text instructions. Our Instruct 3D-to-3D realizes the high quality and 3D-consistent conversion using a pretrained I2I model conditioned by the source 3D scene.
* We propose dynamic scaling. This allows manipulation of the strength of the geometry transformation and smooth 3D-to-3D conversion.
* We conducted qualitative and quantitative evaluations, showing that our proposed method can perform 3D-to-3D conversion with higher quality than baseline methods.
## 2 Related Works
### Diffusion Models
Diffusion models [32, 6] are generative models inspired by non-equilibrium thermodynamics. They can generate high-fidelity images by gradually denoising data, starting from pure random noise. Diffusion models comprise two processes over timesteps: a forward process, where a small amount of noise is added to the data at each timestep, and a reverse process, where the data are slightly denoised at each timestep. Specifically, in the forward process, the data at timestep \(t\) can be obtained as follows:
\[\mathbf{x}_{t}=\sqrt{\alpha_{t}}\mathbf{x}_{t-1}+\sqrt{1-\alpha_{t}}\mathbf{\epsilon}_{t-1}=\cdots=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon} \tag{1}\]
Here, \(\{\alpha_{i}\}_{i=1}^{T}\) are hyper-parameters that satisfy \(0<\alpha_{i}<1\), and \(\bar{\alpha}_{t}=\Pi_{i=1}^{t}\alpha_{i}\), so that \(0<\bar{\alpha}_{T}<\bar{\alpha}_{T-1}<\cdots<\bar{\alpha}_{1}<1\). \(\mathbf{\epsilon}\) is a random noise sampled as \(\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\).
In the reverse process, the noise added at each timestep needs to be predicted from the noisy data using a neural network, in order to denoise the data. We may optionally use conditioning information for this prediction; here we use an input text \(y\). The predicted noise is denoted as \(\epsilon_{\phi}(\mathbf{x}_{t};y,t)\), where \(\phi\) denotes the parameters of the neural network. The neural network is trained to minimize the following loss function:
\[\mathcal{L}=\mathbb{E}_{t,\mathbf{\epsilon}}\left[w(t)\|\epsilon_{\phi}(\mathbf{x}_{t };y,t)-\mathbf{\epsilon}\|^{2}\right] \tag{2}\]
where \(w(t)\) is a coefficient calculated with the scheduling parameter.
In classifier-free guidance [11], the strength of the condition \(y\) on the generated images can be controlled by modifying the noise prediction as in the following equation:
\[\tilde{\epsilon}_{\phi}(\mathbf{x}_{t};y,t)=\epsilon_{\phi}(\mathbf{x}_{t};\varnothing,t)+s\cdot(\epsilon_{\phi}(\mathbf{x}_{t};y,t)-\epsilon_{\phi}(\mathbf{x}_{t};\varnothing,t)) \tag{3}\]
where \(\varnothing\) is a fixed null value. The value of \(s\) corresponds to the strength of \(y\). By using larger \(s\), we can generate images that are more faithful to the condition \(y\).
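A minimal code sketch of Eq. (3) reads as follows; `eps_model` is an assumed interface to the trained noise-prediction network, with `None` playing the role of the fixed null value \(\varnothing\).

```python
def cfg_noise(eps_model, x_t, y, t, s):
    """Classifier-free guidance, Eq. (3)."""
    eps_uncond = eps_model(x_t, None, t)   # unconditional prediction
    eps_cond = eps_model(x_t, y, t)        # text-conditional prediction
    return eps_uncond + s * (eps_cond - eps_uncond)
```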
### Text-guided Image-to-Image using Diffusion Models
Diffusion models have also been extensively studied for text-guided Image-to-Image tasks. These methods can be divided into two categories based on what the input text represents. The first category assumes that the text describes the caption of the images before and after editing such as [15, 34, 38, 9]. These methods require extra information not directly related to editing, such as descriptions of the part of the image that remains constant through editing. On the other hand, the second category assumes that the text describes the instruction on what point should be changed through editing. InstructPix2Pix [4] edits the image based on a given text instruction, which has the advantage of making image editing easier and more intuitive. They first create a dataset of editing instructions and image descriptions before and after editing with GPT-3 [5]. Then, they input it to [9] to generate edited images. After that, they fine-tune StableDiffusion to generate these edited images conditioned by the source images and text instructions. Thus, InstructPix2Pix learns to edit images according to various text instructions.
In this paper, we extend InstructPix2Pix to 3D data and realize the task of editing a 3D scene into a new 3D scene with text instructions. This enables highly intuitive editing of 3D scenes.
### Implicit 3D Representation
There are various ways to represent 3D information [8, 1, 3, 14, 18, 21]. In this paper, we adopt implicit representation for its high expressive power, which has become quite popular since being used in NeRF [21]. Given images that capture a target scene from different viewpoints, NeRF builds a neural network that predicts color and density at any spatial point in the scene from its 3D coordinates. This model is trained so that each viewpoint image matches the image rendered from the corresponding viewpoint based on the model. After training, NeRF implicitly acquires a 3D representation and we can obtain images seen from any camera viewpoint.
Recently, voxel grid-based implicit 3D representations have also been studied. They retain color and density information in the form of voxel grids instead of neural networks [33, 7, 17] and achieve much faster training. In this study, we use voxel grid-based DVGO [33] to represent 3D scenes for fast 3D-to-3D conversion.
### Stylization of 3D scenes
3D stylization is a task to transform a source 3D scene into a new scene that has a different style while preserving the content of the original scene. In [39, 22], the style is specified by a reference image. On the other hand, CLIP-NeRF [35] accepts texts to specify the style, which provides substantially higher flexibility. They finetune the NeRF of the original scene so that images rendered from any viewpoint have high similarity, in the CLIP feature space, with the given text. There is also a work concurrent to ours [36] on text-guided 3D stylization, which calculates a CLIP-based contrastive loss based on the source and target images to properly strengthen the 3D stylization.

Figure 2: Overview of Instruct 3D-to-3D. First, the target model is initialized with the source model (i). Next, the target image is rendered from a random camera viewpoint (ii) and then noise is added to input into InstructPix2Pix. The source image is rendered from the same viewpoint (iii) and input to InstructPix2Pix as a condition, along with the text instruction (iv). \(\nabla\mathcal{L}_{\mathrm{SDS}}\) is calculated using them (v) and the target model is updated with it. By performing this procedure from various camera viewpoints, we can convert the target model along with the text instruction.
These methods only match the CLIP features of each viewpoint image and the reference image or text and do not use a generative model. Hence, there is no guarantee that they can convert the input 3D scene into a high-quality one. In this paper, we use a diffusion model to maximize the likelihood of each viewpoint image. In addition, while previous studies required a description of the converted 3D scene, we use editing instructions as input to make the 3D-to-3D conversion more intuitive.
### Text-to-3D models
DreamFields [13] was the first study to realize Text-to-3D. DreamFields generates a 3D scene that follows the input text from any viewpoint by optimizing the CLIP features of each NeRF viewpoint image to match the input text. However, since DreamFields only optimizes the CLIP features and does not use a generative model, it may generate unrealistic scenes that merely game the similarity in the CLIP feature space.
DreamFusion [26] is the first method to apply diffusion models to the Text-to-3D task. DreamFusion uses pre-trained Imagen [30] to generate 3D scenes by optimizing each NeRF viewpoint image \(\mathbf{x}\) to follow the input text. Specifically, they first apply noise \(\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\) to the viewpoint image \(\mathbf{x}\) according to the randomly sampled \(t\), and obtain a noisy image \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}\). This noisy image is used to calculate the gradient of the loss function \(\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}\) as the following equation:
\[\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}=\mathbb{E}_{t,\epsilon}\left[w(t)( \epsilon_{\phi}(\mathbf{x}_{t};y,t)-\mathbf{\epsilon})\frac{\partial\mathbf{x}}{\partial \theta}\right] \tag{4}\]
where \(\theta\) denotes the parameters of NeRF and \(y\) is the text description of the 3D scene to be generated. \(\theta\) is updated using this gradient from arbitrary viewpoints. This method enables the generation of high-quality 3D scenes for a variety of text inputs. They also edit generated 3D scenes by re-training them with new texts. However, direct finetuning of a 3D scene may result in a scene that is far removed from the original 3D scene. In addition, this method requires a text description of the scene after conversion, so conversion by text instructions is not possible.
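One common way to implement an update with the gradient of Eq. (4) is sketched below: no gradient is propagated through the diffusion model, and a surrogate loss injects the (detached) residual through the rendered image. `render`, `eps_model` and the schedule `alpha_bar` are assumed interfaces, and the expectation is approximated with a single sample of \(t\) and \(\mathbf{\epsilon}\).

```python
import torch


def sds_step(render, eps_model, y, alpha_bar, T, optimizer):
    """One score distillation sampling (SDS) update following Eq. (4)."""
    x = render()                                 # differentiable w.r.t. theta
    t = torch.randint(1, T, (1,)).item()
    eps = torch.randn_like(x)
    x_t = alpha_bar[t].sqrt() * x + (1 - alpha_bar[t]).sqrt() * eps
    with torch.no_grad():                        # no gradient through the U-Net
        residual = eps_model(x_t, y, t) - eps
    w = 1 - alpha_bar[t]
    loss = (w * residual * x).sum()              # d(loss)/dx = w * residual
    optimizer.zero_grad()
    loss.backward()                              # gradient flows only via x
    optimizer.step()
```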
**Algorithm 1** Proposed method: Instruct 3D-to-3D

**Input**:
\(y\): instruction text,
\(\theta_{\mathrm{src}}\): parameters of the source model,
\(g_{\theta}\): volume rendering function from a model parameterized with \(\theta\)

**Output**:
\(\theta_{\mathrm{tgt}}\): parameters of the target model

```
1:  \(\theta_{\mathrm{tgt}}\leftarrow\theta_{\mathrm{src}}\)  # initialize the target model
2:  for \(i=1\) to \(N_{\mathrm{iter}}\) do
3:    # rendering from the source & target models
4:    \(c=\mathrm{random\_camera\_pose}()\)
5:    \(I_{\mathrm{src}}=g_{\theta_{\mathrm{src}}}(c)\)
6:    \(I_{\mathrm{tgt}}=g_{\theta_{\mathrm{tgt}}}(c)\)
7:    \(L_{\mathrm{tgt}}=\mathrm{StableDiffusionEncoder}(I_{\mathrm{tgt}})\)
8:    # add noise
9:    \(\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\)
10:   \(t\sim U[1,\dots,T]\)
11:   \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}L_{\mathrm{tgt}}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}\)
12:   # calculate the gradient of the loss function
13:   \(\nabla_{\theta_{\mathrm{tgt}}}\mathcal{L}_{\mathrm{SDS}}=\mathbb{E}_{t,\epsilon}\left[w(t)(\tilde{\epsilon}_{\phi}(\mathbf{x}_{t};y,I_{\mathrm{src}},t)-\mathbf{\epsilon})\frac{\partial L_{\mathrm{tgt}}}{\partial\theta_{\mathrm{tgt}}}\right]\)
14:   \(\theta_{\mathrm{tgt}}\leftarrow\mathrm{Adam}(\theta_{\mathrm{tgt}},\nabla_{\theta_{\mathrm{tgt}}}\mathcal{L}_{\mathrm{SDS}})\)
15: end for
```
## 3 Proposed Method
### Pipeline of Instruct 3D-to-3D
Figure 2 shows the overview of Instruct 3D-to-3D. Our proposed method converts a source model, which is an implicit representation of a source 3D scene, into a new target model along with the text instruction.
The main idea of our Instruct 3D-to-3D is to learn the target model from an arbitrary viewpoint using Instruct-Pix2Pix conditioned by the source scene and the text instruction. First, the target model is initialized with the source model. Next, using the target model, a target image \(I_{\mathrm{tgt}}\) is rendered from a random camera viewpoint and is fed into the encoder of StableDiffusion to obtain the corresponding latent feature \(L_{\mathrm{tgt}}\). We add noise \(\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\) to it to make noisy latent \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}L_{\mathrm{tgt}}+\sqrt{1-\bar{\alpha}_{t}} \mathbf{\epsilon}\). A source image \(I_{\mathrm{src}}\) is also rendered from the same viewpoint as the target image using the source model. \(\mathbf{x}_{t}\) is input to InstructPix2Pix along with the source image \(I_{\mathrm{src}}\) and the text instruction \(y\) as conditions. As InstructPix2Pix has two conditions, \(I_{\mathrm{src}}\) and \(y\), the noise is estimated as follows.
\[\tilde{\epsilon}_{\phi}(\mathbf{x}_{t};y,I_{\mathrm{src}},t)=\epsilon_{\phi}(\mathbf{x}_{t};\varnothing,\varnothing,t)+s_{I}\cdot(\epsilon_{\phi}(\mathbf{x}_{t};\varnothing,I_{\mathrm{src}},t)-\epsilon_{\phi}(\mathbf{x}_{t};\varnothing,\varnothing,t))+s_{T}\cdot(\epsilon_{\phi}(\mathbf{x}_{t};y,I_{\mathrm{src}},t)-\epsilon_{\phi}(\mathbf{x}_{t};\varnothing,I_{\mathrm{src}},t)) \tag{5}\]
where \(s_{I}\) is a hyper-parameter that determines the degree of fidelity to the information of the source image, and \(s_{T}\) is a hyper-parameter that determines the degree of fidelity to the text instruction. In this way, our proposed method explicitly incorporates the source image and the text instruction. Similar to DreamFusion, we update the target model by the following gradient.
\[\nabla_{\theta_{\text{tgt}}}\mathcal{L}_{\text{SDS}}=\mathbb{E}_{t,\epsilon}\left[w(t)(\tilde{\epsilon}_{\phi}(\mathbf{x}_{t};y,I_{\text{src}},t)-\mathbf{\epsilon})\frac{\partial L_{\text{tgt}}}{\partial\theta_{\text{tgt}}}\right] \tag{6}\]
where \(\theta_{\text{tgt}}\) denotes the parameters of the target model, \(\phi\) the parameters of InstructPix2Pix, and \(w(t)\) is a scheduling coefficient, which we set to \(1-\bar{\alpha}_{t}\) in the experiments.
By repeating this procedure from randomly chosen camera viewpoints, an arbitrary viewpoint image of the target model becomes the image appropriately converted from the same viewpoint image of the source model. As a result, we can obtain a target 3D scene converted from the source 3D scene along with the text instruction.
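The doubly-conditioned noise estimate of Eq. (5) can be sketched as follows; `eps_model(x_t, text, image, t)` is an assumed interface to InstructPix2Pix, with `None` again standing for the null condition \(\varnothing\).

```python
def instruct_cfg_noise(eps_model, x_t, y, img_src, t, s_T, s_I):
    """Two-scale classifier-free guidance of Eq. (5)."""
    e_uncond = eps_model(x_t, None, None, t)   # fully unconditional
    e_img = eps_model(x_t, None, img_src, t)   # source-image condition only
    e_full = eps_model(x_t, y, img_src, t)     # text + source-image conditions
    return e_uncond + s_I * (e_img - e_uncond) + s_T * (e_full - e_img)
```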
### Dynamic Scaling
In this study, we use DVGO [33] to perform fast 3D-to-3D conversions. DVGO is one of the voxel grid-based implicit 3D representations and maintains density and color information in the form of a 3D voxel grid. The voxel grid is a discrete partition of 3D space, with each vertex holding color and density information. Volume rendering is performed with the interpolated information of the vertices around the ray.
The resolution of the 3D scene is determined by the number of voxels used in the model. DVGO performs progressive scaling [17] to gradually increase the number of voxels during training as shown in Figure 3 (a). It encourages the model to learn the rough geometry of the scene first, and then gradually move on to learning the fine details of the scene.
In our 3D-to-3D task, the number of voxels in the target model is initially set to \(N\), the same as in the source model. In this situation, the voxel grid is so fine that it is difficult to change the geometry; only the appearance changes. Therefore, we propose dynamic scaling, which gradually reduces the number of voxels from \(N\) to \(N/2^{l}\) during the 3D-to-3D conversion process and then gradually returns it to \(N\). Figure 3 (b) shows this process. This allows gradual changes in the global structure, followed by changes in the detailed structure accordingly. Also, we can adjust the magnitude of the structure transformation by adjusting the scaling factor \(l\). For large \(l\), the number of voxels becomes small, which allows for a large structural transformation. Conversely, for smaller \(l\), the structural transformation is suppressed.
## 4 Experiments
### Experimental Settings
We implemented the proposed Instruct 3D-to-3D with PyTorch [25] and performed 3D-to-3D conversion with NeRF synthetic dataset [21] and Local Light Field Fusion (LLFF) dataset [20]. We resized the NeRF synthetic dataset to 256\(\times\)256 and the LLFF dataset to 378\(\times\)504 for training. In both datasets, we first trained the source models and then converted them with Instruct 3D-to-3D. The text instructions were manually designed. The source and target models are constructed with DVGO and the number of voxels \(N\) is set as \(1,024,000\).
In the 3D conversion process, we trained the target model for 2,000 iterations. We set the dynamic scaling factor \(l=4\). With dynamic scaling, we reduced the voxel number by a factor of \(1/2^{l/5}\) every 150 iterations for the first 750 iterations and increased it by \(2^{l/5}\) every 150 iterations for the next 750 iterations. After that, we kept the voxel number the same and trained for the remaining 500 iterations. Our proposed method can be completed in 15 minutes with a single NVIDIA A100 GPU.
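This schedule can be expressed as a small helper returning the voxel budget at iteration \(i\); the exact step boundaries are our reading of the description above and may differ from the actual implementation by one scaling step.

```python
def voxel_count(i, N=1_024_000, l=4, step=150, n_steps=5):
    """Dynamic-scaling schedule: shrink the voxel grid from N to N / 2**l
    over the first 750 iterations, grow it back over the next 750, then
    keep it constant for the remaining iterations."""
    down_until = step * n_steps              # first 750 iterations: shrink
    up_until = 2 * step * n_steps            # next 750 iterations: grow back
    if i < down_until:
        k = min(i // step + 1, n_steps)
        return int(N / 2 ** (l * k / n_steps))
    if i < up_until:
        k = min((i - down_until) // step + 1, n_steps)
        return int(N / 2 ** (l * (n_steps - k) / n_steps))
    return N                                 # full resolution afterwards
```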
We set the text guidance scale \(s_{T}\) of Eq. 5 to 100 according to [26]. The image guidance scale \(s_{I}\) was set to \(5\) for the NeRF synthetic dataset and \(100\) for the LLFF dataset. We used the larger \(s_{I}\) for the LLFF dataset to strongly maintain 3D consistency since it has only front-view images.
We also used CLIP-NeRF and DreamFusion as our baseline methods. Both were set up with the same experimental settings as the proposed method, except for the loss function and the target texts. The target texts were also manually designed to match what the text instructions used in Instruct 3D-to-3D pointed to.

Figure 3: Comparison of progressive scaling and the proposed dynamic scaling. In dynamic scaling, we first gradually decrease the number of voxels to change the global structure, then increase them to change the local structure.

Figure 4: Qualitative comparison between the proposed method and baselines.
When fine-tuning source 3D scenes with DreamFusion, we used open-source StableDiffusion [29] instead of Imagen [30].
### Qualitative Evaluations
We show the qualitative comparison between the proposed method and baseline methods in Figure 4. The results in the top four rows of Figure 4 are examples from the NeRF synthetic dataset and results in the bottom three rows are examples from the LLFF dataset. The text instructions used in Instruct 3D-to-3D and the target texts used in DreamFusion and CLIP-NeRF are also shown in Figure 4. As a whole, it can be seen that the proposed method can produce converted 3D scenes that accurately reflect both the text instructions and structures of the source 3D scenes. Although CLIP-NeRF can convert 3D scenes from the NeRF synthetic dataset relatively cleanly, it cannot convert 3D scenes from the LLFF dataset well, resulting in noisy 3D scenes. In addition, DreamFusion does not explicitly incorporate source 3D scenes as conditions, so it converts the LLFF 3D scenes into completely different 3D scenes.
Figure 5 shows the differences by changing the image guidance scale \(s_{I}\) in Eq. 5. We can manipulate the degree to which the structure of the source 3D scene is reflected by adjusting the value of \(s_{I}\). For large \(s_{I}\), the source image condition is strongly incorporated during the noise estimation and the structure of the source 3D scene is strongly reflected.
### Quantitative Evaluations
For quantitative evaluation, we measured CLIP score [27] and BRISQUE score [23]. The CLIP score is a measure of the semantic alignment of image-text pairs, where higher is better. The BRISQUE score is a measure of image quality, where lower means better.
We performed 3D-to-3D conversion by creating ten 3D scene-text pairs for each of the proposed and baseline methods. The input texts were manually designed so that the target texts used in DreamFusion and CLIP-NeRF match what the text instructions in Instruct 3D-to-3D point to. The list of scene-text pairs used in this experiment is shown in the supplementary material. For each converted 3D scene, we rendered images from 100 viewpoints and used them to measure scores. We did this for 10 converted 3D scenes, for a total of 1,000 rendered images used in the evaluation. The CLIP score was calculated using the rendered images and the target texts, while the BRISQUE score was calculated from the rendered images only.
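As an illustration, the two metrics could be computed from the rendered images as follows, e.g. with the torchmetrics and piq packages; the package choice and pre-processing here are assumptions and not necessarily the tooling behind the numbers reported below.

```python
import torch
import piq
from torchmetrics.multimodal import CLIPScore

# `renders` is assumed to be a (100, 3, H, W) tensor of viewpoint images
# in [0, 1]; `target_text` is the target text for the converted scene.
clip_metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
clip_score = clip_metric((renders * 255).to(torch.uint8),
                         [target_text] * renders.shape[0])  # higher is better
brisque_score = piq.brisque(renders, data_range=1.0)        # lower is better
print(float(clip_score), float(brisque_score))
```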
Figure 7 shows the BRISQUE score measurement results, and Figure 6 shows the CLIP score measurement results. Regarding the BRISQUE score, Instruct 3D-to-3D performs better than DreamFusion and CLIP-NeRF and achieves higher-quality 3D-to-3D conversion than these baseline methods. On the other hand, the CLIP scores for Instruct 3D-to-3D and DreamFusion are comparable. Although CLIP-NeRF achieves the best CLIP score, we cannot compare CLIP-NeRF with the other methods in terms of CLIP score in a fair manner. This is because CLIP-NeRF directly uses CLIP for training to improve the CLIP score. From these results, we confirmed that Instruct 3D-to-3D is able to convert high-quality 3D scenes with high text fidelity comparable to DreamFusion. The reason for this is considered to be that Instruct 3D-to-3D simultaneously incorporates the source 3D scene as a condition in addition to the text instruction, which contributes to a natural 3D scene conversion.
### User Study

We used the 10 patterns of 3D-to-3D conversions from Section 4.3 as test cases for a user study. In each test case, we first showed the source 3D scene as a video, together with the text instruction. Then we showed, two at a time and in random order, the three 3D scenes transformed with the proposed method and the baseline methods. The participants chose the better converted 3D scene by jointly considering the following four aspects: image quality from any viewpoint, 3D consistency, fidelity to the text instruction, and fidelity to the source 3D scene. We did not set a time limit. We finally collected 25 questionnaires. Figure 8 shows the results of the user study. Our proposed method outperforms the baseline methods with a much higher user preference rate.
### Sensitivity to the Scaling Strategy
We proposed dynamic scaling to gradually change the 3D structure. However, it is also possible to use conventional progressive scaling in 3D-to-3D conversion. Figure 9 shows a comparison of 3D conversion results using progressive scaling and dynamic scaling. As progressive scaling greatly reduces the number of voxels at the beginning of the conversion, the detailed 3D structure cannot be maintained. Our dynamic scaling gradually changes the number of voxels, resulting in small and repairable 3D structural damage and preserving the 3D structure.
Figure 10 shows the differences in 3D-to-3D conversion results by changing the dynamic scaling factor \(l\). For larger \(l\), the resolution of the voxel becomes smaller during the conversion process and the structure can be changed significantly.
### Limitations
Figure 11 is an example of a failure case. We gave the instruction to place an apple on the chair, but the seat and backrest of the chair changed into apples. Similar to InstructPix2Pix, our proposed method has difficulty following instructions that require spatial reasoning. To solve this problem, it is necessary to handle spatial information, for example by taking the depth information of the 3D scene into account. We leave this improvement to future work.
## 5 Conclusion
In this study, we proposed Instruct 3D-to-3D, which achieves a high-quality 3D-to-3D conversion following the text instruction. We proposed dynamic scaling, which allows manipulation of the strength of the 3D structure transformation and smooth 3D-to-3D conversion. We performed quantitative and qualitative evaluations and showed that our Instruct 3D-to-3D outperforms the baseline methods in terms of the quality of the converted 3D scenes and the fidelity to the source 3D scenes and text instructions. Our proposed method makes 3D content easier to edit and use, and will contribute to greatly expanding the scope of various content productions.
Figure 11: A failure case of 3D-to-3D conversion. This 3D scene is converted from a 3D scene of a chair with the text instruction "put an apple on the chair".
Figure 8: User study results. Preference rates for 3D conversion quality of Instruct 3D-to-3D over DreamFusion and CLIP-NeRF.
Figure 10: Effects of the dynamic scaling factor \(l\) in the 3D conversion. A large \(l\) causes a major change in structure.
Figure 9: Comparison of 3D conversion results by scaling method. Our dynamic scaling does not break the 3D structure. |
2306.08669 | Plan B: New ${Z^\prime}$ models for $b\rightarrow sl^+l^-$ anomalies | Measurements of $b \rightarrow s \mu^+ \mu^-$ transitions indicate that there
may be a new physics field coupling to di-muon pairs associated with the $b$ to
$s$ flavour transition. Including the 2022 LHCb reanalysis of $R_K$ and
$R_{K^\ast}$, one infers that there may also be associated new physics in
$b\rightarrow e^+ e^-$ transitions. Here, we examine the extent of the
statistical preference for $Z^\prime$ models coupling to di-electron pairs
taking into account the relevant constraints, in particular from experiments at
LEP-2. We identify an anomaly-free set of models which interpolates from a
$Z^\prime$ not coupling to electrons at all to one in which there is an equal
$Z^\prime$ coupling to muons and electrons (but where, in all models in the set,
the $Z^\prime$ boson can mediate $b\rightarrow s\mu^+ \mu^-$ transitions). A
$3B_3-L_e-2L_\mu$ model provides a close-to-optimal fit to the pertinent
measurements along the line of interpolation. We have (re-)calculated
predictions for the relevant LEP-2 observables in terms of dimension-6 SMEFT
operators and put them into the ${\tt flavio2.3.3}$ computer program, so that
they are available for global fits. | Ben Allanach, Anna Mullin | 2023-06-14T17:59:03Z | http://arxiv.org/abs/2306.08669v7 | # Plan B: New \(Z^{\prime}\) Models for \(b\to s\ell^{+}\ell^{-}\) Anomalies
###### Abstract
Measurements of \(b\to s\mu^{+}\mu^{-}\) transitions indicate that there may be a new physics field coupling to di-muon pairs associated with the \(b\) to \(s\) flavour transition. Including the 2022 LHCb reanalysis of \(R_{K}\) and \(R_{K^{*}}\), one infers that there may also be associated new physics in \(b\to se^{+}e^{-}\) transitions. Here, we examine the extent of the statistical preference for \(Z^{\prime}\) models coupling to di-electron pairs, taking into account the relevant constraints, in particular from experiments at LEP-2. We identify an anomaly-free set of models which interpolates from a \(Z^{\prime}\) not coupling to electrons at all to one in which there is an equal \(Z^{\prime}\) coupling to muons and electrons (but where, in all models in the set, the \(Z^{\prime}\) boson can mediate \(b\to s\mu^{+}\mu^{-}\) transitions). A \(3B_{3}-L_{e}-2L_{\mu}\) model provides a close-to-optimal fit to the pertinent measurements along the line of interpolation. We have (re-)calculated predictions for the relevant LEP-2 observables in terms of dimension-6 SMEFT operators and put them into the flavio computer program, so that they are available for global fits.
Keywords:\(B-\)anomalies, beyond the Standard Model, flavour changing neutral currents
## 1 Introduction
Various measurements of \(B-\)meson decays at LHC experiments are in tension with Standard Model (SM) predictions, particularly when the final state includes a di-muon pair. For example, the combined CMS, ATLAS and LHCb [1; 2; 3] \(CP-\)untagged, time-integrated branching ratio of \(B_{s}\) decaying to di-muon pairs, \(\overline{BR}(B_{s}\to\mu^{+}\mu^{-})\)[1; 3; 4], has a \(1.6\sigma\) tension [5] with SM predictions. Furthermore, measurements in various di-muon invariant mass-squared (\(q^{2}\)) bins of \(BR(B_{s}\to\phi\mu^{+}\mu^{-})\) are up to \(4\sigma\) smaller than the SM predictions [6; 7]. Some angular distributions in \(B\to K^{*}\mu^{+}\mu^{-}\) decays have been measured by LHC experiments [8; 9; 10; 11; 12; 13] to be several \(\sigma\) short of state-of-the-art SM predictions, and the same can be said of \(BR(B\to K^{*}\mu^{+}\mu^{-})\)[14]. We call the aforementioned tensions the neutral current \(b\to s\mu^{+}\mu^{-}\) anomalies. It is tempting to suppose that the tensions could be explained by unaccounted-for new physics. It has been shown in fits [15; 16] to measurements that the weak effective theory (WET) operators
\[\mathcal{L}=\ldots+N\left(C_{9}^{(\mu)}(\bar{b}\gamma^{\alpha}P_{L}s)(\bar{ \mu}\gamma_{\alpha}\mu)+C_{10}^{(\mu)}(\bar{b}\gamma^{\alpha}P_{L}s)(\bar{\mu }\gamma_{\alpha}\gamma_{5}\mu)+H.c.\right), \tag{1}\]
parameterising the effects of new physics states, significantly improve the situation. Such beyond-the-SM operators may be generated by integrating out putative heavy new physics states. Here, \(b\) is the bottom quark field, \(\mu\) the muon field and \(s\) the strange quark field.
\(N:=4G_{F}e^{2}|V_{ts}|/(16\pi^{2}\sqrt{2})\) is a normalising constant, where \(G_{F}\) is the Fermi decay constant, \(e\) the electromagnetic gauge coupling and \(V_{ij}\) the entries of the CKM matrix.
We use the smelli2.3.2[17] computer program to predict the aforementioned \(b\to s\mu^{+}\mu^{-}\) anomalies. smelli2.3.2 puts them in the 'quarks' category of observable. For these, there is some debate about the most accurate predictions and the size of the associated theoretical uncertainties although many estimates (e.g. [18; 19]) predict that the theoretical uncertainties alone cannot explain the \(b\to s\mu^{+}\mu^{-}\) anomalies1.
Footnote 1: Some estimates in Ref. [20] fit an unidentified non-perturbative SM contribution that mimics a \(q^{2}-\)dependent lepton-family universal \(C_{9}\) in tandem with the new physics operators. As argued in Ref. [21], a similar non-perturbative effect cannot explain the \(2\sigma\) deficit in the \(BR(B\to X_{s}\mu^{+}\mu^{-})\) high \(q^{2}-\)bin, which is compatible with the low \(q^{2}-\)deficits. The \(b\to s\mu^{+}\mu^{-}\) anomalies persist when one uses ratios of observables including \(\Delta M_{s,d}\), \(\epsilon_{K}\), \(S_{\psi K_{S}}\) to cancel their dependence on CKM matrix elements [22; 23], although throughout the present paper, new physics contributions to CKM matrix elements are predicted to be negligible.
Contrary to the \(b\to s\mu^{+}\mu^{-}\) anomalies, a 2022 LHCb reanalysis [24] holds that measurements of
\[R_{A}(q_{\rm min}^{2},\ q_{\rm max}^{2}):=\frac{\int_{q_{\rm min}^{2}}^{q_{\rm max }^{2}}dq^{2}BR(B\to A\mu^{+}\mu^{-}(q^{2}))}{\int_{q_{\rm min}^{2}}^{q_{\rm max }^{2}}dq^{2}BR(B\to Ae^{+}e^{-}(q^{2}))}, \tag{2}\]
are broadly _compatible_ with SM predictions for \(A\in\{K,K^{*}\}\), within uncertainties2. Such ratios are commonly called lepton flavour universality (LFU) variables. Since we entertain the possibility that the \(b\to s\mu^{+}\mu^{-}\) anomalies may be pointing to some new physics state coupling to muons (and \(\bar{b}s\) quarks) the reanalysis suggests that there could be also be a new physics contribution from
Footnote 2: There are some \(1\sigma\) mild tensions, however, for \(A=K_{S}^{0}\) and \(A=K^{*\pm}\)[25].
\[{\cal L}=\ldots+N\left(C_{9}^{(e)}(\bar{b}\gamma^{\alpha}P_{L}s)(\bar{e} \gamma_{\alpha}e)+C_{10}^{(e)}(\bar{b}\gamma^{\alpha}P_{L}s)(\bar{e}\gamma_{ \alpha}\gamma_{5}e)+H.c.\right). \tag{3}\]
This possibility has already been partially addressed in Ref. [15], where constraints on the \(C_{9}^{(e)}-C_{9}^{(\mu)}\) parameter plane from some different relevant flavour observables were presented, where all other new physics Wilson coefficients are null. It was demonstrated that there is parameter space where the constraints are compatible with each other at the 95% confidence level (CL). Some cases with other non-zero new physics Wilson operators were also analysed in Refs. [15; 16]. It was shown in Ref. [15] that, fitting two dominant new-physics WET operators to \(b\to s\mu^{+}\mu^{-}\) data, \(C_{9}^{(e)}\) and \(C_{9}^{(\mu)}\) provide the biggest fit improvement upon the SM compared to other scenarios involving new physics effects with right-handed quark currents (\(C_{9}^{\prime\,(\mu)}\) and \(C_{10}^{\prime\,(\mu)}\)) or other operators. It had already been emphasised though that current data on direct \(CP-\)violation in \(B\to K\mu\mu\) decays coupled with measurements of the branching ratio and the 2022 LHCb constraints upon \(R_{K^{(*)}}\) still allow significant lepton universality _violation_ between \(C_{9,10}^{(\mu)}\) and \(C_{9,10}^{(e)}\)[26]. We note here that often, the natural language to describe the interactions of TeV-scale models is the SM effective field theory (SMEFT) [27], which involves complete representations of the unbroken SM gauge group (e.g. \(SU(2)_{L}\) doublets), as opposed to WET, which is valid below the \(W\) boson mass and is therefore written in the spontaneously broken phase of the electroweak gauge symmetry.
Within the present paper, we shall only address the \(b\to s\mu^{+}\mu^{-}\) anomalies, not the charged current anomalies in \(b\to c\ell\bar{\nu}\) transitions, which currently display a joint deviation between two particular SM predictions and measurements [28] at the \(3.3\sigma\) level. Were this deviation to become definite and confirmed, the models and scenarios contained within the present paper would require significant modification, for example by adding additional charged gauge fields or leptoquarks, with family-dependent interactions.
In the following section, we shall perform our own fits including the new physics operators in (1) and (3) in order to check the compatibility of some of the results of Ref. [15] with the different theoretical calculation of smelli2.3.2. Then, in §3, we examine \(Z^{\prime}\) models that are capable of predicting them. Some of the other operators induced yield a change to di-lepton production cross-section measurements at experiments at the LEP-2 collider, which we recalculate in §4. Using these constraints, we examine the fits to our set of models in §5, quantifying the extent to which a non-zero coupling of the \(Z^{\prime}\) to di-electron pairs is preferred. One particular model based on \(U(1)_{3B_{3}-L_{e}-2L_{\mu}}\) is singled out as being close-to-optimal whilst simultaneously having relatively low \(U(1)\) charges for the fermionic fields. Parameter space constraints are presented. We summarise and conclude in §6.
## 2 SMEFT operator fit
Introducing operators that couple di-electron and di-muon pairs with new physics appears to significantly improve fits to recent measurements. In Section 5 we investigate the best fit for the \(Z^{\prime}\) models described in Section 3, but to inform our choice of model we first understand the phenomenological effects of adding only four non-zero Wilson coefficients (WCs): \(C_{9}^{(e)}\), \(C_{9}^{(\mu)}\), \(C_{10}^{(e)}\) and \(C_{10}^{(\mu)}\). Note that in particular we do not consider possible contributions to isospin triplet operators (which may induce changes to \(b\to c\ell\bar{\nu}\) transitions), nor do we consider purely right-handed quark current contributions (as mentioned in §1, these can ameliorate the fit to neutral current \(b-\)anomalies but they do not provide the best fit improvement, at least in simplified set-ups). Within the restricted set of operators that we consider - which are generated by the \(Z^{\prime}\) models we consider later - we check how \(b\to s\mu^{+}\mu^{-}\) measurements and LFU observables (especially the \(R_{K}\) and \(R_{K^{*}}\) ratios from the 2022 LHCb reanalysis [24]) affect the statistical preference for new physics that couples to di-electron pairs.
We place constraints and perform global fits in the parameter plane \(C_{9}^{(e)}-C_{9}^{(\mu)}\) similar to Ref. [15]. Our evaluation focuses on four cases which encompass combinations of left-handed (\(C_{9}^{a}=-C_{10}^{a}\) for \(a\in\{e,\mu\}\)) and vector-like (\(C_{10}^{a}=0\)) couplings of new physics to di-muon pairs and/or di-electron pairs through appropriate selection of our chosen WCs. We shall take two-dimensional slices through the four-dimensional parameter space using combinations of these couplings.
The WCs introduced above belong to the WET, whereas the inputs we give to smelli2.3.2 are SMEFT WCs. The SMEFT provides a framework for describing new physics contributions at energies much larger than the electroweak scale. We match between the WET Hamiltonian and the SMEFT operators as described in Ref. [20], and normalise by the constant \(N\) introduced in (1), corresponding to a new physics scale of 30 TeV. In our analysis of new physics that couples to di-muon pairs, the relevant SMEFT coefficients are denoted \(C_{qe}^{2322}\) and \(C_{lq}^{(1)2223}\), which are input to smelli2.3.2 in units of GeV\({}^{-2}\). We wish to match the SMEFT operators to include those in (1), i.e.
\[C_{qe}^{2322}=N(C_{9}^{(\mu)}+C_{10}^{(\mu)}), \tag{2.1}\]
\[C_{lq}^{(1)2223}=N(C_{9}^{(\mu)}-C_{10}^{(\mu)}). \tag{2.2}\]
These are WCs multiplying the dimension-6 SMEFT operators in the Lagrangian density:
\[O_{qe}^{2322}=(\bar{Q}_{2}\gamma_{\alpha}Q_{3})(\bar{e}_{2}\gamma^{\alpha}e_{2}), \tag{2.3}\]
\[O_{lq}^{(1)2223}=(\bar{L}_{2}\gamma_{\alpha}L_{2})(\bar{Q}_{2}\gamma^{\alpha}Q_{3}), \tag{2.4}\]
where \(L_{i}\) and \(Q_{i}\) are \(SU(2)_{L}\) doublets and \(e_{i}\) are \(SU(2)_{L}\) singlets. We adapt (2.1)-(2.4) to write the equivalent SMEFT coefficients and operators contributing to the transitions \(b\to se^{+}e^{-}\) by replacing lepton family indices \(2\to 1\) and \(\mu\to e\) on the weak effective theory coefficients on the right-hand sides, in which case the left-hand sides read \(C_{qe}^{2311}\), \(C_{lq}^{(1)1123}\), \(O_{qe}^{2311}\) and \(O_{lq}^{(1)1123}\), respectively.
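The matching in (2.1) and (2.2) is simple enough to evaluate numerically. Below is a minimal sketch; the explicit form of \(N\) is an assumption here (we take the conventional \(b\to s\) effective-Hamiltonian normalisation \(N=\frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{*}\frac{e^{2}}{16\pi^{2}}\), whose magnitude indeed corresponds to a scale of roughly 30 TeV), since \(N\) is defined earlier in the paper.

```python
# Sketch: WET -> SMEFT matching of (2.1)-(2.2), assuming the conventional
# normalisation N = (4 G_F / sqrt(2)) V_tb V_ts^* e^2/(16 pi^2); the paper
# defines N earlier, so treat this explicit form as illustrative only.
import math

GF = 1.1663787e-5          # Fermi constant [GeV^-2]
alpha_em = 1.0 / 127.9     # EM coupling near the electroweak scale (assumption)
Vtb, Vts = 0.999, -0.0405  # approximate CKM central values

e2 = 4.0 * math.pi * alpha_em
N = (4.0 * GF / math.sqrt(2.0)) * Vtb * Vts * e2 / (16.0 * math.pi**2)

def smeft_from_wet(C9, C10):
    """Return (C_qe^2322, C_lq1^2223) in GeV^-2 for muonic WET inputs."""
    return N * (C9 + C10), N * (C9 - C10)

Cqe, Clq1 = smeft_from_wet(C9=-1.0, C10=0.0)   # a vector-like muon scenario
print(f"C_qe^2322  = {Cqe:.3e} GeV^-2")        # O(1e-9), i.e. a ~30 TeV scale
print(f"C_lq1^2223 = {Clq1:.3e} GeV^-2")
```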
We examine constraints from two main categories of observables contained in smelli2.3.2, labelled 'quarks' and 'LFU'. The LFU category consists of 23 measurements from Belle, LHCb and BaBar, which include constraints from the ratios \(R_{K}\) and \(R_{K^{*}}\) (including the updates by LHCb in 2022). We aim to understand to what extent the updated measurements favour adding new physics couplings to di-electron pairs. Therefore, we additionally examine these two LFU ratios separately from the total set of LFU contributions in our results. The 'quarks' category contains 224 other contributions, from LHCb measurements of \(B\) meson decays and other similar measurements from ATLAS, CMS, Belle and BaBar.
The smelli2.3.2 package requires several tools for performing a phenomenological analysis, including flavio2.3.3 for computing flavour and other precision observables and accounting for their theory uncertainties, alongside wilson2.3.2 for matching between the weak effective theory and the SMEFT and performing the renormalisation group running. The combination of these and other tools allows smelli2.3.2 to produce a SMEFT likelihood function including a total of 247 observables to compare with predictions [29].
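For orientation, this is roughly what such a likelihood evaluation looks like with smelli's public interface (GlobalLikelihood, parameter_point and the log-likelihood accessors are part of smelli's documented API; the numerical input is purely illustrative):

```python
# Sketch: evaluating the SMEFT likelihood at one parameter point with smelli.
from smelli import GlobalLikelihood

gl = GlobalLikelihood()  # builds the full observable likelihood (slow, once)

# A muon-only point: C_lq1^2223 in GeV^-2, matched at a 30 TeV scale.
pt = gl.parameter_point({'lq1_2223': 1e-9}, scale=30000)

print(pt.log_likelihood_global())       # total log-likelihood relative to SM
for name, ll in pt.log_likelihood_dict().items():
    print(name, ll)                     # per-category contributions
```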
Our global fits aim to identify the preferred ranges of WCs parameterising new physics by performing a \(\chi^{2}\) test (as described in Ref. [30]). The combined measurements span the relevant sectors of experimental physics, including \(B\)-decay and LFU-violating observables. By using smelli2.3.2 for our predictions, we take into account the mixing between different sectors under renormalisation.
A similar fit to one of the four that we present in this section has also been performed (with a somewhat different set of \(b-\)observables and a different calculation of the predictions
of observables) in Ref. [15]3. Here, we present our results as a function of \(C_{9,10}^{(e,\mu)}\) for ease of comparison, even though our fit actually involves additional operators (implied by the SMEFT) that are related to these by \(SU(2)_{L}\) symmetry.
Footnote 3: The fit of Ref. [15] goes beyond QCD factorisation, allowing it to use the \(q^{2}\in[6,8]\) GeV\({}^{2}\) bin of various measurements, unlike our fit using flavio2.3.3, which uses QCD factorisation for its predictions and so excludes that bin.
### Fit results
In our first scenario, we set \(C_{10}^{(e)}=C_{10}^{(\mu)}=0\) and allow \(C_{9}^{(e)}\) and \(C_{9}^{(\mu)}\) to vary freely, corresponding to the case where new physics has vector-like couplings to both di-muon and di-electron pairs. The result is plotted in Fig. 1 (top-left). A significant region of overlap exists between the 'LFU' and 'quarks' constraints where \(C_{9}^{(e)}\) takes values between around -2 and its SM value of 0. The most constraining observables from the collection of 23 that test lepton flavour universality appear to be \(R_{K}\) and \(R_{K^{\star}}\). This fit was performed first by Ref. [15] (using a different theoretical calculation of the SM prediction and theoretical uncertainties) and our flavio2.3.3 fit shows a rather similar 95% CL region of global fit4.
Footnote 4: The \(\chi^{2}\) improvement upon the SM is significantly higher in Ref. [15] due mainly to the inclusion there of the \(q^{2}\in[6,8]\) GeV\({}^{2}\) bin.
The second scenario we consider here requires \(C_{9}^{(e)}=-C_{10}^{(e)}\) and \(C_{9}^{(\mu)}=-C_{10}^{(\mu)}\) such that di-electron and di-muon pairs have only left-handed couplings with new physics, leaving a smaller range of best-fit values for \(C_{9}^{(e)}\) as shown in Fig. 1 (top-right).
Another possibility we consider is that of vector-like couplings to di-muon pairs and left-handed couplings of new physics to di-electron pairs, presented in Fig. 1 (bottom-left). The range of best-fit values for \(C_{9}^{(e)}\) is similar here to that in the top-right panel, but this scenario includes a wider range of best-fit values for \(C_{9}^{(\mu)}\).
The final scenario we consider is Fig. 1 (bottom-right), where the couplings to new physics are swapped compared with the scenario in the bottom-left panel, such that di-muon pairs have left-handed couplings and di-electron pairs have vector-like couplings. A larger range of well-fitting values of \(C_{9}^{(e)}\) results in this scenario, with the fit region extending below \(C_{9}^{(e)}=-1\).
The main outcome of Ref. [15] is to evaluate global fits with and without new physics contributions to electron modes under a framework containing updates to both experimental measurements and theoretical calculations of form factors. Within this fully updated framework, the results in that reference identify that new physics introduced by \(C_{9}^{(\mu)}\) is mildly preferred over scenarios with \(C_{9}^{(\mu)}=-C_{10}^{(\mu)}\), favouring a vector-like coupling to di-muon pairs over a left-handed coupling. The fit also reveals that data can be compatible with non-zero \(C_{10}^{(\mu)}\), although support for these scenarios is not as strong.
Other analyses support the introduction of non-zero new physics WCs, though with different assumptions to ours. For example, a recent evaluation including possible new physics WCs [16] performed higher dimensional global fits for several more non-zero WCs instead of focusing on the four that we examine here. There is therefore no overlap between our results
and those of Ref. [16]. Another fit assumes that new physics affects electrons and muons identically [31], an assumption which we do not follow in the present paper.
Both Refs. [15] and [16] provide insight into a renewed focus on LFU new physics by examining the differences between global fits before and after the release of the 2022 LHCb update of \(R_{K}\) and \(R_{K^{*}}\). The large impact of such observables on constraining the parameter plane \(C_{9}^{(e)}-C_{9}^{(\mu)}\) can be seen in Fig. 1.
Fig. 1 indicates that a best-fit point has \(C_{9}^{(e)}\approx C_{9}^{(\mu)}\neq 0\), but that \(C_{9}^{(e)}=0\) can also fit the data, as shown in the top-left hand panel. Motivated by this panel and similar previous results in Ref. [15], we shall now turn to a set of models which interpolates between \(C_{9}^{(e)}=C_{9}^{(\mu)}\neq 0\) and \(C_{9}^{(e)}=0\) (and which also extrapolates outside of these constraints).
Figure 1: Constraints in the \(C_{9}^{(e)}-C_{9}^{(\mu)}\) plane from the ‘quarks’ and ‘LFU’ sets of observables for the four scenarios described in the text: vector-like couplings to both di-muon and di-electron pairs (top-left), left-handed couplings to both (top-right), vector-like di-muon and left-handed di-electron couplings (bottom-left), and left-handed di-muon and vector-like di-electron couplings (bottom-right).

## 3 \(Z^{\prime}\) models
Our \(U(1)_{X}\) gauge symmetry (which is additional to the gauge symmetry of the SM) is expected to be spontaneously broken by a complex scalar 'flavon' field \(\theta\), whose \(U(1)_{X}\) charge \(Q\) is non-zero. In these respects, the models we shall propose are very similar to the \(U(1)_{Y_{3}-x(L_{\mu}-L_{\tau})}\) model [32], the \(U(1)_{B_{3}-L_{2}}\) model [33; 34; 35; 36], Third Family Hypercharge models (TFHMs) [37; 38], or mixtures between the \(U(1)_{B_{3}-L_{2}}\) model and the TFHM [39]. The massive electrically neutral gauge boson resulting from Higgsed \(U(1)_{X}\) breaking is dubbed a \(Z^{\prime}\) boson, which has a mass
\[M_{Z^{\prime}}=Qg_{Z^{\prime}}\langle\theta\rangle, \tag{3.1}\]
where \(\langle\theta\rangle\) is the vacuum expectation value of the flavon field. Whichever fields possess non-zero \(U(1)_{X}\) charges will generically have couplings to the \(Z^{\prime}\) boson. Thus we wish second-family leptons to have a non-zero charge, and (following the arguments in §1) possibly first-family leptons as well5. We also wish the third family of quarks to have a charge in order to establish a \(Z^{\prime}\) coupling to \(s\bar{b}+H.c.\), starting from a coupling to \(b\bar{b}\) in the weak eigenbasis, as also explained in §1. Then, we may explain the \(b\to s\ell^{+}\ell^{-}\) anomalies by new physics contributions to the amplitude like the one depicted in Fig. 2. It should be evident from the above discussion that the charges of the various fields in the model affect its phenomenology, since they determine what the \(Z^{\prime}\) couples to (and with which relative strength). Therefore, we now discuss the fermionic charge assignment of the model.
Footnote 5: Even with the \(Z^{\prime}\)_not_ coupling to electrons, the fit of flavour data can be significantly improved with respect to that of the SM [5; 26].
### Fermionic charge assignment
We begin by extending the SM by three right-handed neutrino fields \(\nu_{i}\). These \(\nu_{i}\) fields will be used for anomaly cancellation and, ultimately, with an eye to providing an ultra-violet model of neutrino masses (which we shall not specify in any detail, however). The chiral fermionic field content of the model and its representations under the SM\(\times U(1)_{X}\) gauge group are specified in Table 1. We note here that, for brevity, we shall denote both a field and its \(U(1)_{X}\) charge by the same label; context should make the meaning of the symbol clear.
As explained above, we want to find a set of models that can potentially address the \(b\to s\mu^{+}\mu^{-}\) anomalies, but which interpolate between models where the \(Z^{\prime}\) does _not_ couple
Figure 2: Feynman diagram of the leading new physics contribution to \(b\to s\ell^{+}\ell^{-}\) observables from a suitable \(Z^{\prime}\) model.
to electrons and those where it couples with an equal strength to di-electron pairs and di-muon pairs. However, the \(U(1)_{X}\) charge assignments are constrained by the requirement of not generating quantum field theoretic anomalies, which would spoil the gauge symmetry. We shall now go through the arguments that lead us to the charge assignments, since they make clear to what extent the assignments are constrained by anomaly cancellation, to what extent they are dictated by the desired phenomenology, and to what extent they are simply a choice made to be concrete.
We first list the anomaly cancellation conditions which a gauged \(U(1)_{X}\) chiral-fermion charge assignment should respect [40]:
\[\sum_{i}\left(2Q_{i}-u_{i}-d_{i}\right)=0, \tag{3.2}\]
\[\sum_{i}\left(L_{i}+3Q_{i}\right)=0, \tag{3.3}\]
\[\sum_{i}\left(Q_{i}+3L_{i}-8u_{i}-2d_{i}-6e_{i}\right)=0, \tag{3.4}\]
\[\sum_{i}\left(6Q_{i}+2L_{i}-3u_{i}-3d_{i}-e_{i}-\nu_{i}\right)=0, \tag{3.5}\]
\[\sum_{i}\left(Q_{i}^{2}-L_{i}^{2}-2u_{i}^{2}+d_{i}^{2}+e_{i}^{2}\right)=0, \tag{3.6}\]
\[\sum_{i}\left(6Q_{i}^{3}+2L_{i}^{3}-3u_{i}^{3}-3d_{i}^{3}-e_{i}^{3}-\nu_{i}^{3}\right)=0. \tag{3.7}\]
These equations have been simultaneously solved over the integers, both numerically with \(U(1)_{X}\) charges between -10 and 10 [40] and, more generally, analytically [41]. Rather than beginning with these solutions and then restricting them, we instead find it instructive to make some choices based partly on the expected phenomenological consequences of some charge assignments, whilst simultaneously applying (3.2)-(3.7).
\begin{table}
\begin{tabular}{|c|c|} \hline Field & \((SU(3),SU(2),U(1)_{Y},U(1)_{X})\) \\ \hline \(Q_{i}\) & \((3,2,1,Q_{i})\) \\ \(L_{i}\) & \((1,2,-3,L_{i})\) \\ \(u_{i}\) & \((3,1,4,u_{i})\) \\ \(d_{i}\) & \((3,1,-2,d_{i})\) \\ \(e_{i}\) & \((1,1,-6,e_{i})\) \\ \(\nu_{i}\) & \((1,1,0,\nu_{i})\) \\ \hline \end{tabular}
\end{table}
Table 1: Representations of fermionic chiral fields under the SM\(\times U(1)_{X}\) gauge group. \(i\in\{1,2,3\}\) is a family index and gauge indices have been suppressed. \(Q_{i}\) and \(L_{i}\) are left-handed Weyl fermions, whereas the other fields listed are all right-handed Weyl fermions. We have chosen here to normalise the hypercharge gauge coupling so that the hypercharges of all fermionic fields are integers.
We require a coupling of the \(Z^{\prime}\) to left-handed \(b\bar{s}\) quark pairs in order to explain an apparent new physics effect in \(b\to s\mu^{+}\mu^{-}\) transitions. We therefore pick \(Q_{3}\neq 0\), providing a \(Z^{\prime}\) coupling to left-handed \(b\bar{b}\) pairs, and assume that its coupling to (left-handed) \(\bar{b}s+H.c.\) will be provided by some small amount of \(b-s\) mixing. The mixing is banned by \(U(1)_{X}\), but since this symmetry is spontaneously broken, we anticipate small \(U(1)_{X}\) breaking effects, such as small quark mixing. We may fix \(Q_{3}=1\) by rescaling the \(U(1)_{X}\) gauge coupling. If we couple the \(Z^{\prime}\) dominantly to the _third_ family quarks only, direct \(Z^{\prime}\) search bounds from the LHC will be weaker, since LHC \(Z^{\prime}\) production then dominantly occurs via the \(b\bar{b}\to Z^{\prime}\) process, which is doubly suppressed by both the \(b\) and \(\bar{b}\) parton distribution functions. Motivated by this, we fix the \(U(1)_{X}\) charges of the first two generations of quark fields to zero, i.e. \(u_{1}=u_{2}=d_{1}=d_{2}=Q_{1}=Q_{2}=0\). Substituting these assignments into (3.2), we obtain \(u_{3}+d_{3}=2\). We shall here pick \(u_{3}=d_{3}=1\), meaning that we can characterise the quark charges in terms of third-family baryon number. (3.3) then gives that
\[\sum_{i}L_{i}=-3. \tag{3.8}\]
We shall pick \(L_{2}\neq 0\) in order to couple the \(Z^{\prime}\) to left-handed muon pairs, since we know from fits to \(b\to s\mu^{+}\mu^{-}\) anomalies [15; 16] that a new physics contribution \(C_{9}^{(\mu)}\neq 0\) is necessary to describe the pertinent measurements well. We shall vary \(L_{1}\) in order to vary the \(Z^{\prime}\) coupling to left-handed electron pairs: (3.8) can then be rearranged to yield \(L_{3}\). Substituting (3.8) and the other assigned charges into (3.4), we obtain
\[\sum_{i}e_{i}=-3, \tag{3.9}\]
which allows us to obtain, using (3.5),
\[\sum_{i}\nu_{i}=-3. \tag{3.10}\]
(3.6) and (3.7) then become
\[\sum_{i}\left(L_{i}^{2}-e_{i}^{2}\right)=0, \tag{3.11}\]
\[\sum_{i}\left(2L_{i}^{3}-e_{i}^{3}-\nu_{i}^{3}\right)=0. \tag{3.12}\]
(3.9)-(3.12) are solved by
\[L_{i}=e_{i}=\nu_{i}\text{ for each }i. \tag{3.13}\]
This is not a general solution of the equations, but it is sufficient for our purposes here. After the stipulation in (3.13), there remains only one independent constraint, which we can take to be (3.8). (3.13) allows us to summarise the \(U(1)_{X}\) charges in terms of electron number \(L_{e}\), muon number \(L_{\mu}\) and tau number \(L_{\tau}\). We shall fix the \(U(1)_{X}\) charge of \(L_{2}=e_{2}=\nu_{2}\) (which we dub \(-X_{\mu}\)) to be a reasonably large integer in order to allow more resolution in the other
charges; we pick \(X_{\mu}=10\). We then allow the \(U(1)_{X}\) electron charge (\(-X_{e}\)) to vary. The \(U(1)_{X}\) charges of the fermions as a whole can be characterised by
\[3B_{3}-(X_{e}L_{e}+X_{\mu}L_{\mu}+[3-X_{e}-X_{\mu}]L_{\tau})\,. \tag{3.14}\]
\(X_{e}/X_{\mu}=1\) corresponds to the case where the coupling of the \(Z^{\prime}\) to di-electron pairs is equal to that of di-muon pairs, whereas \(X_{e}=0\) is the case where the electron does not directly couple to the \(Z^{\prime}\) at tree-level.
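The conditions (3.2)-(3.7) can be checked mechanically for any proposed assignment. The small script below does so for the family (3.14), with the lepton charges \(-X_{e}\), \(-X_{\mu}\), \(-X_{\tau}\) read off as above; the particular values tested are examples appearing in the text.

```python
# Sketch: verify the anomaly conditions (3.2)-(3.7) for the charge family
# 3B_3 - (X_e L_e + X_mu L_mu + [3 - X_e - X_mu] L_tau) of (3.14).
def charges(Xe, Xmu):
    Xtau = 3 - Xe - Xmu
    Q = [0, 0, 1]                  # left-handed quark doublets
    u = [0, 0, 1]                  # right-handed up-type quarks
    d = [0, 0, 1]                  # right-handed down-type quarks
    L = e = nu = [-Xe, -Xmu, -Xtau]   # leptons, per (3.13)
    return Q, u, d, L, e, nu

def anomaly_free(Q, u, d, L, e, nu):
    conds = [
        sum(2*Qi - ui - di for Qi, ui, di in zip(Q, u, d)),          # (3.2)
        sum(Li + 3*Qi for Li, Qi in zip(L, Q)),                      # (3.3)
        sum(Qi + 3*Li - 8*ui - 2*di - 6*ei
            for Qi, Li, ui, di, ei in zip(Q, L, u, d, e)),           # (3.4)
        sum(6*Qi + 2*Li - 3*ui - 3*di - ei - ni
            for Qi, Li, ui, di, ei, ni in zip(Q, L, u, d, e, nu)),   # (3.5)
        sum(Qi**2 - Li**2 - 2*ui**2 + di**2 + ei**2
            for Qi, Li, ui, di, ei in zip(Q, L, u, d, e)),           # (3.6)
        sum(6*Qi**3 + 2*Li**3 - 3*ui**3 - 3*di**3 - ei**3 - ni**3
            for Qi, Li, ui, di, ei, ni in zip(Q, L, u, d, e, nu)),   # (3.7)
    ]
    return all(c == 0 for c in conds)

assert anomaly_free(*charges(Xe=5, Xmu=10))  # 3B_3 - 5L_e - 10L_mu + 12L_tau
assert anomaly_free(*charges(Xe=1, Xmu=2))   # 3B_3 - L_e - 2L_mu
```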
The arguments on anomaly cancellation thus far apply to our assumed chiral fermionic content of the SM plus three right-handed neutrinos. If one were to add a pair of chiral fermions which are vector-like under the SM gauge symmetries but have non-cancelling \(U(1)_{X}\) charges, the system of anomaly equations would change and one could acquire different solutions to the ones that we have found. One would need to explain how these additional chiral fermions acquire masses large enough to evade current experimental probes; this might be possible, depending upon the new chiral fermion charges, by utilising \(U(1)_{X}\) breaking effects via \(\langle\theta\rangle\approx\mathcal{O}(\text{TeV})\). We note this caveat, but shall for now assume no additional chiral fermionic fields of this type. Our anomaly cancellation analysis applies to the chiral fermionic field content in Table 1, with any additional fermions only being added in vector-like pairs under the entire SM\(\times U(1)_{X}\) gauge group.
We fix the \(U(1)_{X}\) charge of the SM Higgs doublet \(H\) so that the top Yukawa coupling Lagrangian density term, \(\lambda_{t}\overline{Q}_{3}Hu_{3}+H.c.\), is allowed by the gauge symmetry6. This constraint requires that \(H\) then has \(U(1)_{X}\) charge equal to zero, simplifying our analysis because there is no predicted \(Z-Z^{\prime}\) mixing at tree-level. Such a mixing would change the predictions of electroweak precision observables (EWPOs); with zero mixing, as predicted here, we effectively decouple the EWPOs from our discussion. This is essentially dictated by model choice: in other models, e.g. the TFHMs, the electroweak observables significantly change with model parameters (the quality of the electroweak fit in the TFHMs is similar to that of the SM, with improvements in \(M_{W}\) being offset against other EWPOs such as measurements of \(Z^{0}\) boson couplings to different families of di-lepton pair [43]). Thus, decoupling the EWPOs as we do here simplifies our analysis but is not necessarily essential phenomenologically: the preference (or otherwise) of electroweak fits has to be determined on a case-by-case basis.
Footnote 6: Some of the other Yukawa couplings may be disallowed by \(U(1)_{X}\), but may receive small contributions from non-renormalisable operators (for example as in the Froggatt-Nielsen mechanism [42]) once \(U(1)_{X}\) is spontaneously broken.
It behoves us now to specify the other pertinent TeV-scale properties of our model, which we shall do in the following subsection.
### More model details
Here, we deal with the \(Z^{\prime}\)-specific parts of the model, which encapsulate the phenomenology that we are interested in predicting. We shall not find it necessary to specify all details of the model (the flavon potential or flavon/Higgs mixing - for that, see Ref. [44] - or the origin of
_non-zero_ small Yukawa couplings, for example). The model set-up in the present subsection closely follows that of Refs. [36; 37; 38; 39], which are distinguished from the present model by their fermionic \(U(1)_{X}\) charge assignments. The model is supposed to be at the level of a TeV-scale effective field theory that includes the quantum fields of the SM, three right-handed neutrino fields and the \(Z^{\prime}\). We write the fermionic fields in the gauge eigenbasis with a primed notation
\[\mathbf{u}^{\prime}_{\mathbf{L}} = \left(\begin{array}{c}u^{\prime}_{L}\\ c^{\prime}_{L}\\ t^{\prime}_{L}\end{array}\right),\qquad\mathbf{d}^{\prime}_{\mathbf{L}}=\left( \begin{array}{c}d^{\prime}_{L}\\ s^{\prime}_{L}\\ b^{\prime}_{L}\end{array}\right),\qquad\mathbf{e}^{\prime}_{\mathbf{L}}=\left( \begin{array}{c}e^{\prime}_{L}\\ \mu^{\prime}_{L}\\ \tau^{\prime}_{L}\end{array}\right),\qquad\boldsymbol{\nu}^{\prime}_{L}= \left(\begin{array}{c}\nu^{\prime}_{eL}\\ \nu^{\prime}_{\mu L}\\ \nu^{\prime}_{\tau L}\end{array}\right),\] \[\mathbf{u}^{\prime}_{\mathbf{R}} = \left(\begin{array}{c}u^{\prime}_{R}\\ c^{\prime}_{R}\\ t^{\prime}_{R}\end{array}\right),\qquad\mathbf{d}^{\prime}_{\mathbf{R}}=\left( \begin{array}{c}d^{\prime}_{R}\\ s^{\prime}_{R}\\ b^{\prime}_{R}\end{array}\right),\qquad\mathbf{e}^{\prime}_{\mathbf{R}}=\left( \begin{array}{c}e^{\prime}_{R}\\ \mu^{\prime}_{R}\\ \tau^{\prime}_{R}\end{array}\right),\qquad\boldsymbol{\nu}^{\prime}_{R}=\left( \begin{array}{c}\nu^{\prime}_{eR}\\ \nu^{\prime}_{\mu R}\\ \nu^{\prime}_{\tau R}\end{array}\right), \tag{3.15}\]
along with the SM fermionic electroweak doublets
\[\mathbf{Q}^{\prime}_{\,i}=\left(\begin{array}{c}\mathbf{u}^{\prime}_{ \mathbf{L}i}\\ \mathbf{d}^{\prime}_{\mathbf{L}i}\end{array}\right),\qquad\mathbf{L}^{\prime} _{\,i}=\left(\begin{array}{c}\boldsymbol{\nu}^{\prime}_{Li}\\ \mathbf{e}^{\prime}_{\mathbf{L}i}\end{array}\right). \tag{3.16}\]
The neutrinos and SM fermions acquire masses after the SM Brout-Englert-Higgs mechanism through
\[-\mathcal{L}_{Y}=\overline{\mathbf{Q}^{\prime}}Y_{u}\tilde{H}\mathbf{u}^{\prime}_{\mathbf{R}}+\overline{\mathbf{Q}^{\prime}}Y_{d}H\mathbf{d}^{\prime}_{\mathbf{R}}+\overline{\mathbf{L}^{\prime}}Y_{e}H\mathbf{e}^{\prime}_{\mathbf{R}}+\overline{\mathbf{L}^{\prime}}Y_{\nu}\tilde{H}\boldsymbol{\nu}^{\prime}_{R}+\frac{1}{2}\overline{\boldsymbol{\nu}^{\prime\,c}_{R}}M\boldsymbol{\nu}^{\prime}_{R}+H.c., \tag{3.17}\]

where \(M\) is a complex symmetric Majorana mass matrix. The \(U(1)_{X}\) charge assignment forbids the renormalisable Yukawa couplings that mix the third family of quarks with the first two, predicting the textures

\[Y_{u}\sim Y_{d}\sim\begin{pmatrix}\times&\times&0\\ \times&\times&0\\ 0&0&\times\end{pmatrix}, \tag{3.18}\]

where each \(\times\) denotes an entry that is a priori allowed to be non-zero. After electroweak symmetry breaking, the fermionic mass terms read

\[-\mathcal{L}_{mass}=\overline{\mathbf{u}^{\prime}_{\mathbf{L}}}m_{u}\mathbf{u}^{\prime}_{\mathbf{R}}+\overline{\mathbf{d}^{\prime}_{\mathbf{L}}}m_{d}\mathbf{d}^{\prime}_{\mathbf{R}}+\overline{\mathbf{e}^{\prime}_{\mathbf{L}}}m_{e}\mathbf{e}^{\prime}_{\mathbf{R}}+\frac{1}{2}\overline{\boldsymbol{n}^{c}}\,\mathcal{M}\,\boldsymbol{n}+H.c., \tag{3.19}\]

where \(\boldsymbol{n}:=(\boldsymbol{\nu}^{\prime\,c}_{L},\ \boldsymbol{\nu}^{\prime}_{R})^{T}\) and

\[\mathcal{M}=\begin{pmatrix}0&m_{\nu_{D}}^{T}\\ m_{\nu_{D}}&M\end{pmatrix}. \tag{3.20}\]
\(V_{I_{L}}\) and \(V_{I_{R}}\) are 3 by 3 unitary mixing matrices for each field species \(I\), \(m_{u}:=vY_{u}/\sqrt{2}\), \(m_{d}:=vY_{d}/\sqrt{2}\), \(m_{e}:=vY_{e}/\sqrt{2}\) and \(m_{\nu_{D}}:=vY_{\nu}/\sqrt{2}\), where \(v\) is the SM Higgs expectation value, measured to be 246.22 GeV [45]. The final explicit term in (3.19) incorporates the see-saw mechanism via a 6 by 6 complex symmetric mass matrix. Since the elements in \(m_{\nu_{D}}\) are much smaller than those in \(M\), we perform a rotation to obtain a 3 by 3 complex symmetric mass matrix for the three light neutrinos. These approximately coincide with the left-handed weak eigenstates \(\mathbf{\nu}_{L}^{\prime}\), whereas three heavy neutrinos approximately correspond to the right-handed weak eigenstates \(\mathbf{\nu}_{R}^{\prime}\). The neutrino mass term of (3.19) becomes, to a good approximation,
\[-\mathcal{L}_{\nu}=\frac{1}{2}\overline{\mathbf{\nu}_{L}^{\prime c}}m_{\nu}\mathbf{ \nu}_{L}^{\prime}+\frac{1}{2}\overline{\mathbf{\nu}_{R}^{\prime c}}M\mathbf{\nu}_{R}^{ \prime}+H.c., \tag{3.21}\]
where \(m_{\nu}:=m_{\nu_{D}}^{T}M^{-1}m_{\nu_{D}}\) is a complex symmetric 3 by 3 matrix.
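The light-neutrino block can be obtained numerically from the see-saw formula above. A short numpy illustration follows, with arbitrary (randomly generated) input textures whose scales are chosen only to exhibit the hierarchy, not taken from the model.

```python
# Sketch: numerical see-saw, m_nu = m_D^T M^{-1} m_D, with illustrative
# random textures; a ~100 GeV Dirac block and a ~1e14 GeV Majorana block
# yield light masses of order 0.1 eV.
import numpy as np

rng = np.random.default_rng(0)
mD = 1e2 * rng.standard_normal((3, 3))                 # Dirac block [GeV]
A = rng.standard_normal((3, 3))
M = 1e14 * (A + A.T) / 2                               # symmetric Majorana block

m_nu = mD.T @ np.linalg.inv(M) @ mD                    # light 3x3 block [GeV]

# Takagi-like factorisation via SVD: singular values give the masses.
_, s, _ = np.linalg.svd(m_nu)
print("light neutrino masses [eV]:", s * 1e9)
```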
Choosing \(V_{I_{L}}^{\dagger}m_{I}V_{I_{R}}\) to be diagonal, real and positive for \(I\in\{u,d,e\}\), and \(V_{\nu_{L}}^{T}m_{\nu}V_{\nu_{L}}\) to be diagonal, real and positive (all in ascending order of mass from the top left toward the bottom right of the matrix), we can identify the _non_-primed _mass_ eigenstates7
Footnote 7: \(\mathbf{P}\) and \(\mathbf{P}^{\prime}\) are column 3-vectors.
\[\mathbf{P}=V_{P}^{\dagger}\mathbf{P}^{\prime}\text{ where }P\in\{u_{L},\ d_{L},\ e_{L},\ \nu_{L},\ u_{R},\ d_{R},\ e_{R},\ \nu_{R}\}. \tag{3.22}\]
We may then find the CKM matrix \(V\) and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix \(U\) in terms of the fermionic mixing matrices:
\[V=V_{u_{L}}^{\dagger}V_{d_{L}},\qquad U=V_{\nu_{L}}^{\dagger}V_{e_{L}}. \tag{3.23}\]
The zeroes in \(Y_{u}\) and \(Y_{d}\) in (3.18) predict that the magnitudes of the elements \(V_{23},V_{13},V_{32},V_{31}\) are much smaller than 1, agreeing with measurements [45]. Clearly, more model building into the ultra-violet would be required to understand the details of neutrino masses and mixing and how exactly the zeroes in (3.18) are filled in with small entries. We leave such model considerations aside, instead pointing to §2.5 of Ref. [5] for some possibilities.
The fermionic kinetic terms, through the \(U(1)_{X}\) covariant derivative, yield the following \(Z^{\prime}\) interactions
\[\mathcal{L}_{I}=-g_{Z^{\prime}}\left(\overline{\mathbf{Q}^{\prime}}\not{Z}^{\prime}\xi\mathbf{Q}^{\prime}+\overline{\mathbf{u}_{R}^{\prime}}\not{Z}^{\prime}\xi\mathbf{u}_{R}^{\prime}+\overline{\mathbf{d}_{R}^{\prime}}\not{Z}^{\prime}\xi\mathbf{d}_{R}^{\prime}+\overline{\mathbf{L}^{\prime}}\not{Z}^{\prime}\Xi\mathbf{L}^{\prime}+\overline{\mathbf{e}_{R}^{\prime}}\not{Z}^{\prime}\Xi\mathbf{e}_{R}^{\prime}+\overline{\boldsymbol{\nu}_{R}^{\prime}}\not{Z}^{\prime}\Xi\boldsymbol{\nu}_{R}^{\prime}\right), \tag{3.24}\]
where
\[\xi:=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix},\qquad\Xi:=\begin{pmatrix}-X_{e}&0&0\\ 0&-X_{\mu}&0\\ 0&0&-X_{\tau}\end{pmatrix} \tag{3.25}\]
are fixed by the fermionic fields' \(U(1)_{X}\) charges. The right-handed neutrinos \(\mathbf{\nu}_{R}^{\prime}\) are assumed to be heavy compared to the TeV scale and play no further role in the phenomenology of \(b\to s\mu^{+}\mu^{-}\) anomalies; we shall therefore neglect them in the discussion that follows. In the unprimed mass eigenbasis, (3.24) becomes
\[\mathcal{L}_{I} = -g_{Z^{\prime}}\left(\overline{\mathbf{u}_{\mathbf{L}}}\not{Z}^{ \prime}\Lambda_{\xi}^{u_{L}}\mathbf{u}_{\mathbf{L}}+\overline{\mathbf{d}_{ \mathbf{L}}}\not{Z}^{\prime}\Lambda_{\xi}^{d_{L}}\mathbf{d}_{\mathbf{L}}+ \overline{\mathbf{u}_{\mathbf{R}}}\not{Z}^{\prime}\Lambda_{\xi}^{u_{R}}\mathbf{ u}_{\mathbf{R}}+\overline{\mathbf{d}_{\mathbf{R}}}\not{Z}^{\prime}\Lambda_{\xi}^{d_{R}} \mathbf{d}_{\mathbf{R}}+\right. \tag{3.26}\] \[\left.\overline{\mathbf{e}_{\mathbf{L}}}\not{Z}^{\prime}\Lambda_{ \Xi}^{e_{L}}\mathbf{e}_{\mathbf{L}}+\overline{\mathbf{e}_{\mathbf{R}}}\not{Z}^{ \prime}\Lambda_{\Xi}^{e_{R}}\mathbf{e}_{\mathbf{R}}+\overline{\mathbf{\nu}_{L}} \not{Z}^{\prime}\Lambda_{\Xi}^{\nu_{L}}\mathbf{\nu}_{L}+\overline{\mathbf{\nu}_{R}} \not{Z}^{\prime}\Lambda_{\Xi}^{\nu_{R}}\mathbf{\nu}_{R}\right),\]
where \(\Lambda_{\alpha}^{P}:=V_{P}\,\alpha\,V_{P}^{\dagger}\) for \(P\neq\nu_{L}\) and \(\alpha\in\{\xi,\Xi\}\), whilst \(\Lambda_{\Xi}^{\nu_{L}}:=V_{\nu_{L}}\,\Xi\,V_{\nu_{L}}^{T}\).
To make phenomenological progress with our models, we shall need to specify the \(V_{P}\). We simply assume that the ultra-violet model details are such that the zeroes in (3.18) are filled in (or not) at the correct level to agree with experiment. The \(V_{P}\) are 3 by 3 unitary matrices, and we pick a simple ansatz which is not obviously ruled out by strong flavour-changing neutral current constraints on charged lepton flavour violation or on neutral-current flavour violation among the first two families of quarks. Firstly, we set \(V_{e_{R}}=V_{d_{R}}=V_{u_{R}}=V_{e_{L}}=I\), the 3 by 3 identity matrix. A non-zero \((V_{d_{L}})_{23}\) matrix element is required for the \(Z^{\prime}\) to mediate new physics contributions to \(b\to s\ell^{+}\ell^{-}\) transitions. We capture the important quark mixing (i.e. between \(s_{L}\) and \(b_{L}\)) in \(V_{d_{L}}\) as
\[V_{d_{L}}=\left(\begin{array}{ccc}1&0&0\\ 0&\cos\theta_{sb}&\sin\theta_{sb}\\ 0&-\sin\theta_{sb}&\cos\theta_{sb}\end{array}\right). \tag{3.27}\]
\(V_{\nu_{L}}\) and \(V_{u_{L}}\) are fixed by (3.23), where we use the experimentally determined values for the entries of \(V\) and \(U\) via the central values in the standard parameterisation from Ref. [45]. Having fixed all of the fermionic mixing matrices, we have provided an ansatz that could be perturbed around for a more complete characterisation. We leave such perturbations aside in the present paper.
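The flavour structure that (3.27) induces is easy to make concrete: with \(\Lambda_{\xi}^{d_{L}}=V_{d_{L}}\,\xi\,V_{d_{L}}^{\dagger}\) as defined below (3.26), the \(s\)-\(b\) entry is \(\frac{1}{2}\sin 2\theta_{sb}\), which is precisely the combination appearing later in (5.1). A minimal numpy check follows; the numerical angle is only an order-of-magnitude choice.

```python
# Sketch: the s-b coupling generated by the mixing ansatz (3.27).
# Lambda = V_dL @ xi @ V_dL^dagger, following the definition below (3.26).
import numpy as np

theta_sb = -0.027                     # order of magnitude of the §5 best fit
c, s = np.cos(theta_sb), np.sin(theta_sb)
V_dL = np.array([[1, 0, 0],
                 [0, c, s],
                 [0, -s, c]])
xi = np.diag([0.0, 0.0, 1.0])         # only the third quark family is charged

Lam = V_dL @ xi @ V_dL.conj().T
print(Lam[1, 2], 0.5 * np.sin(2 * theta_sb))   # s-b entry = sin(2*theta_sb)/2
```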
Here, we summarise the SMEFT operators that result from integrating out the \(Z^{\prime}\): they are given in Table 2, ready for input into flavio2.3.3[46]. We note that, to specify the model and its \(Z^{\prime}\) phenomenology once we have picked a value for \(X_{e}\), there are three important model parameters that affect the pertinent phenomenology: \(g_{Z^{\prime}}\), \(M_{Z^{\prime}}\) and \(\theta_{sb}\). At tree-level, however, as Table 2 shows, the flavour data only depend upon two effective parameters: the combination \(g_{Z^{\prime}}/M_{Z^{\prime}}\) and \(\theta_{sb}\).
\begin{table}
\begin{tabular}{|c c||c c|} \hline WC & value & WC & value \\ \hline \(C_{ll}^{iiii}\) & \(-\frac{1}{2}L_{i}^{2}\) & \(C_{ll}^{iijj}\) (\(i\neq j\)) & \(-L_{i}L_{j}\) \\ \((C_{lq}^{(1)})^{iijk}\) & \(L_{i}(\Lambda_{\xi}^{d_{L}})_{jk}\) & \(C_{le}^{iijj}\) & \(-L_{i}L_{j}\) \\ \(C_{ee}^{iijj}\) (\(i\neq j\)) & \(-L_{i}L_{j}\) & \(C_{uu}^{3333}\) & \(-\frac{1}{2}\) \\ \(C_{dd}^{3333}\) & \(-\frac{1}{2}\) & \(C_{ee}^{iiii}\) & \(-\frac{1}{2}L_{i}^{2}\) \\ \(C_{eu}^{ii33}\) & \(L_{i}\) & \(C_{ed}^{ii33}\) & \(L_{i}\) \\ \(C_{ud}^{(1)3333}\) & \(-1\) & \(C_{qe}^{ijkk}\) & \(L_{k}(\Lambda_{\xi}^{d_{L}})_{ij}\) \\ \(C_{qu}^{(1)ij33}\) & \(-(\Lambda_{\xi}^{d_{L}})_{ij}\) & \(C_{qd}^{(1)ij33}\) & \(-(\Lambda_{\xi}^{d_{L}})_{ij}\) \\ \(C_{qq}^{(1)ijkl}\) & \((\Lambda_{\xi}^{d_{L}})_{ij}(\Lambda_{\xi}^{d_{L}})_{kl}\frac{\delta_{ik}\delta_{jl}-2}{2}\) & \(C_{lu}^{ii33}\) & \(L_{i}\) \\ \(C_{ld}^{ii33}\) & \(L_{i}\) & & \\ \hline \end{tabular}
\end{table}
Table 2: Non-zero \(M_{Z^{\prime}}\)-scale SMEFT operators in units of \(g_{Z^{\prime}}^{2}/M_{Z^{\prime}}^{2}\), in terms of the left-handed lepton doublet \(U(1)_{X}\) charges \(L_{i}\). The notation is in the down-aligned Warsaw basis [27]. There is no sum implied upon repeated family indices \(i,j,k,l\in\{1,2,3\}\).
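To feed Table 2 into wilson or smelli, one assembles the non-zero coefficients into a Warsaw-basis dictionary. The sketch below fills in a representative subset only; the key strings (e.g. 'lq1_2223') follow WCxf naming conventions, and the charge and mixing inputs are the \(3B_{3}-L_{e}-2L_{\mu}\) values appearing later in §5.1.

```python
# Sketch: assemble leading Table-2 coefficients (in units of g^2/M^2) into a
# WCxf Warsaw-basis dictionary; only a representative subset is filled in.
import numpy as np

g_zp, M_zp = 0.222, 3000.0            # §5.1 best-fit values, M in GeV
theta_sb = -0.027
Xe, Xmu = 1, 2                        # the 3B_3 - L_e - 2L_mu assignment
L = np.array([-Xe, -Xmu, -(3 - Xe - Xmu)])   # lepton doublet charges

c, s = np.cos(theta_sb), np.sin(theta_sb)
V_dL = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
Lam = V_dL @ np.diag([0.0, 0.0, 1.0]) @ V_dL.T   # Lambda_xi^{d_L}

pref = g_zp**2 / M_zp**2
wcs = {
    'lq1_2223': pref * L[1] * Lam[1, 2],     # (C_lq^(1))^{2223} = L_2 Lam_23
    'll_2222':  pref * (-0.5) * L[1]**2,     # C_ll^{2222} = -L_2^2 / 2
    'll_1122':  pref * (-1.0) * L[0] * L[1], # C_ll^{1122} = -L_1 L_2
}
print(wcs)   # ready as input for wilson / smelli at scale M_zp
```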
## 4 LEP constraints
Since some of the models we consider couple the \(Z^{\prime}\) to di-electron pairs, LEP-2 di-lepton production cross-section measurements, which are broadly in agreement with SM predictions, provide constraints upon them. This is because the \(Z^{\prime}\) makes a non-zero contribution to the amplitude: some leading Feynman diagrams for this contribution are shown in Fig. 3. In this section, we shall re-examine the constraints on four-lepton dimension-6 SMEFT WCs coming from LEP-2. Analytic expressions for the expected dominant contributions from these (in the interference terms) have already been calculated in Ref. [47]. Here, we re-calculate the full dependence of the tree-level predictions upon the WCs, ready for inclusion into flavio2.3.3. By extracting the interference terms, we shall provide an independent check upon the analytic results presented in Ref. [47]. By calculating the _full_ tree-level dependence upon the WCs (i.e. not only including the interference terms) and putting them into flavio2.3.3, we evade possible computational problems with predicted negative cross-sections when performing parameter scans. Ref. [47] provided numerical results of a fit to electroweak measurements of the epoch and other LEP measurements, where some of the WCs were constrained at the \(\mathcal{O}(10^{-2})/v^{2}\) level, where \(v=246.22\) GeV is the SM Higgs vacuum expectation value. Although the LEP experimental measurements have not changed since Ref. [47], some electroweak data have. Providing the LEP constraints as part of the flavio2.3.3 package should therefore facilitate SMEFT fits in general, as well as fits to our \(Z^{\prime}\) models once we have matched the models to the SMEFT.
Some of the SMEFT WCs alter differential scattering cross-section predictions for \(e^{+}e^{-}\to\mu^{+}\mu^{-}\), \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) and \(e^{+}e^{-}\to e^{+}e^{-}\) (Bhabha scattering). In the Warsaw convention [27], the relevant WCs which can alter the predictions for these processes are \(C_{le}^{1jj1}\), \(C_{ll}^{11jj}\), \(C_{ll}^{1jj1}\), \(C_{ee}^{11jj}\), \(C_{le}^{11jj}\) and \(C_{le}^{jj11}\), where \(j\in\{1,2,3\}\). The predictions for \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) and \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) are simple and almost identical to each other, and so we consider them first, before going on to consider Bhabha scattering.
### LEP: di-muon and di-tau final states
Figure 3: Feynman diagram of the leading \(Z^{\prime}\) contribution to LEP-2 di-lepton production from a suitable model. Here, \(\ell\in\{e,\mu,\tau\}\).

Building notation similar to that in Ref. [48], we consider the tree-level polarised scattering amplitudes of massless di-electron pairs into either massless di-muon pairs (\(j=2\)) or massless di-tau pairs (\(j=3\)), including 4-lepton dimension-6 SMEFT operators:
\[{\cal M}\left(e^{+}e^{-}\to e_{j}^{+}e_{j}^{-}\right)=-i\left(\bar{e}\gamma^{\alpha}P_{L}e_{j}\right)\left(\bar{e}_{j}\gamma_{\alpha}P_{R}e\right)C_{le}^{1jj1}+i\sum_{X,Y}\left(\bar{e}\gamma^{\alpha}P_{X}e\right)\left(\bar{e}_{j}\gamma_{\alpha}P_{Y}e_{j}\right)N_{1jj1}^{XY}(s), \tag{4.1}\]
where the sum is over \(X,Y\in\{L,R\}\),
\[N_{1jj1}^{XY}(s):=\frac{e^{2}}{s}+\frac{g_{Z}^{e_{X}}g_{Z}^{e_{j}Y}}{s-M_{Z}^{2}+i\Gamma_{Z}M_{Z}}+\left(C_{ll}^{11jj}+C_{ll}^{1jj1}\right)\delta_{XL}\delta_{YL}+C_{ee}^{11jj}\delta_{XR}\delta_{YR}+C_{le}^{11jj}\delta_{XL}\delta_{YR}+C_{le}^{jj11}\delta_{XR}\delta_{YL}, \tag{4.2}\]
\(\delta_{LL}=\delta_{RR}=1,\;\delta_{LR}=\delta_{RL}=0\); \(s,t\) and \(u\) are the usual Mandelstam kinematic variables; \(M_{Z}\) is the \(Z^{0}\) boson pole mass; and the SMEFT WCs \(C_{ee}^{ijkm}\), \(C_{le}^{ijkm}\) and \(C_{ll}^{ijkm}\) (where \(i,j,k,m\in\{1,2,3\}\) are family indices) are written in the Warsaw convention [27]. \(g_{Z}^{e_{X}}\) is the di-\(X\)-handed electron coupling to the \(Z^{0}\) boson including tree-level corrections from SMEFT, and \(g_{Z}^{e_{j_{Y}}}\) is the di-\(Y\)-handed \(j^{th}\)-family coupling to the \(Z^{0}\) boson [49]. \(\Gamma_{Z}\) is the \(Z^{0}\) boson's total width. Summing over the final spins and averaging over the initial spins, we obtain the differential cross-section
\[\frac{d\sigma}{dt}=\frac{1}{16\pi}\left\{|C_{le}^{1jj1}|^{2}+\sum_{X,Y}|N_{1jj1}^{XY}(s)|^{2}\left[\delta_{XY}\left(1+\frac{t}{s}\right)^{2}+(1-\delta_{XY})\frac{t^{2}}{s^{2}}\right]\right\}. \tag{4.3}\]
Integrating, we obtain the total cross-section
\[\sigma(e^{+}e^{-}\to e_{j}^{+}e_{j}^{-})=\frac{s}{48\pi}\left\{3|C_{le}^{1jj1}|^{2}+\sum_{X,Y}|N_{1jj1}^{XY}(s)|^{2}\right\} \tag{4.4}\]
and the forward-minus-backward cross-section
\[\sigma(e^{+}e^{-}\to e_{j}^{+}e_{j}^{-})_{F}-\sigma(e^{+}e^{-}\to e_{j}^{+}e_{j}^{-})_{B}=\frac{s}{64\pi}\sum_{X,Y}\left|N_{1jj1}^{XY}(s)\right|^{2}(2\delta_{XY}-1). \tag{4.5}\]
The dimension-6 WC-SM interference terms in (4.4) and (4.5) agree with the interference terms derived in Ref. [47] (in the parameter space considered in Ref. [47], such interference terms encapsulate the dominant effects of the SMEFT operators and were the only ones presented explicitly there). (4.3) shows that the part proportional to \(|C_{le}^{1jj1}|^{2}\) does not interfere with the rest of the matrix element, due to its different helicity-flavour structure (as already noted in Ref. [47]).
In Ref. [50], the SM prediction for the cross-section is given including some higher-order one-loop contributions. By using the _ratio_ of the predicted cross-section to the SM prediction in the constraints, we can effectively include the dominant effects of these higher order contributions. Our implementation in flavio2.3.3 therefore uses such a ratio. Both
the ratios of the total cross-section and of the forward-minus-backward cross-section are used for the following LEP-2 centre-of-mass energies:
\[E/\text{GeV}\in\{130.3,136.3,161.3,172.1,182.7,188.6,191.6,195.5,199.5,201.8,204.8,206.5\}. \tag{4.6}\]
Correlations between the various measurements are neglected for \(e^{+}e^{-}\to\ell^{+}\ell^{-}\), where \(\ell\in\{\mu,\tau\}\).
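For concreteness, a small numerical sketch of (4.2) and (4.4) follows. The tree-level \(Z^{0}\) couplings and electroweak inputs are illustrative stand-ins (the actual implementation lives in flavio2.3.3), and the dictionary keys below are hypothetical labels for the WC combinations entering each helicity channel.

```python
# Sketch: total e+e- -> mu+mu- cross-section (4.4), normalised to the SM, at
# one LEP-2 energy; N^{XY}(s) is built as in (4.2) with tree-level Z couplings.
import math

alpha, sw2 = 1 / 128.0, 0.2315        # illustrative EW inputs
MZ, GZ = 91.1876, 2.4952              # Z0 mass and width [GeV]
e = math.sqrt(4 * math.pi * alpha)
norm = e / math.sqrt(sw2 * (1 - sw2))
gZ = {'L': norm * (-0.5 + sw2), 'R': norm * sw2}   # lepton couplings to Z0

def sigma(s, C_le=0.0, C=None):
    """(4.4): C_le stands for C_le^{1jj1}; C maps 'XY' to the WC sum in (4.2)."""
    C = C or {}
    total = 3 * abs(C_le)**2
    for X in 'LR':
        for Y in 'LR':
            N = e**2 / s + gZ[X] * gZ[Y] / (s - MZ**2 + 1j * GZ * MZ)
            N += C.get(X + Y, 0.0)    # e.g. C_ll^{1122}+C_ll^{1221} for 'LL'
            total += abs(N)**2
    return s / (48 * math.pi) * total

s = 200.0**2                          # a LEP-2 centre-of-mass energy squared
print(sigma(s, C={'LL': 1e-8}) / sigma(s))   # ratio to the SM prediction
```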
### LEP: Bhabha scattering
We calculate the tree-level polarised amplitude \(\mathcal{M}\) for \(e^{+}(p_{2})e^{-}(p_{1})\to e^{+}(q_{2})e^{-}(q_{1})\) in the massless electron approximation.
\[-i\mathcal{M} = -I_{t}\frac{e^{2}}{t}\ +I_{s}\frac{e^{2}}{s}+\sum_{X,Y}\left\{-\left( \bar{u}\gamma^{\mu}P_{X}u\right)\left(\bar{v}\gamma_{\mu}P_{Y}v\right)t_{XY}+ \left(\bar{u}\gamma^{\mu}P_{X}v\right)\left(\bar{v}\gamma_{\mu}P_{Y}u\right)s _{XY}\right\}, \tag{4.7}\]
where \(u:=u(p_{1})\), \(v:=v(q_{2})\), \(\bar{u}:=\bar{u}(q_{1})\) and \(\bar{v}:=\bar{v}(p_{2})\) are the usual positive and negative energy 4-component Dirac spinors of the electron field and
\[I_{t} := \left(\bar{u}\gamma^{\mu}u\right)\left(\bar{v}\gamma_{\mu}v\right),\] \[I_{s} := \left(\bar{u}\gamma^{\mu}v\right)\left(\bar{v}\gamma_{\mu}u\right),\] \[t_{XY} := \frac{g_{e_{X}}g_{e_{Y}}}{t-M_{Z}^{2}+i\Gamma_{Z}M_{Z}}+\frac{C_ {XY}+C_{YX}}{2},\] \[s_{XY} := \frac{g_{e_{X}}g_{e_{Y}}}{s-M_{Z}^{2}+i\Gamma_{Z}M_{Z}}+\frac{C_ {XY}+C_{YX}}{2} \tag{4.8}\]
and \(C_{LL}:=C_{ll}^{1111}\), \(C_{RR}:=C_{ee}^{1111}\), \(C_{LR}:=C_{le}^{1111}\) and \(C_{RL}:=C_{le}^{1111}\) (the last two coincide, since there is a single \(C_{le}^{1111}\) operator). The spin summed/averaged differential cross-section in the centre-of-mass frame is then
\[\frac{d\sigma}{d\cos\theta}=\frac{1}{32\pi s}\left(2e^{4}\left[\frac{u^{2}+t^{2}}{t^{2}}+\frac{u^{2}+t^{2}}{s^{2}}+\frac{2u^{2}}{st}\right]+\sum_{X,Y}\left\{\frac{2e^{2}}{t}\left(\operatorname{Re}(t_{XY})\left[u^{2}\delta_{XY}+s^{2}(1-\delta_{XY})\right]+\operatorname{Re}(s_{XY})\,u^{2}\delta_{XY}\right)+\frac{2e^{2}}{s}\left(\operatorname{Re}(s_{XY})\left[u^{2}\delta_{XY}+t^{2}(1-\delta_{XY})\right]+\operatorname{Re}(t_{XY})\,u^{2}\delta_{XY}\right)+|t_{XY}|^{2}\left[u^{2}\delta_{XY}+s^{2}(1-\delta_{XY})\right]+|s_{XY}|^{2}\left[u^{2}\delta_{XY}+t^{2}(1-\delta_{XY})\right]+2\operatorname{Re}(t_{XY}^{\dagger}s_{XY})\,u^{2}\delta_{XY}\right\}\right), \tag{4.9}\]
where \(\theta\) is the scattering angle. Extracting the dimension-6 SMEFT WC-SM interference terms from (4.9), we observe agreement with Ref. [47], providing an independent check on both calculations. In order to implement the Bhabha scattering constraints into flavio2.3.3, we have integrated (4.9) with respect to \(\cos\theta\) (utilising \(t=-s(1-\cos\theta)/2\) and \(u=-s(1+\cos\theta)/2\)), since the combined LEP-2 data in Ref. [50] are given in bins of \(\cos\theta\). The resulting
expression is rather large, so we do not list it here, although we note that it can be found in the ancillary information stored with the arXiv version of this paper.
Footnote 8: We note that the \(\chi^{2}\) is, to a good approximation, a function only of the \(U(1)_{X}\) electron charge divided by the muon charge, \(X_{e}/X_{\mu}\).
In Ref. [50], combined LEP-experiment cross-sections for \(e^{+}e^{-}\to e^{+}e^{-}\) were presented for centre-of-mass energies between 189 GeV and 207 GeV, in bins of \(\cos\theta\) in the interval \([-0.9,0.9]\). The SM predictions of the binned cross-sections were also given, and we shall again constrain the ratio of the measured cross-section to the SM prediction in order to constrain the SMEFT operators. Here, correlations between measurements were given in Ref. [50] and are taken into account. We again take ratios of each measurement with the SM prediction in order to effectively utilise some higher-order corrections that were included in the SM prediction; calculating their correlation coefficients, we see that these ratios have identical correlations to those between the original measurements, since the normalising factor cancels between the numerator and the denominator.
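To illustrate the binning procedure just described, here is a sketch that integrates a differential cross-section over \(\cos\theta\) bins and forms per-bin ratios. The integrand is a stand-in containing only the pure-photon part of (4.9), and the bin edges are illustrative rather than those of Ref. [50].

```python
# Sketch: integrating d(sigma)/d(cos theta) over cos(theta) bins and taking
# ratios to a reference prediction, as done for the Bhabha constraints.
import numpy as np

def dsigma_dcos(cos_theta, s):
    """Stand-in for (4.9): only the pure-photon Bhabha terms are kept here."""
    e2 = 4 * np.pi / 128.0
    t = -s * (1 - cos_theta) / 2
    u = -s * (1 + cos_theta) / 2
    return (2 * e2**2 / (32 * np.pi * s)) * (
        (u**2 + t**2) / t**2 + (u**2 + t**2) / s**2 + 2 * u**2 / (s * t))

def binned_ratios(s, edges, dsig_np, dsig_sm, n=400):
    """Per-bin ratios of a new-physics prediction to the SM prediction."""
    ratios = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = np.linspace(lo, hi, n)
        ratios.append(np.trapz(dsig_np(x, s), x) / np.trapz(dsig_sm(x, s), x))
    return np.array(ratios)

edges = np.linspace(-0.9, 0.9, 10)          # illustrative binning
print(binned_ratios(200.0**2, edges, dsigma_dcos, dsigma_dcos))  # trivially 1
```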
LEP-2 cross-sections and resulting constraints have been presented and calculated specifically for a class of \(Z^{\prime}\) models in Ref. [51]. In the present paper, despite our application of the LEP-2 constraints to \(Z^{\prime}\) models, we instead find it more convenient to first match to the SMEFT and then apply the constraints in terms of said operators. There are two reasons for this: firstly, it fits naturally into the flavio2.3.3 _modus operandi_; and secondly, the cross-sections and implementation within flavio2.3.3 could have applicability to other models, provided that the new physics state is significantly more massive than LEP-2 energies, so that the SMEFT truncation at dimension-6 remains a good approximation.
## 5 Fits
For each of the models in the set identified in §3, we perform a fit of \(\theta_{sb}\) and \(g_{Z^{\prime}}/M_{Z^{\prime}}\) to various data by using smelli2.3.2[17], flavio2.3.3[46] and wilson2.3.2[52] (to a good approximation, broken only by small loop corrections, the WCs - and thus the predictions of the observables which are affected by them - depend only upon the ratio \(g_{Z^{\prime}}/M_{Z^{\prime}}\) rather than on \(g_{Z^{\prime}}\) and \(M_{Z^{\prime}}\) separately). Practically, in the numerics, we set \(M_{Z^{\prime}}=3\) TeV throughout this paper. smelli2.3.2 and flavio2.3.3 have been updated to include the LEP-2 measurements as described in §4, as well as the updated combination of \(B_{d,s}\to\mu^{+}\mu^{-}\) branching ratio measurements from Ref. [5].
We show the \(\chi^{2}\) improvement with respect to the SM in Fig. 4 as a function of the \(U(1)_{X}\) electron charge divided by the muon charge, \(X_{e}/X_{\mu}\). All fit outputs that we present are approximately sensitive only to this ratio of charges, aside from the value of the best-fit gauge coupling \(g_{Z^{\prime}}\) and the mixing angle \(\theta_{sb}\): these are sensitive to the value of \(X_{\mu}\) itself, and shall be presented here for the default value of 10 for this variable8. The 23 measurements of LFU observables prefer \(X_{e}/X_{\mu}=1\) over values of the ratio below 0, where a \(\chi^{2}\) some 2 units higher than in the SM is evident. The LEP-2 constraints pass through the origin, since \(X_{e}=0\)
decouples the \(Z^{\prime}\) from electron pairs, meaning that the tree-level predicted cross-sections are identical to those of the SM. A 'LEP' \(\chi^{2}\) contribution of up to 1 unit higher than that of the SM is possible for the best-fit points in the domain \(X_{e}/X_{\mu}\in[-2,2]\). On the other hand, the 'quarks' set of observables, which contains angular distributions of \(B\to K^{\star}\mu^{+}\mu^{-}\) decays and branching ratios in different bins of di-muon invariant mass squared, enjoys a larger effect, improving the \(\chi^{2}\) upon the SM value by over 9 units in the domain taken. Adding all effects together in the 'global' fit, we see that in fact \(X_{e}/X_{\mu}\) of around 1/2 is preferred. Both \(X_{e}=0\) and \(X_{\mu}=X_{e}\) are within the 95% CL preferred region \(-0.4<X_{e}/X_{\mu}<1.3\) (however, neither is within the 68% CL region).
The \(p-\)values associated with each fit, as well as the best-fit values of parameters, are displayed in the left-hand panel of Fig. 5. We see that the \(p-\)values of the three categories defined are all above the .05 level, indicating that no category has a terrible fit. The global \(p-\)values show a reasonable fit overall, in the .15-.26 range throughout the domain of \(X_{e}/X_{\mu}\) shown. However, we should bear in mind that the \(p-\)values have been 'diluted' by measurements included in some categories that have large errors (for example some of the Belle data in the 'LFU' category). We also see that the 'quarks' category is _not_ fit perfectly; this could be due either to the flavio2.3.3 predictions not having large enough theory errors ascribed to them, to unaccounted-for experimental systematic errors, or to the set of models we have chosen not being the best one to describe the data in the category.

Figure 4: \(\chi^{2}\) improvement with respect to that of the SM as a function of electron charge divided by muon charge, \(X_{e}/X_{\mu}\). \(X_{\tau}=3-X_{e}-X_{\mu}\) at each point, as implied by (3.14). A negative value of \(\chi^{2}-\chi^{2}_{SM}\) indicates an _improvement_ of the fit with respect to the SM, whereas a positive value indicates a worse fit than the SM. ‘LFU’ contains 23 observables such as \(R_{K}\) and \(R_{K^{\star}}\), which test lepton flavour universality. ‘LEP’ contains the 148 \(e^{+}e^{-}\to l^{+}l^{-}\) measurements discussed in §4. ‘quarks’ contains 224 other \(b\to s\mu^{+}\mu^{-}\) measurements defined in flavio2.3.3. Under the hypothesis that the model line is correct, the region where the ‘global’ results are _below_ the marked dashed line is within the 95% fit region.

In the right-hand panel of Fig. 5, we see some trends. The fact that \(g_{Z^{\prime}}\) falls towards the right-hand and left-hand sides of the plot can be understood from the fact that the LEP constraints prefer a smaller value of \(g_{Z^{\prime}}\) when \(|X_{e}|\) is large, since then the \(Z^{\prime}\) coupling to electrons is higher. When \(g_{Z^{\prime}}\) is smaller, \(\theta_{sb}\) is higher. This is expected, since the non-zero tree-level new physics WET WC is constrained by the model to be
\[C_{9}^{(\mu)}=-\frac{X_{\mu}Q_{3}}{2}\frac{g_{Z^{\prime}}^{2}}{M_{Z^{\prime}}^ {2}}\sin 2\theta_{sb}. \tag{5.1}\]
Requiring some particular fixed value of \(C_{9}^{(\mu)}\neq 0\) to fit the quarks category, we see that \(g_{Z^{\prime}}/M_{Z^{\prime}}\) would tend to move in the opposite direction to \(\theta_{sb}\).
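Since (5.1) relates \(C_{9}^{(\mu)}\) to \(g_{Z^{\prime}}/M_{Z^{\prime}}\) and \(\theta_{sb}\), one can invert it for the mixing angle. A sketch follows; we assume that the conversion to a dimensionless WC divides by the same normalisation constant \(N\) as in §2 (an assumption, since (5.1) leaves the normalisation implicit), and the numerical \(N\) below is the illustrative value computed earlier.

```python
# Sketch: the tree-level relation (5.1), assuming division by the WET
# normalisation N to obtain a dimensionless C9; N's value is illustrative.
import math

N = -8.3e-10        # GeV^-2; assumed normalisation constant (see §2 sketch)
X_mu, Q3 = 2, 1     # 3B_3 - L_e - 2L_mu charges

def C9_mu(g_over_M, theta_sb):
    return -(X_mu * Q3 / 2) * g_over_M**2 * math.sin(2 * theta_sb) / N

def theta_for(C9_target, g_over_M):
    """Mixing angle needed to reach a target C9 at fixed g_Z'/M_Z'."""
    return 0.5 * math.asin(-2 * N * C9_target / (X_mu * Q3 * g_over_M**2))

g_over_M = 0.222 / 3000.0             # §5.1 best-fit values
print(C9_mu(g_over_M, -0.0270))       # O(-0.4): the right ballpark
print(theta_for(-1.0, g_over_M))      # angle needed for C9 = -1
```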
Figure 5: (left panel) \(p-\)values, (right panel) best-fit parameters for \(X_{\mu}=10\) and (bottom panel) 2022 LHCb LFU measurements associated with each model, as a function of \(X_{e}/X_{\mu}\), for \(M_{Z^{\prime}}=3\) TeV. \(X_{\tau}=3-X_{e}-X_{\mu}\) at each point, as implied by (3.14). In the left panel, ‘LFU’ contains 23 observables (including the aforementioned 2022 LHCb LFU measurements) such as \(R_{K}\) and \(R_{K^{*}}\), which test lepton flavour universality. ‘LEP’ contains the 148 \(e^{+}e^{-}\to l^{+}l^{-}\) measurements discussed in §4. ‘quarks’ contains 224 other \(b\to s\mu^{+}\mu^{-}\) measurements defined in flavio2.3.3. In the bottom panel, the legend displays the domain of invariant mass squared in GeV\({}^{2}\) in parenthesis and ‘pull’ is defined as \((p-e)/\sigma\), where \(p\) is the theoretical prediction of the best-fit point of the \(U(1)_{X}\) model, \(e\) is the experimental central value and \(\sigma\) is the experimental uncertainty, ignoring correlations with other observables.
The left-hand panel of Fig. 5 confirms that, out of our set, \(X_{e}/X_{\mu}\approx 1/2\) globally provides a close-to-optimal fit to the experimental measurements included. The optimal model at this value of the ratio corresponds to fermionic charges of \(3B_{3}-5L_{e}-10L_{\mu}+12L_{\tau}\). Some may feel that, aesthetically, some of the charges in this assignment are rather large. This suggests that we investigate a different model in our set with the same ratio \(X_{e}/X_{\mu}\) but with smaller \(X_{\mu}\). By adjusting \(g_{Z^{\prime}}\), we may expect the fit then to reach a similar \(\chi^{2}-\chi^{2}_{SM}\) for the LEP observables. We also expect that \(\theta_{sb}\) will then change so as to keep the value of (5.1) invariant. There should be only small corrections to this overall \(\chi^{2}\)-invariant picture from the different gauge coupling affecting the renormalisation group running between \(M_{Z^{\prime}}\) and \(M_{Z}\). One charge assignment with \(X_{e}/X_{\mu}=1/2\) which is anomaly-free is9\(3B_{3}-L_{e}-2L_{\mu}\). We shall now investigate this model in more detail.
Footnote 9: For this particular choice, \(X_{\tau}=0\), leading to no tree-level new physics contribution to \(b\to s\tau^{+}\tau^{-}\) transitions. However, we note that generically \(X_{\tau}\neq 0\) as other choices of \(X_{e}\) and \(X_{\mu}\) show.
### \(3B_{3}-L_{e}-2L_{\mu}\)
Here, we perform a new fit to the \(3B_{3}-L_{e}-2L_{\mu}\) model; the result is displayed in Table 3. We see that the overall fit is of an acceptable quality, with a \(p-\)value of .28. The higher invariant mass-squared bin of \(R_{K^{*}}\) is compatible with its experimental value (the pull is \(-1.1\sigma\)), whereas the others are all well fit. The \(3B_{3}-L_{e}-2L_{\mu}\) model has a \(\chi^{2}\) improvement of 15.3 as compared to the SM, for two additional fitted parameters. The \(p-\)value of the SM being statistically as good a fit to the data10 as the \(3B_{3}-L_{e}-2L_{\mu}\) model is .005.
Footnote 10: The SM is equivalent to the parameter choice \(g_{Z^{\prime}}=0\), \(\theta_{sb}=0\) of the \(3B_{3}-L_{e}-2L_{\mu}\) model under suitable caveats; therefore we use the likelihood ratio test for two degrees of freedom to calculate the \(p-\)value.
We display the parameter space of the model in Fig. 6. One should interpret the left-hand panel as testing the joint compatibility of measurements between different categories of observable in the model. We see, since the regions of acceptable fit (defined here to be \(p-\)value greater than .05) overlap, that there is parameter space where the constraints from all categories are simultaneously compatible. The right-hand panel should be interpreted as parameter constraints upon the model, _assuming that the \(3B_{3}-L_{e}-2L_{\mu}\) model hypothesis is correct_. Here, we see that the constraints from the 'quarks' category of observables are more-or-less compatible with the
\begin{table}
\begin{tabular}{|c|c c||c c|} \hline & \(\chi^{2}-\chi^{2}_{SM}\) & \(p-\)value & measurement & pull \\ \hline LFU & -0.2 & .85 & \(R_{K^{*}}[0.1,\ 1.1]\) & -0.1 \\ LEP & -0.4 & .58 & \(R_{K^{*}}[1.1,\ 6]\) & -1.1 \\ quarks & -14.7 & .10 & \(R_{K}[0.1,\ 1.1]\) & -0.3 \\ global & -15.3 & .28 & \(R_{K}[1.1,\ 6]\) & -0.1 \\ \hline \end{tabular}
\end{table}
Table 3: Quality-of-fit for the \(3B_{3}-L_{e}-2L_{\mu}\)\(Z^{\prime}\) model and the pulls of the 2022 LHCb LFU measurements. The numbers in square parenthesis refer to the end-points of the domain of the relevant bin of di-lepton invariant mass squared, in units of GeV\({}^{2}\). For \(M_{Z^{\prime}}=3\) TeV, the best-fit parameters are \(g_{Z^{\prime}}=0.222\), \(\theta_{sb}=-0.0270\).
LFU constraints. The LEP-2 constraints cut off the global fit contour at the top right-hand side, at larger \(g_{Z^{\prime}}/M_{Z^{\prime}}\). The \(B_{s}-\overline{B_{s}}\) mixing constraint cuts off the global-fit region at the lower left-hand side. In a curved region at the bottom right-hand side of the plot, the flavour-changing effects are in general too large for flavio2.3.3 to return a numerical answer, which explains why some of the constraints (notably from LEP-2) are bounded there. Such regions are in any case strongly ruled out by flavour measurements.
We show the pulls of some observables of interest in Fig. 7. We see that although the \(3B_{3}-L_{e}-2L_{\mu}\) model fits \(B_{s}-\overline{B_{s}}\) mixing (as measured by \(\Delta m_{s}\)), \(BR(B_{s}\to\mu^{+}\mu^{-})\) and \(R_{K^{*}}(1.1,6)\) less well than the SM does, it ameliorates the fit to more of the other observables. Various bins of \(BR(B_{s}\to\phi\mu^{+}\mu^{-})\), whilst fitting _better_ than in the SM, are still far from optimal (the most egregious being a \(3\sigma\) pull between 2.5 and 4 GeV\({}^{2}\) in di-muon invariant mass squared). This goes some way to confirming the assertion in the discussion of Fig. 5 that the fit to the 'quarks' category of observables, although acceptable, is far from perfect.
Figure 6: Parameter space of the \(3B_{3}-L_{e}-2L_{\mu}\) model: (left) compatibility of different sets of observables and (right) constraints. ‘LFU’ contains 23 observables (including the aforementioned 2022 LHCb LFU measurements) such as \(R_{K}\) and \(R_{K^{*}}\), which test lepton flavour universality. ‘LEP-2’ contains the 148 \(e^{+}e^{-}\to l^{+}l^{-}\) measurements discussed in §4. ‘quarks’ contains 224 other \(b\to s\mu^{+}\mu^{-}\) measurements defined in flavio2.3.3. In the left-hand panel, the coloured regions show where the \(p-\)value of each labelled constraint is greater than 0.05. In the right-hand panel, the coloured regions are the 95% confidence limit (CL) _allowed_ regions defined by \(\chi^{2}-\chi^{2}(\text{min})=5.99\)[53], as shown for the labelled category of observable. The black line encloses the 95% CL global region and the black dot gives the locus of the best-fit point. The region above the dashed line is compatible with the \(B_{s}-\overline{B_{s}}\) mixing constraint at the 95% CL (note that this measurement is a member of the ‘quarks’ set of observables).

## 6 Conclusion

We have critically re-examined \(Z^{\prime}\) models that can significantly ameliorate the \(b\to s\mu^{+}\mu^{-}\) anomalies in global fits. The 2022 re-analysis of the \(R_{K}\) and \(R_{K^{*}}\) observables by LHCb implies that, if the \(b\to s\mu^{+}\mu^{-}\) anomalies are due to beyond-the-SM effects, there may well also be beyond-the-SM effects in \(b\to se^{+}e^{-}\). One possible explanation is that of a TeV-scale \(Z^{\prime}\) boson that couples dominantly to third-family quarks, to \(s\bar{b}\) and \(b\bar{s}\) through weak mixing effects, and in addition to di-muon pairs and to di-electron pairs. We identified a one-rational-parameter family of models which, in the first-two-family charged-lepton sector, interpolates between a \(Z^{\prime}\) coupling only to di-muon pairs and a \(Z^{\prime}\) which couples to di-electron pairs and to di-muon pairs with equal strength. Here, the coupling strength is directly proportional to the \(U(1)_{X}\) charge of the leptonic field in question. By coupling a \(Z^{\prime}\) to di-electron pairs, one obtains constraints from LEP-2, which measured the scattering of \(e^{+}e^{-}\) to di-lepton pairs and observed no significant deviations from SM predictions. One hopefully useful side-product of the present paper was to re-calculate such predicted deviations resulting from the relevant dimension-6 SMEFT operators. A previous presentation in the literature [47] has thus received an independent check. Our calculation is presented in a more complete form than in Ref. [47], which guarantees that the resulting predicted LEP-2 cross-sections are positive even in extreme parts of parameter space. The calculations in this more complete form have been programmed into smelli and flavio, and are thus publicly available for use. In a different analysis, other experimental data, such as electroweak precision observables, can be varied (or indeed re-fit) and the LEP-2 constraints will change accordingly in the calculation.
Figure 7: Various pulls of interest for the SM and the best-fit point of the \(3B_{3}-L_{e}-2L_{\mu}\) model. Pull is defined as theory prediction minus the experimental central value divided by uncertainty. Correlation between observables is neglected in the calculation of the pull. |
2305.06314 | Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray
casting and Bayesian networks | Reconstructing semantic 3D building models at the level of detail (LoD) 3 is
a long-standing challenge. Unlike mesh-based models, they require watertight
geometry and object-wise semantics at the fa\c{c}ade level. The principal
challenge of such demanding semantic 3D reconstruction is reliable
fa\c{c}ade-level semantic segmentation of 3D input data. We present a novel
method, called Scan2LoD3, that accurately reconstructs semantic LoD3 building
models by improving fa\c{c}ade-level semantic 3D segmentation. To this end, we
leverage laser physics and 3D building model priors to probabilistically
identify model conflicts. These probabilistic physical conflicts propose
locations of model openings: Their final semantics and shapes are inferred in a
Bayesian network fusing multimodal probabilistic maps of conflicts, 3D point
clouds, and 2D images. To fulfill demanding LoD3 requirements, we use the
estimated shapes to cut openings in 3D building priors and fit semantic 3D
objects from a library of fa\c{c}ade objects. Extensive experiments on the TUM
city campus datasets demonstrate the superior performance of the proposed
Scan2LoD3 over the state-of-the-art methods in fa\c{c}ade-level detection,
semantic segmentation, and LoD3 building model reconstruction. We believe our
method can foster the development of probability-driven semantic 3D
reconstruction at LoD3 since not only the high-definition reconstruction but
also reconstruction confidence becomes pivotal for various applications such as
autonomous driving and urban simulations. | Olaf Wysocki, Yan Xia, Magdalena Wysocki, Eleonora Grilli, Ludwig Hoegner, Daniel Cremers, Uwe Stilla | 2023-05-10T17:01:18Z | http://arxiv.org/abs/2305.06314v1 | # Scan2LoD3: Reconstructing semantic 3D building models at LoD3 using ray casting and Bayesian networks
###### Abstract
Reconstructing semantic 3D building models at the level of detail (LoD) 3 is a long-standing challenge. Unlike mesh-based models, they require watertight geometry and object-wise semantics at the facade level. The principal challenge of such demanding semantic 3D reconstruction is reliable facade-level semantic segmentation of 3D input data. We present a novel method, called Scan2LoD3, that accurately reconstructs semantic LoD3 building models by improving facade-level semantic 3D segmentation. To this end, we leverage laser physics and 3D building model priors to probabilistically identify model conflicts. These probabilistic physical conflicts propose locations of model openings: Their final semantics and shapes are inferred in a Bayesian network fusing multimodal probabilistic maps of conflicts, 3D point clouds, and 2D images. To fulfill demanding LoD3 requirements, we use the estimated shapes to cut openings in 3D building priors and fit semantic 3D objects from a library of facade objects. Extensive experiments on the TUM city campus datasets demonstrate the superior performance of the proposed Scan2LoD3 over the state-of-the-art methods in facade-level detection, semantic segmentation, and LoD3 building model reconstruction. We believe our method can foster the development of probability-driven semantic 3D reconstruction at LoD3 since not only the high-definition reconstruction but also reconstruction confidence becomes pivotal for various applications such as autonomous driving and urban simulations.
## 1 Introduction
Reconstructing detailed semantic 3D building models is a fundamental challenge in both photogrammetry [10] and computer vision [39]. Recent developments have shown that reconstruction using 2D building footprints and aerial observations provides building models up to level of detail (LoD) 2 [10, 20, 34], which are characterized by complex roof shapes but display planar facades. Owing to their watertightness and object-oriented modeling, such models have found many applications [4] and are now ubiquitous, as exemplified by around 140 million open access building models in the United States, Switzerland, and Poland 1.
Footnote 1: [https://github.com/OloOcki/awesome-citygml](https://github.com/OloOcki/awesome-citygml)
However, reconstructing facade-detailed semantic LoD3 building models remains an open challenge. Currently, LoD3-specific facade elements, such as windows and doors, are frequently manually modeled [5, 43]; yet at-scale, automatic LoD3 reconstruction is required by numerous applications ranging from simulating flood damage [2], estimating heating demand [27], calculating facade solar potential [47] to testing automated driving functions [36].
The best data source for semantic LoD3 facade modelling [54] appears to be mobile mapping data, as the last years have witnessed a growth in mobile mapping units yielding accurate, dense, street-level image and point cloud measurements. Yet, such data typically necessitates robust, accurate, and complete semantic segmentation before it can be applied to semantic reconstruction. In the past decade, various learning-based facade-level 3D point cloud segmentation solutions have achieved promising performance [23, 8, 49]. However, they have limited accuracy of up to 40% [23] when working on translucent (e.g., windows) and label-sparse (e.g., doors) objects. Methods based on intersections of laser rays with 3D models are used to improve the accuracy [40, 49]. However, such methods are prone to errors due to the limited semantic information [40] and field-of-view obstacles, such as window blinds [49]. Other approaches employ images for facade segmentation and achieve high performance [33, 22]; yet, their direct application to 3D facade segmentation is limited chiefly owing to the 2D representation [16, 30].

Figure 1: Scan2LoD3: Our method reconstructs detailed semantic 3D building models; its backbone is the laser rays’ physics, providing geometrical cues that enhance semantic segmentation accuracy.
In this paper, we present a novel ray-casting-based multi-modal framework for semantic LoD3 building model reconstruction named Scan2LoD3. In contrast to previous methods, we combine multiple modalities instead of relying on a single modality [40]; and we fuse modalities using their state probabilities, as opposed to mere binary fusion [49]. The key to maintaining geometric detail is to utilize physical intersections of laser rays with vector priors to find probability-quantified model conflicts in a Bayesian network, as highlighted in Figure 1; we list our contributions as follows:
* A probabilistic visibility analysis using mobile laser scanning (MLS) point clouds and semantic 3D building models, enabling detection of detailed conflicts by non-binary probability masks and L2 norm;
* A Bayesian network approach for the late fusion of multimodal probability maps enhancing 3D semantic segmentation at the facade-level;
* An automatic, watertight reconstruction of LoD3 models with facade elements of windows and doors compliant with the CityGML standard [9];
* An open LoD3 reconstruction benchmark comprising LoD3 and facade-textured LoD2 building models, and facade-level semantic 3D MLS point clouds 2.

Footnote 2: [https://sites.google.com/view/olafwysocki/papers/scan2lod3](https://sites.google.com/view/olafwysocki/papers/scan2lod3)
## 2 Related work
The key to reconstructing the LoD3 building model is to achieve an accurate 3D facade segmentation. Here, we provide insights into visibility- and learning-based methods.
**Visibility analysis using ray casting and 3D models.** In the context of 3D building models, ray casting from the sensor's origin yields deterministic information about measured, unmeasured, and unknown model parts [41, 24], but also provides geometric cues, so-called conflicts, for facade element reconstruction [40, 49, 13]. For example, Tuttas et al. [40] exploit the fact that laser scanning rays traverse glass objects to identify building openings: They assume that the intersection points of rays and found building planes indicate the position of windows, which are then reconstructed by minimum bounding boxes. Hoegner & Gleixner [13] pursue this idea using mobile laser scanning and, besides ray intersections, they analyze empty regions in point clouds. Due to the methods' assumption that each visible opening is a window, they do not distinguish between other openings, such as doors or underpasses. To overcome this issue, Wysocki et al. [50] propose the conflict classification method, which infers the semantics of ray intersections with 3D models using 2D vector maps to detect and reconstruct building underpasses. However, conflict-based methods are prone to occlusions and are limited in identifying openings that are concealed by non-translucent objects, such as blinds.
**Machine learning in 3D facade reconstruction.** Early learning-based facade segmentation methods [19, 42, 33, 4, 6] typically rely on the ubiquity of 2D image facade segmentation datasets and represent facade elements as 2D objects (discussed in detail in [25]). Recent works utilize well-established 2D image-based neural networks to identify facade elements in images and then project them onto 3D point clouds or their derivatives, such as 3D models [12, 16, 30, 29]. However, these methods frequently assume full point cloud coverage of buildings and correctly co-referenced multiple image observations from various angles. For example, Huang et al. [16] propose a method employing FC-DenseNet56 [17], trained with ortho-rectified facade images, to recognize facade openings. The labels are projected onto the LoD2 building model, which is reconstructed from a drone-based photogrammetric point cloud. The projected window and door labels are approximated by bounding boxes, which cut openings in LoD2 solids, thereby upgrading the 3D models to LoD3.
An alternative strategy concentrates on direct 3D facade modeling from laser scanning point clouds since MLS point clouds provide detailed and accurate depth information [53]. Recently, it has been demonstrated that great advances of point-wise, learning-based methods [55, 31] are applicable in the context of 3D facade segmentation [23, 8], where an early fusion of geometric features into DGCNN [45] enhances facade segmentation accuracy. Nevertheless, sparsely represented classes, such as windows and doors, remain challenging [23]. This issue is further exacerbated by the lack of comprehensive 3D facade-level training and validation data: to the best of our knowledge, no 3D facade-level reconstruction benchmark includes textures, point clouds, and ground-truth LoD3 models [51].
One recent work [49] pursues the idea of combining geometric features and visibility analysis. The authors merge model conflicts and inferred semantics from a modified Point Transformer architecture [55]. The output is added to a 3D building model face using a projection, and the respective window and door openings are 3D-modeled by fitting 3D bounding boxes of pre-defined models. The method, however, is limited in reconstructing windows with partially closed blinds owing to simplifying probabilities to binary masks comprising only high-probability conflicts and semantics. Additionally, the visibility analysis concerns uncertainties using the L1 distance, which generalizes L2 distance measurements, rendering it less sensitive to detailed conflicts.
## 3 Methodology
Our Scan2LoD3 method comprises two interconnected steps: semantic 3D segmentation that yields input for semantic 3D reconstruction. As shown in Figure 2, we first generate a ray-based conflicts probability map consisting of three states (_conflicted_, _confirmed_, and _unknown_), analyzing the visibility of the laser scanner in conjunction with 3D building models (Sec. 3.1). However, this map is limited to the laser field-of-view and does not provide facade-specific semantics. To address this limitation, we additionally introduce two probability maps derived from point clouds and images: The former is generated by a modified Point Transformer network [49, 55] (top branch), while the latter is produced using Mask-RCNN [11] (bottom branch), as described in Sections 3.2 and 3.3, respectively. We then fuse these three probability maps via a Bayesian network, resulting in a target probability map that represents the occurrence of openings and their associated probability score (Sec. 3.4). The opening labels yield detailed 3D opening geometries for reconstruction, which is conducted with the input 3D building model and a pre-defined 3D library of openings (Sec. 3.5). Finally, we assign the respective semantics to the reconstructed parts along with the final probability score, resulting in the CityGML-compliant LoD3 building model [9].
### Visibility analysis concerning uncertainties
We perform ray tracing on a 3D voxel grid to determine areas that are measured by a laser scanner and analyze them with a 3D building model (Fig. 3). The total grid size adapts to the input data owing to the utilized octree structure with leaves represented by 3D voxels of size \(v_{s}\) dependent on the relative accuracy of the scanner.
As shown in Figure 3(a), the laser rays are traced from the sensor position \(s_{i}\), using the orientation vector \(r_{i}\), to the hit point \(p_{i}=s_{i}+r_{i}\). Our approach leverages the MLS trait of multiple laser observations \(z_{i}\) to decide upon the laser occupancy states (i.e., _empty_, _occupied_, and _unknown_) and includes the respective occupancy probability score. The states' update mechanism uses the prior probability \(P(n)\), the current estimate \(L(n|z_{i})\), and the preceding estimate \(L(n|z_{1:i-1})\) to calculate and assign the final state. The mechanism is controlled by log-odd values \(L(n)\) along with the clamping thresholds \(l_{min}\) and \(l_{max}\) [14, 49, 50]:
\[L(n|z_{1:i})=max(min(L(n|z_{1:i-1})+L(n|z_{i}),l_{max}),l_{min}) \tag{1}\]
where
\[L(n)=log[\frac{P(n)}{1-P(n)}] \tag{2}\]
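To make the update mechanism concrete, the following minimal sketch implements the clamped log-odds update of Eqs. (1)-(2); the per-observation log-odds and the clamping thresholds are assumed placeholder values, not the ones used in the paper (Sec. 4.2 only fixes the uniform prior \(P=0.5\)):

```python
import math

L_MIN, L_MAX = -2.0, 3.5    # clamping thresholds l_min, l_max (assumed values)
L_OCC, L_FREE = 0.85, -0.4  # per-observation log-odds for hit / traversed (assumed)

def logit(p: float) -> float:
    """L(n) = log(P(n) / (1 - P(n))), Eq. (2)."""
    return math.log(p / (1.0 - p))

def update_voxel(l_prev: float, hit: bool) -> float:
    """Clamped log-odds update of Eq. (1) for one laser observation z_i."""
    l_obs = L_OCC if hit else L_FREE
    return max(min(l_prev + l_obs, L_MAX), L_MIN)

# usage: start from the uniform prior P = 0.5 (log-odds 0) and integrate observations
l = logit(0.5)
for hit in [True, True, False, True]:
    l = update_voxel(l, hit)
occupancy_probability = 1.0 / (1.0 + math.exp(-l))  # back to P(n | z_1:i)
```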
As illustrated in Figure 3(a), in the visibility analysis of the laser observations, voxels encompassing \(p_{i}\) are deemed _occupied_ (light-blue), those traversed by a ray _empty_ (pink), and unmeasured ones _unknown_ (gray). Then, as shown in Figure 3(b), we assign further voxel states by analyzing the occupancy voxels and the building model: voxels are _confirmed_ (green) when _occupied_ voxels intersect with the building surface and _conflicted_ (red) when a ray traverses a building surface and reflects inside the building. The final probability estimate, however, also concerns 3D model uncertainties.

Figure 2: The workflow of the proposed Scan2LoD3 consists of three parallel branches: the first generates the point cloud probability map based on a modified Point Transformer network (top); the second produces a conflicts probability map from the visibility of the laser scanner in conjunction with a 3D building model (middle); and the third uses Mask-RCNN to obtain a texture probability map from 2D images (bottom). We then fuse the three probability maps with a Bayesian network to obtain the final facade-level segmentation, enabling a CityGML-compliant LoD3 building model reconstruction.
Specifically, we address the uncertainties of global positioning accuracy of building model surfaces and of point clouds along the ray. Let us assume that the probability distribution of global positioning accuracy of a building surface \(P(A)\) is described by the Gaussian distribution \(\mathcal{N}(\mu_{1},\sigma_{1})\), where \(\mu_{1}\) and \(\sigma_{1}\) are the mean and standard deviation of the Gaussian distribution. Analogically, let us assume that the probability distribution of global positioning accuracy of a point in point cloud \(P(B)\) is described by the Gaussian distribution \(\mathcal{N}(\mu_{2},\sigma_{2})\). To estimate the probability of the confirmed \(P_{confirmed}\) and conflicted \(P_{conflicted}\) states of the voxel \(V_{n}\), we use the joint probability distribution of two independent events \(P(A)\) and \(P(B)\):
\[V_{n}=\left\{\begin{aligned} P_{confirmed}(A,B)=P(A)*P(B)\\ P_{conflicted}(A,B)=1-P_{confirmed}(A,B)\end{aligned}\right\} \tag{3}\]
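One plausible numerical reading of Eq. (3), assuming \(P(A)\) and \(P(B)\) are taken as the Gaussian probability mass within half a voxel of the modeled surface and the measured point, respectively (the paper does not spell out this step, and the standard deviations below are assumed to be in metres):

```python
from math import erf, sqrt

def gauss_mass(sigma: float, half_width: float, mu: float = 0.0) -> float:
    """Probability mass of N(mu, sigma) within +/- half_width of zero."""
    a = (half_width - mu) / (sigma * sqrt(2.0))
    b = (-half_width - mu) / (sigma * sqrt(2.0))
    return 0.5 * (erf(a) - erf(b))

V_S = 0.1              # voxel size v_s in metres (Sec. 4.2)
SIGMA_MODEL = 0.03     # sigma_1 of the building model, assumed 3 cm
SIGMA_CLOUD = 0.0285   # sigma_2 of the point cloud, assumed 2.85 cm

p_a = gauss_mass(SIGMA_MODEL, V_S / 2)  # P(A): surface within the voxel
p_b = gauss_mass(SIGMA_CLOUD, V_S / 2)  # P(B): point within the voxel
p_confirmed = p_a * p_b                 # Eq. (3), independence assumption
p_conflicted = 1.0 - p_confirmed
```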
We obtain a _conflicts probability map_ (Fig. 4) by projecting the vector-intersecting voxels to the vector plane, where the cell spacing is consistent with the voxel grid; each pixel receives probability values of the states _conflicted_, _confirmed_, and _unknown_, accordingly.
### 3D semantic segmentation on point clouds
We semantically segment 3D point clouds using the enhanced Point Transformer (PT) network [49, 55]. The enhancement involves fusing geometric features at the early training stage to increase 3D facade segmentation performance [23, 49]. In this work, we consider seven geometric features: _height of the points, roughness, volume density, verticality, omnivariance, planarity_, and _surface variation_ [46, 8, 49], which are calculated within a Euclidean neighborhood of search radius \(d_{i}\). We define eight pertinent classes for the facade segmentation task: _arch_, _column_, _molding_, _floor_, _door_, _window_, _wall_, and _other_ [49].
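The eigenvalue-based features among these follow the common covariance definitions in the cited literature; a minimal sketch under that assumption, for one point's neighborhood:

```python
import numpy as np

def geometric_features(neighbors: np.ndarray) -> dict:
    """Covariance-eigenvalue features for one point's Euclidean neighborhood
    (an N x 3 array of the points within radius d_i)."""
    cov = np.cov(neighbors.T)                 # 3 x 3 covariance of the neighborhood
    eigval, eigvec = np.linalg.eigh(cov)      # ascending eigenvalues
    l3, l2, l1 = np.maximum(eigval, 1e-12)    # so l1 >= l2 >= l3 > 0
    normal = eigvec[:, 0]                     # eigenvector of the smallest eigenvalue
    return {
        "planarity": (l2 - l3) / l1,
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
        "surface_variation": l3 / (l1 + l2 + l3),
        "verticality": 1.0 - abs(normal[2]),  # close to 1 for vertical surfaces
    }

# usage on a synthetic neighborhood
pts = np.random.rand(50, 3)
print(geometric_features(pts))
```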
The final softmax layer of the modified PT network provides a per-point vector of probabilities of each class as an output (Fig. 5). Notably, in contrast to [49], we do not discard points based on a probability threshold but consider each point and its class probability score for further processing. Finally, we create the _point cloud probability map_ (Fig. 7) by projecting the points onto the face of a building while preserving the probabilities and following the cell spacing of the _conflict probability map_ (Sec. 3.1).

Figure 3: Visibility analysis using laser scanning observations and 3D models on a voxel grid. The ray is traced from the sensor position \(s_{i}\) to the hit point \(p_{i}\). The voxel is: _empty_ if the ray traverses it; _occupied_ when it contains \(p_{i}\); _unknown_ if unmeasured; _confirmed_ when an _occupied_ voxel intersects with the vector plane; and _conflicted_ when the plane intersects with an _empty_ voxel [49].

Figure 4: Exemplary _conflict probability map_: high probability pixels present high conflict probability, whereas low probability pixels show high confirmation probability.

Figure 5: Exemplary results of the modified network: point cloud colors according to the probability vector of the class _window_.
### 2D semantic segmentation on images
As demonstrated by Hensel et al. [12], Faster R-CNN [32] effectively identifies approximate positions of facade openings. In our approach, we utilize Mask-RCNN [11], which builds upon the concept of Faster R-CNN and identifies probability masks within proposed bounding boxes. This trait allows us to later obtain more accurate instances that are not necessarily restricted to a rectangular shape.

For the proposed facade opening detection, we focus on two classes: windows and doors. Analogously to the 3D semantic segmentation stage (Sec. 3.2), we preserve the pixel-predicted probabilities. To generate the texture probability map (Fig. 6), we project the pixels and their probabilities onto the building face, aligning with the cell spacing of the other probability maps (Secs. 3.1 and 3.2).
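A minimal inference sketch of this step, assuming a recent torchvision Mask R-CNN (the actual model is fine-tuned on the CMP facade database, Sec. 4.2) and an assumed detection-score threshold; the soft per-pixel mask probabilities are kept rather than binarized:

```python
import torch
import torchvision

# off-the-shelf Mask R-CNN; the paper fine-tunes it on the CMP facade database
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 600, 800)  # stand-in for a facade texture image in [0, 1]
with torch.no_grad():
    out = model([image])[0]      # dict with "boxes", "labels", "scores", "masks"

# keep soft per-pixel probabilities instead of binarizing (Sec. 3.3)
texture_prob = torch.zeros(600, 800)
for mask, score in zip(out["masks"], out["scores"]):
    if score > 0.5:              # assumed detection-score threshold
        texture_prob = torch.maximum(texture_prob, mask[0] * score)
```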
### Final segmentation with Bayesian network
To calculate the final shape, semantics, and probability score of opening instances, the multimodal probability maps are fused using a Bayesian network. The network quantifies uncertainties and assigns weights based on evidence when calculating the target probability map. Figure 7 shows the network architecture, including three input nodes, one for each probability map, to infer the probability of opening occurrence. The \(X\) and \(Y\) nodes exhibit a causal relationship, forming directed acyclic links. We utilize a conditional probability table (CPT) to assign weights to combinations of each node and state. The target node estimates two mutually exclusive states: opening and non-opening. The probability of node \(Y\) (opening space) being in the state \(y\) (opening) is calculated using the marginalization process, which combines the conditional probabilities of the parent nodes' \(X\) states \(x\) (i.e., of the point cloud probability, conflicts probability, and texture probability maps) [50, 38].
The probability maps serve as pieces of evidence updating the joint probability distribution \(P(X,Y)\) of the compiled network. The inference mechanism performs the update and estimates the posterior probability distribution (PPD), which provides the states' probability [50, 38]. In general, the network favors situations where there is a high probability of an opening occurring if at least two pieces of high-probability evidence co-occur; otherwise, it yields a low opening probability. For example, a very high _conflict_ probability overlying high texture _opening_ probability and medium point cloud _opening_ probability should yield a high _opening_ probability.
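A toy marginalization illustrating this behavior; the CPT weights and evidence values below are assumptions for illustration, not the paper's actual table:

```python
import itertools

# per-pixel evidence: P(state = high) for each parent node (assumed values)
evidence = {"conflict": 0.9, "texture": 0.8, "pointcloud": 0.55}

# illustrative CPT P(opening | conflict, texture, pointcloud), parents in {0, 1};
# high opening probability only when at least two pieces of evidence are high
cpt = {combo: 0.95 if sum(combo) >= 2 else 0.05
       for combo in itertools.product([0, 1], repeat=3)}

def p_opening(ev: dict) -> float:
    """P(Y = opening) = sum over parent states x of P(Y | x) * P(x)."""
    total = 0.0
    for combo in itertools.product([0, 1], repeat=3):
        p_combo = 1.0
        for state, p_high in zip(combo, ev.values()):
            p_combo *= p_high if state == 1 else 1.0 - p_high
        total += cpt[combo] * p_combo
    return total

print(p_opening(evidence))  # high: two strong pieces of evidence co-occur
```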
As an output from the Bayesian network, we extract clusters of high-probability pixels (above \(P_{high}\)) that are connected in any of the eight pixel directions. To distinguish between doors and windows, we compare the overlying per-pixel class probabilities and select the more probable class. The pixel-wise probability scores are then averaged per instance and kept for the final 3D model. Since the extraction can include noisy clusters, we post-process them to obtain final, noise-free opening shapes. To this end, we apply a morphological opening to reduce the effect of small distortions and weakly connected shapes. We also calculate a modified rectangularity index [3, 49], on whose basis we reject erroneously elongated shapes using the upper \(PE_{up}\) and lower \(PE_{lo}\) percentiles of the index score.
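A minimal sketch of this post-processing, assuming an axis-aligned bounding box as the rectangle in the rectangularity index (a simplification of the modified index of [3, 49]):

```python
import numpy as np
from scipy import ndimage

def postprocess(prob_map: np.ndarray, p_high: float = 0.7,
                pe_lo: float = 5.0, pe_up: float = 95.0) -> np.ndarray:
    """Extract 8-connected high-probability clusters, denoise them with a
    morphological opening, and reject shapes with outlying rectangularity."""
    mask = ndimage.binary_opening(prob_map > p_high)
    labels, _ = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connectivity
    slices = ndimage.find_objects(labels)

    # rectangularity: instance area / area of its bounding box
    rect = [(labels[sl] == i + 1).sum() / labels[sl].size
            for i, sl in enumerate(slices)]
    lo, up = np.percentile(rect, [pe_lo, pe_up])

    keep = np.zeros_like(mask)
    for i, sl in enumerate(slices):
        if lo <= rect[i] <= up:
            keep[sl] |= labels[sl] == i + 1
    return keep
```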
### Semantic 3D reconstruction
Since it is crucial to preserve the 3D model's watertightness and its given semantics, we use the prior building solid as the basis for the modeling. Specifically, the openings are cut automatically in the prior model using the constructive solid geometry (CSG) difference operation: the bounding boxes of the found windows and doors cut the openings in the outer boundaries of the given solid. Then, we use these 3D cuts as matching and fitting geometries for automatically queried 3D models from a pre-defined library of LoD3 facade objects. To ensure watertightness and prevent self-intersections, each object is aligned with the respective face and scaled to the 3D cut shape.

Figure 6: Exemplary _texture probability map_: high probability pixels stand for a high probability of opening.

Figure 7: The Bayesian network architecture comprising three input nodes (blue), one target node (yellow), and a conditional probability table (CPT) with the assigned combinations' weights.
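A minimal sketch of the CSG cutting step, assuming trimesh with a boolean backend (e.g., manifold3d) is available; the wall and window extents are placeholder values:

```python
import trimesh

# stand-in LoD2 wall solid: 10 m x 0.4 m x 6 m
wall = trimesh.creation.box(extents=[10.0, 0.4, 6.0])

# bounding box of a detected window opening, used as the CSG cutting volume
window_cut = trimesh.creation.box(
    extents=[1.2, 1.0, 1.5],
    transform=trimesh.transformations.translation_matrix([2.0, 0.0, 1.0]),
)

# constructive solid geometry difference: cut the opening into the prior solid
wall_with_opening = wall.difference(window_cut)
assert wall_with_opening.is_watertight
```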
We leverage CityGML's traits to create a hierarchical semantic model structure [9]. Specifically, the prior solid and its constituting faces preserve their unique identifiers and associated semantic classes. New unique identifiers are assigned to the openings, which point to the respective solid's faces; each window and door obtains the standard _Window_ and _Door_ class, respectively. As it is pivotal to preserve the detection confidence, we also add an attribute named _confidence_, keeping the final detection confidence of the opening shape. Ultimately, the model's LoD attribute value is upgraded to LoD3.
## 4 Experiments
In this section, we describe experiments concerning the proposed Scan2LoD3 method, which necessitated acquiring existing and creating new datasets. Within the scope of this work, we publish in the repository 2: textured LoD2 and modeled LoD3 building models, enriched TUM-FACADE point clouds, implementation, and settings.
### Datasets
To showcase the performance of Scan2LoD3, we evaluated the method on the public datasets: TUM-MLS-2016 [56], TUM-FACADE [51, 52] and textured CityGML building models at LoD2 [44] representing the Technical University of Munich main campus in Munich, Germany. Additionally, we used a proprietary MLS point cloud of the TUM area called MF. To validate the reconstruction, segmentation, and detection performance, we manually modeled a CityGML-compliant LoD3 building model [9] based on the combination of point clouds and LoD2 building model, serving as ground-truth; the LoD2 building models were additionally textured.
**The TUM-MLS-2016 dataset.** The point clouds in TUM-MLS-2016 were collected via two obliquely mounted Velodyne HDL-64E LiDAR sensors on the Mobile Distributed Situation Awareness (MODISSA) platform. The entire point cloud covered an urban area with the inner and outer yards of the campus. The inertial navigation system was supported by the real-time kinematic (RTK) correction data of the German satellite positioning service (SAPOS), which ensured geo-referencing.
**The TUM-FACADE dataset.** The TUM-FACADE dataset is derived from the TUM-MLS-2016 point clouds, enriching them with 17 facade-level semantic classes. The dataset comprises 17 annotated and 12 non-annotated facades, totalling 256 million facade-level labeled and geo-referenced points. Within the scope of this work, we additionally annotated four of the open-access non-annotated facades. As discussed in Section 3.2, we define seven facade classes as pertinent for the reconstruction. Therefore, we combined the 17 TUM-FACADE classes into seven by merging: _molding_ with _decoration_; _drainpipe_ with _wall_, _outer ceiling surface_, and _stairs_; _floor_ with _terrain_ and _ground surface_; _other_ with _interior_ and _roof_; _blinds_ with _window_; whereas _door_ remained intact.
**The MF dataset.** The MF point clouds were acquired at the TUM campus and covered approximately the same area as the TUM-MLS-2016 dataset. The point cloud was geo-referenced by a proprietary mobile mapping platform, supported by the German SAPOS RTK system [37].
**Textured LoD2 and LoD3 semantic building models.** We acquired open data CityGML-compliant building priors at LoD2 from the state open access portal of Bavaria, Germany [44], which were created using 2D cadastre footprints in combination with aerial observations [34]; comparable results can be achieved with methods such as PolyFit [26]. The textures were acquired manually at an approximately 45\({}^{\circ}\) horizontal angle using a 13MP rear camera of a Xiaomi Redmi Note 5A smartphone and projected to the respective faces: this approach simulated terrestrial acquisition of a mobile mapping unit or street view imagery where no ortho-rectifications were applied [15]. The LoD3 building model was created manually based on a combination of TUM-FACADE and textured LoD2 models. We modeled the so-called _building 23_ as it has been commonly used as a validation object for various methods [13, 40, 51, 52]. The pre-defined library of openings was downloaded from the open dataset of LoD3 building models of Ingolstadt, Germany [35].
### Implementation details
**Visibility analysis.** We set the size of voxels to \(v_{s}=0.1\)\(m\) and initialized them with a uniform prior probability of \(P=0.5\) to perform the ray casting on an efficient octree structure [14]; we used the standard [41, 49, 14] clamping and log-odd values. The uncertainty of building models and point clouds was assigned considering their reported global positioning accuracy. As such, the parameters of building models were set to \(\mu_{1}=0\) and \(\sigma_{1}=3\), while for the TUM-MLS-2016 and MF point clouds were set to \(\mu_{2}=0\), \(\sigma_{2}=2.85\) and to \(\mu_{2}=0\), \(\sigma_{2}=1.4\), respectively.
**Semantic segmentation.** For the modified Point Transformer data pre-processing, we followed [49] and removed redundant points within a \(5\)\(cm\) radius, which resulted in 10 million points; the point cloud was split into 70% training and 30% validation subsets. We chose the optimal geometric features search radius \(d_{i}\) following [7, 49]: As for the features _roughness, volume density, omnivariance, planarity_, and _surface variation_ the radius was set to \(d_{i}=0.8\)\(m\); whereas for _verticality_ to \(d_{i}=0.4\)\(m\). For the image segmentation, we deployed a pre-trained Mask-RCNN on the COCO dataset [21]. The inference was fine-tuned with 378 base images of the CMP facade database [42], where
we selected two classes for training: _door_ and _window_ (including _blinds_). In the Bayesian network, we deemed pixels with values higher than \(P_{high}=0.7\) as high-probability pixels. To reject outliers, we fixed the modified rectangularity percentiles to \(PE_{up}=95\) and \(PE_{lo}=5\).
### Results and Discussion
**Detection rate.** The methods of Hoegner & Gleixner [13] and Wysocki et al. [49] were both tested on the three facades of the _building 23_ at the TUM campus using the TUM-MLS-2016 data; thus, we validated the detection accuracy using the same setup and our manually modeled LoD3 building (Tab. 1). To show the ratio of the detection rate to the laser-covered rate, we introduced metrics for all existing facade openings (AO) and only laser-measured facade openings (MO).
Our multimodal fusion enabled a higher detection rate while maintaining a low false alarm rate. Compared to the Hoegner & Gleixner (H&G) [13] and CC [49] methods, Scan2LoD3 achieved a detection rate higher by 10% and 6%, respectively, on the TUM dataset (Tab. 1 and Fig. 8). We ran Scan2LoD3 both on the TUM-MLS-2016 conflict map (TUM) and on the higher-accuracy conflict map of MF (MF); the MF map provided more accurate results (i.e., a 91% total detection rate vs. 84% on TUM).
Our experiments corroborate that, in contrast to the tested methods, our proposed solution identifies even closed openings and their full shapes, and reaches higher accuracy (Tab. 2 and Fig. 8). This enabled the whole-shape reconstruction of, for example, windows covered by blinds, which resulted in up to 20% higher IoU on the TUM-MLS-2016 dataset (red boxes, Fig. 8).
Similarly to the detection results, the accuracy of laser measurements significantly influenced the IoU results: Our method tended to overestimate opening shapes on the TUM point cloud, whereas on MF the shapes were approximately 14% more accurate. On the other hand, Scan2LoD3 was sensitive to poor segmentation results (facade C, Tab. 2).
**3D reconstruction.** We measured the accuracy of reconstruction by comparing our method using the TUM-FACADE data to the well-established and mesh-oriented Poisson reconstruction [18] and to the second-best-IoU performing CC method (Figs. 8 and 9 and Tab. 3). To highlight the influence of point cloud accuracy, we also added the results for MF point clouds.
As shown in Table 3 and in Figure 9, the 3D building priors provided more accurate reconstruction results than the standard Poisson reconstruction (i.e., RMS lower by 52%); the former also achieved watertightness. Among the prior-driven methods, the improvement related to the higher detection rate and IoU was noticeable: Scan2LoD3 had mean and RMS scores lower by up to 26% and 24%, respectively, compared to CC (Tab. 3).
| Metric (A/B/C/Tot) | H&G [40] | CC [49] | Scan2LoD3 (TUM) | Scan2LoD3 (MF) |
|---|---|---|---|---|
| AO | 66/17/20/**103** | 66/17/20/**103** | 66/17/20/**103** | 66/17/20/**103** |
| MO | 60/17/10/**87** | 60/17/12/**87** | 60/17/12/**89** | 66/12/18/**96** |
| D | 60/15/4/**75** | 60/15/6/**81** | 60/16/11/**87** | 65/16/16/**97** |
| TP | 60/12/4/**76** | 60/15/5/**80** | 60/16/11/**87** | 65/14/15/**94** |
| FP | 0/3/0/**3** | 0/0/1/**1** | 0/0/0/**0** | 0/2/1/**3** |
| FN | 6/5/16/**27** | 6/2/15/**23** | 6/1/9/**16** | 1/3/5/**9** |
| DA | 91/71/20/**74** | 91/88/25/**78** | 91/94/55/**84** | 98/82/75/**91** |
| FA | 0/0/0/**4** | 0/0/17/**1** | 0/0/0/**0** | 0/12/6/**3** |
| DM | 100/71/40/**87** | 100/88/42/**90** | 100/94/92/**98** | 98/117/83/**98** |

Table 1: Detection rate for all openings (DA) and laser-measured openings (DM) and the respective false alarm rate (FA) for facades A, B, and C; each cell lists A/B/C/Tot (AO = all openings, MO = laser-measured openings, D = detections, TP = true positives, FP = false positives, FN = false negatives).
| Method | \(\mu\) ↓ | RMS ↓ | WT |
|---|---|---|---|
| Poisson (TUM) [18] | 0.35 | 0.54 | ✗ |
| CC [49] | 0.31 | 0.34 | ✓ |
| Scan2LoD3 (TUM) | 0.23 | 0.26 | ✓ |
| Scan2LoD3 (MF) | **0.13** | **0.25** | ✓ |

Table 3: Comparison of mesh-based Poisson, building-prior-driven CC, and our proposed method: mean (\(\mu\)) and RMS deviations vs. the ground-truth LoD3 model (lower is better) and watertightness (WT).
| Facade | A | B | C | Total |
|---|---|---|---|---|
| Openings | 66 | 17 | 20 | 103 |
| PT+Ft. [55] | 7.3 | 4.6 | 3.7 | 7.3 |
| M-RCNN [11] | 63.7 | 47.4 | 38.6 | 58.4 |
| CC [49] | 66.5 | 56.4 | **53.2** | 60.6 |
| Scan2LoD3 (TUM) | 63.9 | 52.9 | 38.0 | 62.1 |
| Scan2LoD3 (MF) | **78.4** | **62.3** | 40.6 | **76.2** |

Table 2: Comparison of opening segmentation (median IoU, higher is better) using only 3D point clouds (PT+Ft.), images (M-RCNN), binary masks (CC), and our method with TUM and MF conflict maps.
It is worth noting that the eaves were incorrectly reconstructed by all of the presented methods.
## 5 Conclusions
In this paper, we introduce Scan2LoD3, a multimodal probabilistic fusion method for high-detail semantic 3D building reconstruction. Our work has led us to the conclusion that multimodal probabilistic fusion can maximize the advantages of ray-casting- and learning-based methods for LoD3 reconstruction. The findings of this study indicate that, when fusing images, point clouds, and model conflicts, a Bayesian network achieves a very high detection rate (i.e., 91%) and robustness, as the false alarm rate is negligible (i.e., 3%). Crucially, our method segments and reconstructs complete opening shapes, even when closed by blinds, which can provide up to around 76% shape accuracy. By such detection and segmentation, we reduce the final reconstruction deviations by 54% and 24% when compared to mesh-based and other prior-driven methods, respectively. Such characteristics are of great importance for applications necessitating object-oriented semantics, high robustness, and completeness, such as automated driving testing [36] or facade solar potential analysis [47], among others [28, 48]. Furthermore, keeping a reconstruction confidence score can be pivotal for confidence-based navigation algorithms, such as in autonomous cars [1, 48, 57]. It is worth noting that our method focuses on upgrading facades to LoD3; refining roofs to LoD3 would require additional, airborne data.
As the late fusion results so far have been very encouraging and do not require any training data, we deem Bayesian networks suitable for the task. Future work will concentrate on comparing the Bayesian network's generalization capabilities to deep neural networks, which, however, require extensive training data. Moreover, we expect the method's performance to be comparable on similar architectural styles, considering the selected classes and the small sample size. To tackle these issues, we plan to extend our open library of textured LoD2 and LoD3 models to foster the methods' development.
**Acknowledgments** This work was supported by the Bavarian State Ministry for Economic Affairs, Regional Development and Energy within the framework of the IuK Bayern project _MoFa3D - Mobile Erfassung von Fassaden mittels 3D Punktwolken_, Grant No. IUK643/001. Moreover, the work was conducted within the framework of the Leonhard Obermeyer Center at the Technical University of Munich (TUM).
Figure 8: Comparison of different reconstruction results for the facade A: Our method reconstructs complete window shapes despite the presence of window blinds (red boxes).
Figure 9: Comparison of the Poisson to our reconstruction approach: Deviations are projected onto the ground-truth LoD3 model. |
2307.08540 | Utilization of Pre-trained Language Model for Adapter-based Knowledge
Transfer in Software Engineering | Software Engineering (SE) Pre-trained Language Models (PLMs), such as
CodeBERT, are pre-trained on large code corpora, and their learned knowledge
has shown success in transferring into downstream tasks (e.g., code clone
detection) through the fine-tuning of PLMs. In Natural Language Processing
(NLP), an alternative in transferring the knowledge of PLMs is explored through
the use of adapter, a compact and parameter efficient module that is inserted
into a PLM. Although the use of adapters has shown promising results in many
NLP-based downstream tasks, their application and exploration in SE-based
downstream tasks are limited.
Here, we study the knowledge transfer using adapters on multiple down-stream
tasks including cloze test, code clone detection, and code summarization. These
adapters are trained on code corpora and are inserted into a PLM that is
pre-trained on English corpora or code corpora. We called these PLMs as NL-PLM
and C-PLM, respectively. We observed an improvement in results using NL-PLM
over a PLM that does not have adapters, and this suggested that adapters can
transfer and utilize useful knowledge from NL-PLM to SE tasks. The results are
sometimes on par with or exceed the results of C-PLM; while being more
efficient in terms of the number of parameters and training time.
Interestingly, adapters inserted into a C-PLM generally yield better results
than a traditional fine-tuned C-PLM. Our results open new directions to build
more compact models for SE tasks. | Iman Saberi, Fatemeh Fard, Fuxiang Chen | 2023-07-17T14:58:52Z | http://arxiv.org/abs/2307.08540v2 | # Utilization of Pre-trained Language Model for Adapter-based Knowledge Transfer in Software Engineering
###### Abstract
Software Engineering (SE) Pre-trained Language Models (PLMs), such as CodeBERT, are pre-trained on large code corpora, and their learned knowledge has shown success in transferring into downstream tasks (e.g., code clone detection) through fine-tuning the PLMs. In Natural Language Processing (NLP), an alternative in transferring the knowledge of PLMs is explored through the use of _adapter_, a compact and **parameter efficient** module that is inserted into a PLM. Although the use of adapters has shown promising results in many NLP-based downstream tasks, their application and exploration in SE-based downstream tasks are limited.
Here, we study the knowledge transfer using adapters on multiple downstream tasks including cloze test, code clone detection, and code summarization. These adapters are trained on code corpora and are inserted into a PLM that is pre-trained on English corpora or code corpora. We called these PLMs as NL-PLM and C-PLM, respectively. We observed an improvement in results using NL-PLM over a PLM that does not have adapters, and this suggested that adapters can transfer and utilize useful knowledge from NL-PLM to SE tasks. The results are sometimes on par with or exceed the results
of C-PLM; while being more efficient in terms of the number of parameters and training time. Interestingly, adapters inserted into a C-PLM generally yield better results than a traditional fine-tuned C-PLM. Our results open new directions to build more compact models for SE tasks.
Keywords: Transfer learning · Adapter-based Training · Programming Language Models · Parameter-Efficient Fine-tuning · Code Clone Detection · Code Summarization
## 1 Introduction
Pre-Trained Language Models (PLMs) such as BERT [15] and RoBERTa [34] are pre-trained on natural language text (i.e., sentences from Wikipedia). These PLMs provide rich linguistic representations and, when fine-tuned on multiple Natural Language Processing (NLP) downstream tasks such as text classification and language understanding [34], yield promising results.
In Software Engineering (SE), similar approaches were adopted - pre-training a language model on code (_instead of natural text_). In this work, we refer to the PLMs pre-trained on code and natural language text as **C-PLMs** and **NL-PLMs**, respectively. For example, CodeBERT [18] and CuBERT [27] are two C-PLMs that were developed to obtain code representations. CodeBERT is a multilingual PLM pre-trained on code and the code comments, while CuBERT trains a BERT model using code. These C-PLMs are then fine-tuned on SE downstream tasks such as code clone detection and code search [18; 27].
Fine-tuning PLMs is the most common approach in transferring knowledge from existing models to downstream tasks. When a PLM is fine-tuned, _all_ of its learned weights are adjusted (i.e., _relearned_) using the dataset of the downstream task. Although this approach achieves state-of-the-art performances on multiple NLP [34; 30] and SE tasks [18; 27; 49], it is computationally expensive and space inefficient, as all the PLM's parameters need to be learned for every task, and each fine-tuned model needs to be saved separately. Therefore, researchers have been exploring and studying better ways to transfer knowledge from PLMs.
In NLP, _adapter_ has been introduced for the Transformer-based architecture. It is a parameter-efficient, compact, and more extensible approach to knowledge transfer [24]. An adapter _shares_ the parameters of a PLM for _all_ of its tasks/languages while introducing a small set of task/language-specific parameters on the intermediate layers of a PLM. In this way, only a small set of weights are learned (_as opposed to relearning all the weights of a PLM during the fine-tuning process_). In recent years, a number of adapter-based architectures such as serial to parallel [54], language-specific transformations [7; 5; 44; 45], and task-specific transformations [40; 42] were proposed. Even though adapters have shown promising results in the NLP domain, their capability has not been explored extensively in other domains, such as SE. Adapters have also not been studied to extend to other language
modalities, such as programming languages. In addition, despite the similarities between programming languages and natural languages [3; 23], recent effort on C-PLM is mostly on introducing new objectives, and there are limited studies to understand how knowledge is (_can be_) transferred from natural languages to programming languages.
In this paper, we extend our previous work [19] to explore the use of adapters for SE. In our previous work, we studied the extent to which adapters can transfer the representations of NL-PLMs. In this extended version, we further investigate adapters for knowledge transfer (_using C-PLMs_) on SE-related tasks. This is first done through a Cross-Modal model, which we refer to as **MODE-X**. MODE-X trains adapters on code and inserts the trained adapters into the layers of NL-PLMs (our previous work). Then, we add adapters into C-PLMs to assess whether they can improve the results by transferring the learned knowledge of the same modality (i.e., code) in a C-PLM using a small set of trainable parameters. The resulting models are a modified version of the original NL-PLMs or C-PLMs with adapters embedded in them. We evaluate the models on three common tasks: cloze test, code clone detection, and code summarization. These tasks evaluate neural models trained on code, and they are the evaluation benchmark tasks found in CodeXGLUE [35]1. We compare the results of our models (adapters on NL-PLMs and C-PLMs) with the results obtained by traditional fine-tuning of RoBERTa and C-PLMs. We also compare the number of learned parameters and the training time of the different models. Additionally, we apply attention and probing analysis to understand the representations learned using adapters.
Footnote 1: [https://github.com/microsoft/CodexGLUE](https://github.com/microsoft/CodexGLUE)
Our results show several interesting phenomena: 1) When MODE-X trains adapters on RoBERTa (NL-PLM) for the code clone detection task, the resulting models yield better performance than just fine-tuning RoBERTa traditionally. When compared to the traditional fine-tuning of CodeBERT (C-PLM), the results are similar. 2) When MODE-X is used, or we train adapters on CodeBERT and GraphCodeBERT (C-PLMs) for the code summarization task, the majority of the results yield better performance than just fine-tuning the C/NL-PLMs traditionally. 3) Despite MODE-X having similar or better performance than fine-tuning C/NL-PLMs traditionally, the training time and the number of parameters are significantly lower. We note here that the main objective of this work is not to present a new model architecture but to explore adapters in SE.
This is the first work that explores adapters comprehensively in SE. We empirically study adapters that are trained on (_or where the knowledge is learned from_) NL-PLMs and C-PLMs for SE-related tasks. Compared to the previous version, we included the following items: i) the literature review section is updated, ii) a new task, code summarization, is added, iii) the models are evaluated with C-PLMs, and iv) the learned representations are analyzed with probing tasks and attention analysis. In the new experiments, we are mainly interested in understanding the effect of adapters on other tasks and on C-PLMs.
The rest of this paper is organized as follows. Sections 2 and 3 survey the existing literature and provide the background details on adapters, respectively. The design of our study, including the description of our four research questions, is presented in Section 4. The experimental setup and results (_including discussions_) for the research questions are shown in Section 5 (_RQ1: Code Representation Using Adapters_), Section 6 (_RQ2: Adapters' Ability for Cross-Modal Transfer to Code Clone Detection_), Section 7 (_RQ2 & RQ3: Adapters' Ability for Code Summarization_), and Section 8 (_RQ4: Computational Efficiency of Adapters_). Section 9 discusses our findings through probing and attention analysis of the adapters. We discuss the implications in Section 10, and the threats to the validity of our work are discussed in Section 11. We then conclude the paper with future work in Section 12.
## 2 Literature Review
Inspired by Transformers [48] and PLMs in NLP [15; 34; 46; 52], several studies have emerged using Transformer-based PLMs for code representation [27; 18; 10; 47; 29; 20] in software engineering. CuBERT [27] and CodeBERT [18] pioneered the pre-training of a BERT model [15] for code. Consequently, C-BERT [10] and CodeTrans [17], based on the T5 architecture [46], were introduced. Roziere et al. [29] present DOBF, an MLM-based pre-training objective that encourages code comprehension. The authors of CodeBERT [18] were the first to incorporate bimodal pre-training for code, learning from the NL-PL pairs of the CodeSearchNet corpus [26]. Concurrently, Tufano et al. [47] showed that BART, a denoising autoencoder-based Transformer [32], initially pre-trained on a large English corpus and subsequently on a large corpus of code, can be fine-tuned for generating assert statements for unit tests. Drain et al. also used a pre-trained Transformer for generating bug fixes [16]. CSLEBERT is developed and used for four tasks, including code clone detection [49]. Although many different code-based PLMs were developed to represent code, they share a common property: they must be fine-tuned separately for each of the downstream tasks. This becomes an issue when scaling up to many tasks is required, as an entirely new model is required for every task. Moreover, in multilingual PLMs like CodeBERT, the model learns features that can help its domain languages while discouraging representations that do not. Thus, it suffers from the "curse of multilinguality" as one begins to scale up the model to include new languages [14].
NLP researchers have explored other avenues of efficient knowledge transfer to eliminate the shortcomings associated with the fine-tuning of large PLMs. The compact and extensible bottleneck layers, known as adapters, are one of the main techniques [24] introduced. In terms of model parameters, adapters use a small fraction of that of the original Transformer.
A number of adapter-based frameworks ranging from language-focused [5; 42] to task-focused [7; 40] approaches were proposed. Bapna et al. [7] demonstrate the use of adapters in domain adaptation for Neural Machine Translation and employ them in a multilingual setting. Artetxe et al. [5] transfer a monolingual PLM to an unseen natural language via adapters. Subsequent studies showed that it is promising to use multiple distinct task-specific adapters to disentangle different elements of knowledge relevant to the target domain of the downstream task [40] and that stacking task- and language-specific adapters is effective in adapting a multilingual model to an unseen natural language [42].
Zeng et al. [50] conducted a comprehensive study on the existing NL-PLMs to enhance the understanding of their strengths and limitations. More specifically, they validated the performance of different NL-PLMs, compared such models with the previous domain-specific state-of-the-art models, and investigated the robustness of PLMs. They found that subtle performance fluctuations can refute the findings in the original papers, and none of the existing PLMs can dominate the other models. Shamil et al. [6] evaluated the two approaches widely used for the parameter-efficient fine-tuning of transformers for code, namely adapters and LoRA. Other research through empirical studies showed that parameter efficient fine-tuning approaches might outperform the traditional fine-tuning method in NLP tasks with small data, and as the data size increases, the traditional fine-tuning method could achieve better performance [21; 12].
**Differences of our work with the current literature:** Although there are many studies on C-PLMs as well as on exploring adapters for NLP, there has been no attempt to extend adapters to other modalities, and few works have utilized adapters for programming languages and SE tasks, which we explore in this paper. This paper extends our previous work, which is the first to assess adapters in SE [19]. The closest study to ours is the work of Shamil et al. [6], in which they apply two widely used parameter-efficient fine-tuning approaches to C-PLMs. They have applied adapters to SE tasks, and their main contribution is to evaluate the performance of adapters compared to traditional fine-tuning approaches on C-PLMs (i.e., the application of adapters as a quick fine-tuning approach). However, in this work, we mainly focus on highlighting the adapters' ability to transfer knowledge from the natural language to the programming language domain and then extend this ability to C-PLMs.
## 3 Background
### Transformers and PLMs
Transformers are the state-of-the-art neural network architecture that has achieved promising results in multiple NL tasks [48]. A Transformer consists of stacks of encoder and decoder layers. It uses an attention mechanism through the
multi-head self-attention sub-layer, followed by a feed-forward sub-layer. The multi-head self-attention helps the model encode each word by attending to other words in the input sequence. Each of these sub-layers in the encoder has a residual connection, and a layer normalization is applied after each one (i.e., multi-head self-attention and feed-forward network). Bidirectional Encoder Representations From Transformers, BERT, is the base PLM for our study [15]. BERT enables fine-tuning the model on downstream tasks with an additional output layer. After BERT, multiple Transformer-based PLMs were introduced. One popular PLM is RoBERTa [34] and it is the main architecture for many C-PLMs.
### Adapters
An adapter is introduced as a new layer (with an additional small number of parameters) that is inserted into a PLM to enable the PLM to adapt to a new language [42]. An adapter can be trained as either a language-specific adapter module (_L-adapter_) or a task-specific adapter module (_T-adapter_). The former is trained via the masked language modeling objective on an unlabelled target language dataset (this allows the PLM to adapt to unseen languages that are not covered in the PLM), and the latter is used to optimize a target task on a labeled dataset.
The adapter framework we use in this study is based on _Multiple Adapters for Cross-lingual transfer (MAD-X)_ [42], which builds on an architecture that allows the sharing of information between multiple tasks [40]. MAD-X enables the adaptation of a PLM to unseen languages "without learning expensive language-specific token-level embeddings" by freezing the initially learned parameters of the PLM and learning only a small number of parameters relative to that of the PLM. The overall architecture of the adapters is shown in Figure 1. The language and task adapter modules are inserted after the feed-forward network in _each_ layer of a Transformer-based PLM. The T-adapters are stacked on top of the L-adapters when they are used for downstream tasks. The language adapter \(LA_{l}\) at layer \(l\) of the Transformer is defined as
\[LA_{l}(h_{l},r_{l})=U_{l}(ReLU(D_{l}(h_{l})))+r_{l} \tag{1}\]
where \(D\in\mathbb{R}^{h\times d}\), \(h\) is the hidden size of the Transformer, \(d\) is the dimension of the adapter, and \(D\) is the down-projection. \(ReLU\) is the activation function used and \(U\in\mathbb{R}^{d\times h}\) is up-projection at every layer \(l\). \(h_{l}\) (output of the subsequent layer normalization) and \(r_{l}\) (output of the feed forward layer) are the hidden state and residual at layer \(l\) of the Transformer, respectively. During the training of T-adapters, the parameters of both the L-adapter and the Transformer are frozen. The task adapters, \(TA_{l}\) at layer \(l\) of the Transformer model is similar to \(LA_{l}\) and is computed as below:
\[TA_{l}(h_{l},r_{l})=U_{l}(ReLU(D_{l}(LA_{l})))+r_{l} \tag{2}\]
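A minimal PyTorch sketch of this bottleneck module; the hidden size matches RoBERTa-base, while the adapter dimension is an assumed value (the actual reduction factors follow [42]):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter of Eqs. (1)-(2): down-projection D_l, ReLU,
    up-projection U_l, plus the residual r_l of the Transformer layer."""
    def __init__(self, hidden_size: int = 768, adapter_dim: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_size, adapter_dim)  # D_l
        self.up = nn.Linear(adapter_dim, hidden_size)    # U_l

    def forward(self, h: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        return self.up(torch.relu(self.down(h))) + r

# usage with h_l and r_l as produced by one Transformer layer
h = torch.rand(2, 128, 768)  # hidden states (batch, sequence, hidden)
r = torch.rand(2, 128, 768)  # residual from the feed-forward sub-layer
out = Adapter()(h, r)
```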
**Invertible Adapters** The invertible adapters are proposed in [42] to deal with the mismatch between the vocabularies of a multilingual PLM and an unseen language. These are inserted on top of the input embedding layer, and their inverses are inserted before the output embedding layer, as shown in the left part of Figure 1. Each language of a multilingual PLM will have an invertible adapter. The function of the invertible adapters is similar to that of language adapters: they capture language-specific transformations at the token level. They are trained with language adapters using the MLM objective on unlabeled data. This inversion enables efficient utilization of the "parameter budget", as it allows us to leverage the same set of parameters to adapt both the input and the output representations. For fine-tuning a model on a specific task, we remove the output embedding layer and its corresponding inversion layer, following which we freeze the parameters of the L-adapters and the PLM.

Figure 1: Language, task, and invertible adapters in the MAD-X framework, adapted from [42].
## 4 Study Design
In this study, we seek the answers to the following research questions.
**RQ1: How do adapters perform on code representation when adapting NL-PLM to a target programming language?**
Each downstream task has its own programming language dataset. For example, for the code clone detection task, there are separate python-based and java-based datasets. We trained each _language adapter_ in the MODE-X architecture using the dataset of a programming language to learn the code representation. To study how well this code representation is being learned, we assess the language adapters using the cloze test task.
**RQ2: How effective does MODE-X facilitate cross-modal (natural language to programming language) transfer on downstream tasks compared to the traditional approach of fine-tuning a PLM?**
We study the impact that adapters of an NL-PLM (e.g., RoBERTa) have on SE-related tasks (an adapter is trained on a single programming language/task). For example, we first train a task adapter on an SE-related task. This adapter is then inserted into RoBERTa and compared with other baseline models for the same SE-related task.
**RQ3: Can adapters in C-PLM have better performance compared to the traditional approach of fine-tuning a C-PLM?**
RQ3 is similar to RQ2, except that instead of training adapters on an NL-PLM (e.g., RoBERTa), we use C-PLMs (_e.g._, _CodeBERT and GraphCodeBERT_). Since the adapters are trained on SE-related tasks, we wanted to study whether a higher performance can be achieved through training adapters on C-PLMs as compared to NL-PLMs.
**RQ4: How computationally efficient are adapters compared to the traditional approach of fine-tuning a PLM?**
Fine-tuning a PLM on a downstream task generates another model that is similar to the PLM with respect to the number of parameters and the training time. In this RQ, we are interested in studying the number of parameters needed, as well as the training time, in each approach.
### Methodology Overview
In this section, we first present an overview of our methodology and the details of each section are provided in Sections 5 to 7. We first choose a NL-PLM as the base PLM. In our study, the chosen NL-PLM is RoBERTa,
as it has a similar architecture to CodeBERT. We train three programming-language-specific adapters (i.e., L-adapters) on three sets of unlabeled data, one for C/C++, one for Java, and one for Python, with the Masked Language Modeling objective. The L-adapters are inserted into each layer of RoBERTa. These three PLs are chosen based on the availability of training and testing datasets for the downstream tasks. We use the CodeSearchNet [26] dataset to train the L-adapters for Java and Python, and the CodeNet dataset [45] to train the L-adapter for C/C++.
The tasks we chose for evaluation in our experiments are from the code-code category and code-natural language category in the CodeXGLUE benchmark [35] published by Microsoft. This benchmark is chosen as it is a popular collection of tasks, datasets, and platform for evaluating and comparing machine learning models for code understanding and generation. The two code-code tasks are Cloze Test (CT) and code clone detection. The third task (code-natural language) is code summarization. These tasks are selected as they cover both PL-PL and PL-NL, thus allowing us to better understand the ability of adapters to transfer the learned knowledge of the PLMs to code-related tasks.
Note that we work with both L-adapters (i.e., adapters trained on a programming language such as Java using the MLM objective) and T-adapters (i.e., adapters trained on a specific task in one programming language, such as code summarization in Java). For RQ1, only the L-adapters are used. For code clone detection, both the L-adapters and T-adapters are used, and for code summarization, only the T-adapters are inserted within the layers of a PLM. The rationale for these choices is explained in the subsequent paragraphs.

Code clone detection mostly requires code only: its datasets contain no natural-language code comments and consist solely of code, and thus involve a single modality. Code summarization, on the other hand, involves both code and natural language, and therefore two modalities.
We conduct experiments with slightly different approaches to evaluate the benefits of adapters in knowledge transfer from PLMs for each of these tasks. For code clone detection, we train both the language adapters and task adapters and insert them into RoBERTa. This helps the NL-PLM to learn both the PL and the code-specific task. For code summarization, we have opted to only train the task adapters in our approach. This decision was made given the similarities in architecture between the language and task adapters as well as the fact that code summarization involves both natural language and code. These adapters are then inserted within the layers of RoBERTa to assess the cross-modal transfer using adapters. We specifically focus on the task of code summarization for our third research question (RQ3). This task is chosen due to its generative nature and the need to address both the NL and PL aspects of C-PLMs.
For RQ1, we evaluate the ability of the trained model to predict the missing tokens, i.e., Cloze Test (CT) task [18]. This task shows how the contextual
knowledge of the NL-PLM is transferred using adapters to a new modality, i.e., code [35]. Our model, consisting of L-adapters inserted into the NL-PLM, is tested on CT. The results of MODE-X are then compared to RoBERTa and CodeBERT. The evaluation metric for the cloze test is accuracy, as explained in Section 5. Note that the cloze test is only evaluated for the language adapters.
For RQ2, code clone detection and code summarization are used. For code clone detection, we insert T-adapters on top of the L-adapters in all the layers within RoBERTa. For code summarization, only T-adapters are inserted within the layers of RoBERTa. Note that a task adapter is created for each task, i.e., code clone detection and code summarization. The parameters of the NL-PLM and the L-adapters are frozen in the RQ2 experiments. The resulting model can be seen as a PLM with T-adapters injected into it to shift the PLM's parameters away from its pre-trained MLM objective toward a new objective on the downstream tasks. We refer to the model containing adapters in the NL-PLM as **MODE-X**. The results are evaluated against the fine-tuned NL-PLMs and C-PLMs on clone detection and code summarization. Three datasets are used for code clone detection in C/C++, Java, and Python, with the evaluation metrics of F1 and MAP@R explained in Section 6. Note that only the code clone detection task requires language adapters. The cloze tests in RQ1 are applied to these three languages. For code summarization, smoothed BLEU-4 is used, as explained in Section 7, and we conduct the experiments on the six languages in the CodeSearchNet dataset: Ruby, JavaScript, Go, Python, Java, and PHP. The stacked L-adapter/T-adapter setup is sketched below.
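Continuing the sketch from Section 3.2, the RQ2 stacking of a task adapter on top of a frozen language adapter could look as follows; `Stack` is the adapter-transformers composition block, the adapter names remain illustrative, and the classification head mirrors the BCB-style binary clone/no-clone setup.

```python
# Sketch: T-adapter stacked on the frozen "java" L-adapter (RQ2 setup).
from transformers.adapters.composition import Stack

model.add_adapter("clone_detection")  # T-adapter with the default Pfeiffer config
model.add_classification_head("clone_detection", num_labels=2)

# train_adapter freezes the PLM and all other adapters, including "java".
model.train_adapter("clone_detection")
model.active_adapters = Stack("java", "clone_detection")
```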
Based on the results we obtained for each of the tasks in RQ2, for RQ3, we consider the task of code summarization and add the related adapters to the C-PLMs, CodeBERT [18] and GraphCodeBERT [20]. The main motivation for investigating code summarization is to study how to leverage the strengths of both NL and PL in the underlying network. We aim to evaluate the effectiveness of adapters in improving the results of C-PLMs in this context. Additionally, we aim to assess the performance of adapters on a generative task through this experiment.
Finally, in RQ4, we compare the computational efficiency of PLMs (with adapters) with the traditional fine-tuned PLMs (without adapters).
## 5 Code Representation Using Adapters
To answer the first research question, we evaluate the ability of the language adapters to represent code using the cloze test task, because the cloze test evaluates the linguistic knowledge of the models. In this section, we explain the experimental setup and then report the results of RQ1.
### Experimental Setup
#### 5.1.1 Dataset
We used CodeSearchNet [26], a joint dataset effort from GitHub and Microsoft Research consisting of code and code-comment pairs in six programming languages, to train the L-adapters.

We selected three main benchmark datasets on code clone detection as one of the tasks to evaluate the performance of MODE-X, and one of them is in C/C++. However, CodeSearchNet does not include C/C++. Therefore, we chose the dataset from CodeNet to train the L-adapters for C/C++. CodeNet is a large-scale, high-quality dataset collected for studies on artificial intelligence for code, published by IBM [45]. To train the L-adapters, we randomly split the dataset of each programming language into 90% (train) and 10% (validation). The L-adapters are trained on the training set and then evaluated on the validation set. The statistics of the datasets are shown in Table 1. Note that as only the C/C++, Java, and Python language adapters are trained and used for code clone detection, the cloze test results are reported only for these languages (C/C++ is later excluded for lack of a suitable CT dataset, as explained in Section 5.1.2).
#### 5.1.2 Task
**Cloze Test (CT)** is a probing task that is designed by the authors of CodeBERT to evaluate their model's capability in learning the linguistic information of code without modifying the model's parameters [18].
For the CT task, given a code snippet, the model will predict the masked token of interest. It has two setups: CT-all and CT-max/min. In the former, the model will predict tokens from the code, where the tokens are from the entire vocabulary, while in the latter, the tokens that are to be predicted by the model are from the {max,min} set. The CT-max/min setup evaluates the model's ability to understand code semantics [35]. For testing a model on CT, no fine-tuning is required. Both the CT-all and CT-max/min datasets are the combination of the validation and test sets of the CodeSearchNet dataset.
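As an illustration of the CT-max/min setup, the sketch below scores the two candidates for a single masked position; the code snippet is invented for the example, single-token candidates are assumed, and a plain masked-LM RoBERTa stands in for the probed models.

```python
# Sketch of one CT-max/min prediction (illustrative snippet and candidates).
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

code = f"def largest(a, b): return {tokenizer.mask_token}(a, b)"
inputs = tokenizer(code, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

candidates = ["max", "min"]  # CT-max/min restricts predictions to this set
cand_ids = tokenizer.convert_tokens_to_ids(candidates)
print(candidates[int(torch.argmax(logits[cand_ids]))])
```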
CodeSearchNet does not include C/C++. We tried to build such a dataset ourselves, but we were unable to construct one with vocabularies and thresholds similar to those of the CT task in CodeXGLUE. Therefore, CT is applied only to the CodeSearchNet languages.
| Dataset | Language | Train # | Validation # | Total # |
| --- | --- | --- | --- | --- |
| CodeNet (CN) | C/C++ | 559,497 | 62,167 | 621,664 |
| CodeSearchNet (CSN) | Java | 454,451 | 26,909 | 481,360 |
| CodeSearchNet (CSN) | Python | 412,178 | 23,107 | 435,285 |

Table 1: Statistics of datasets for training L-adapters.
#### 5.1.3 Training \(L\)-adapters
We trained the L-adapters using the invertible configuration [42], as mentioned in Section 3.2. The L-adapters are trained on the code corpus of each programming language separately, leading to three L-adapters: the C/C++-adapter, the Java-adapter, and the Python-adapter. The L-adapters are trained using the Adam optimizer and a learning rate of 1E-4.
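A sketch of this MLM training loop follows, assuming adapter-transformers' `AdapterTrainer`, the `model` from the earlier sketch, and pre-tokenized datasets (`tokenized_train` and `tokenized_val` are placeholders); note the Trainer defaults to AdamW, which here stands in for the Adam optimizer reported above.

```python
# Sketch of L-adapter MLM training (dataset variables are placeholders).
from transformers import (DataCollatorForLanguageModeling,
                          RobertaTokenizerFast, TrainingArguments)
from transformers.adapters import AdapterTrainer

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True,
                                           mlm_probability=0.15)

args = TrainingArguments(output_dir="l-adapter-java",
                         learning_rate=1e-4,  # as used for the L-adapters
                         per_device_train_batch_size=32)  # illustrative
trainer = AdapterTrainer(model=model, args=args, data_collator=collator,
                         train_dataset=tokenized_train,
                         eval_dataset=tokenized_val)
trainer.train()
```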
#### 5.1.4 Baselines
**RoBERTa** (Robustly optimized BERT approach) [34] is based on BERT; it differs from BERT in its pre-training, using only the MLM objective and longer input sequences. RoBERTa has been used in previous SE studies [53], and it is the base model for many C-PLMs, including CodeBERT [18]. RoBERTa is released in different model sizes, and we use the 12-layer variant known as RoBERTa-base.

**CodeBERT** is a BERT-based model trained on code [18]. It is one of the models used as a baseline on the CodeXGLUE platform. CodeBERT uses the same architecture as RoBERTa-base and is trained on two objectives: MLM and Replaced Token Detection (RTD) [13]. There are two publicly available versions of CodeBERT: one trained on the code corpus of the CodeSearchNet dataset using the MLM objective (CodeBERT\({}_{\text{MLM}}\)), and the other trained on the bimodal dataset (i.e., code and documents) of CodeSearchNet using a combination of the MLM and RTD objectives (CodeBERT). CodeBERT is trained on a combination of six PLs from CodeSearchNet. For the cloze test, we use CodeBERT\({}_{\text{MLM}}\), as the cloze test requires the model to predict the masked token. CodeBERT (i.e., the model trained on MLM and RTD) cannot perform the cloze test, as its final layers include a discriminator model. Thus, the CodeBERT authors only published cloze test results for the MLM variant in their work [18].
These two models are chosen for comparison as they display the transferability of the adapters from NL to PL, where RoBERTa and CodeBERT are at two extremes. In addition, they both use a similar architecture, and this can provide a fair comparison between the models, especially for comparing their parameter and training time efficiency.
#### 5.1.5 Evaluation Metric
**Accuracy** is calculated as \(\frac{TP+TN}{TP+TN+FP+FN}\). Here, TP refers to the number of records that are correctly identified as belonging to the positive class, whereas FP are the records that are incorrectly classified as belonging to the positive class. TN are the records that are correctly predicted as negative examples, whereas FN are the records that are incorrectly predicted as belonging to the negative class.
### Results
For this RQ, neither the L-adapters nor RoBERTa or CodeBERT are fine-tuned. We evaluate the trained models on the accuracy of the CT task, which measures how well the L-adapters capture the representation of the PLs. The results are presented in Table 2. RoBERTa\({}_{\text{L-adapter}}\) denotes the model in which the Python-adapter or the Java-adapter is trained and inserted into RoBERTa. The L-adapters are tested on CT for the PL they were trained on. As mentioned earlier, in CT-all the tokens to be predicted are from the entire vocabulary, and in CT-max/min they are from \(\{\text{max},\text{min}\}\).

Note that RoBERTa is pre-trained on natural language, and the L-adapters are used to adapt this NL-PLM to the PLs. We observe that with adapters the results fall between those of RoBERTa and CodeBERT. For Java in CT-all, interestingly, RoBERTa\({}_{\text{L-adapter}}\) slightly outperforms CodeBERT. Note that our goal is not to improve on the CodeBERT results; instead, we evaluate whether adapters can transfer the knowledge of an NL-PLM. The CT results show that the L-adapters help improve on RoBERTa's results by infusing PL knowledge into it while leveraging the PLM's previously learned knowledge.
## 6 Code Clone Detection Adapters
In RQ2, we are interested in evaluating whether MODE-X can facilitate the cross-modal transfer from natural language to programming language. For this purpose, two tasks are chosen. The first is the code clone detection task that will be discussed in this section, and the second is the code summarization task in the next section (Section 7). In this section, we first explain the experimental setup before detailing the results.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** & Python & Java \\ \hline \multicolumn{3}{c}{**CT-max/min**} \\ \hline
**RoBERTa** & 59.18 & 59.75 \\
**RoBERTa\({}_{\text{L-adapter}}\)** & 66.30 & 66.81 \\
**CodeBERT\({}_{\text{MLM}}\)** & **79.27** & **91.08** \\ \hline \multicolumn{3}{c}{**CT-all**} \\ \hline
**RoBERTa** & 54.49 & 50.75 \\
**RoBERTa\({}_{\text{L-adapter}}\)** & 74.35 & **75.63** \\
**CodeBERT\({}_{\text{MLM}}\)** & **83.33** & 75.53 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy scores of the models on Cloze Test (CT). Best scores are in bold and the second high scores are underlined.
### Experimental Setup
#### 6.1.1 Dataset
We conducted experiments on code clone detection in three programming languages: C/C++, Java, and Python. For C/C++, the L-adapters are trained using the CodeNet dataset [45]. For Java and Python, the L-adapters are trained using the CodeSearchNet [25] dataset. We utilize the POJ-104 and BigCloneBench (BCB) datasets, which are part of the code-code pipeline from CodeXGLUE [35]. POJ-104 contains C/C++ programs and is used for retrieving the top-k semantically similar code fragments; the retrieval result is evaluated using the MAP@R score (see below for details). BCB is used to discern whether a pair of code fragments is semantically equivalent, and it is evaluated using the F1 score (see below for details). As there is no Python dataset in CodeXGLUE for the code clone detection task, we consider the Python-specific subset of the cross-language clone detection (XCD) dataset [38]. We refer to this as the SCD-88 dataset, where 88 refers to the number of problems with multiple submitted solutions in Python. We reformulate it as a retrieval task and evaluate it using the MAP@R score, similar to POJ-104. Table 3 shows the respective splits for POJ-104, BCB, and SCD-88.
#### 6.1.2 Task
**Code Clone Detection (CCD)** involves the identification of code fragments that share similarities within a given codebase, enabling developers to manage and maintain the code more efficiently. The primary objective of CCD is to accurately locate these similar code fragments and group them together to avoid redundancy, inconsistencies, and other potential issues that can arise from duplicated code [49].
Overall, the ability to effectively detect and manage code clones is essential in maintaining software quality, reducing development time, and improving productivity. With the help of CCD and C-PLMs, developers can optimize their codebase, improve their workflows, and deliver high-quality software products.
#### 6.1.3 Training Models
**Training the T-Adapters:** We follow the same configuration as Pfeiffer et al. [41] to train the T-adapters. We use in-batch negative sampling to train these
| Dataset | Train # | Validation # | Test # |
| --- | --- | --- | --- |
| POJ-104 | 32,000 | 8,000 | 12,000 |
| BCB | 901,028 | 415,416 | 415,416 |
| SCD-88 | 7,800 | 1,040 | 2,600 |

Table 3: Statistics of the code clone detection datasets.
adapters while keeping in line with the experimental setup described by the authors of CodeBERT [18]. To prevent the adapters from overfitting, dropout and early stopping are used.
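A minimal sketch of in-batch negative sampling for the retrieval formulation is shown below, assuming `emb_a[i]` and `emb_b[i]` are L2-normalized embeddings of a positive (clone) pair; this is a generic InfoNCE-style loss, not the exact CodeXGLUE training script.

```python
# Sketch: in-batch negatives via a batch similarity matrix.
import torch
import torch.nn.functional as F

def in_batch_negatives_loss(emb_a: torch.Tensor,
                            emb_b: torch.Tensor) -> torch.Tensor:
    # (B, B) similarities: diagonal entries are the positives,
    # every other entry in a row acts as a negative for that query.
    sim = emb_a @ emb_b.t()
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```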
**Training the Baselines:** To maintain consistency across our evaluations, we re-trained and re-evaluated the existing downstream-task benchmark results of RoBERTa and CodeBERT in our study. Our results fall within a 2% margin of those reported for CodeBERT, and the authors of CodeBERT have confirmed that our results are accurate. Keeping in line with the benchmark experiments of CodeXGLUE, we also utilize in-batch negative sampling. The choice of hyperparameters, learning rate schedules, and optimizers followed CodeXGLUE's benchmarking experiments.
All the experiments were conducted on an Nvidia Tesla V100 32GB GPU.
#### 6.1.4 Baselines
Similar to cloze test, we compare the results with RoBERTa and CodeBERT. For code clone detection, we use the full CodeBERT, i.e., the model that is trained on MLM+RTD objectives.
#### 6.1.5 Evaluation Metric
_F1-Score (F1):_ The F1 score is the harmonic mean of Precision and Recall: \(F1=\frac{2\cdot(P\cdot R)}{P+R}\). Here, P stands for Precision, computed as \(P=\frac{TP}{TP+FP}\), and R stands for Recall, computed as \(R=\frac{TP}{TP+FN}\).

**Mean Average Precision at R (MAP@R)** [36] is a metric for measuring retrieval accuracy that mitigates a weakness of the R-Precision metric by also accounting for the ranking of the correct retrievals. In R-Precision, each query (e.g., a code sample for which we want to retrieve similar samples) is assigned a score of \(r/R\), where \(r\) is the number of samples among the \(R\) nearest retrieved samples that are in the same class as the query, and \(R\) denotes the total number of references in the searchable dataset. MAP@R instead calculates the Mean Average Precision over the \(R\) nearest neighbors of each sample. For a single query, it is defined as follows, where \(P(i)\) is the precision at \(i\) if the \(i\)th retrieval is correct and \(0\) otherwise:
\[MAP@R=\frac{1}{R}\sum_{i=1}^{R}P(i)\]
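A direct implementation of this formula for a single query might look as follows; `retrieved_labels` holds the classes of the nearest neighbors in ranked order.

```python
# Sketch of MAP@R for one query, following the formula above.
import numpy as np

def map_at_r(retrieved_labels: np.ndarray, query_label: int, R: int) -> float:
    correct = (retrieved_labels[:R] == query_label).astype(float)
    precision_at_i = np.cumsum(correct) / (np.arange(R) + 1)  # P(i)
    return float(np.sum(precision_at_i * correct) / R)

# Example: R = 4, correct retrievals at ranks 1 and 3:
# (1/1 + 2/3) / 4 ~= 0.417
print(map_at_r(np.array([1, 0, 1, 0]), query_label=1, R=4))
```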
### Results
The results of MODE-X for code clone detection are shown in Table 4. The programming language of the adapters in MODE-X is shown as a subscript.

For T-adapters in natural language, it has been reported that the last layer of the model learns the MLM objective better [43]: better results are obtained when the L-adapters are dropped from the last layer, leaving only the T-adapters in that layer. In this RQ, we ran the following ablation experiments: i) keeping the L-adapters in all layers, ii) dropping the L-adapters from the last layer (layer 12), and iii) dropping the L-adapters from the last two layers (layers 11 and 12). We report the best scores obtained, though the differences among the scores in each experiment were very small. Similar to the NLP T-adapters, the best results were obtained when the L-adapters were dropped from the last or the last two layers. For BCB and SCD-88, the best scores come from the model with the L-adapter dropped from its final layer. The best results for POJ-104 are achieved by dropping the L-adapters from the last two layers.
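In adapter-transformers, this ablation can be expressed with the `leave_out` option of the adapter configuration, which skips the listed transformer layers; the snippet below is a sketch under that assumption, with 0-based layer indices and illustrative adapter names.

```python
# Sketch of dropping L-adapters from the top layers via `leave_out`.
from transformers.adapters import PfeifferInvConfig

# ii) no L-adapter in layer 12 (0-indexed layer 11)
model.add_adapter("java_drop_last", config=PfeifferInvConfig(leave_out=[11]))

# iii) no L-adapters in layers 11 and 12 (0-indexed layers 10 and 11)
model.add_adapter("java_drop_last2",
                  config=PfeifferInvConfig(leave_out=[10, 11]))
```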
For all three datasets, the results of MODE-X lie between those of the fine-tuned RoBERTa and CodeBERT on the code clone detection task. For the C/C++ and Python datasets, the adapters' results are 3-4 MAP@R points below CodeBERT. An interesting observation is that CodeBERT is not pre-trained on C/C++, but on other programming languages; it is only fine-tuned on C/C++ for the code clone detection task. The higher score of CodeBERT, in this case, is related to the knowledge learned from the other programming languages. In comparison, RoBERTa has not seen any programming language during pre-training. However, adding the C/C++-adapters to its layers helps improve the model's results for code clone detection to a level similar to CodeBERT's. For Java, MODE-X improves on the results of RoBERTa and has scores similar to CodeBERT. Note that Java is among the programming languages that CodeBERT is pre-trained and fine-tuned on.

In this study, we focus on evaluating the effectiveness of the cross-modal transfer abilities of the adapters. Therefore, we train the adapters solely on the MLM objective, whereas CodeBERT is trained on the two objectives of MLM and RTD. The RTD objective explicitly injects code information into CodeBERT's representation space. Although the impact of this dual objective may be unclear for code clone detection, CodeBERT has been reported to have
| Model | Dataset | Score |
| --- | --- | --- |
| RoBERTa | POJ-104 | 81.52 (MAP@R) |
| MODE-X\({}_{\text{C/C++}}\) | POJ-104 | 82.40 (MAP@R) |
| CodeBERT | POJ-104 | **86.48** (MAP@R) |
| RoBERTa | BCB | 95.96 (F1) |
| MODE-X\({}_{\text{java}}\) | BCB | 96.61 (F1) |
| CodeBERT | BCB | **96.65** (F1) |
| RoBERTa | SCD-88 | 73.90 (MAP@R) |
| MODE-X\({}_{\text{python}}\) | SCD-88 | 75.65 (MAP@R) |
| CodeBERT | SCD-88 | **78.95** (MAP@R) |

Table 4: Scores of the code clone detection task for RoBERTa, CodeBERT, and MODE-X. The best scores are in bold.
better results than CodeBERT\({}_{\text{MLM}}\) for multiple tasks [18]. The MODE-X results are close to those of CodeBERT while being more parameter efficient.

We note here that for training the T-adapters, we used the hyperparameters recommended on AdapterHub [41]. We also ran additional experiments with different learning rates for code clone detection on the SCD-88 dataset. When a different learning rate is used (\(5E-4\)), the results of MODE-X improve to 79 MAP@R, which is equal to CodeBERT's score. Finding the best hyperparameters for adapters is beyond the scope of this work, but it is worth noting that adapter results may improve further when the best hyperparameters are chosen.
## 7 Code Summarization Adapters
In the code clone detection experiments, we observed that adapters aid in transferring the learned knowledge from an NL-PLM. In the second task, we apply adapters to code summarization to evaluate this cross-modal transfer. For code summarization, we also take one step further to see whether adapters can boost the performance of C-PLMs. These are the questions raised in RQ2 and RQ3, which we answer in this section.
### Experimental Setup
#### 7.1.1 Dataset
We used the CodeSearchNet [25] dataset, which comprises bimodal data, i.e., code and its corresponding comments, for six programming languages: Ruby, JavaScript, Go, Python, Java, and PHP. For code summarization, we train the T-adapters separately for each of these six programming languages. The statistics of the dataset are shown in Table 5. Note that we only use the bimodal part of the data, as code summarization involves both the code and its comments.
| Language | Bimodal Data |
| --- | --- |
| Ruby | 52,905 |
| JavaScript | 143,252 |
| Go | 317,832 |
| Python | 458,219 |
| Java | 500,754 |
| PHP | 662,907 |

Table 5: CodeSearchNet bimodal data used for code summarization.
#### 7.1.2 Task
**Code summarization** is a common SE task that generates a description of a given code [11]. This is considered a PL-NL downstream task, as the input to the model is the code, and the output is a generated text in natural language.
#### 7.1.3 Training Models
As mentioned in Section 4, for code summarization, we only use T-adapters. For each baseline model, we inserted the T-adapters into the PLM, fixed the PLM weights, and then trained them for each programming language on the code summarization task. T-adapters are trained on a 4x NVIDIA V100 GPU configuration, with a total of 30,000 training steps. A batch size of \(32\) is utilized, along with the AdamW optimizer and a learning rate of \(10e-5\).
#### 7.1.4 Baselines
We consider **RoBERTa** [34] as our baseline model, which is pre-trained on a large corpus of English data with a masked language modeling objective. Similar to the code clone detection task, we also compare the results with those of CodeBERT. Additionally, to answer RQ3, we use CodeBERT and GraphCodeBERT as the C-PLMs in which we explore the performance of adding adapters. We compare the results of the fine-tuned C-PLMs and the C-PLMs with adapters. Details of RoBERTa and CodeBERT are found in Section 5.1.4. Here, we only describe GraphCodeBERT.

**GraphCodeBERT** [20] uses BERT as its backbone and includes the data flow of code, pre-training on three objectives: Masked Language Modeling, Edge Prediction (in which some edges of the data flow graph are masked and the goal is to predict their correct values), and Node Alignment (in which the model is required to predict the correct alignment between code and data flow).
#### 7.1.5 Evaluation Metric
We used the smoothed BLEU-4 [37] score to evaluate the performance of the models in generating document summaries. BLEU (Bilingual Evaluation Understudy) measures the proportion of n-grams in the candidate text that appear in the ground-truth code description, where \(n\) can be \(1,2,3\), or \(4\). The higher the BLEU value, the higher the quality of the code summarization [51]. In this study, we utilize the smoothed BLEU-4 variant [33], which adds a small constant to the numerator and denominator of the precision calculation, allowing for a more lenient evaluation of the generated summaries. To calculate the BLEU score, we first compute the brevity penalty factor (BP) as follows:
\[BP=\begin{cases}1&\text{if }c>r\\ e^{1-r/c}&\text{if }c\leq r\end{cases} \tag{3}\]
where \(c\) is the length of the candidate translation and \(r\) is the effective reference corpus length. Then, we take the geometric mean of the test corpus' modified precision scores, and multiply the result by the brevity penalty factor (\(BP\)) as follows:
\[BLEU=BP\cdot\exp\Big(\sum_{n=1}^{N}w_{n}\log P_{n}\Big) \tag{4}\]

Here, \(P_{n}\) is the modified n-gram precision, using n-grams up to length \(N\), and \(w_{n}\) are positive weights summing to one.
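As a hedged illustration, NLTK provides a smoothed sentence-level BLEU that approximates this computation (the exact CodeXGLUE evaluation uses its own smoothed BLEU-4 script); the reference/candidate pair below is invented.

```python
# Sketch of smoothed BLEU-4 with NLTK (an approximation of the metric above).
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = ["returns", "the", "sum", "of", "two", "numbers"]
candidate = ["return", "the", "sum", "of", "two", "values"]

smooth = SmoothingFunction().method4  # smoothing for short hypotheses
score = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),  # uniform w_n, N = 4
                      smoothing_function=smooth)
print(f"smoothed BLEU-4: {score:.4f}")
```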
### Results
_Transferability of MODE-X from natural language to code (RQ2):_ To evaluate the extent to which we can adapt an NL-PLM to a PL-NL target task, we plug the T-adapter into RoBERTa and train them on the target task for each programming language separately. As shown in Table 6, MODE-X outperforms the fine-tuned CodeBERT on Ruby, Go, and Java, and has results similar to the fine-tuned CodeBERT for the other three languages. In both the code clone detection and code summarization tasks, MODE-X improved on the results of RoBERTa, though we obtained better results for code summarization, where MODE-X _outperforms_ the fine-tuned CodeBERT for three languages. Note that MODE-X is RoBERTa plus task adapters, meaning that the underlying PLM is trained only on natural language, and the only introduction of code into the PLM is through the task adapters. Moreover, in MODE-X, the number of trainable parameters is much smaller than in CodeBERT. In CodeBERT, we fine-tune the entire model, which is pre-trained on programming languages; yet we can still outperform it by transferring from a natural language model.

_Ability of adapters when a C-PLM is used (RQ3):_ Similar to the code clone detection task in Section 6, we obtained better results with MODE-X for the code summarization task. In RQ3, we investigate to what extent we can improve the code summarization results of C-PLMs such as CodeBERT by inserting adapters. To answer this question, we chose two widely used C-PLMs, CodeBERT and GraphCodeBERT. We insert adapters into their layers and then train the models as we did for RoBERTa. In Table 6, the results of these models are shown as CodeBERT\({}_{\text{T-adapter}}\) and GraphCodeBERT\({}_{\text{T-adapter}}\). These results are compared with the scores of CodeBERT and GraphCodeBERT, which are fully fine-tuned on the code summarization datasets. As shown in Table 6, when adapters are inserted into these models, the majority of the results improve (except for Java in CodeBERT). For all languages except Python, the results of CodeBERT\({}_{\text{T-adapter}}\) are higher than the fine-tuned
CodeBERT. Similarly, the results of the GraphCodeBERT\({}_{\text{T-adapter}}\) are higher than the fine-tuned GraphCodeBERT for Ruby, JavaScript, Go, and Python, and we observed similar results for Java and PHP.
We relate these improvements to the fact that adapter-based fine-tuning can enhance the performance and stability of models during the fine-tuning phase [22]. He et al. [22] conducted a comprehensive study on the effectiveness of adapters, focusing on the accuracy of the models rather than the efficiency of parameters. They argue that adapter-based fine-tuning can produce superior results compared to full fine-tuning in low-resource languages, i.e., languages for which only small training datasets are available. Furthermore, Lee et al. [31] proposed a regularization method called mixout to encourage the PLM's weights to remain closer to their initial pre-trained values during fine-tuning. As adapters keep the PLM's weights fixed during fine-tuning, adapter-based fine-tuning can similarly enhance performance.
## 8 Computational Efficiency of Adapters
We evaluate the efficiency of the adapters, based on their parameter budget and training time, against that of a traditionally fine-tuned PLM. _The parameter budget is the number of learnable parameters in the model._ For adapters, as we do not re-train RoBERTa, the parameter budget is the number of parameters required for training the adapters only. We report the parameter budgets of the adapters for the entire 12 layers of the model.

CodeBERT and GraphCodeBERT have \(\sim 110\) million parameters, and RoBERTa has \(123\) million parameters. The parameter budget for task adapters is \(\sim 0.9\) million parameters. For code summarization, a decoder stack for the code summary generation is trained from scratch during the adapter fine-tuning process, which allocates \(\sim 47.8\) million parameters. In total, the number of trainable parameters for code summarization is around \(\sim 48.7\) million during the adapter-based fine-tuning phase. The L-adapters have a parameter budget of \(7.39\) million parameters. For code clone de
| Models | Ruby | JavaScript | Go | Python | Java | PHP |
| --- | --- | --- | --- | --- | --- | --- |
| GraphCodeBERT\({}_{\text{T-Adapter}}\) | **14.53** | **16.54** | **23.74** | 18.73 | 19.08 | 25.05 |
| CodeBERT\({}_{\text{T-Adapter}}\) | 14.12 | 15.67 | 23.21 | 18.47 | 18.99 | **25.55** |
| MODE-X | 12.79 | 14.20 | 23.05 | 17.72 | 18.43 | 24.27 |
| GraphCodeBERT | 12.62 | 14.79 | 18.40 | 18.02 | **19.22** | 25.45 |
| CodeBERT | 12.16 | 14.90 | 18.07 | **19.06** | 17.65 | 25.16 |
| RoBERTa [34] | 11.17 | 11.90 | 17.72 | 18.14 | 16.47 | 24.02 |

Table 6: Smoothed BLEU-4 scores on code summarization. CodeBERT\({}_{\text{T-Adapter}}\) is fine-tuned on the monolingual dataset for each programming language (same as CodeBERT). The best scores are in bold. Note that MODE-X results are better than or on par with CodeBERT results. Moreover, when we add adapters to CodeBERT or GraphCodeBERT, the results are better than those of the fine-tuned C-PLMs. This demonstrates that by encouraging the model weights to stay close to the PLM, we can improve the fine-tuning results without using additional data.
tection on POJ-104 and SCD-88, the number of parameters (in millions) required for MODE-X is \(8.29\) (\(7.39\) for the L-adapters + \(0.9\) for the T-adapters). For code clone detection on the BCB dataset, MODE-X requires more parameters, a total of \(9.46\) million. The difference is that code clone detection is formulated as a retrieval problem for POJ-104 and SCD-88 but as a classification problem for BCB.

For task-specific fine-tuning, we only consider the parameters required for fine-tuning CodeBERT and the parameters for training the T-adapters in MODE-X (i.e., excluding the pre-training parameters of CodeBERT and the parameters of the L-adapters). For task-specific fine-tuning, adapters are \(53=110/(9.46-7.39)\) times and \(122.2=110/0.9\) times more parameter efficient than CodeBERT on BCB and on POJ-104/SCD-88, respectively. For code summarization, we obtain the same factor of \(122.2\). When considering the overall budget, i.e., comparing the number of parameters required for training and fine-tuning CodeBERT with the number of parameters used for training the L-adapters and T-adapters, MODE-X is \(11.63=110/9.46\) to \(13.27=110/8.29\) times more efficient than CodeBERT for code clone detection, and \(2.25=110/48.7\) times more efficient for code summarization. Note that this efficiency is lower for code summarization, as we need to add a decoder for this task in addition to the adapters.
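The trainable-parameter budgets above can be measured directly in PyTorch, since `train_adapter` leaves `requires_grad=True` only on the adapter (and head) parameters; the following is a sketch.

```python
# Sketch: measuring the trainable-parameter budget of an adapter setup.
def trainable_params_millions(model) -> float:
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# After model.train_adapter("clone_detection"), this reports only the
# adapter/head budget, not the frozen PLM parameters.
print(f"{trainable_params_millions(model):.2f}M trainable parameters")
```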
It is worth mentioning that pre-training CodeBERT needs \(384\) hours of training on 16 interconnected V100s with a batch size of \(64\) [18]. In contrast, the L-adapters need \(35\) hours of training on a single V100 GPU for the same batch size. Moreover, fine-tuning CodeBERT for code clone detection required about \(2.5\) hours, compared to about an hour for training the T-adapters. As adapters are significantly more parameter efficient, they have shorter training and inference times and hence are more suitable in practice. Training the adapters for code summarization requires approximately \(10\) hours per programming language, using \(30,000\) training steps and a parameter budget similar to that of the T-adapters used for code clone detection. Note that the training time depends on the number of training steps; since we train all the programming languages with the same number of steps, they have the same training time.
## 9 Discussions
In this section, we provide additional insights into our findings through two analyses: probing and attention analysis.
### Probing Analysis
One way to investigate if programming language models encode the characteristics of code well enough to be applicable to a broad spectrum of downstream tasks is with the diagnostic task known as probing [28]. Probing is
used to determine if specific properties of the input can be predicted based on the fixed, pre-trained vector embeddings of a model. The effectiveness of probing in predicting these properties suggests whether or not the desired information is present in the model. Probing has been extensively studied for natural language models [1; 2; 39; 8]. It consists of two main components: a probing task and a probing classifier. The probing task is a diagnostic task designed to determine whether a specific property is encoded in the PLM's weights. The probing classifier, on the other hand, is trained on the probing task using input vectors extracted from the frozen hidden layers of the PLM. It is usually a linear classifier with no hidden layers of its own. If the probing classifier can accurately predict a particular attribute from the pre-trained embeddings, we can infer that the original model encodes this information in its hidden layers. The raw accuracy of probing is not the primary focus of the analysis; rather, probing is used to compare the encoding of a characteristic between different models or layers within a model [28].
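Concretely, a probing classifier over one frozen layer can be as simple as a logistic regression on that layer's [CLS] vectors; the sketch below assumes placeholder datasets (`train_code`, `train_labels`, `test_code`, `test_labels`) and a Hugging Face encoder that returns hidden states.

```python
# Sketch of a linear probe on frozen layer-8 embeddings (placeholder data).
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def layer_embeddings(model, tokenizer, snippets, layer: int):
    feats = []
    for code in snippets:
        inputs = tokenizer(code, return_tensors="pt", truncation=True)
        hidden = model(**inputs, output_hidden_states=True).hidden_states
        feats.append(hidden[layer][0, 0].numpy())  # [CLS] vector at `layer`
    return feats

probe = LogisticRegression(max_iter=1000)  # linear, no hidden layers
probe.fit(layer_embeddings(model, tokenizer, train_code, layer=8), train_labels)
print(probe.score(layer_embeddings(model, tokenizer, test_code, layer=8),
                  test_labels))
```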
We evaluate the syntactic and semantic characteristics of C-PLMs and NL-PLM with adapters to assess how they could be adapted for programming languages. We construct three initial probing tasks on RoBERTa [34] with task adapters, CodeBERT [18] and GraphCodeBERT [20]: Abstract Syntax Tree (AST) Node Tagging, Cyclomatic Complexity Prediction, and Code Length Prediction. These three tasks are meant to assess whether the models are able to capture syntactic, structural, and surface-level features of code, respectively. The tasks were chosen to cover the most commonly identifiable abstractions of code. Note that the objective of the probing task here is to assess the ability of adapters to adapt NL-PLMs to code-related target tasks, as compared to C-PLMs. To this end, we will compare the results of the probing task conducted on RoBERTa with task adapters, CodeBERT, and GraphCodeBERT.
#### 9.1.1 Probing Tasks
**Abstract Syntax Tree (AST) Node Tagging** Abstract Syntax Trees (ASTs) contain rich syntactic information, and they are the basis for many structural code representation approaches [28; 4; 9]. A PLM that is good at code tasks such as code clone detection should be able to learn and interpret the syntactic information of a sequence of code tokens and predict their AST tags correctly. The authors of [28] provided a Java dataset consisting of Java code tokens and their corresponding node tags extracted from the AST, which we use in our analysis.
**Code Length Prediction (LEN)** The Code Length Prediction (LEN) task is a method for evaluating the ability of language models to encode surface-level information within code snippets [28]. Our hypothesis is that the length of a code snippet, when presented as a sequence of code tokens, should be relatively straightforward to predict for transformer models. To evaluate that, we use the Java dataset proposed by Karmakar and Robbes [28]. Each
sample in the dataset consists of a code snippet and a label indicating its length. The probing task is to predict the label of each sample correctly.
**The Cyclomatic Complexity (CPX)** The Cyclomatic Complexity (CPX) task is an evaluation method designed to investigate the ability of the code models to encode structural information within code. Cyclomatic complexity is an intrinsic characteristic of any code, and thus, it should be predictable for models without the need for explicit fine-tuning [28]. Predicting the complexity of code based solely on the sequence of tokens rather than understanding the underlying control flow is a challenging task. This is because it requires a deep understanding of the code and the ability to infer the relationships between different parts of the code. This is particularly difficult for programming languages that have complex control flow structures, such as loops and branches.
#### 9.1.2 Probing Analysis for Code Clone Detection
The analysis in this subsection is conducted for code clone detection.
Figure 2 shows the results of AST node tagging for RoBERTa with task adapters, depicted as _RoBERTa+Adapters_, CodeBERT, and GraphCodeBERT. _RoBERTa+Adapters_ exhibits a lower accuracy compared to C-PLMs in the early layers (i.e., layers 3, 4, and 5). However, starting from the sixth layer, its performance becomes comparable to that of C-PLMs and in some cases (e.g., layer 8), it even outperforms GraphCodeBERT. It is important to note that not all code characteristics are expected to be improved in the subsequent layers of a model. In fact, certain characteristics may be specific to certain layers of the model. As seen in the baseline C-PLMs, there are some variations in accuracy across different layers.
Figure 3 demonstrates the classification results of the code length prediction task. The performance of RoBERTa with adapters is found to be comparable to that of the two C-PLMs. As depicted in the figure, the prediction of code length, which is surface-level information, is primarily determined by the middle layers of the models (e.g., the fourth layer), and the accuracy is decreased in the later layers.
The results of the cyclomatic complexity analysis are presented in Figure 4. The experimental results show that the performance of RoBERTa with task adapters is comparable to that of CodeBERT and GraphCodeBERT. Note that cyclomatic complexity prediction is a demanding probing task, as it measures how much structural information is encoded in each transformer block of a model.
It is worth noting that for all of the probing tasks considered in this study, the variations in accuracy follow a consistent trend. This is a strong indication that RoBERTa has been effectively adapted to programming languages, as RoBERTa+adapters exhibit similar behavior across all of these probing tasks. This consistency in the results suggests that the RoBERTa with task adapters has a good understanding of the code structure, and in general, it is well-suited for modeling programming languages.
Figure 3: The accuracy of predicting code length in the probing task is evaluated on each layer of the models for code clone detection. The x-axis represents the classification results, with the first layer being the input embeddings, which serves as the naive baseline accuracy. All models have 12 transformer layers, and the y-axis displays the accuracy at each layer for each model. _RoBERTa+Adapters_ shows the model with code clone detection task adapters, and the other two plots present the results for CodeBERT and GraphCodeBERT.
Figure 2: Accuracies of AST node tagging probing task for code clone detection. The x-axis demonstrates the classification results at each layer (the first layer is the input embeddings which represents the naive baseline accuracy). All the models have 12 transformer layers and the y-axis shows the accuracy at each layer for each model. The _RoBERTa+Adapters_ show the model with code clone detection task adapters, and the other two plots belong to CodeBERT and GraphCodeBERT.
#### 9.1.3 Probing Analysis for Code Summarization
In this section, we utilize the task adapters trained for code summarization and evaluate the performance of the C-PLMs, CodeBERT and GraphCodeBERT, and of the NL-PLM adapted for code (i.e., RoBERTa with task adapters) on different code characteristics. We follow the same setting as in our previous code clone detection target task. We leverage the AST node tagging, length prediction, and cyclomatic complexity probing tasks to assess the models' ability to capture the syntactic, surface-level, and structural information present in the code.

Figure 5 demonstrates the performance of the AST node tagging probing task on each transformer layer for the baselines. As shown, in the early layers (i.e., layers 2, 3, and 4), the performance of RoBERTa with task adapters is significantly lower than that of the CodeBERT and GraphCodeBERT models. However, in the later layers (i.e., layers 10, 11, and 12), we observe that the NL-PLM adapts well to the AST node tagging task and has accuracy comparable to CodeBERT and GraphCodeBERT. In the tenth layer, it has the best accuracy among these models, and in the last layer, it outperforms CodeBERT.
The results of the code length probing task are presented in Figure 6. From the plots, it is evident that this feature is more prominent in the early layers of the PLM (i.e., until layer 7), and there is a significant decrease in code length accuracy in the later layers. Additionally, we observe that the
Figure 4: The accuracy of predicting cyclomatic complexity in the probing task is evaluated on each layer of the model for code clone detection. The x-axis represents the classification results at each layer, with the first layer being the input embeddings, which serves as the naive baseline accuracy. All models utilized in the experiment have 12 transformer layers, and the y-axis displays the accuracy at each layer for each model. _RoBERTa+Adapters_ shows the model with code clone detection task adapters, and the other two plots show the results for CodeBERT and GraphCodeBERT.
Figure 5: Accuracies of AST node tagging probing task for code summarization. The x-axis demonstrates the classification results at each layer (the first layer is the input embeddings which represents the naive baseline accuracy). All the models have 12 transformer layers, and the y-axis shows the accuracies at each layer for each model.
Figure 6: The accuracy of predicting code length in the probing task for code summarization is evaluated on each layer of the model. The x-axis represents the classification results, with the first layer being the input embeddings, which serves as the naive baseline accuracy. All models have 12 transformer layers, and the y-axis displays the accuracy at each layer for each model.
RoBERTa with task adapters surpasses the C-PLMs in the early layers, which suggests that the adapters effectively adapt RoBERTa to this task.

Figure 7 displays the accuracy of the cyclomatic complexity classification problem. In the final layers, we can see that RoBERTa with task adapters performs similarly to the C-PLMs. All of the models show a similar trend in accuracy across different layers, which implies that adapters are capable of adapting the NL-PLM (i.e., RoBERTa) to understand the structural information in code.
### Attention Analysis
In this section, we analyze the attention of the function name tokens in two code samples, one written in Ruby and the other in Go. The goal is to gain insights into how adapters influence attention at different layers when they are inserted into the PLMs, as compared to when they are excluded from the PLMs.
As an example, consider Figure 8, which illustrates the attention behavior when a Go code sample is fed to RoBERTa. We use the Bertviz 2 tool to visualize the attention weights of the model. On the left side, the attention of the last layer of RoBERTa is depicted without adapters; it can be observed that the third head in the last layer attends only to the sum token. The
Figure 7: The accuracy of predicting cyclomatic complexity in the probing task for code summarization is evaluated on each layer of the model. The x-axis represents the classification results at each layer, with the first layer being the input embeddings, which serves as the naive baseline accuracy. All models utilized in the experiment have 12 transformer layers, and the y-axis displays the accuracy at each layer for each model.
figure on the right side shows the same attention head when the adapters are inserted. There, we observe that the same head in the same layer pays more attention to other tokens related to the function name when adapters are included. For example, the attention on the keyword func indicates that the model associates the function name with this keyword, and the tokens method, prints, the, and sum are semantically related to the function name and are therefore given more attention by the sum token.
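The attention maps visualized in these figures can be extracted with `output_attentions=True` on any Hugging Face encoder; the sketch below uses the Go example's function, with the layer/head indices matching the figure description, and reuses the `model` and `tokenizer` from the earlier sketches.

```python
# Sketch: extracting the attention weights that BertViz renders.
import torch

inputs = tokenizer("func sum(a, b int) int { return a + b }",
                   return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs, output_attentions=True).attentions

# `attentions` is a tuple of 12 tensors, each (batch, heads, seq, seq).
last_layer_head3 = attentions[-1][0, 2]  # third head of the last layer
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
print(tokens, last_layer_head3.shape)
```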
Figure 8: An illustrative example of how adapters affect the last layer of RoBERTa when a Go sample is fed to the model. The left figure shows the attention of the third head on the function name sum when adapters are not plugged in. The right figure depicts the same attention head when adapters are inserted. As shown, while RoBERTa without adapters only attends to the local token neighbors, RoBERTa equipped with adapters has an in-depth knowledge of the code and pays more attention to the parts more related to the function name (e.g., it has strong attention to the func keyword, which means it knows that sum is related to that keyword).

Another example can be seen in Figure 9, where a Ruby code sample is fed to the model, and the attention of the fourth head in the tenth layer is shown. On the left, adapters are excluded, and on the right, adapters are inserted into the PLM, highlighting the impact of adapters on the attention learned by the model. Similar observations hold in this figure: when adapters are used, the attention of the token sum to other related tokens increases.

The provided examples suggest that adapters have a significant impact on the internal embeddings. They operate by transforming the embeddings in a way that optimizes performance on the target task. This is an important aspect of fine-tuning PLMs, as it allows the model to adapt to specific tasks and improve its performance. The ability of adapters to reshape the internal representations of the model is a key factor in their effectiveness, making them a powerful technique for improving the performance of PLMs on specific tasks.
## 10 Implications
Fine-tuning PLMs for different tasks and using deep neural networks are known to be computationally expensive. Furthermore, not everyone has access to powerful resources such as high-end GPUs, which are essential for fine-tuning a PLM. An adapter, on the other hand, is a pluggable module that can be inserted into a PLM, and the learned model requires less storage than a fine-tuned PLM. Adapters have recently been adopted in the NLP field, but their introduction for code representation and understanding is new. We anticipate several avenues that should be investigated in the SE domain, including searching for the best hyperparameters, identifying an optimized setup (i.e., whether adapters should be inserted in all or only some layers, and how this affects their performance), and whether they can be used to integrate different tasks in SE. We note that these are only some aspects for SE research and are not exhaustive; multiple other directions for SE-specific adapters could also be studied.

We studied both NL-PLMs and C-PLMs with adapters. Interestingly, RoBERTa's results for some languages and some tasks were close to those of CodeBERT. This finding shows the potential of NL-PLMs for various code-related software engineering tasks. Researchers could study the reasons behind such results and other ways to transfer this knowledge into current C-PLMs or other code-specific models.

In this study, we conducted experiments on two tasks, code clone detection and code summarization, with a restricted number of programming languages. We observed that adapters in RoBERTa yielded better results on code summarization than on code clone detection. Interestingly, the scores we obtained varied among different languages. Separate empirical studies could provide insights into the tasks or languages for which adapters are most useful.

Adapters are parameter- and space-efficient modules that enable scaling up a PLM to multiple **tasks** and **languages** without the significant drop in in-domain performance associated with the "curse of multilinguality" [14]. The curse of multilinguality concerns PLMs trained on multiple languages: there is a trade-off between language coverage and model capacity, and the limited capacity of a PLM leads to a performance drop when more languages are added to a multilingual model compared to its monolingual variants. However, the number of languages and tasks studied in NLP is much larger than in the SE domain. Researchers can pursue this trade-off and determine the capacity of adapters for multiple languages and tasks as the number of programming languages used in pre-training a C-PLM increases. Developing more computation-efficient models and alternatives to fine-tuning models for code analysis is still in its infancy.

Figure 9: The figures illustrate the tenth attention layer when a Ruby sample is fed to RoBERTa. The left figure shows the attention of the fourth head on the function name sum when adapters are not inserted. The right figure shows the same attention head when adapters are included. In this example, RoBERTa without adapters has only weak attention to the code tokens. On the other hand, RoBERTa equipped with adapters has stronger attention to the code and pays more attention to the parts most related to the function name (e.g., the function name pays high attention to the def and end keywords, which means it can detect the boundaries of the function on this attention head).

Our work takes initial steps for the SE domain, which could benefit from more research in this field.

Parameter-efficient models may also result in faster inference. Due to the lower overhead of switching among fine-tuned PLMs, we can use the same NL-PLM (with adapters) or the same C-PLM (with adapters) for a larger number of tasks. This also allows us to better integrate the models into an Integrated Development Environment (IDE), as we would be integrating a single model instead of multiple fine-tuned models. Developing such a tool to adapt the knowledge learned in NL-PLMs or C-PLMs for use in an IDE could benefit a wider community.
## 11 Threats to Validity
**External Validity** relates to the generalization of the results. This study is limited in the number of downstream tasks and programming languages used, so the results might not generalize to other tasks and programming languages. Though we expect similar results for other tasks, additional studies are required to validate this.

**Internal Validity** is related to having unanticipated relationships. One threat can be related to training the models. The authors who trained the models have extensive experience with adapter modules and have both theoretical and technical knowledge of the NLP and SE domains. Therefore, we anticipate minimal threats related to training the models.

We used the CodeXGLUE benchmark, re-ran all the experiments for the PLMs, and verified the small differences between our results and the reported ones. To avoid obtaining unwanted results, we used the publicly available datasets from this benchmark platform and followed the steps in its pipeline to evaluate the models. Additionally, we conducted pilot studies to find the best setup for the adapters and baselines.

In our experiments, the selection of hyperparameters can impact the performance of the adapter fine-tuning phase. Determining the optimal values for these parameters is challenging and still an open research question. For example, the internal embedding sizes of adapters during down- and up-sampling are set to the default values, which could result in sub-optimal fine-tuning of the model. To mitigate this potential issue, we used the hyperparameters suggested in [42], as that work conducted an extensive search for the best adapter hyperparameters. However, the results obtained with adapters might still improve with different hyperparameters.

**Construct Validity** relates to what a test claims to measure versus what it actually measures. To mitigate such threats, we followed the evaluation metrics used for each task in previous studies. Additionally, for the probing analysis, we obtained the datasets and the tasks from previous studies to ensure a consistent experimental setup.
## 12 Conclusion and Future Works
In this paper, we studied the ability and efficiency of transferring the learned knowledge from an NL-PLM to programming language-based software engineering tasks through the use of adapters, thereby assessing the ability of adapters to transfer from one modality to another. Our results demonstrate that NL-PLMs equipped with adapters exhibit performance comparable to that of C-PLMs, suggesting that adapters can effectively adapt natural language models for programming-specific tasks such as code summarization and code clone detection. Additionally, we evaluated the utility of adapters as a rapid fine-tuning method for C-PLMs and found that even for models specifically designed for code, adapters can enhance performance on the code summarization task. Training and fine-tuning adapters require fewer parameters and less storage; thus, adapters can be used in SE to scale models up to multiple tasks and languages, making them beneficial and practical. Adapters are pluggable modules that can be easily inserted for another language or task. We plan to study this characteristic and the training of multilingual adapters on code in the future.
## Data Availability Statement
The data that support the findings of this study are available in the CodeXGLUE Github repository 3.
Footnote 3: [https://github.com/microsoft/CodeXGLUE](https://github.com/microsoft/CodeXGLUE)
## Conflict of Interest
The authors declare that they have no conflict of interest.
## Acknowledgement
We thank the original authors of the ICPC 2022 [19] paper, Divyam Goel and Ramansh Grover. Part of the experiments from our previous paper coauthored with Divyam and Ramansh are rewritten in the current paper.
|
2306.11338 | FDINet: Protecting against DNN Model Extraction via Feature Distortion
Index | Machine Learning as a Service (MLaaS) platforms have gained popularity due to
their accessibility, cost-efficiency, scalability, and rapid development
capabilities. However, recent research has highlighted the vulnerability of
cloud-based models in MLaaS to model extraction attacks. In this paper, we
introduce FDINET, a novel defense mechanism that leverages the feature
distribution of deep neural network (DNN) models. Concretely, by analyzing the
feature distribution from the adversary's queries, we reveal that the feature
distribution of these queries deviates from that of the model's training set.
Based on this key observation, we propose Feature Distortion Index (FDI), a
metric designed to quantitatively measure the feature distribution deviation of
received queries. The proposed FDINET utilizes FDI to train a binary detector
and exploits FDI similarity to identify colluding adversaries from distributed
extraction attacks. We conduct extensive experiments to evaluate FDINET against
six state-of-the-art extraction attacks on four benchmark datasets and four
popular model architectures. Empirical results demonstrate the following
findings: (1) FDINET proves to be highly effective in detecting model extraction,
achieving a 100% detection accuracy on DFME and DaST. (2) FDINET is highly
efficient, using just 50 queries to raise an extraction alarm with an average
confidence of 96.08% for GTSRB. (3) FDINET exhibits the capability to identify
colluding adversaries with an accuracy exceeding 91%. Additionally, it
demonstrates the ability to detect two types of adaptive attacks. | Hongwei Yao, Zheng Li, Haiqin Weng, Feng Xue, Kui Ren, Zhan Qin | 2023-06-20T07:14:37Z | http://arxiv.org/abs/2306.11338v2 | # FDINET: Protecting against DNN Model Extraction using Feature Distortion Index
###### Abstract
Machine Learning as a Service (MLaaS) platforms have gained popularity due to their accessibility, cost-efficiency, scalability, and rapid development capabilities. However, recent research has highlighted the vulnerability of cloud-based models in MLaaS to model extraction attacks. In this paper, we introduce FDINET, a novel defense mechanism that leverages the feature distribution of deep neural network (DNN) models. Concretely, by analyzing the feature distribution from the adversary's queries, we reveal that the feature distribution of these queries deviates from that of the model's training set. Based on this key observation, we propose Feature Distortion Index (FDI), a metric designed to quantitatively measure the feature distribution deviation of received queries. The proposed FDINET utilizes FDI to train a binary detector and exploits FDI similarity to identify colluding adversaries from distributed extraction attacks. We conduct extensive experiments to evaluate FDINET against six state-of-the-art extraction attacks on four benchmark datasets and four popular model architectures. Empirical results demonstrate the following findings: (1) FDINET proves to be highly effective in detecting model extraction, achieving a **100% detection accuracy** on DFME and DaST. (2) FDINET is highly efficient, using just 50 queries to raise an extraction alarm with an **average confidence of 96.08%** for GTSRB. (3) FDINET exhibits the capability to identify colluding adversaries with an accuracy **exceeding 91%**. Additionally, it demonstrates the ability to detect two types of adaptive attacks.
Model extraction, model stealing, Feature Distortion Index.
## 1 Introduction
As the performance of deep neural networks (DNN) remarkably improves, DNN has been widely used in various fields (e.g., image recognition and natural language processing). However, the construction of high-performance DNN models requires tremendous amounts of training data and computational resources, making it challenging for end-users to create their own private models. Therefore, many companies choose to deploy DNN models to Cloud Service Providers (CSP) in order to offer online paid services through Machine Learning as a Service (MLaaS) [1]. Recent reports even predict that the MLaaS market will reach a remarkable 21.72 billion dollars [2]. Unfortunately, the value associated with these models also leads to the emergence of model extraction attacks (also known as model stealing attacks) [3, 4].
In MLaaS, only the CSP has access to the parameters and architecture of the cloud-based model. The clients can only interact with the model through a public API. While the cloud-based model may appear as a black-box to clients, it is still possible for a malicious client to interact with the model and replicate its behavior using input-output pairs. This poses a significant risk of privacy breach for the cloud-based models. Recent studies [5, 6, 7, 8] have shown that an adversary can launch model extraction attacks by querying MLaaS, imitating the behaviors of the target DNN model, and creating a surrogate model. Furthermore, using the extracted surrogate model, the adversary can launch additional attacks, including membership inference attacks [9], adversarial examples [10, 11, 12, 13], and model explanations [14, 15]. Consequently, the protection of cloud-based models against model extraction attacks emerges as a critical issue that demands increased attention.
To enhance the security of MLaaS, there have been growing research efforts on model extraction detection [16, 17, 18, 19]. Existing detection approaches typically involve analyzing query distributions and utilizing statistical tools to identify potentially malicious queries. Although the existing detection approaches have made promising progress, they still have several limitations. One key limitation is that most existing methods rely on strong assumptions about the adversary, which limits their generalizability to different extraction attacks. For example, PRADA [17] is designed specifically to detect adversarial example-based queries, while DeepDefense [20] fails to identify synthetic data-based queries. As a result, it remains a challenge to identify the intrinsic characteristics of extraction attacks and develop a generic detection method to identify diverse attacks. Second, existing detectors need to maintain local proxy models [16], historical queries [17], or training points [18]. While these components contribute to detection accuracy, they may fail to identify malicious clients efficiently. Therefore, improving the efficiency of detection methods poses a significant challenge. Furthermore, advanced stealth attacks, such as distributed model extraction attacks, can evade the existing detectors. To the best of our knowledge, there is still no countermeasure to defend against the distributed attack, which adopts multiple clients to launch the same model extraction attack. Thus, devising an effective defense to mitigate the impact of advanced stealth attacks is a pressing issue.
To address the aforementioned limitations, we propose FDINet, a generic, effective, and efficient detector against model extraction attacks that can be easily integrated into MLaaS systems. In order to identify the intrinsic differences between benign and malicious queries, we investigate the attack strategies. Concretely, we analyze the queries submitted by adversaries and make a motivating observation: the feature distribution of the adversaries' queries deviates from that of the training set. We refer to it as "feature distortion," which is a universal characteristic across various model extraction attacks. Based on this observation, we introduce the Feature Distortion Index (FDI), a metric designed to quantitatively measure the feature distribution deviation of received queries. Furthermore, we observe a high degree of FDI similarity among queries generated by the same model extraction strategy. This observation opens up new possibilities for identifying colluding adversaries in distributed extraction attacks. Leveraging this insight, we propose a distributed extraction attack detector with the capability of identifying colluding adversaries. Additionally, we consider an adaptive adversary who knows our defense strategy and may correct features before submitting queries to MLaaS. We propose an adaptive attack, namely _Feature Correction_ (_FeatC_), to evade our defense.
To validate the efficacy of FDINet, we conduct extensive experiments using four benchmark datasets, namely CIFAR10, GTSRB, CelebA, and Skin Cancer Diagnosis. The evaluation results demonstrate that the proposed FDINet effectively and efficiently reveals malicious clients and identifies colluding adversaries. The major contributions of this paper are summarized as follows:
1. **Proposal of Feature Distortion Index (FDI)**: We propose a novel metric, the FDI, to measure the feature deviation of submitted queries. By utilizing the FDI, we train a binary detector that accurately identifies malicious queries.
2. **Identifying colluding adversaries**: We propose a distributed model extraction attack in which the adversary controls multiple colluding clients to send queries with a common attack goal. We analyze the FDI similarity of queries and develop a novel classifier that can identify colluding adversaries. This classifier is a pioneering approach in defending against distributed model extraction attacks.
3. **Proposal of the adaptive attack, _FeatC_**: In order to assess the robustness of FDINet, we propose an adaptive attack called _FeatC_, specifically designed to bypass our defense mechanism.
4. **Extensive experiments and evaluations**: We conduct extensive experiments to evaluate the performance of FDINet on four benchmark datasets and four popular model architectures. The results demonstrate the effectiveness and efficiency of FDINet in detecting malicious queries. Additionally, our approach is robust that achieves high performance in identifying colluding adversaries and two types of adaptive attacks.
## 2 Related Works
### _Model Extraction Attacks_
The concept of model extraction and the demonstration of its feasibility of stealing intellectual property from private models on commercial MLaaS was initially proposed by Tramer _et al._[3]. The main principle of model extraction is to replicate the behavior and functionality of a black-box victim model by analyzing query submissions and their corresponding outputs. In this context, the selection of representative data plays a crucial role in determining the efficiency of model extraction attacks. According to the strategy of sample selection, extraction attacks can be categorized into:
**Surrogate data-based schemes (\(\mathcal{A}_{sur}\)).** In this scenario, the adversary possesses a comprehensive surrogate dataset, such as ImageNet and Flickr, which consists of both problem domain (PD) and non-problem domain (NPD) samples. To enhance the efficiency of query selection, the adversary commonly employs active learning strategies (e.g., Knock-off [28], ActiveThief [29], PAC-based active extraction [30], CopycatCNN [31], Bert Stealing [7], and GNN Stealing [32]).
**Adversarial example-based schemes \(\mathcal{A}_{adv}\).** The adversary is assumed to have access to a limited number of the problem domain data. In this scenario, the adversary crafts adversarial examples using primitive data, intending to approximate the decision boundary of the target model (e.g., Jacobian-based Augmentation (JBA) [10], T-RND [17], DualCF [33], and Cloud-Leak [34]).
**Synthetic data-based schemes (\(\mathcal{A}_{syn}\)).** In this scenario, the adversary employs a generative model to craft large-scale synthetic samples. For example, Black-box Ripper [35], Data-free Substitute Training (DaST) [11], Data-free Model Extraction (DFME) [36], DFMS-HL [37], MEGEX [38] and MAZE [39].
**Hybrid schemes (\(\mathcal{A}_{hyb}\)).** The idea behind hybrid schemes is to improve the effectiveness and efficiency of the model extraction attack by combining the strengths of each type of attack and mitigating their limitations (e.g., InverseNet [40], DivTheft [41]).
Since the \(\mathcal{A}_{hyb}\) scenario is a combination of the other scenarios, we focus on the \(\mathcal{A}_{sur}\), \(\mathcal{A}_{adv}\), and \(\mathcal{A}_{syn}\) adversaries in this study. It should be noted that these attacks encompass a wide range of cutting-edge techniques, covering diverse attack scenarios.
### _Defenses against Model Extraction Attacks_
The countermeasures against model extraction attacks can be categorized into real-time defense and post-stealing defense. Real-time defense aims to detect and prevent the extraction process when the stealing action is in-progress. On the other hand, post-stealing defense strategies utilize copyright verification techniques, such as DNN Watermarking [42, 43], DNN Fingerprinting [44, 45, 46], or Dataset Inference [47, 48, 49, 50], to verify the ownership of the potentially stolen model.
This paper focuses on real-time model extraction defense, which comprises two primary branches of techniques: passive defense and active defense schemes. To provide a comprehensive overview and comparison of these defense methods, we present a taxonomy of the techniques in Table I. In the following paragraphs, we discuss these two branches of methods.
**Passive defense.** The passive defense approach aims to detect and interrupt malicious actions by monitoring and analyzing the distribution (e.g., abnormal distributions, significant information gain) of incoming queries [16, 17, 19, 20, 27]. PRADA [17] keeps track of the minimum L2-norm distance between the last sample and previous samples for each client to detect the adversary. Since PRADA is based on the assumption that samples submitted by an adversary deviate from a normal (Gaussian) distribution, it fails to identify queries that fit a normal distribution. Extraction Monitor (EM) [16] employs local proxy models to quantify the extraction status of an individual client. However, EM has two main drawbacks: (1) employing local proxy models results in high memory consumption that degrades the efficiency of MLaaS, and (2) it may raise many false alarms for benign clients.
**Active defense.** The active defense approach involves adding perturbation to the victim model's output [21, 22, 23, 24, 26, 51, 52, 53, 54, 55]. Orekondy _et al._ propose Prediction Poisoning [24], which adds utility-constrained perturbations to the posteriors. The perturbations maximize the angular deviation between the gradient of the poisoned posteriors and that of the victim model. Zheng _et al._ propose BDPL [22, 23], which exploits Differential Privacy to add obfuscating noise to the prediction logits.
Passive defense strategies are the main focus of this paper due to the inherent limitations of active defense methods. Active defense approaches rely on strong assumptions about the output forms, which give rise to two significant drawbacks. Firstly, introducing perturbed probabilities may negatively impact the utility of cloud-based models. Secondly, these methods are not applicable in scenarios where hard-label outputs are used instead of probability vectors. Therefore, this paper primarily discusses the utilization of passive defense strategies.
## 3 Preliminaries
A Deep Neural Network (DNN) model is a function \(F:\mathcal{X}\rightarrow\mathcal{Y}\) parameterized by a set of parameters, where \(\mathcal{X}\in\mathbb{R}^{d}\) denotes the \(d\)-dimensional feature space and \(\mathcal{Y}\) represents the output space. For an online MLaaS application, the private DNN model \(F_{V}\) is first trained by the developer using enormous training data \(\mathcal{D}_{train}\) to achieve a high accuracy on testing set \(\mathcal{D}_{test}\) and then deployed to the CSP. Through querying prediction API using a pay-as-you-go manner, the client can access prediction probabilities \(F_{V}(x)\) for any given input data \(x\). The goal of model extraction is to create a surrogate model \(F_{S}\) that replicates the functionality of the black-box victim model \(F_{V}\).
### _Attack Capabilities_
In real-world scenarios, the adversary is typically restricted from accessing the inner operations of cloud-based models, including private training data, model architecture, and model parameters. However, the adversary can still engage with the black-box model through the submission of queries and the retrieval of prediction probabilities via the publicly accessible API interface.
**Adversary's query set.** We consider three types of adversaries as mentioned in Section 2.1 (i.e., \(\mathcal{A}_{sur}\), \(\mathcal{A}_{adv}\), and \(\mathcal{A}_{syn}\)). For \(\mathcal{A}_{sur}\), the incoming queries come from PD and NPD natural images. For \(\mathcal{A}_{adv}\), the submitted queries contain adversarial examples. For \(\mathcal{A}_{syn}\), the malicious queries are synthetic data from a generator.
**Colluding adversaries.** Within the context of MLaaS, adversaries may employ multiple clients \(N(N>1)\) to enhance the stealthiness of their attacks and bypass request limitations. These colluding clients, under the control of a central adversary, collaborate to carry out model extraction attacks using similar query selection strategies, all working towards a common objective. In this paper, we refer to these clients as colluding adversaries.
**Adaptive adversary.** In the context of model extraction, we must consider the presence of an adaptive adversary who possesses knowledge of the defense methods employed. This adversary can modify their query submission strategy to enhance the stealthiness of the extraction process. In this paper, we focus on two types of adaptive adversaries: (1) _Dummy Query:_ This adaptive method, proposed by PRADA [17], involves generating dummy queries that do not contribute to the model extraction process. These queries are designed to maintain a normal distribution within a sequence of historical queries, thereby evading detection.
\begin{table}
\begin{tabular}{l|c|c c c c|c|c|c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Type**} & \multicolumn{2}{c|}{**Support Outputs**} & \multicolumn{2}{c|}{**Defending Capability**} & \multicolumn{1}{c|}{**Dummy**} & \multicolumn{1}{c|}{**Feature-corrected**} & \multicolumn{1}{c}{**Distributed**} \\ \cline{3-8} & & Hard-label & Soft-label & \(\mathcal{A}_{sur}\) & \(\mathcal{A}_{adv}\) & \(\mathcal{A}_{syn}\) & **Query** & **Query** (Ours)** & **Attack (Ours)** \\ \hline Kariyappa _et al._[21]2020 & Active & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ Zheng [22, 23]2022 & Active & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ \\ Tribhuvanesh _et al._[24]2020 & Active & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ \\ Adam _et al._[25]2022 & Active & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ Kariyappa _et al._[26]2020 & Active & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ Kesarwani _et al._[16]2018 & Passive & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ Juuti _et al._[17]2019 & Passive & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\ Zhang _et al._[19]2021 & Passive & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\ Pal _et al._[27]2021 & Passive & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ Lee _et al._[20]2022 & Passive & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ \\
**FDINet (Ours)** & Passive & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Taxonomy of real-time model extraction defense approaches.
(2) _Feature Correction_: To evade detection, the adversary employs an auxiliary encoder, denoted as \(\hat{F}\), which is a pre-trained encoder drawn from a model zoo. This auxiliary encoder is used to correct the query's feature maps before submitting them to the MLaaS system. We will discuss the details of _Feature Correction_ in Section 4.4.
### _Defense Objective and Assumptions_
Protection of user data is of utmost importance in the MLaaS platform. It is imperative for a secure MLaaS system to prioritize the confidentiality of user models.
**Defense objective.** The defender acts as a crucial intermediary between the CSP and clients, with the main goal of detecting and preventing extraction actions. It aims to create a powerful extraction attack detector \(\mathcal{C}\) that can distinguish between benign and malicious queries. As a result, the goal of the defender is:
\[\max_{\mathcal{C}}\;P_{(x\in\mathcal{D}_{adv})}\mathbb{1}\left[\mathcal{C}(F_{V};x)=1\right]+P_{(x\in\mathcal{D}_{test})}\mathbb{1}\left[\mathcal{C}(F_{V};x)=0\right], \tag{1}\]
where \(\mathcal{D}_{adv}\) is query set of adversary, \(\mathcal{C}\) is the extraction attack detector. Additionally, the defender aims to evaluate the performance of detector using the following metrics: (1) _effectiveness_, the detector is expected to effectively identify various types of extraction attacks. (2) _efficiency_, for the low-latency MLaaS platform, the defense algorithm should immediately raise extraction alarms using only a few queries. (3) _robustness_, the defender has the ability to resist stealthy attacks, such as distributed attacks, as well as adaptive attacks.
**Defending assumptions.** We consider the attack-agnostic scenario, where the defender has no prior knowledge of sample selection strategies. Besides, the attack model architecture, training mechanism, and relationship between clients are unknown to the defender. We suppose that the defender knows the developer's private training data \(\mathcal{D}_{train}\) and has access to the feature maps of the victim model \(F_{V}\). FDINet is generic and flexible since it makes no assumptions about the adversary, the DNN model, or the developer's private training data. This results in its capability of identifying extraction attacks in all scenarios, including \(\mathcal{A}_{sur}\), \(\mathcal{A}_{adv}\), and \(\mathcal{A}_{syn}\).
## 4 Methodology
As shown in Figure 1, the detection process of FDINet includes the following phases: (1) selecting anchor samples, (2) measuring feature distortion, (3) training extraction attack detector. In the first phase, we select high prediction confidence data from the training set as anchor samples. Subsequently, the feature space distances between each inspected sample and the anchor samples are calculated to generate the FDI vector. Finally, the extracted FDI vector is employed as an intrinsic feature to train the extraction attack detector.
### _Selecting Anchor Samples_
The adversary selects and/or crafts samples to query the public API, then uses the prediction results as labels to train a clone model. Intending to extract more information from the cloud-based model, the adversary explores an enormous input space to increase the diversity of the queried samples. Therefore, the feature distributions of samples submitted by an adversary deviate from those of the benign training data. This phenomenon motivates us to design an effective metric to measure the feature distribution deviation between received queries and training data.
The first step toward measuring feature space deviation is to select anchor samples. To ensure the selected representative samples lie in the center, we select \(K\) high-confidence data from training set as anchor samples for each class. It should be noted that these anchor samples lie at the center of each class, encapsulating the statistical features of the benign query set. Formally, for each class \(c\), the selected samples are denoted as \(\left\{(x_{c,j},c)\right\}_{j=1}^{K}\in\mathcal{D}_{anc}\). Afterwards, we use the selected anchor sample \(\left\{x_{c,j},c\right\}\) to extract feature maps \(F_{V}^{l}\) (\(x_{c,j}\)) for layer \(l\).
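A minimal sketch of this selection step is shown below, assuming a trained victim model and a labeled training loader; `K` matches the per-class anchor budget described above, and the helper names are illustrative rather than the authors' implementation.

```python
import torch

@torch.no_grad()
def select_anchors(victim, train_loader, num_classes: int, K: int):
    """Keep the K most confidently predicted training samples per class."""
    buckets = {c: [] for c in range(num_classes)}
    for x, y in train_loader:
        probs = torch.softmax(victim(x), dim=1)
        conf = probs[torch.arange(len(y)), y]  # confidence on the labeled class
        for xi, yi, ci in zip(x, y, conf):
            buckets[int(yi)].append((float(ci), xi))
    # Sort each class bucket by confidence and keep the top-K samples.
    return {c: [s for _, s in sorted(b, key=lambda t: -t[0])[:K]]
            for c, b in buckets.items()}
```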
### _Measuring Feature Distortion_
The observed deviation in feature distributions between an inspected sample and the anchor samples is referred to as the feature distortion of the inspected sample. To quantitatively measure this feature distortion, we introduce a novel metric called the Feature Distortion Index (FDI). The FDI quantifies the extent of feature distortion in the inspected sample compared to the anchor samples. Formally, the FDI is defined as follows:

Fig. 1: Overview of the pipeline of FDINet. In the first step, we select \(K\) anchor samples for each class \(c\) (airplane in the figure). In the next step, we measure feature distortion to obtain the FDI vector for each inspected sample. Finally, the extracted FDI vector is used to create a binary extraction attack detector and a colluding adversaries detector.
**Definition 4.1** (_Feature Distortion Index_).: Given a victim model \(F_{V}\), an anchor set \(\mathcal{D}_{anc}\), the feature distortion index for an inspected sample \((x,c)\) is defined as:
\[\mathcal{I}_{j}^{l}=d\left(F_{V}^{l}(x)-F_{V}^{l}(x_{c,j})\right)\quad\text{s.t }\,x_{c,j}\in\mathcal{D}_{anc}, \tag{2}\]
where \(F_{V}^{l}(x)\) denotes the output feature of \(F_{V}\) for layer \(l\), \(d\) indicates the \(l2\)-norm in our paper, and \(c\) is the label of \(x\) predicted by \(F_{V}\).
Given a victim model \(F_{V}\), we extract a total of \(L\) layer feature maps (i.e., \(l\in\{1,...,L\}\)). We then concatenate all \(\mathcal{I}^{l}\) to obtain a \((K\times L)\)-dimension FDI vector. For example, for VGG19 of task CIFAR10, we select \(K=100\) anchor samples and extract \(L=5\) layer feature maps to obtain a \((5\times 100)\)-dimension FDI vector \(\mathcal{I}\left(x,c;\mathcal{D}_{anc},F_{V}\right)\).
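The sketch below illustrates Eq. (2), assuming a helper `layer_features(model, x)` that returns the \(L\) intermediate feature maps; the helper name and the anchor container are illustrative, not the authors' implementation.

```python
import torch

@torch.no_grad()
def fdi_vector(victim, layer_features, anchors, x):
    """Compute the (K * L)-dimensional FDI vector of a single sample x."""
    c = int(victim(x.unsqueeze(0)).argmax(dim=1))     # class predicted by F_V
    feats_x = layer_features(victim, x.unsqueeze(0))  # L feature maps of x
    fdi = []
    for x_cj in anchors[c]:                           # K anchors of class c
        feats_a = layer_features(victim, x_cj.unsqueeze(0))
        for fx, fa in zip(feats_x, feats_a):
            # l2-norm distance between feature maps, per layer (Eq. 2)
            fdi.append(torch.norm((fx - fa).flatten(), p=2))
    return torch.stack(fdi)
```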
### _Training Extraction Attack Detector_
In this section, we employ the extracted FDI vector as the input to **(1)** construct a binary classifier to detect extraction queries, and **(2)** perform hypothesis tests to identify colluding adversaries.
#### 4.3.1 Detecting Extraction Attacks
Table II illustrates the architecture of \(\mathcal{C}\). The binary classifier takes a \((K\times L)\)-dimensional FDI vector as input and produces a \(2\)-dimensional probability vector as output.
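As a rough PyTorch equivalent of Table II (a sketch; the table lists no intermediate activations, so none are added here):

```python
import torch.nn as nn

def build_detector(K: int, L: int) -> nn.Sequential:
    """Binary detector C mapping a (K * L)-dim FDI vector to 2 probabilities."""
    return nn.Sequential(
        nn.Linear(K * L, 256),
        nn.Linear(256, 32),
        nn.Linear(32, 2),
        nn.Softmax(dim=1),
    )
```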
**Training.** In order to train \(\mathcal{C}\), we adopt the training strategy commonly used in Out-of-Distribution detection. Specifically, we gather a positive dataset and a negative dataset, which are utilized during the training process. In this paper, we utilize the \(\mathcal{D}_{train}\) as the negative dataset and collect an auxiliary dataset \(\mathcal{D}_{aux}\) as the positive dataset. The selection of \(\mathcal{D}_{aux}\) will be discussed in Section 5.1.3.
**Evaluation.** Identifying malicious clients based on a single query can be challenging due to its low information entropy. To overcome this limitation, we introduce a majority voting algorithm that utilizes a batch size of \(bs\) queries to detect malicious clients. In this approach, for each submitted query with a batch size of \(bs\) samples, we calculate the average confidence score of the predictions and utilize it as an indicator to determine the maliciousness of the client.
\[ac=\frac{1}{bs}\sum_{i=1}^{bs}\arg\max\,\mathcal{C}\left(\mathcal{I}\left(x_{i},c_{i};\mathcal{D}_{anc},F_{V}\right)\right), \tag{3}\]
where \(bs\) is the length of the query sequence and \(ac\in[0,1]\) is a confidence score.
Finally, we adopt a threshold \(\tau_{1}\) to determine whether the queries come from malicious or benign clients. Note that if the confidence score \(ac\) is higher than the threshold \(\tau_{1}\), the query batch is flagged as an extraction attack. The training and prediction of \(\mathcal{C}\) are described in Algorithm 1.
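A minimal sketch of this batch-level decision rule (Eq. 3) follows, assuming a trained detector that outputs class scores over {benign, malicious}; variable names are illustrative.

```python
import torch

@torch.no_grad()
def is_extraction_attack(detector, fdi_batch: torch.Tensor, tau_1: float) -> bool:
    """Majority vote over one submitted batch of bs FDI vectors."""
    votes = detector(fdi_batch).argmax(dim=1).float()  # 1 = predicted malicious
    ac = votes.mean().item()                           # average confidence score
    return ac > tau_1                                  # alarm if ac exceeds tau_1
```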
#### 4.3.2 Identifying Colluding Adversaries
In this section, we first define distributed model extraction attacks and introduce our method to identify colluding adversaries.
**Distributed extraction attack.** Given an MLaaS platform with \(M\) clients \(\{1,2,...,M\}\), a central adversary controls a set of \(N\) \((2\leq N\leq M)\) clients. The central adversary adopts a sample selection strategy (e.g., \(\mathcal{A}_{sur}\), \(\mathcal{A}_{adv}\), or \(\mathcal{A}_{syn}\)) to build a query set and distributes it across the clients. The distributed attack is stealthy since each controlled client incurs only a small query overhead, making it easy to evade request limitations.
**Colluding adversaries detection.** We observe a significant level of FDI similarity among queries that are generated using the same model extraction attack. This key observation motivates us to detect colluding adversaries using FDI similarity.
In order to determine if two examined clients, \(u\) and \(v\), are colluding adversaries, our approach involves collecting a set of \(n\times bs\) historical queries from each client. Subsequently, we extract their FDI vectors and perform two-sample hypothesis tests for further analysis as follows:
**Proposition 4.1** (_Two-sample Hypothesis Tests_).: Given two inspected clients \(u\) and \(v\), and their \(n\times bs\) FDI vectors \(\mathcal{I}_{u}\) and \(\mathcal{I}_{v}\), the null hypothesis can be expressed as \(\mathcal{H}_{0}:\mu_{u}=\mu_{v}\), while the alternative hypothesis is expressed as \(\mathcal{H}_{a}:\mu_{u}\neq\mu_{v}\). Through calculating the test statistic \(\mathbf{t}=\left(\bar{x}_{1}-\bar{x}_{2}\right)\Big/\left(s_{p}\sqrt{\frac{1}{|\mathcal{I}_{u}|}+\frac{1}{|\mathcal{I}_{v}|}}\right)\), we can obtain its p-value, where \(\bar{x}_{1}\) and \(\bar{x}_{2}\) are the sample means and \(s_{p}\) is the pooled standard deviation.
| **Index** | **Layer Type** | **Weight** |
| --- | --- | --- |
| 1 | Linear | \([K\times L,256]\) |
| 2 | Linear | \([256,32]\) |
| 3 | Linear | \([32,2]\) |
| 4 | Softmax | – |

TABLE II: Architecture of the binary classifier \(\mathcal{C}\).
Finally, we select a significance threshold \(\tau_{2}\) (usually \(\tau_{2}=0.05\)) for testing. If the calculated p-value is below \(\tau_{2}\), then the null hypothesis \(\mathcal{H}_{0}\) is rejected in favor of the alternative hypothesis \(\mathcal{H}_{a}\). In this case, it indicates that clients \(u\) and \(v\) are not colluding adversaries.
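A minimal sketch of this test using SciPy's two-sample t-test is shown below; `fdi_u` and `fdi_v` are assumed to be 1-D arrays of FDI statistics collected from the two clients' historical queries.

```python
from scipy import stats

def are_colluding(fdi_u, fdi_v, tau_2: float = 0.05) -> bool:
    """Accept H0 (same attack strategy) when the p-value is at least tau_2."""
    t_stat, p_value = stats.ttest_ind(fdi_u, fdi_v)
    # A p-value below tau_2 rejects H0, i.e., the clients are NOT colluding.
    return p_value >= tau_2
```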
### _Adaptive Attacks_
An adaptive adversary who knows FDINet may potentially modify attack strategies to evade our detection. In this section, we assume that the adaptive adversary has a mini-batch of substitute anchor samples and an auxiliary encoder \(\hat{F}\) drawn from the model zoo. We propose _Feature Correction (FeatC)_, an adaptive attack that utilizes an auxiliary encoder to make the query similar to the anchor samples in its feature maps. Formally, the adaptive adversary locally perturbs \(x\) using the loss function \(\mathcal{L}\):
\[\mathcal{L}(x;\hat{F},\hat{x})=\left\|\hat{F}^{l}(x+\delta)-\hat{F}^{l}(\hat{x})\right\|_{2}^{2}\quad\text{s.t.}\quad F_{V}(x)=F_{V}(\hat{x}),\;\|\delta\|<\epsilon, \tag{4}\]
where \(\hat{F}\) is a pre-trained feature extractor drawn from model zoo and \(\hat{x}\) denotes auxiliary anchor sample from victim training data \(\mathcal{D}_{train}\). Through generating feature-corrected queries, the adaptive adversary intends to bypass our detection. Since _FeatC_ exploits the full knowledge of our defense mechanism, we believe _FeatC_ is a strong adaptive attack against FDINet.
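A minimal sketch of this optimization follows, assuming an auxiliary encoder exposing a `features(x)` method for layer \(l\) and using PyTorch's L-BFGS optimizer as in the adaptive-attack experiments; the perturbation budget `eps` and the step count are illustrative choices.

```python
import torch

def featc_correct(aux_encoder, x, x_anchor, eps: float = 8 / 255, steps: int = 20):
    """Perturb x so its auxiliary feature map matches that of an anchor sample."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.LBFGS([delta], max_iter=steps)
    target = aux_encoder.features(x_anchor).detach()

    def closure():
        opt.zero_grad()
        loss = (aux_encoder.features(x + delta) - target).pow(2).sum()  # Eq. 4
        loss.backward()
        return loss

    opt.step(closure)
    # Clamp the perturbation to the allowed budget before submitting the query.
    return x + delta.detach().clamp(-eps, eps)
```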
## 5 Experiments
In this section, we conduct extensive experiments to validate the performance of FDINet against six advanced model extraction attacks, covering four different deep learning systems. We begin by introducing the experimental setup in Section 5.1. Subsequently, we evaluate the performance of FDINet against extraction attacks in Section 5.2 and distributed attacks in Section 5.3. Additionally, we conduct ablation studies in Section 5.4 and explore adaptive attacks in Section 6.1. All experiments are performed on an Ubuntu 20.04 system equipped with a 96-core Intel CPU and four Nvidia GeForce RTX 3090 GPU cards. The machine learning algorithms are implemented using PyTorch v1.10.
### _Experimental Setup_
#### 5.1.1 Datasets and Victim Models
Our method is assessed on four benchmark datasets: CIFAR10 [56], GTSRB [57], CelebA [58], and Skin Cancer [59]. These datasets cover four distinct deep learning systems that are commonly employed in security-critical domains: general visual recognition, traffic sign recognition, face recognition, and disease diagnosis. To conduct our experiments, we utilize four different convolutional neural networks: VGG19, MobileNetV2, DenseNet121, and ResNet50. To accommodate the input size of \(32\times 32\), we adjusted the filter size of the first convolutional layer in the original architectures by downsampling. Table III presents a summary of the datasets and victim models utilized in our experiments.
#### 5.1.2 Setting of Attack Methods
Six advanced model extraction attacks are considered in our experiments, covering three adversarial scenarios, i.e., \(\mathcal{A}_{adv}\), \(\mathcal{A}_{sur}\) and \(\mathcal{A}_{syn}\) (as mentioned in Section 2.1). For the \(\mathcal{A}_{adv}\) scenario, we assume the adversary has a mini-batch of the victim's training data and employs Jacobian-based Augmentation (JBA) [10] and Targeted Randomly chosen Direction (T-RND) [17] to create extraction queries. For the \(\mathcal{A}_{sur}\) scenario, we follow the experiment setting of Knockoff [28], which assumes the adversary selects queries from a surrogate dataset. Specifically, we adopt CINIC-10 [60], TSRD 1, LFW [61], and BCN20000 [62] as Knockoff and ActiveThief [29]'s surrogate datasets for the tasks of CIFAR10, GTSRB, CelebA and Skin Cancer, respectively. For the \(\mathcal{A}_{syn}\) scenario, we follow DFME [36] and DaST [11]'s experiment settings, which employ a generative model to craft surrogate data as the query set. Table IV provides a summary of the surrogate model's Top-3 accuracy for each attack. Note that these six model extraction attacks encompass a wide range of cutting-edge techniques, and their queries cover problem domain data, non-problem domain data, adversarial examples, and synthetic data.
Footnote 1: [http://www.nlpria.ac.cn/pal/trafficdata/recognition.html](http://www.nlpria.ac.cn/pal/trafficdata/recognition.html)
#### 5.1.3 Setting of Defense Methods
In the main paper, we conduct a performance comparison between FDINet, PRADA [17], SEAT [19] and Extraction Monitor [16]. To ensure consistency, we utilize the official implementation of PRADA and make adjustments to the hyperparameter \(\tau_{1}\) in order to achieve a low false positive rate (FPR) on the validation set. Regarding SEAT, we employ the victim model as an encoder and perform fine-tuning for 20 epochs using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.001. Following SEAT's methodology, we select a threshold that yields a low FPR on the validation set. For the Extraction Monitor, we adopt the same architecture as the victim model and treat it as a
| **Dataset** | JBA (\(\mathcal{A}_{adv}\)) | T-RND (\(\mathcal{A}_{adv}\)) | Knockoff (\(\mathcal{A}_{sur}\)) | ActiveThief (\(\mathcal{A}_{sur}\)) | DFME (\(\mathcal{A}_{syn}\)) | DaST (\(\mathcal{A}_{syn}\)) |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | 54.87 | 62.61 | 97.23 | 95.23 | 96.98 | 35.18 |
| GTSRB | 26.79 | 21.65 | 34.26 | 33.99 | 92.73 | 40.09 |
| CelebA | 76.55 | 76.94 | 99.85 | 82.36 | 92.74 | 42.84 |
| Skin Cancer | 74.44 | 69.50 | 94.89 | 99.48 | 82.56 | 61.35 |

TABLE IV: Surrogate model's Top-3 accuracy (%) of various model extraction attacks for the \(\mathcal{A}_{adv}\), \(\mathcal{A}_{sur}\), and \(\mathcal{A}_{syn}\) scenarios.
| **Dataset** | **Model** | **Acc. (%)** | **Scenario** |
| --- | --- | --- | --- |
| CIFAR10 | VGG19 | 87.73 | General Visual Recognition |
| GTSRB | MobileNetV2 | 90.99 | Traffic Sign Recognition |
| CelebA | DenseNet121 | 93.49 | Face Recognition |
| Skin Cancer | ResNet50 | 98.52 | Disease Diagnosis |

TABLE III: Datasets and victim models.
proxy model, as suggested in the original paper. To train the proxy model, we utilize SGD with a learning rate of 0.005 for 2 iterations per batch of submitted queries.
When considering FDINet, we establish the number of anchor samples (\(K\)) as 20 for GTSRB, CelebA, and Skin Cancer datasets, whereas for CIFAR10, \(K\) is set to 100. We divide the neural network into multiple convolutional blocks and extract 5 layers (i.e., \(L=5\)) for all tasks. As for the auxiliary dataset \(\mathcal{D}_{aux}\), we employ the testing set from CINIC-10, TSRD, VGGFace2, and BCN20000 as negative data for CIFAR10, GTSRB, CelebA, and Skin Cancer, respectively. It is important to note that the auxiliary datasets are based on realistic assumptions, and the testing set does not overlap with the surrogate set used by the attacker. To detect distributed attacks, we utilize two-sample hypothesis tests for the two inspected clients, denoted as \(u\) and \(v\).
#### 5.1.4 Evaluation Metrics
In the experiments, we utilize five commonly employed metrics to assess the efficacy of our method: Detection Accuracy (DAcc.), False Positive Rate (FPR), Extraction Status (ES), Colluding Detection Accuracy (CDAcc.), and p-value of hypothesis tests. We will discuss the detail of each metric in the following experiments.
#### 5.1.5 Threshold Selection
Achieving optimal DAcc. and FPR in binary classification tasks relies heavily on selecting the right threshold. Nonetheless, this task can be quite challenging. Inspired by previous research [17], we introduce a data-driven threshold selection strategy. Initially, we utilize the validation set \(\mathcal{D}_{val}\) to calculate the values of \(\mu_{ac}\) and \(\sigma_{ac}\), and subsequently apply the \(3\sigma\) rule to set \(\tau_{1}=\mu_{ac}+3\times\sigma_{ac}\).
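A minimal sketch of this rule, assuming `ac_scores` holds the average-confidence scores of benign validation batches:

```python
import numpy as np

def select_threshold(ac_scores: np.ndarray) -> float:
    """3-sigma rule: tau_1 = mu_ac + 3 * sigma_ac over benign validation batches."""
    mu, sigma = ac_scores.mean(), ac_scores.std()
    return float(mu + 3 * sigma)
```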
### _Detecting Extraction Attacks_
In this experiment, we launch extraction attacks and generate \(50,000\) samples as malicious client's query set \(\mathcal{D}_{adv}\). We evaluate FDINet using DAcc., FPR, and extraction status. The DAcc. serves as a measure of accuracy for detecting malicious queries within the MLaaS system. On the other hand, the FPR quantifies the rate at which negative samples are erroneously classified as positive by the binary detector. It helps evaluate the system's performance in terms of mis-classifying negative instances. Additionally, the ES metric evaluates the fidelity between the proxy model and the victim model.
#### 5.2.1 Detection Accuracy and FPR
Table V presents a summary of the performance comparison between FDINet, PRADA, and SEAT against six model extraction attacks. In our experiments, we examine two query batch sizes, \(bs=50\) and \(bs=500\). Figure 2 illustrates the ROC curve of our method for detecting extraction queries.
In terms of performance, FDINet outperforms PRADA and SEAT with high DAcc. and low FPR. Specifically, FDINet achieves a DAcc. of **100%** and an FPR close to **0.0** for DFME and DaST with a batch size (\(bs\)) of 500. Furthermore, FDINet is capable of identifying malicious clients with just 50 queries. On the other hand, both PRADA and SEAT fail to detect extraction attacks when \(bs\) is set to 50. It should be noted that FDINet achieves a lower DAcc. in CIFAR10 for Knockoff and ActiveThief. This is because the surrogate dataset (CINIC-10) used by Knockoff and ActiveThief has some overlap with CIFAR10. In the ablation study, we will explore the effects of threshold \(\tau_{1}\) and batch size (bs) further, which can be found in Section 5.4.
\begin{table}
\begin{tabular}{c c c c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Strategy**} & \multirow{2}{*}{\(bs\)} & \multirow{2}{*}{\(\tau_{1}\)} & \multicolumn{2}{c|}{**JBA**} & \multicolumn{2}{c|}{**T-RND**} & \multicolumn{2}{c|}{**Knockoff**} & \multicolumn{2}{c|}{**ActiveThief**} & \multicolumn{2}{c|}{**DFME**} & \multicolumn{2}{c}{**DaST**} \\ \cline{4-19} & & & DAcc. & FPR & DAcc. & FPR & DAcc. & FPR & DAcc. & FPR & DAcc. & FPR & DAcc. & FPR \\ \hline \multirow{8}{*}{**Category**} & \multirow{2}{*}{SEAT} & 50 & 0.87 & 9.30 & 0.20 & 11.40 & 0.30 & 5.80 & 0.0 & 5.30 & 0.12 & 2.20 & 0.10 & 13.90 & 0.0 \\ & & 500 & 0.81 & 91.00 & 0.0 & 90.00 & 0.5 & 55.00 & 0.0 & 45.00 & 0.0 & 75.00 & 0.0 & 77.00 & 0.0 \\ \cline{2-19} & & 50 & 0.99 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ & & 500 & 0.97 & **100.0** & 36.00 & **100.0** & 36.00 & 28.00 & 36.00 & 39.00 & 36.00 & 30.00 & 36.00 & 19.00 & 0.0 \\ \cline{2-19} & & 50 & 0.47 & 89.90 & 4.10 & 93.10 & 2.60 & 80.26 & 4.10 & 82.00 & 2.90 & **100.0** & 1.70 & 98.03 & 1.80 \\ & & 500 & 0.48 & 91.00 & 0.0 & 93.00 & 0.0 & 90.00 & 0.0 & 92.00 & 0.0 & **100.0** & 0.0 & 100.0 & 0.0 \\ \hline \multirow{8}{*}{**Category**} & \multirow{2}{*}{SEAT} & 50 & 0.90 & 4.20 & 4.40 & 3.80 & 4.40 & 8.20 & 4.20 & 8.60 & 4.20 & 34.20 & 4.40 & 43.20 & 4.40 \\ & & 500 & 0.88 & 74.00 & 4.00 & 77.00 & 4.00 & 65.00 & 4.00 & 55.00 & 4.00 & 89.00 & 4.00 & 85.00 & 4.00 \\ \cline{2-19} & & 50 & 0.94 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \cline{2-19} & & 50 & 0.93 & 97.00 & 90.00 & 97.00 & 90.00 & 0.00 & 90.00 & 1.00 & 90.00 & 1.00 & 90.00 & 16.00 & 9.00 \\ \cline{2-19} & & 50 & 0.77 & 90.30 & 0.0 & 90.40 & 0.0 & **100.0** & 0.0 & 95.60 & 0.0 & **100.0** & 0.0 & **100.0** & 0.0 \\ & & 500 & 0.78 & 88.00 & 0.0 & 87.00 & 0.0 & **100.0** & 0.0 & 95.00 & 0.0 & **100.0** & 0.0 & **100.0** & 0.0 \\ \hline \multirow{8}{*}{**Category**} & \multirow{2}{*}{SEAT} & 50 & 0.90 & 5.50 & 0.0 & 8.40 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 14.10 & 0.0 & 19.30 & 0.0 \\ & & 500 & 0.82 & 89.00 & 0.0 & 79.00 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 79.00 & 0.0 & 90.00 & 0.0 \\ \cline{2-19} & & 50 & 0.96 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \cline{2-19} & & 50 & 0.95 & **100.0** & 6.00 & **100.0** & 6.00 & 3.00 & 6.00 & 1.00 & 6.00 & 5.00 & 6.00 & 9.00 & 6.00 \\ \cline{2-19} & & 50 & 0.33 & 96.61 & 0.30 & 97.10 & 0.10 & 83.93 & 0.30 & 97.10 & 0.0 & **100.0** & 0.10 & **100.0** & 0.20 \\ \cline{2-19} & & 500 & 0.36 & 95.00 & 0.0 & 96.00 & 0.0 & 93.00 & 0.0 & 93.00 & 0.0 & **100.0** & 0.0 & **100.0** & 0.0 \\ \hline \multirow{8}{*}{**Category**} & \multirow{2}{*}{SEAT} & 50 & 0.90 & 10.20 & 0.0 & 8.90 & 0.0 & 5.30 & 0.0 & 6.00 & 0.0 & 4.50 & 0.0 & 8.40 & 0.0 \\ & & 500 & 0.82 & 69.00 & 2.00 & 73.00 & 2.00 & 62.00 & 2.00 & 63.00 & 2.00 & 81.00 & 2.00 & 87.00 & 2.00 \\ \cline{2-19}
#### 5.2.2 Extraction Status (ES)
ES serves as another important metric for detecting extraction queries. It is a metric introduced by EM [16], which employs information gain to quantify the level of model privacy leakage from the victim model. EM utilizes a local proxy model to monitor the information gain of each client. When the proxy model learns a surrogate model with high fidelity, extraction warnings are sent to the Cloud Service Provider (CSP). Formally, the ES is defined as:
\[es=\frac{100}{|\mathcal{D}_{test}|}\sum_{x\in\mathcal{D}_{test}}\mathbb{1}\left[ F_{V}(x)=F_{V^{\prime}}(x)\right]. \tag{5}\]
Since FDINet does not make use of a proxy model, we use the average confidence (i.e., \((100\times ac)\,\%\)) as the ES for comparison.
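A minimal sketch of Eq. (5), measuring the label agreement between the victim and a proxy (or surrogate) model on the test set:

```python
import torch

@torch.no_grad()
def extraction_status(victim, proxy, test_loader) -> float:
    """ES: percentage of test samples on which victim and proxy labels agree."""
    agree, total = 0, 0
    for x, _ in test_loader:
        agree += (victim(x).argmax(1) == proxy(x).argmax(1)).sum().item()
        total += x.size(0)
    return 100.0 * agree / total
```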
Figure 3 depicts the average ES reported by FDINet and EM for both benign and malicious clients. The results demonstrate that FDINet's ES for benign clients is **below 34.60% for CIFAR10**, **44.20% for GTSRB**, **17.40% for CelebA**, and **9.30% for Skin Cancer**. Additionally, FDINet reports an average ES of **96.08% and 84.94%** for malicious queries on **GTSRB and Skin Cancer**, respectively. On the other hand, FDINet effectively assigns high ES to malicious clients (i.e., JBA, T-RND, Knockoff, ActiveThief, DFME, and DaST). In contrast, EM reports high ES for benign clients due to their significant information gain. Furthermore, the ES of EM is very low for DFME and DaST since these synthetic data samples have low information gain. It is important to note that lower ES is preferable for benign queries, while higher ES is better for malicious queries. Therefore, while EM effectively detects certain extraction attacks, it may also produce considerable false alarms for benign queries.
#### 5.2.3 Memory Costing and Detection Fine-grained
Efficiency is crucial in the context of MLaaS, particularly when dealing with real-time APIs. In the case of security-focused MLaaS, efficiency encompasses two key aspects: resource consumption and detection fine-grained. To assess our method's efficiency, we compare it with state-of-the-art defense methods.
First, PRADA relies on calculating the L2 distance between new and previous samples to identify malicious queries. However, this approach necessitates significant memory storage. Additionally, EM requires the maintenance of a local proxy model for each client, resulting in substantial computational overhead. In contrast, our approach, FDINet, is lightweight and flexible. It does not depend on historical queries and makes no assumptions about the victim model. We conducted an experiment using 50,000 testing queries. The results demonstrate that FDINet achieves a throughput of 838.36 queries per second on the task of CIFAR10. This showcases the efficiency of FDINet in processing queries promptly and effectively.
Furthermore, as shown in Table V, FDINet is efficient in identifying extraction queries using only 50 queries. This highlights the efficiency of FDINet in swiftly and accurately detecting adversaries, thereby maximizing the protection of the victim model.
### _Detecting Distributed Extraction Attacks_
In distributed extraction attack, the adversary distributes malicious queries to \(N(N>1)\) clients. The primary goal of FDINet is not to identify malicious queries, but rather to identify colluding adversaries. To evaluate the performance of FDINet, we simulate an MLaaS system with \(M=100\) clients, each submitting \(50,000\) queries. Among them, \(2\sim 20\) are colluding adversaries who jointly launch the same extraction attack. In this experiment, we set \(bs=500\) and \(n=100\), with a total of \(50,000\) samples. For each pair of clients under inspection, denoted as \(u\) and \(v\), we extracted their FDI vectors. Subsequently, we conducted two-sample
Fig. 3: Results of average Extraction Status for benign and malicious clients (**lower is better for benign clients**). Extraction Status (ES) is a metric proposed by [16] that uses information gain to quantify model privacy leakage from the victim model.
Fig. 2: ROC curve of model extraction attacks detection.
hypothesis tests (as discussed in Proposition 4.1) to determine whether these clients were colluding adversaries. This process allowed us to identify and expose potential collusive behavior among the clients in the MLaaS system. The Colluding Detection Accuracy (CDAcc.) can be formulated as:
\[\text{CDAcc.}=\frac{\sum_{u,v\in[1,N],\,u<v}\mathbb{1}\left[\text{FDINet}(u,v)=J(u,v)\right]}{C_{N}^{2}}\times 100\%, \tag{6}\]
where \(C\) is the combination formula (i.e., \(C_{n}^{m}=\frac{n!}{m!(n-m)!}\)), and \(J\) is the judgment function that returns \(1\) if client \(u\) and client \(v\) are colluding adversaries. In order to speed up the detection of FDINet, we first use binary detection to filter out the benign clients and then set up the two-sample hypothesis tests.
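A minimal sketch of Eq. (6), assuming `predict(u, v)` is the detector's pairwise decision and `truth(u, v)` the ground-truth judgment \(J\); both callables are illustrative.

```python
from itertools import combinations

def cdacc(clients, predict, truth) -> float:
    """Accuracy over all C(N, 2) unordered client pairs."""
    pairs = list(combinations(clients, 2))
    correct = sum(predict(u, v) == truth(u, v) for u, v in pairs)
    return 100.0 * correct / len(pairs)
```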
Figure 4 depicts the confusion matrix for average hypothesis tests' p-values over different clients. When the p-value exceeds 0.05, we accept the null hypothesis (\(\mathcal{H}_{0}\)), indicating that clients \(u\) and \(v\) are colluding adversaries. The observation from Figure 4 indicates that FDINet achieves high p-values along the diagonal of the confusion matrix, indicating its effectiveness in identifying colluding adversaries. However, our method does not perform well in distinguishing between Knockoff and ActiveThief attacks. This challenge arises because both attacks utilize the same surrogate dataset, resulting in similar FDI for these two attacks. Figure 5 demonstrates the effectiveness of FDINet in detecting colluding adversaries within a large-scale MLaaS platform comprising 100 clients. Notably, FDINet achieves an impressive CDAcc. of **over 91%** for all extraction attacks. As Figure 5 illustrates, even as the number of colluding adversaries increases, FDINet maintains high colluding-detection accuracy. This experiment provides compelling evidence that our method can effectively identify colluding adversaries within a large-scale MLaaS platform.
### _Ablation Study_
To further understand how different components influence FDINet, we carry out evaluations on two significant factors within our approach, i.e., threshold \(\tau_{1}\) and batch size \(bs\).
#### 5.4.1 Impacts of Threshold
As discussed in Section 5.1.5, the threshold is a critical factor that affects the DAcc., and the process of selecting a suitable threshold is demanding. To shed light on this matter, we conduct an experiment where we employ various thresholds (ranging from 0.2 to 0.8) to observe the trend in detection accuracy. This experiment aims to provide guidance on selecting an optimal threshold that ensures accurate detection for new datasets.
Figure 6 illustrates the impact of the threshold (\(\tau_{1}\)) on the DAcc. of FDINet. It can be observed that there is a notable decrease in DAcc. as \(\tau_{1}\) increases, particularly for Knockoff and ActiveThief. This decline in accuracy can be attributed to the fact that the surrogate data used by Knockoff and ActiveThief are derived from natural images, which may have a higher degree of feature overlap with \(\mathcal{D}_{train}\). In our approach, the threshold \(\tau_{1}\) represents the tolerance for abnormal samples in a batch query. By increasing the threshold \(\tau_{1}\), we can minimize false alarms, since benign clients might occasionally submit a few abnormal-looking queries.
#### 5.4.2 Impacts of Batch Size
Relying on a single query to identify adversaries can lead to a significant number of false alarms due to the limited entropy within the query's features. To mitigate this, we adopt a majority voting strategy, as explained in Section 4.3. The choice of batch size (\(bs\)) plays a crucial role in determining the overall detection accuracy of our approach. In order to assess the performance of our defense, we conduct an empirical evaluation where we examine the effectiveness of our method across a range of batch sizes, specifically from 2 to 128.

Fig. 4: Illustration of the confusion matrix for average hypothesis tests' p-values over different clients. If the p-value is higher than 0.05, we accept \(\mathcal{H}_{0}\), meaning clients \(u\) and \(v\) are colluding adversaries.

Fig. 5: Performance of colluding adversaries detection for distributed attacks. We consider a 100-client MLaaS platform. Among them, \(2\sim 20\) are colluding adversaries for each attack.
Figure 7 illustrates the impact of batch size (\(bs\)) on the DAcc. of FDINet. It is evident from the figure that as the batch size increases, the DAcc. also improves. Furthermore, Figure 7 highlights that our defense attains high DAcc. for JBA, T-RND, DFME, and DaST using just 64 queries.
## 6 Discussion
### _Adaptive Attacks_
In this section, we explore two specific adaptive attacks: _Feature Correction_ and _Dummy Query_. In these attacks, the adversaries know FDINet and may modify their attack strategies in order to evade our detection mechanisms.
#### 6.1.1 Feature Correction (FeatC)
The adaptive adversary knows our method utilizes the FDI phenomenon to detect malicious queries. Consequently, the adversary can strategically modify the feature maps before submitting them to the MLaaS platform, as discussed in Section 4.4. In this experiment, we make two assumptions about the adversary: (1) the adaptive adversary possesses a pre-trained encoder drawn from the model zoo (VGG11 and ResNet50), and (2) the adversary has access to a mini-batch of training data \(\mathcal{D}_{train}\), which serves as the anchor samples. The adaptive adversary initiates the process by generating 50,000 queries using existing model extraction attacks (such as JBA and T-RND). Subsequently, the adversary applies the L-BFGS optimizer within _FeatC_ to re-correct the feature maps associated with these queries.
Table VI illustrates the performance of FDINet in defending against _FeatC_, where the auxiliary encoders of _FeatC_ are VGG11 and ResNet50. However, there is a slight decrease in DAcc. for Knockoff and ActiveThief, as these model extraction techniques employ natural images that may overlap with the training set. Nonetheless, FDINet continues to be effective in defending against the majority of attacks.
| **Dataset** | \(\hat{F}\) | **JBA** | **T-RND** | **Knockoff** | **ActiveThief** | **DFME** | **DaST** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | VGG11 | 90.30 (6.10) | 93.10 (5.70) | 51.30 (4.70) | 64.00 (6.10) | **100.0** (5.70) | 98.90 (4.70) |
| CIFAR10 | ResNet50 | 91.60 (4.00) | 93.30 (3.40) | 32.10 (2.80) | 43.30 (4.00) | **100.0** (3.40) | 98.70 (2.80) |
| GTSRB | VGG11 | 86.60 (0.0) | 84.50 (0.0) | 99.60 (0.0) | 99.90 (0.0) | 99.90 (0.0) | **100.0** (0.0) |
| GTSRB | ResNet50 | 86.60 (0.0) | 84.50 (0.0) | 99.80 (0.0) | **100.0** (0.0) | **100.0** (0.0) | **100.0** (0.0) |
| CelebA | VGG11 | 90.40 (0.0) | 91.10 (0.0) | 86.80 (0.0) | 96.30 (0.0) | **100.0** (0.0) | **100.0** (0.0) |
| CelebA | ResNet50 | 91.60 (0.0) | 92.30 (0.0) | 87.00 (0.0) | 82.00 (0.0) | **100.0** (0.0) | **100.0** (0.0) |
| Skin Cancer | VGG11 | 90.30 (0.0) | 93.00 (0.0) | 56.40 (0.0) | 76.10 (0.0) | 99.70 (0.0) | 99.30 (0.0) |
| Skin Cancer | ResNet50 | 91.20 (0.0) | 92.20 (0.0) | 64.00 (0.0) | 80.30 (0.0) | **100.0** (0.0) | 99.60 (0.0) |

TABLE VI: DAcc. (%) and FPR (%, in parentheses) of FDINet defending against _FeatC_.
Fig. 6: The performance of FDINet with various thresholds \(\tau_{1}\).
Fig. 7: The performance of FDINet with various batch size \(bs\).
#### 6.1.2 Dummy Query
PRADA introduced the _Dummy Query_ attack, an adaptive strategy where the adversary maintains a normal distribution of distances between queries. Although these queries do not contribute to the surrogate model's construction, they serve the purpose of evading detection. It is assumed that the adaptive adversary possesses complete knowledge of the detection algorithm, including the secret detection threshold value \(\tau_{1}\). With the objective of creating a query set comprising 50,000 samples, the adversary injects benign samples into the submitted queries. In our evaluation, we inject a percentage \(p\%\) of benign samples in the submitted queries, with a batch size (\(bs\)) set to 50. We incrementally increase \(p\) from 0 to 100 in intervals of 10 until the batch queries are predicted as benign by FDINet. Our evaluation provides an estimated lower bound on the number of queries required to evade FDINet detection.
Table VII shows the increased overhead required to circumvent FDINet's detection. The results indicate that our method increases the query overhead by +252.06% to +581.27%. This experiment serves as evidence that, despite the adaptive adversary's ability to distribute queries among multiple clients to evade detection, our method still inflates the adversary's query budget by at least 2.5×.
### _Limitations and Future Work_
**Language model.** This paper primarily focuses on empirical studies conducted in the field of computer vision. However, it is crucial to recognize the significant advancements achieved in language model development. Prominent pre-trained language transformers, including BERT and GPT-3, have been extensively employed in various downstream applications. Nonetheless, these models are still under the threat of model extraction attacks [63, 7]. We believe that our proposed method can be transferred to language models. In the future, we plan to extend this research to encompass language models and devise a novel model extraction detector approach specifically designed for the NLP domain.
## 7 Conclusion
This paper introduces FDI, a metric that quantitatively measures the deviation in the feature distribution of incoming queries. Through FDI, we develop both an extraction attack detector and a colluding adversaries detector. Extensive experiments demonstrate the effectiveness and efficiency of FDINet in detecting extraction attacks. Furthermore, FDINet exhibits robustness in identifying stealthy attacks, including distributed attacks, _Dummy Query_, and _Feature Correction_. We hope this research can contribute to building a more secure MLaaS platform and promoting the scientific community's awareness of defending against model extraction attacks.
|
2305.08339 | Assessing the potential of AI-assisted pragmatic annotation: The case of
apologies | Certain forms of linguistic annotation, like part of speech and semantic
tagging, can be automated with high accuracy. However, manual annotation is
still necessary for complex pragmatic and discursive features that lack a
direct mapping to lexical forms. This manual process is time-consuming and
error-prone, limiting the scalability of function-to-form approaches in corpus
linguistics. To address this, our study explores automating pragma-discursive
corpus annotation using large language models (LLMs). We compare ChatGPT, the
Bing chatbot, and a human coder in annotating apology components in English
based on the local grammar framework. We find that the Bing chatbot
outperformed ChatGPT, with accuracy approaching that of a human coder. These
results suggest that AI can be successfully deployed to aid pragma-discursive
corpus annotation, making the process more efficient and scalable. Keywords:
linguistic annotation, function-to-form approaches, large language models,
local grammar analysis, Bing chatbot, ChatGPT | Danni Yu, Luyang Li, Hang Su, Matteo Fuoli | 2023-05-15T04:10:13Z | http://arxiv.org/abs/2305.08339v4 | # Using LLM-assisted Annotation for Corpus Linguistics
###### Abstract
Chatbots based on Large Language Models (LLMs) have shown strong capabilities in language understanding. In this study, we explore the potential of LLMs in assisting corpus-based linguistic studies through automatic annotation of texts with specific categories of linguistic information. Specifically, we examined to what extent LLMs understand the functional elements constituting the speech act of apology from a local grammar perspective, by comparing the performance of ChatGPT (powered by GPT-3.5), the Bing chatbot (powered by GPT-4), and a human coder in the annotation task. The results demonstrate that the Bing chatbot significantly outperformed ChatGPT in the task. Compared to the human annotator, the overall performance of the Bing chatbot was slightly less satisfactory. However, it already achieved high F1 scores: 99.95% for the tag of APOLOGISING, 91.91% for REASON, 95.35% for APOLOGISER, 89.74% for APOLOGISEE, and 96.47% for INTENSIFIER. This suggests that it is feasible to use LLM-assisted annotation for local grammar analysis, together with human intervention on tags that are less accurately recognized by the machine. We strongly advocate conducting future studies to evaluate the performance of LLMs in annotating other linguistic phenomena. These studies have the potential to offer valuable insights into the advancement of theories developed in corpus linguistics, as well as into the linguistic capabilities of LLMs.
Keywords: linguistic annotation, large language models, local grammar analysis, Bing chatbot, ChatGPT
Previous studies have explored the intersection of Natural Language Processing (NLP) and corpus linguistics (CL) (e.g., Dunn, 2022; Pustejovsky & Stubbs, 2013), which both deal with natural language datasets, i.e., _corpora_. Dunn (2022), for example, showed how NLP models (e.g., text classification models, text similarity models) can be used to annotate large corpora and thus facilitate corpus analysis. Recently, significant advancements have been made in the research of large language models (LLMs) by both academia and industry. LLMs are general purpose models that excel at a wide range of tasks, as opposed to traditional NLP models, which are trained for one specific task (e.g., sentiment analysis, named entity recognition) (J. Wei et al., 2022). Cai et al. (2023), for example, demonstrated that LLM-driven chatbots like ChatGPT can effectively emulate human language processing. In the context of rapidly evolving LLMs, this paper suggests that LLMs can be used for automatic linguistic annotation of corpora, which can then be used to verify or develop linguistic theories.
Recent studies have demonstrated the exceptional performance of LLMs in recognizing several linguistic aspects. Gilardi et al. (2023) found that ChatGPT outperformed crowd-workers in identifying relevance, stance, topics, and frames in text. This shows the potential for low-cost LLM techniques to significantly improve the efficiency of text classification. Kuzman et al. (2023) found that ChatGPT surpassed the X-GENRE classifier, a Transformer-based language model fine-tuned on more than 1,700 texts manually annotated with genres, in the task of automatic genre identification. Their study hints at a new era for text categorization. Zhong et al. (2023) examined quantitatively ChatGPT's understanding ability and found that ChatGPT outperformed all BERT models on inference tasks by a large margin and achieved comparable performance to BERT on sentiment analysis and question-answering tasks, while ChatGPT fell short in handling paraphrase and similarity tasks. Moreover, they emphasized that advanced prompting strategies can be used to further improve the understanding ability of ChatGPT.
In the field of NLP, both academia and industry have begun to develop annotation systems and platforms assisted by LLMs. For example, Wei et al. (2023) proposed ChatIE, a framework for zero-shot information extraction (IE), which has achieved impressive performance on IE tasks such as entity-relation triple extraction, named entity recognition, and event extraction, surpassing even some full-shot models on several datasets (e.g., NYT11-HRL). With the idea of using LLMs to assist the labelling of training datasets, the company Kili Technology has proposed
the Segment Anything Model for ChatGPT pre-annotations of text and image projects1. While NLP scholars have quickly addressed the role of LLMs in NLP annotation tasks, no studies, to the best of our knowledge, have explored the potential of LLMs in assisting linguistic annotation for corpus linguistics. This study used the local grammar approach as an example to investigate to what extent LLMs like ChatGPT and the Bing chatbot can understand, and thus effectively annotate, the functional elements of the speech act of apology with the help of adequate prompting strategies. More specifically, this study compared the performance of ChatGPT, the Bing chatbot, and a human annotator in annotating the local grammar elements of apology in English. The aim was to assess the feasibility of using LLM-assisted annotation for corpus analysis.
Footnote 1: [https://kili-technology.com/](https://kili-technology.com/)
## 2 Linguistic Annotation
In the field of corpus linguistics, linguistic annotation serves to "encode linguistic information within a corpus text in such a way that we can systematically and accurately recover that analysis later" (McEnery & Hardie, 2012, p. 30). Annotated corpora can then contribute to theory formation, theory redefinition or theory enrichment through the use of quantitative linguistic data (Hovy & Lavid, 2010).
The advancement of annotation techniques also contributes to the development of natural language processing (NLP). NLP relies on annotated linguistic data to train Human Language Technologies (HLTs) (Ide, 2017, p. 2). Most HLTs are statistical machine learning (ML) models which often work better when they are provided with annotated corpora (sometimes referred to as labeled data) (Pustejovsky & Stubbs, 2013). Human linguistic annotation has grown to encompass various linguistic phenomena (Ide, 2017, p. 2). It has also led to the emergence of annotation tools that enable the creation and storage of labeled data, collaborative and distributed annotation efforts, and crowdsourcing mechanisms such as Amazon Mechanical Turk (Ide, 2017, p. 2).
### Linguistic Annotation in Corpus Linguistics
In the early stage of corpus linguistics, annotation was often done manually on small corpora, which was time-consuming and labor-intensive. In recent decades, advances
in computer science have provided powerful methods for corpus linguists to annotate linguistic data automatically (Dunn, 2022). There are three main approaches to linguistic annotation: fully automatic, semi-automatic (automatic followed by manual correction), and fully manual (McEnery & Hardie, 2012, p. 30). Several software tools (or _taggers_) are available for automatic annotation of English text, such as CLAWS for part-of-speech tagging (Garside et al., 1987), the Constraint Grammar system for dependency parsing (Karlsson et al., 1995), USAS for semantic tagging (Rayson et al., 2004), and Fidditch for constituency parsing (Hindle, 1983). Part-of-speech tagging is the most common type of linguistic annotation, where every word in the corpus is automatically assigned to its grammatical category (Rühlemann & Aijmer, 2015, p. 5).
Automatic annotation is more challenging for certain types of linguistic analysis.
Pragmatic phenomena, which are often realized in flexible linguistic forms, cannot be easily identified by machines. Despite the challenges, some scholars have attempted to automate pragmatic annotation. They have focused on annotating pragmatic features such as speech acts (Garcia, 2007; Kallen & Kirk, 2012; Stiles, 1992), discourse markers (Kallen & Kirk, 2012), quotation (Kallen & Kirk, 2012; Rühlemann & O'Donnell, 2012), participation roles (Rühlemann & O'Donnell, 2012) and politeness (Recasens et al., 2013). As corpus pragmatics, the intersection of corpus linguistics and pragmatics, develops rapidly, several speech act annotation schemes have emerged, such as Dialogue Act Markup in Several Layers (DAMSL; Allen & Core, 1997), the Speech Act Annotated Corpus (SPAAC; Leech & Weisser, 2003; Weisser, 2003), Dialogue Act Markup Language (DiAML; Bunt et al., 2010), and the recent Dialogue Annotation and Research Tool (DART; Weisser, 2016, 2019).
These tools are great advances that promote the development of corpus pragmatics. However, they are still not widely applied. Only a few corpora have discourse and pragmatic annotations (Rühlemann & Aijmer, 2015, p. 5). The main reason for the limited use of pragmatic annotation is that most pragmatic phenomena do not match their linguistic forms, so automatic tagging is often inaccurate and manual tagging is necessary (but costly and time-consuming) (Rühlemann & Aijmer, 2015, p. 11). DART, for example, is a semi-automatic annotation tool which allows users to identify speech acts (in English) and to create pragmatically annotated corpora (Weisser, 2019, p. 1). However, the results that it produces may be inaccurate and need manual verification (Weisser, 2016, p. 2).
### Linguistic Annotation in NLP
In the field of NLP, linguistic annotation can be done manually or automatically. Manual annotation is done by annotators with expert knowledge and experience in annotation. This method yields high quality, but it is inefficient, time-consuming and costly. Automatic annotation uses rule-based methods or machine learning models to predict the annotation labels. Automatic annotation has the advantages of high efficiency and low cost, but it also has a certain error rate.
Automatic annotation can be realized in two ways: unsupervised and semi-supervised. Unsupervised methods assign pseudo-labels (Zhang et al., 2022) to the corpus by mining the intrinsic properties of the data. These methods usually apply rules specified by experts or clustering algorithms. For example, in dialogue linguistic annotation, some researchers have utilized clustering to mine the intentions contained in dialogue data and locate the slots of those intentions, so as to generate an indefinite number of intentions and slots and transform an unlabeled dataset into a labeled one (Shi et al., 2018).
Semi-supervised methods use a small amount of labeled data and a large amount of unlabeled data to train models for automatic annotation. Specifically, bootstrapping is used to build a classification model based on a small number of labeled samples (Zhang et al., 2022). The trained model then classifies the remaining unlabeled samples, and those classified with high confidence are added to the training set (Wu et al., 2022). In addition, there are tools that can help with automatic annotation in a semi-supervised way, such as Prodigy2. It is a script-enabled data annotation tool for building training and test datasets for machine learning models, which allows users to iterate on models quickly and independently.
Footnote 2: [https://prodi.gv/](https://prodi.gv/)
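As a concrete illustration of the bootstrapping procedure described above, the following is a minimal self-training sketch using scikit-learn. The logistic-regression classifier, the 0.9 confidence threshold, and the round count are illustrative assumptions, not choices made in the cited works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_annotation(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Self-training loop: train on the labeled seed set, pseudo-label the
    unlabeled pool, and absorb high-confidence predictions into training.
    Inputs are numpy arrays; classifier and threshold are illustrative."""
    X_train, y_train, pool = X_lab, y_lab, X_unlab
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]  # pseudo-labels
        X_train = np.vstack([X_train, pool[confident]])
        y_train = np.concatenate([y_train, pseudo])
        pool = pool[~confident]  # keep only the still-unlabeled samples
    return X_train, y_train
```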
This study aims to find a solution for the automatic recognition of speech acts and their components. In NLP, a concept similar to speech act is dialogue act (DA), which is defined as the function of utterances in a conversation. Dialogue Act Recognition (DAR) is a primary task for dialogue processing. The technique is applied in creating human-machine dialogue systems or chatbots (Firdaus et al., 2021; Ostyakova et al., 2022, p. 1). DAR is treated as a classification problem. Types of dialog act considered include categories representing typical speech acts such as _apology_,
_thanking_, _reject_, _agree_, as well as categories such as _self-talk_, _abandon_, and _wh-question_ which are not typically investigated by speech act studies (Yu & Yu, 2019).
From a technical perspective, the speech act annotation is more similar to tasks of sequence tagging, such as Named Entity Recognition (NER), Part-of-Speech tagging (POS), and Semantic Role Labeling (SRL). Several recent works have begun to apply LLMs for sequence tagging tasks. GPT-NER, for example, is a model which transforms the NER task to a generation task that can be solved by LLMs (Wang et al., 2023). Similar techniques can also be adopted to realize linguistic annotation for corpus linguistics. In particular, we propose that it is feasible to develop a speech act annotation tool integrated with LLM techniques.
## 3 Using LLM-assisted Annotation for Local Grammar Analysis: Methods and corpus
This section presents the methodological approach of LLM-assisted annotation for local grammar analysis, which can hopefully be replicated by corpus linguists who would like to adopt LLM-assisted annotation for the analysis of other linguistic phenomena. The approach contains three main procedures: 1) determine the annotation task; 2) design a prompt with which the selected LLM can generate the expected annotated outputs; 3) evaluate the performance of candidate LLMs in order to choose the most suitable one, then compare the performance of that LLM with a human annotator, in order to assess whether a fully automatic or a semi-automatic method is feasible for the annotation task.
### 3.1 Determining the annotation task
As an exploratory study, this paper focused on the task of local grammar annotation. Local grammar is an approach to linguistic analysis that seeks to describe language use associated with one specific meaning or function (Hunston, 2002, p. 178). In particular, the local grammar approach analyzes a specific speech act (e.g., apology, request). Researchers have used this method to study various speech acts such as evaluation (Hunston & Sinclair, 2000; Hunston & Su, 2019), requesting (Su, 2017), apology (Su & Wei, 2018), disclaiming (Cheng & Ching, 2018) and exemplification (Su & Zhang, 2020).
In analyzing a specific speech act, the local grammar approach includes five main analytical procedures: 1) identify lexical markers that are conventionally used to
realize the speech act; 2) search for the lexical markers in a corpus to retrieve instances realizing the target speech act, establishing a subcorpus of the speech act; 3) analyze a sample of corpus instances to identify a set of functional elements that realize the speech act, establishing a codebook of functional elements; 4) _annotate manually_ all the instances according to the codebook; 5) analyze the local grammar patterns of the annotated instances. We suggest that LLMs can be used to develop an automatic annotation tool to assist the fourth step of local grammar analysis.
In this study, we examined the local grammar of the speech act of apology in English, which has been explored in several previous studies (Su, 2021; Su & Wei, 2018). As an exploratory study, we only examined the apology instances realized by _sorry_, the most frequent lexical marker used for apologizing in English (Su, 2021; Su & Wei, 2018). The corpus under investigation consisted of 5539 instances containing _sorry_ retrieved from the Spoken BNC2014.
According to Su and Wei (2018), the local grammar of apology in English contains seven functional elements: _apologiser_ (the one who apologises), _apologising_ (the elements that realize the act of apologising), _forgiveness-seeking_ (the action of seeking forgiveness), _apologisee_ (to whom the apology is made or from whom the apologiser seeks forgiveness), _intensifier_ (the elements that upgrade the degree of regret), _specification_ (the elements that specify the offense/reason), and _hinge_ (the elements that link different functional elements). With adaptations, in our study, the functional tags to be annotated are _apologiser_, _reason_, _apologising_, _apologisee_, and _intensifier_. The element of _forgiveness-seeking_ was not considered because it is mainly realized by lexical markers such as _forgive_ and _pardon_, which were not examined in this study. The element of _hinge_ was not considered because we regarded it as not essential to the functional pattern of apology; moreover, it refers to grammatical words such as _am_, _are_, _for_, _that_, which might complicate the pattern and cause unnecessary machine confusion. The element of _specification_ was renamed _reason_, because we noticed that this term, being more explicit, could be better understood by the language model.
To sum up, the annotation task in this study is to detect the speech act of apology and annotate any functional elements such as APOLOGISING, REASON, APOLOGISER, APOLOGISEE, or INTENSIFIER in the given text.
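Since the expected output is XML-style tagged text, it can be post-processed programmatically for later evaluation. Below is a minimal sketch, assuming tags are whitespace-separated from tokens as in the annotated examples shown later in this paper; `extract_spans` is a hypothetical helper, not part of any cited tool.

```python
import re

TAG = re.compile(r"<(/?)([A-Z]+)>")

def extract_spans(annotated: str):
    """Parse '<APOLOGISING> sorry </APOLOGISING>' style markup into a set
    of (tag, text-span) pairs.  Assumes tags are whitespace-separated from
    the tokens they enclose."""
    spans, current, buf = set(), None, []
    for piece in annotated.split():
        m = TAG.fullmatch(piece)
        if m is None:
            if current is not None:
                buf.append(piece)
            continue
        if m.group(1) == "/":          # closing tag ends the current span
            if current is not None:
                spans.add((current, " ".join(buf)))
            current, buf = None, []
        else:                          # opening tag starts a new span
            current, buf = m.group(2), []
    return spans

# extract_spans("oh <APOLOGISING> sorry </APOLOGISING> <APOLOGISEE> love </APOLOGISEE>")
# -> {('APOLOGISING', 'sorry'), ('APOLOGISEE', 'love')}
```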
### 3.2 Prompt designing

To realize LLM-assisted annotation, a crucial step is to design a suitable prompt for the
specific task. The performance of LLMs depends heavily on the prompt. Prompting is the method of conditioning the language model (Liu et al., 2023). Suitable prompts can steer the LLM to generate the expected output (Liu et al., 2023, p. 195:2). In particular, annotation tasks can be better performed by LLMs using advanced prompting techniques (Kuzman et al., 2023), such as standard few-shot prompting, or in-context learning (Brown et al., 2020); manual few-shot chain-of-thought prompting (J. Wei et al., 2023); and zero-shot chain-of-thought prompting (Kojima et al., 2022). A brief description of these prompting techniques can be found in Zhong et al. (2023).
The selection of prompting strategies is closely related to the type of annotation task. The researcher would need to conduct several experiments to establish a candidate prompt, which should then be fine-tuned and tested on sample texts until it achieves satisfactory performance.
In this study, the prompt designing and testing was conducted on the Bing chatbot. After several exploratory experiments, we established a candidate prompt and tested it on three sets of samples, each containing 100 instances. In the first two rounds of testing, necessary revisions were made to the prompt to enhance the Bing chatbot's annotation performance. In the third round, the fine-tuned prompt led to annotated results with an accuracy rate of 98%, suggesting that it could be used for the target annotation task.
The finalized prompt consists of three parts: exemplars, instruction, and task description (Appendix 1). The prompt was divided into two sets to be input separately since the Bing chatbot had a token limit for each question. The main technique that we used was few-shot prompting, with which we input a set of exemplars so that the LLMs can learn how to annotate the functional tags. Criteria for the selection of exemplars to be provided in the prompt are as follows:
1. Instances selected as exemplars contain highly frequent clusters of _sorry_. We used AntConc to extract clusters of _sorry_ (e.g., _I'm sorry_, _sorry I've_, _really sorry_, _sorry about that_); a rough Python equivalent of this cluster search is sketched after this list. 2-gram, 3-gram, and 4-gram clusters (frequency \(\geq\) 10) were considered. These clusters were searched in the corpus to retrieve instances that could then be selected as examples in the prompt. Repetitive cases were excluded.
2. Our tests showed that the more concise the prompt is, the better the AI performs in the annotation task. Therefore, we cut similar examples that would not affect the annotation results. Moreover, we cleaned and polished the exemplary utterances so that the AI could better understand the prompt.
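As referenced in criterion 1 above, the cluster extraction can be approximated in a few lines of Python. This is a rough equivalent of the AntConc cluster search, not a re-implementation of it; the naive whitespace tokenization here may differ from AntConc's.

```python
from collections import Counter

def marker_clusters(utterances, marker="sorry", min_freq=10):
    """Count 2-, 3-, and 4-gram clusters containing the marker word,
    mirroring the AntConc cluster search described above (a sketch)."""
    counts = Counter()
    for utt in utterances:
        tokens = utt.lower().split()
        for n in (2, 3, 4):
            for i in range(len(tokens) - n + 1):
                gram = tokens[i:i + n]
                if marker in gram:
                    counts[" ".join(gram)] += 1
    # keep only clusters at or above the frequency threshold
    return [(g, c) for g, c in counts.most_common() if c >= min_freq]
```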
In the prompt designing process, we noticed that the following aspects had an impact on the performance of the LLM:
1. Formal layout. The use of textual boundaries such as paragraph segments and the Q&A format has enhanced the LLM's understanding of the prompt.
2. Grammatical correctness. Correcting the grammatical errors present in the instances extracted from the corpus enhanced the performance of the LLM.
3. Semantic accuracy. Using accurate and precise words in the prompt led to better machine performance than using general vague words. For example, the expression of _the speech act of apology_ was understood better than _the utterance of apology_.
4. Semantic emphasis. The machine generated more accurate results when we added the types of functional elements in the final question (_Can you detect the speech act of apology and annotate any functional elements such as APOLOGISER, REASON, APOLOGISEE, APOLOGISING, or INTENSIFIER in the following utterance?3_). Footnote 3: Added information is underlined.
5. Textual conciseness. Both instructions and exemplary instances should be as concise as possible. Complex texts may confuse the machine.
6. Textual order. The order of textual units was related to the priority of attention that the machine paid to the task. In our annotation task, one major difficulty concerned the functional element of REASON. We noted that the machine performed better in this respect after we moved the exemplars containing the functional element of REASON from the last lines to the first lines.
7. Semantic clearness. Tags that are semantically clear, explicit and self-evident were better understood by the LLM. For example, using the tag
\(<\)REASON\(>\) to substitute the tag \(<\)SPECIFICATION\(>\) has enhanced the performance of the Bing chatbot in the annotation of the element that indicates the reason for the apology.
8. Elimination of inadequate words. When the texts to be annotated contain sensitive words such as swear words, the Bing chatbot is not able to generate annotated texts that reproduce these words.

### 3.3 Evaluating the performance of LLMs in the annotation task

A wide range of linguistic theories have been developed in the field of corpus linguistics. Although several studies have shown the impressive performance of LLMs in certain linguistic annotation tasks, it is still unknown whether they can understand other linguistic phenomena such as appraisal strategies (Martin & White, 2005), rhetorical moves (Bhatia, 1993; Swales, 1990), metadiscourse strategies (Hyland, 2004) and local grammar patterns of speech acts (Su & Wei, 2018). Therefore, before entrusting a machine to produce an annotated corpus, it is necessary for the linguist to evaluate the performance of LLMs in annotating the linguistic aspect under investigation. The evaluation can include at least two steps: first, conduct experiments on different LLMs to select the one most suitable for the target annotation task; second, compare the performance of the selected LLM with that of human annotators to decide whether to use fully automatic annotation or semi-automatic annotation, where manual checking of specific cases is still needed.
In this study, we compared the performance of the Bing chatbot, ChatGPT and a human annotator in the local grammar annotation task. The annotation experiments were conducted on samples extracted from the corpus and included two main steps: 1) compare the performance of the Bing chatbot and ChatGPT in annotating 50 instances; 2) compare the performance of the Bing chatbot and the human annotator in annotating 1000 instances. The next section presents the results of these experiments.
## 4 Experimental Results and Analysis
### 4.1 The performance of the Bing chatbot and ChatGPT
The experiments were conducted on the websites of [https://www.bing.com/new](https://www.bing.com/new) (Mode of More Precise) and [https://chat.openai.com/](https://chat.openai.com/). Firstly, we input the same prompt (Appendix 1) into the chatbots of Bing and ChatGPT. Secondly, we input into the chatbots a sample of 50 instances extracted randomly from the corpus. Finally, we
collected the annotated instances from the generated texts and checked the instance-level accuracy. The results showed that the Bing chatbot performed much better than ChatGPT in the local grammar annotation task (Table 1).
The deficiencies of ChatGPT mainly consisted of: 1) confusion of tags (e.g., annotating _sorry_ as APOLOGISER) (see Appendix 2); 2) misidentification of tags such as REASON, INTENSIFIER, etc. (see Appendix 2); 3) inconsistent completion formats (i.e., not generating texts according to the format indicated in the prompt). Considering that the unsatisfactory performance of ChatGPT might be due to an inadequate prompt, we made further attempts to find more suitable prompts. Finally, we found that ChatGPT was able to identify the functional elements of apology more accurately only when texts irrelevant to the speech act of apology were excluded from the texts to be annotated. In other words, completing our annotation task with ChatGPT requires two steps: first, identification of texts relevant to the speech act of apology; second, assignment of tags to the cleaned texts. However, this two-step procedure might introduce unnecessary noise into the results. Therefore, we then focused on testing the performance of the Bing chatbot on a larger sample, comparing it with human annotation.
### 4.2 The performance of the Bing chatbot and the human annotator
A sample of 1000 instances randomly extracted from the corpus was used to test the annotation performance of the Bing chatbot and the human annotator. To collect the annotated results of the Bing chatbot, the 1000 instances were input into the chatbot one by one, because we noticed that it tended to generate inaccurate results if several instances were input at the same time. The annotation task was conducted from 11 April to 28 April. An example of annotated text can be found in Appendix 3.
Afterwards, we involved a human annotator for the annotation of the same set
| | **Bing chatbot** | **ChatGPT** |
| --- | --- | --- |
| No. of tested instances | 50 | 50 |
| No. of correctly annotated instances | 42 | 25 |
| Instance-level accuracy rate | 84% | 50% |

Table 1: Instance-level accuracy of the annotated results of the Bing chatbot and ChatGPT
of instances. The training process was similar to the one we conducted with the Bing chatbot: firstly, we provided the annotator with an instructive text similar to the prompt we input to the Bing chatbot; secondly, we asked the annotator to use NoteTab to annotate three sets of samples (each containing 100 instances) to ensure a correct understanding of the annotation task; finally, after the third set of samples reached 100% accuracy, the annotator started to annotate the sample of 1000 instances. As reported by the annotator, the time used to complete the task was four hours.
The annotated texts generated by the Bing chatbot and the human annotator were then checked by the authors. Afterwards, we applied precision, recall and F1 to evaluate the annotation results. These metrics can be computed by counting the entries of the confusion matrix shown in Table 2.
Precision is a metric that measures how many of the positive predictions are actually positive.
\[precision=\frac{TP}{TP+FP} \tag{1}\]
Recall is a metric that measures how many positive examples in the dataset were predicted correctly.
\[recall=\frac{TP}{TP+FN} \tag{2}\]
F1 reflects the accuracy and coverage of the prediction results for a certain class.
It is the harmonic mean of precision and recall.
\[F_{1}=\frac{2*precision*recall}{precision+recall} \tag{3}\]
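For reference, the per-tag computation of Eqs. (1)-(3) over span-level annotations can be sketched as follows. `evaluate_tags` is a hypothetical helper operating on (tag, span) pairs such as those produced by the `extract_spans` sketch in Section 3, and exact-match span comparison is an assumption of this sketch.

```python
from collections import Counter, defaultdict

def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw counts (Eqs. 1-3)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def evaluate_tags(gold, predicted):
    """Per-tag P/R/F1 over parallel lists of (tag, span) annotations,
    one collection per instance; exact span matching is assumed."""
    counts = defaultdict(lambda: [0, 0, 0])        # tag -> [tp, fp, fn]
    for g, p in zip(gold, predicted):
        g_cnt, p_cnt = Counter(g), Counter(p)
        for (tag, span), n in p_cnt.items():
            hit = min(n, g_cnt[(tag, span)])
            counts[tag][0] += hit                  # true positives
            counts[tag][1] += n - hit              # false positives
        for (tag, span), n in g_cnt.items():
            counts[tag][2] += max(0, n - p_cnt[(tag, span)])  # false negatives
    return {tag: prf(*c) for tag, c in counts.items()}
```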
As shown in Table 3, the overall performance of the Bing chatbot was slightly less satisfactory than human performance, while it performed slightly better in annotating the tag of REASON.
| | Predicted Positive (P) | Predicted Negative (N) |
| --- | --- | --- |
| Actual Positive (P) | True Positive (TP) | False Negative (FN) |
| Actual Negative (N) | False Positive (FP) | True Negative (TN) |

Table 2: Confusion matrix
Table 3: Results of 1000 instances annotated by the Bing chatbot and the human annotator (per-tag precision, recall and F1)
#### 4.2.1 Recognition of NO APOLOGY
Instances containing the lexical marker _sorry_ do not indicate the presence of the direct speech act of apology in two main cases: 1) when _sorry_ was used to express sadness through sympathy with someone else's misfortune (Example 1); 2) when the apology was mentioned in the indirect speech (Example 2).
1. so I'm very _sorry_ to hear that
2. and he said _sorry_ about that
Among the sample of 1000 instances, 98 should be recognized as NO APOLOGY. The Bing chatbot incorrectly recognized 28 of these as apologies, while the human annotator mistook 11 cases. Among the Bing chatbot's 28 mistaken cases, 20 concern the mention of an apology in indirect speech. This inaccuracy may be due to insufficient instruction in the prompt, which does not indicate explicitly that indirect speech containing an apology should be recognized as NO APOLOGY.
#### 4.2.2 Recognition of APOLOGISING
In local grammar analysis, the functional element of APOLOGISING can be realized by lexical markers such as _sorry_, _apologize_, _apologies_, etc. In the present study, we examined only instances containing the lexical marker of _sorry_. In the instances where
the speech act of apology is present, each lexical marker of _sorry_ should be annotated with the tag \(<\)APOLOGISING\(>\) (Example 3).
(3) oh \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\)\(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) I just saw it waving at me (Annotated by the Bing chatbot)
Among the 902 instances containing the speech act of apology, only one case was not annotated as APOLOGISING by the Bing chatbot and the human annotator. This means that the Bing chatbot performs excellently in recognizing this functional element which is related to a fixed form (i.e., _sorry_) in our annotation task.
#### 4.2.3 Recognition of REASON
The functional element of REASON refers to the reason for the speech act of apology. The recognition of this functional element requires strong capacities in the understanding of the semantic meaning and the pragmatic function of language. The present study showed that among the 121 cases of REASON, the Bing chatbot correctly annotated 108 cases (Example 4). Only 13 cases were not detected (Example 5), while 6 cases were incorrectly recognized (Example 6). This means that GPT-4 has strong capacities in understanding both the semantic and pragmatic aspects of language.
(4) oh \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\)\(<\)REASON\(>\) I missed it \(<\)/REASON\(>\) (Annotated by the Bing chatbot)
(5) \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) I forgot your birthday (Annotated by the Bing chatbot, REASON not identified)
(6) \(<\)APOLOGISER\(>\) I \(<\)/APOLOGISER\(>\)'m \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\)\(<\)REASON\(>\) I'm just trying to make it nice for you \(<\)/REASON\(>\) (Annotated by the Bing chatbot, REASON incorrectly identified)
In terms of manual annotation, the data show that the human annotator did not outperform GPT-4 in the annotation of REASON: 17 cases were neglected (Example 7), while 8 cases were incorrectly identified as REASON (Example 8). Obviously, this
does not mean that the annotator was unable to recognize the reason for an apology; the errors were most probably due to cognitive fatigue and time constraints.
(7) yeah \(<\)APOLOGISER\(>\) I \(<\)/APOLOGISER\(>\)'m \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) I was just just a bit distracted by his (Human annotation, REASON not identified)
(8) oh \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) \(<\)REASON\(>\) I 'll tell you about this \(<\)/REASON\(>\) (Human annotation, REASON incorrectly identified)
#### 4.2.4 Recognition of APOLOGISER
The functional element of APOLOGISER refers to the person who apologizes. Its typical lexical forms in English are \(I\) and _we_ (Example 6 and Example 7). The Bing chatbot identified all 164 cases of APOLOGISER in the sample, while the human annotator inattentively neglected two cases. However, the AI incorrectly classified 16 cases as APOLOGISER, where the lexical form of \(I\) was actually used as the subject of other speech acts (Example 9).
(9) \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\)\(<\)APOLOGISER\(>\) I
\(<\)/APOLOGISER\(>\)'m not saying the same thing (Annotated by the Bing chatbot)
#### 4.2.5 Recognition of APOLOGISEE
The functional element of APOLOGISEE refers to the person to whom the apology is made (Example 10). Like the functional element of REASON, it is not realized by fixed lexical forms, and its recognition requires strong language-understanding capacities from the machine. The Bing chatbot's recall rate for this tag is relatively lower than for APOLOGISER and APOLOGISING, which are related to conventional linguistic forms: among the 42 cases of APOLOGISEE, 7 cases were not recognized (Example 11), while one case was mistakenly identified. The human annotator performed slightly better than the AI: only 5 cases were neglected, while no case was incorrectly identified.
(10) oh \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\)\(<\)APOLOGISEE\(>\) love \(<\)/APOLOGISEE\(>\) (Annotated by the Bing chatbot)
(11) \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) man I 'll have a go (Annotated by the Bing chatbot)
#### 4.2.6 Recognition of INTENSIFIER
INTENSIFIER refers to the element that upgrades the degree of apology. Among the 44 cases of INTENSIFIER in the sample, only one case was neglected by the human annotator, while three cases were not identified by the Bing chatbot. However, this machine error does not indicate an incapacity in meaning understanding, because the Bing chatbot was able to recognize the same linguistic form in some cases while neglecting it in others. For instance, Example (12) and Example (13) were both annotated by the Bing chatbot: in the former, _very_ was identified as INTENSIFIER, while in the latter, _very_ was neglected.
(12) \(<\)INTENSIFIER\(>\) very \(<\)/INTENSIFIER\(>\)\(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) (Annotated by the Bing chatbot)
(13) \(<\)APOLOGISER\(>\) I \(<\)/APOLOGISER\(>\)'m very \(<\)APOLOGISING\(>\) sorry \(<\)/APOLOGISING\(>\) (Annotated by the Bing chatbot)
We are unable to explain why this kind of error occurs, since the inner workings of the "black box" of LLMs are still obscure. If we think of artificial intelligence as analogous to human intelligence, this type of error resembles a lapse of attention or concentration.
### 4.3 Summing up
This section has evaluated the performance of the Bing chatbot, ChatGPT, and a human annotator in the annotation task of local grammar elements. The results show that the Bing chatbot outperformed ChatGPT (the version powered by GPT-3.5) in several respects. First, the Bing chatbot generated output in a more stable way, while ChatGPT tended to answer differently in each conversation turn. Second, the Bing chatbot could consistently present the annotated texts according to the form indicated in the prompt. Third, ChatGPT often confused the tags (e.g., using \(<\)APOLOGISER\(>\) in place of \(<\)APOLOGISING\(>\)), while the Bing chatbot showed a high accuracy in the use of tags. Fourth, the Bing chatbot showed stronger abilities in understanding the local grammar tags. We were aware that the superior performance of the Bing chatbot might be related to the fact that the prompt used was initially
designed and fine-tuned on the Bing chatbot. Consequently, we designed and tested several alternative prompts with ChatGPT, but it did not yield satisfactory results in the given task. Therefore, we suggest that the LLM of GPT-4 would be a better choice for the annotation of local grammar elements.
To assess whether the given annotation task could be done fully automatically or still need human intervention, we compared the performance of the Bing chatbot and a human annotator. The data show that the Bing chatbot achieved a high accuracy rate (92.7%) at the instance-level, which was only slightly lower than that of the human annotator (95.4%). The impressive overall performance suggests that it would be feasible to apply LLM-assisted annotation for local grammar analysis.
At the tag-level, the Bing chatbot showed varying performance, as did the human annotator. The accuracy was related to some extent to the degree of flexibility of the linguistic forms representing a local grammar function. First, tags related to highly conventional forms were annotated more accurately. For example, in terms of the tag of APOLOGISING realized by _sorry_, both the Bing chatbot and the human annotator achieved an F1 score of 99.95%. For the tag of APOLOGISER, mostly related to \(I\), the Bing chatbot achieved an F1 score of 95.35%, while the human annotator achieved 99.39%. However, the high degree of correlation between function and form is sometimes a main reason for machine errors: the Bing chatbot stubbornly recognized the form of \(I\) as APOLOGISER in some irrelevant cases (Precision = 91.11%), while this kind of error was not made by the human annotator (Precision = 100%).
Functional tags realized by flexible linguistic resources tend to be less accurately recognized by both human and machine. The tags of REASON and APOLOGISEE are typical cases: the F1 score achieved by the Bing chatbot is respectively 91.91% and 89.74%, while the human annotator achieved respectively 89.27% and 93.67%. However, the Bing chatbot already performed impressively in annotating this type of tags, demonstrating strong capabilities in understanding the pragmatic and semantic aspects of language.
The most noticeable weakness of the Bing chatbot concerns the recognition of NO APOLOGY: its recall rate is only 71.43%, whereas the human annotator achieved 88.78% in this respect. More specifically, the Bing chatbot tended to 1) misread cases where _sorry_ was used to express sympathy with someone, and 2) misclassify as apologies cases where the apology was merely mentioned in indirect speech. While the former mistake might indicate a weakness in context understanding, the latter could probably be avoided by providing relevant exemplars in the prompt.
Drawing on the experimental results, this study suggests that GPT-4 can be used to realize automatic annotation of local grammar elements in speech acts. However, to achieve more reliable results, human intervention would be needed to check specific functional tags. Identifying which tags need human verification would require evaluative experiments in terms of specific annotation tasks.
## 5 Conclusion
To assess to what extent LLM-assisted annotation can be applied to local grammar analysis, this study evaluated the performance of LLMs in annotating the local grammar elements constituting the speech act of apology in English. The results show that the Bing chatbot (GPT-4) outperformed ChatGPT (GPT-3.5) in the given annotation task. Although the overall accuracy of the Bing chatbot is slightly lower than that of the human annotator, it is still quite impressive. This suggests that using a large language model to assist annotation for local grammar analysis is highly feasible. In particular, the prompt provided in this study can be used as a model to assist local grammar analysts in automatically annotating other speech acts.
However, it would be imprudent to rely solely on fully automatic annotation for a study. We recommend that human intervention is still necessary to review tags that are less accurately annotated by the machine. In our study, we observed that the performance of the Bing chatbot varied depending on the type of tag. Generally, the F1 score tended to be higher for tags associated with conventional linguistic forms and lower for tags realized through flexible linguistic forms. However, this tendency is not universally applicable. The Bing chatbot sometimes adhered too rigidly to specific forms and recognized them even when they were used in unrelated contexts. Therefore, conducting in-depth analysis through evaluative experiments is essential to identify the specific tags that require human intervention.
In terms of limitations, this study has focused only on investigating the local grammar of the speech act of apology in English. The extent to which LLMs can comprehend other linguistic aspects in different languages remains unknown. Therefore, we encourage future studies to assess the annotation capabilities of LLMs across a wider range of linguistic phenomena. For instance, appraisal strategies (Martin & White, 2005), rhetorical moves (Bhatia, 1993; Swales, 1990), metadiscourse strategies (Hyland, 2004), etc., could be explored. The methodological approach proposed in this study can serve as a valuable reference for such investigations. These endeavors can help assess the feasibility of LLM-assisted annotation for specific types of linguistic analysis. It is important to note that the feasibility would depend on the selection of the LLM and the design of prompts. Furthermore, these exploratory studies would not only enhance our understanding of LLMs but also contribute to the advancement of the linguistic theories under investigation.
For future work, we propose to integrate LLMs with annotation platforms such as INCEpTION Platform (Klie et al., 2018), which supports interactive and semantic annotation tasks (Klie et al., 2018). This would enable LLM-assisted annotation for corpus linguists. Moreover, some researchers are developing LLM-based annotation platforms that could be useful when they become available.
|
2308.01487 | Data-Driven Nonlinear TDOA for Accurate Source Localization in Complex
Signal Dynamics | The complex and dynamic propagation of oscillations and waves is often
triggered by sources at unknown locations. Accurate source localization enables
the elimination of the rotor core in atrial fibrillation (AFib) as an effective
treatment for such severe cardiac disorder; it also finds potential use in
locating the spreading source in natural disasters such as forest fires and
tsunamis. However, existing approaches such as time of arrival (TOA) and time
difference of arrival (TDOA) do not yield accurate localization results since
they tacitly assume a constant signal propagation speed whereas realistic
propagation is often non-static and heterogeneous. In this paper, we develop a
nonlinear TDOA (NTDOA) approach which utilizes observational data from various
positions to jointly learn the propagation speed at different angles and
distances as well as the location of the source itself. Through examples of
simulating the complex dynamics of electrical signals along the surface of the
heart and satellite imagery from forest fires and tsunamis, we show that with a
small handful of measurements, NTDOA, as a data-driven approach, can
successfully locate the spreading source, leading also to better forecasting of
the speed and direction of subsequent propagation. | Chinmay Sahu, Mahesh Banavar, Jie Sun | 2023-08-03T00:57:29Z | http://arxiv.org/abs/2308.01487v2 | # Data-Driven Nonlinear TDOA for Accurate Source Localization in Complex Signal Dynamics
###### Abstract
The complex and dynamic propagation of oscillations and waves is often triggered by sources at unknown locations. Accurate source localization enables the elimination of the rotor core in atrial fibrillation (AFib) as an effective treatment for such severe cardiac disorder; it also finds potential use in locating the spreading source in natural disasters such as forest fires and tsunamis. However, existing approaches such as time of arrival (TOA) and time difference of arrival (TDOA) do not yield accurate localization results since they tacitly assume a constant signal propagation speed whereas realistic propagation is often non-static and heterogeneous. In this paper, we develop a nonlinear TDOA (NTDOA) approach which utilizes observational data from various positions to jointly learn the propagation speed at different angles and distances as well as the location of the source itself. Through examples of simulating the complex dynamics of electrical signals along the surface of the heart and satellite imagery from forest fires and tsunamis, we show that with a small handful of measurements, NTDOA, as a data-driven approach, can successfully locate the spreading source, leading also to better forecasting of the speed and direction of subsequent propagation.
Localization, non-linear, TDOA, rotors, atrial fibrillation, forest fire, tsunami.
## I Introduction
Target location estimation from data (source and target localization) using radio and acoustic signals has been the subject of research for decades, and it continues to receive interest in the signal processing community in research related to radar [1, 2], sonar [3], multimedia [4], underwater acoustic communication [5], animal tracking [6], mobile communications [7], wireless sensor networks [8], and GPS [9]. Still, complexities in signal propagation through non-homogeneous media [10, 11], together with noise and uncertainty in received signal strength, make target localization a challenging task in many complex and dynamical systems.
Localization is essential in most wireless sensor network applications. In general, target estimation mostly utilizes the following methods: time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA), and received signal strength (RSS). Hybrid algorithms such as large aperture arrays have been investigated to improve the quality of received-signal measurements [12]. TOA and TDOA are the most versatile and widely used localization methods. TOA usually estimates the distance by using one-way ranging and the signal propagation speed of the transmitter. When the signal transmission start time is known in a synchronous network, TOA implementation is fairly straightforward. However, in most cases, the networks are asynchronous [13]. As the signal transmission start time is unknown in most cases, TOA becomes inapplicable. In such cases, TDOA can be applied, as it uses the difference in signal arrival time at different anchor nodes for target localization. Hence, knowledge of the signal transmission start time is not required. However, TDOA cannot localize an unknown target without knowledge of the speed of the signal. It also fails to solve for an unknown target in a non-homogeneous medium. In this paper, we present algorithms (mTDOA and NTDOA) that jointly estimate the source of the signals, the origin time, and the speed of propagation. Our algorithms can solve the localization problem without knowing the time of origin or the speeds of propagation of signals through complex media.
Our contributions are:
* We present two algorithms, mTDOA and non-linear TDOA (NTDOA), as solutions to the source localization problem in non-homogeneous and complex environments.
* We validate the effectiveness of the proposed algorithms with three real-life complex dynamical problems: atrial fibrillation, forest fires, and tsunamis.
* We demonstrate that our algorithms can be extended to estimate other parameters (other than the source) such as the speed of propagation.
The rest of this paper is organized as follows. We provide background in Section II, followed by the problem statement in Section III. Our proposed solutions are in Section IV and are validated numerically in Section V. Finally, concluding remarks and future work are presented in Section VI.
## II Literature Review
TDOA (Time Difference of Arrival) is a popular localization algorithm known for its ease of use [14, 15, 16, 10]. In TDOA, a reference anchor node is selected, and the time difference between it and other anchor nodes is estimated. The algorithm assumes that the signal propagates at a known, constant speed [17]. It measures the difference in the time it takes for a line-of-sight (LOS) signal to arrive at different anchor nodes, allowing the source of the signal to be localized [18]. This method of calculating a target's location is called multilateration. TDOA is useful in applications such as surveillance. Iterative algorithms, including MLE and constrained optimization methods [19, 20] are used to solve TDOA localization problems. Some attempts have been made to solve the problem without iteration, using simple operations such as
matrix inversion [21, 22]. However, if the signal propagation speed is unknown or varying, target localization becomes challenging [17].
Unknown signal propagation speeds in varying media make target localization challenging, as in the case of AFib, where surgical ablation therapy is used to eliminate affected cardiac tissue. However, identifying the source is complex due to the intricate heart structure and spiral wave patterns [23]. An example AFib signal simulated with the FitzHugh-Nagumo model is shown in Figure 1 [24]. In [24], a triangulation-based localization algorithm is proposed, which is a variant of the time difference of arrival (TDOA) algorithm. Here, three anchors (i.e., sensors) are placed at known locations in the path of the spiral waves. The speed of propagation of the waves is assumed to be known. The arrival times of the waves at the probes are recorded, and using the speed of the propagating wave and the locations of the probes, the center of the spiral wave is estimated as the intersection of two hyperbolas associated with two pairs of probes.
Similar to the heterogeneity of signal propagation on the surface of the heart, the spread of forest fires can vary based on wind direction, wooded area, and terrain. Remote sensing via computers and satellites is limited due to smoke occlusion and low camera resolution, hindering wildfire suppression. Unmanned aerial vehicles (UAVs) equipped with computer vision and GPS systems are used to detect and manage wildfires, which is a complex task as forests are non-structured environments [25]. Using vision sensors and GPS systems to determine the fire origin location is intricate, and very few researchers are working on this problem [26].
Tsunami localization is a crucial research area, where wave velocity can vary due to underwater topography changes [27]. Tsunameters are used for early detection and real-time measurements of deep-ocean tsunamis to improve our understanding of these phenomena and develop effective mitigation strategies [28]. Though there are numerous methods [29, 30, 31, 32, 33] to estimate source location and wave velocity, they are inherently expensive and complex to implement [5].
In this paper, we present a nonlinear time difference measurement-based algorithm (NTDOA) that jointly estimates the source of pulsating waves, the origin time, and the speed of propagation. Our algorithm, which can be interpreted as a nonlinear generalization of TDOA, can solve the localization problem without knowing the time of signal origin or the varying speeds of propagation in non-homogeneous media. We validate the effectiveness of the proposed approach using simulated data and data collected from forest fires and tsunamis.
## III Problem Statement
Phenomena such as tsunamis, forest fires, and electrical flow in AFib are typically modeled using complex dynamical systems described by coupled differential equations. In order to derive our estimators, we use an observation-based model across all these types of systems, irrespective of the actual generation model. To do this, we define the forward models that the observations most closely resemble, leading to the derivation of the estimator as the inverse problem. In what follows, we describe these forward models, starting from a simple isotropic model with known propagation speed, to an isotropic model with a fixed but unknown speed, and finally, a model where the speed of propagation varies at different points in the medium.
In this article, the following notations are introduced and will be used consistently throughout:
* unknown source, \(\mathbf{r}_{0}\in\mathbb{R}^{3}\)
* anchors at known locations, \(\mathbf{r}_{l}\) with \(l=1,\ldots,N\)
* (observed) signal arrival time, \(t_{l}\) (\(l=1,\ldots,N\)) which represents the time at which each anchor detects the arrival of the signal from the propagation source
* (unobserved) anchor-to-source distances, denoted as \(d_{l}=\|\mathbf{r}_{l}-\mathbf{r}_{0}\|\) (unknown)
The general problem can be described as follows. Given the locations of \(N\) anchors, denoted as \(\{\mathbf{r}_{l}\}_{l=1}^{N}\), with observed signal arrival times \(\{t_{l}\}_{l=1}^{N}\), the goal is to estimate the unknown location of the propagation source \(\mathbf{r}_{0}\).
### _Case-I (Isotropic medium with known speed of propagation)_
In the ideal case, we assume that a radiating source at an unknown location \(\mathbf{r}_{0}\in\mathbb{R}^{3}\) is propagating a signal in all directions in an isotropic medium, with the signal propagation speed \(c\) known and constant (see Figure 2(a)). Signal transmission starts at an unknown time \(t_{0}\). There are \(N\) anchors placed at known locations. The \(l\)-th anchor, located at \(\mathbf{r}_{l}\) detects the signal at time \(t_{l}=t_{0}+d_{l}/c\), where \(d_{l}=\|\mathbf{r}_{l}-\mathbf{r}_{0}\|\). In most cases we encounter, the measurements of \(\{t_{l}\}\) are noisy. With these noisy \(t_{l}\) values and the locations of the anchors known, our task is to estimate the location of the radiating source. Note that in this case the problem can be solved by the classical time delay of arrival algorithm [17, 34].
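The Case-I forward model is straightforward to simulate; a minimal sketch follows, with the anchor geometry, speed, and noise level chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def arrival_times(r0, anchors, c, t0, noise_std=0.0):
    """Case-I forward model: t_l = t0 + ||r_l - r0|| / c, optionally
    perturbed by Gaussian measurement noise on the arrival times."""
    d = np.linalg.norm(anchors - r0, axis=1)
    return t0 + d / c + rng.normal(0.0, noise_std, size=len(anchors))

# five anchors scattered in 3-D around a source at the origin (illustrative)
anchors = rng.uniform(-10.0, 10.0, size=(5, 3))
t = arrival_times(np.zeros(3), anchors, c=3.0, t0=0.5, noise_std=1e-3)
```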
Fig. 1: A cardiac wave is simulated using a modified FitzHugh–Nagumo model [24]. Each sub-figure shows an evolution of the system, with the rotor model being visible in the stripe identified by the white arrow.
### _Case-II (Isotropic medium with unknown, but constant speed of propagation)_
In the second, more advanced case, we drop the assumption that the speed of propagation is known while maintaining the assumption that the speed is constant. Here, the radiating source located at \(\mathbf{r}_{0}\) is propagating a signal in all directions in an isotropic medium, with an unknown, but constant signal propagation speed \(c\) (see Figure 2(a)). Signal transmission starts at an unknown time, \(t_{0}\). There are \(N\) anchors placed at known locations. The \(l\)-th anchor, located at \(\mathbf{r}_{l}\), detects the signal at time \(t_{l}=t_{0}+d_{l}/c\), where \(d_{l}=\|\mathbf{r}_{l}-\mathbf{r}_{0}\|\).
### _Case-III (Anisotropic medium with unknown speed of propagation)_
In the third and most challenging case, we consider the most practical setup by assuming that the signal propagation speed is not known and is variable due to the presence of non-homogeneous media in the transmission field.
Here, a source at \(\mathbf{r}_{0}\) starts transmitting at time \(t_{0}\). There are \(N\) anchors placed at known locations. The speed of propagation of the signal can vary across the medium (see Figure 2(b)) due to it being non-homogeneous. The \(l\)-th anchor, located at \(\mathbf{r}_{l}\) detects the signal at time:
\[t_{l}=t_{0}+t_{0,l}(\mathbf{r}_{0},\mathbf{r}_{l},\mathbf{s}_{0,l}), \tag{1}\]
where \(t_{0,l}(.)\) is a non-linear function that calculates the propagation time between the source \(\mathbf{r}_{0}\), the anchor \(\mathbf{r}_{l}\), and the path between them, \(\mathbf{s}_{0,l}\).
## IV Solutions
In the earlier section, we described forward models, starting from a simple isotropic model with known propagation speed, to an isotropic model with a fixed, but unknown speed, and finally, a non-isotropic model, where the speed of propagation can vary at different points in the medium. In what follows, we propose solutions for each case leading to the derivation of the estimator as the inverse problem together with computational methods of solving them.
### _Case-I (Isotropic medium with known speed of propagation)_
_Inverse problem:_ With \(N\) anchors, the unknown target location and unknown start time can be estimated by solving:
\[\mathbf{x}=\underset{\mathbf{r}_{0},t_{0}}{\operatorname{argmin}}\sum_{l=1}^{ N}\left[c^{2}(t_{l}-t_{o})^{2}-\|\mathbf{r}_{l}-\mathbf{r}_{0}\|^{2}\right]^{2}, \tag{2}\]
where \(\mathbf{x}\) contains the estimates of the target location \(\mathbf{r}_{0}\), and the unknown start time \(t_{0}\). The location of the \(l\)-th anchor is \(\mathbf{r}_{l}\), and the time at which the signal passes it is \(t_{l}\).
Note that this optimization problem also has a matrix solution given by [17, 34]:
\[\mathbf{x}=\mathbf{H}^{\#}\mathbf{b}, \tag{3}\]
where the vector of unknowns \(\mathbf{x}\) contains the target location \(\mathbf{r}_{0}\) and the unknown start time \(t_{0}\). \(\mathbf{H}^{\#}\) is the pseudo-inverse of \(\mathbf{H}\), which contains time difference data and the corresponding distances between anchors. \(\mathbf{b}\) is the measurement vector from \(N\) anchors containing anchor location information. Since this is the classical TDOA problem, the solution is well known [34, 35]. The solution discussed here using Eqn. (3) is not necessarily optimal when the observations are noisy [36].
Fig. 2: (a) The source (star in the middle) starts transmitting at an unknown time, \(t_{0}\). The speed of propagation of the signal is fixed at all points. For Case-I (Section III-A), it is assumed that the speed of propagation is known, and for Case-II (Section III-B), it is unknown. The signal is received at the anchors at times \(t_{l}\), with \(t_{1}=t_{2}\) since the two anchors shown are at the same distance from the source. In the inverse problem, with the anchor locations and times \(t_{l}\), \(l\neq 0\) known, we estimate the location of the source and the time \(t_{0}\) when the source begins transmission. Additionally, for Case-II, the speed of propagation is also estimated. (b) The source (star in the middle) starts transmitting at time \(t_{0}\). The speed of propagation of the signal can vary across the medium. The signal is received at the anchors at times \(t_{l}\). In this case, it is not required that \(t_{1}=t_{2}\), even though the anchors are at the same distance from the source, since the speed of propagation can vary between the source and each anchor. In the inverse problem, with the anchor locations and times \(t_{l}\), \(l\neq 0\) known, we estimate the location of the source, the speed of propagation at each point of interest, and the value of \(t_{0}\).
### _Case-II (Isotropic medium with unknown constant speed of propagation)_
_Inverse problem:_ With \(N\) anchors placed in the anchor field around the source, our proposed modified TDOA (mTDOA) algorithm jointly estimates the source location, speed of propagation, and initial time of the signal by solving the following optimization problem:
\[\mathbf{x}=\underset{\mathbf{r}_{0},t_{0},c}{\text{argmin}}\sum_{l=1}^{N}\left[c^{2}(t_{l}-t_{0})^{2}-\|\mathbf{r}_{l}-\mathbf{r}_{0}\|^{2}\right]^{2}, \tag{4}\]
where \(\mathbf{x}\) contains the estimates of the target location \(\mathbf{r}_{0}\), unknown signal propagation speed \(c\), and unknown start time \(t_{0}\). The location of the \(l\)-th anchor is \(\mathbf{r}_{l}\), and the time at which the signal passes it is \(t_{l}\). The optimization problem in (4) can be solved using numerical methods such as the Nelder-Mead simplex direct search [37] algorithm, or in matrix form as [34, 35]:
\[\mathbf{x}=\mathbf{H}^{\#}\mathbf{b}, \tag{5}\]
where the vector of unknowns \(\mathbf{x}\) contains the target location \(\mathbf{r}_{0}\), the unknown signal propagation speed \(c\), and the unknown start time \(t_{0}\). \(\mathbf{H}^{\#}\) is the pseudo-inverse of \(\mathbf{H}\), which contains time difference data and the corresponding distances between anchors. \(\mathbf{b}\) is the measurement vector from \(N\) anchors containing anchor location information.
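A minimal sketch of how the optimization in Eqn. (4) could be solved with the Nelder-Mead simplex search is shown below; fixing \(c\) instead of estimating it recovers the Case-I problem of Eqn. (2). The function names and the initial guesses (anchor centroid, earliest arrival, unit speed) are our own illustrative choices, not prescribed by the paper.

```python
# Sketch of the mTDOA estimator of Eqn. (4): jointly fit (r0, t0, c)
# by minimizing sum_l [c^2 (t_l - t0)^2 - ||r_l - r0||^2]^2.
import numpy as np
from scipy.optimize import minimize

def mtdoa_objective(x, anchors, t_obs):
    r0, t0, c = x[:2], x[2], x[3]
    d2 = np.sum((anchors - r0) ** 2, axis=1)   # squared anchor-source distances
    return np.sum((c ** 2 * (t_obs - t0) ** 2 - d2) ** 2)

def mtdoa(anchors, t_obs):
    # initial guess: anchor centroid, a time before the earliest arrival, unit speed
    x0 = np.concatenate([anchors.mean(axis=0), [t_obs.min() - 1.0, 1.0]])
    res = minimize(mtdoa_objective, x0, args=(anchors, t_obs),
                   method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x  # (r0_x, r0_y, t0, c)
```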
### _Case-III (Anisotropic medium with unknown speed of propagation)_
_Inverse problem:_ With the anchor locations and times \(t_{l},l\neq 0\) known, we estimate the location of the source, the speed of propagation at each point of interest, and the value of \(t_{0}\), when the source begins transmission.
The earlier inverse solution considered only the problem of unknown constant signal propagation speed during problem formulation. Hence, it fails to work for a signal having variable propagation speed. To address this issue, we reformulate the optimization problem stated in (4) to
\[\mathbf{x}^{\text{(NTDOA)}}=\underset{\mathbf{r}_{0},t_{0},c(\cdot)}{\text{argmin}}\sum_{l=1}^{N}\left[c(\mathbf{r}_{l},\theta)^{2}(t_{l}-t_{0})^{2}-\|\mathbf{r}_{l}-\mathbf{r}_{0}\|^{2}\right]^{2}. \tag{6}\]
The speed at which the measured signal propagates, \(c(\mathbf{r}_{l},\theta)\), varies across anchor locations \(\mathbf{r}_{l}\) and with direction \(\theta\) due to the presence of non-homogeneous media; it is not a fixed value. Unlike standard TDOA, where the speed is a constant, this more general framework in Eqn. (6) models the speed as a _nonlinear_ function of the spatial coordinates. This leads to the nonlinear TDOA (NTDOA) algorithm.
While there are several approaches that can be used to model the non-linear function in Eqn. (6), in this paper, we make the simplifying assumption that we can decompose the non-linearity into the polar form, and represent \(c(\mathbf{r}_{l},\theta)\) as a product:
\[c(\mathbf{r}_{l},\theta)\approx f(R_{l})g(\theta_{l}), \tag{7}\]
where \((R_{l}=\|\mathbf{r}_{l}-\mathbf{r}_{0}\|,\theta_{l})\) represents the polar coordinates of \(\mathbf{r}_{l}-\mathbf{r}_{0}\), and \(f\) and \(g\) are nonlinear functions. Here, \(f\) and \(g\) can be modeled according to the complexity and dynamics of the propagation medium.
One simple approach is to model \(f\) and \(g\) using a Taylor series and a Fourier series, respectively. Here, \(f\) represents the non-linearity of the speed as a function of radius, which we approximate using a Taylor series as \(f(R)\approx\sum_{k=0}^{K}a_{k}R^{k}\); \(g\) encodes the non-linearity of the speed as a function of angle, which we represent using a Fourier series as \(g(\theta)\approx 1+\sum_{\ell=1}^{L}b_{\ell}\cos(\omega_{\ell}\theta)+d_{\ell}\sin(\omega_{\ell}\theta)\). Together, \(f\) and \(g\) encompass a large class of non-linear models to capture the non-homogeneity of the propagation speed in space. The resulting optimization problem now reads
\[\mathbf{x}^{\text{(NTDOA)}}=\underset{t_{0},\,\mathbf{r}_{0}\in\mathbb{R}^{2},\,\{a_{k}\}_{k=0}^{K},\,\{\omega_{\ell},b_{\ell},d_{\ell}\}_{\ell=1}^{L}}{\text{argmin}}\sum_{i=1}^{N}\left[\left(\sum_{k=0}^{K}a_{k}R_{i}^{k}\right)^{2}\left(1+\sum_{\ell=1}^{L}b_{\ell}\cos(\omega_{\ell}\theta_{i})+d_{\ell}\sin(\omega_{\ell}\theta_{i})\right)^{2}(t_{i}-t_{0})^{2}-\|\mathbf{r}_{i}-\mathbf{r}_{0}\|^{2}\right]^{2}, \tag{8}\]
where \(\mathbf{r}_{i}=(R_{i}\cos(\theta_{i}),R_{i}\sin(\theta_{i}))\in\mathbb{R}^{2}\). Here the total number of unknowns to be determined is \(K+3L+4\). In this case, for the sake of simplicity, we assume the lowest-order nonlinear model using \(K=L=1\). Hence, there are 8 unknowns and at least 9 anchors are typically required to solve the problem. As the complexity of the medium changes, the values of \(K\) and \(L\) can be adjusted in order to derive better model fits. In an ideal isotropic medium, \(g=1\) and \(f=c\), in which case, Eqn. (8) reduces to Eqn. (4).
In the simplest case, the NTDOA algorithm involves determining 8 unknown variables in a non-linear optimization problem. When numerical optimization methods are used to solve for these variables, they may return one of multiple local minima or, in some cases, fail to converge. The initialization of the numerical algorithm is therefore important. To steer the optimizer toward the global minimum rather than a spurious local one, we initialize the numerical algorithm used to solve the NTDOA problem with the estimates from the mTDOA algorithm. This ensures that the first guess for the NTDOA algorithm is close enough to the true solution for the optimizer to converge, and to converge to the global solution.
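The following sketch spells out the NTDOA objective for the lowest-order model \(K=L=1\) and its initialization from the mTDOA estimate, as described above. The packing of the 8-dimensional parameter vector and the helper names are our own conventions.

```python
# Sketch of the NTDOA estimator of Eqn. (8) with K = L = 1, i.e.
# f(R) = a0 + a1*R and g(theta) = 1 + b1*cos(w1*theta) + d1*sin(w1*theta).
import numpy as np
from scipy.optimize import minimize

def ntdoa_objective(x, anchors, t_obs):
    r0, t0 = x[:2], x[2]
    a0, a1, b1, d1, w1 = x[3:]
    dr = anchors - r0
    R = np.linalg.norm(dr, axis=1)           # polar radius of each anchor
    theta = np.arctan2(dr[:, 1], dr[:, 0])   # polar angle of each anchor
    c = (a0 + a1 * R) * (1.0 + b1 * np.cos(w1 * theta) + d1 * np.sin(w1 * theta))
    return np.sum((c ** 2 * (t_obs - t0) ** 2 - R ** 2) ** 2)

def ntdoa(anchors, t_obs, mtdoa_est):
    r0x, r0y, t0, c_hat = mtdoa_est
    # start at the mTDOA solution: constant speed (a0 = c_hat), no radial or
    # angular variation (a1 = b1 = d1 = 0), and an arbitrary unit frequency w1
    x0 = np.array([r0x, r0y, t0, c_hat, 0.0, 0.0, 0.0, 1.0])
    res = minimize(ntdoa_objective, x0, args=(anchors, t_obs),
                   method="Nelder-Mead", options={"maxiter": 50000})
    return res.x
```

Note that at this starting point the angular term \(g\) equals 1, so the initial objective value coincides with the mTDOA solution, and the optimizer only departs from it when the data favor a spatially varying speed.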
## V Results
The proposed modified time difference of arrival (mTDOA) and non-linear TDOA (NTDOA) algorithms can be used to solve localization problems where the start time, signal propagation speed, and target location are unknown. These algorithms have several potential applications, including the estimation of the spiral wave core in atrial fibrillation, the determination of the origin and speed of propagation of a forest fire, and the estimation of the source and speed of propagation of a tsunami. In each of these cases, the algorithms can be used to accurately and efficiently estimate the relevant parameters, enabling a better understanding of the underlying phenomena and potentially enabling more effective response or prevention efforts. In what follows, we discuss the application of the proposed algorithms to atrial fibrillation, wildfires, and tsunamis.
### _Numerical Models and Data_
To demonstrate the effectiveness of the proposed algorithms, we made use of three different datasets. The first is simulation data obtained from the FitzHugh-Nagumo (FHN) model, which captures the dynamics of the heart. The FHN model provides valuable insights into the behavior of spiral waves in the heart and can be used to explore the potential impacts of various interventions or conditions on these dynamics. The other two datasets are real satellite imagery from the Creek Forest Fire in California in 2020 and the Tonga tsunami in 2022.
We used a modified version of the FitzHugh-Nagumo (FHN) model [41, 42] to simulate the dynamics of spiral waves in the heart. This model allows for the generation of pulsating waves that move outward from a rotor core. In our simulations, following [41, 42], we used a fixed rotor core within an 80mm\(\times\)80mm square. The evolution of the model simulation is shown in Figure 3 (left).
We obtained satellite imagery from NASA's Earth Observing System Data and Information System (EOSDIS) Worldview application [38], which was collected every 10 minutes using Band 13 during the Creek Forest Fire in California in 2020. Each pixel in the imagery represents a 2-kilometer distance, and the estimated average fire propagation speed was 6-14 miles per hour [39]. A snapshot of the evolution of the forest is shown in Figure 3 (middle).
Similarly, satellite imagery was collected during the 2022 Tonga volcanic eruption and tsunami. Each pixel in the image represents a 2-kilometer distance. The estimated wave propagation speed during the tsunami was 1000 kilometers per hour [40]. The evolution of the satellite imagery is shown in Figure 3 (right).
### _Experimental Results_
To assess the efficacy of the proposed algorithms, we conducted two experiments. In the first experiment, we evaluated the performance of the TDOA, modified TDOA (mTDOA) and non-linear TDOA (NTDOA) algorithms in determining the target's location in the different scenarios discussed in Section V-A. We calculated the discrepancy between the estimated and actual target locations and repeated this process 1000 times, using 50 randomly placed anchors in each iteration.
Fig. 3: Signal propagation patterns in realistic scenarios. Each sub-figure (top to bottom) shows an evolution of the three dynamic processes. (L-R) AFib (modified FHN), Creek wildfire [38, 39], Tonga tsunami [40]. The source of propagation is marked with a white hexagram in each figure.
The effectiveness of the algorithms was determined by counting the number of estimates that fell within a specified radius, as shown in Figure 4.
In the second experiment, the results of which are shown in Figure 5, we varied the number of anchors from 10 to 50 in increments of 5 and estimated the mean absolute error for each evaluated method through a Monte-Carlo simulation of 1000 trials. We evaluated, in each case, the mean absolute error in the localization estimation of the signal sources.
Finally, we estimated the average signal propagation speed for all the methods over a Monte-Carlo run of 1000 trials and tabulated the results in Table I. It is interesting to note that the differences in average estimated speed can be attributed to the variation in measured propagation speed at different anchor locations in each method.
### _Discussion_
In what follows, the results presented in Section V-B are interpreted and discussed.
#### V-C1 AFib
To localize the rotor core leading to atrial fibrillation, we utilized a fixed rotor core within an 80mm\(\times\)80mm square and positioned anchors at known locations selected at random to determine the rotor center's position. The results shown in Figure 4 (left) indicate that using TDOA, 100% of the estimates were within a 15mm radius of the rotor center. The performance improved for mTDOA and NTDOA, with nearly 100% of the estimates being within a 5mm radius for NTDOA. Similarly, from Figure 5 (left), the mean absolute error (MAE) was highest for TDOA, which assumed a constant speed of propagation. The MAE decreased by 50% when the signal propagation speed was treated as unknown in mTDOA, and the MAE was lowest for NTDOA, as it continuously estimated the speed and direction while solving for the target.
It is important to note that we used an average signal propagation speed of 0.7 m/s from the literature [43] as the assumed propagation speed when using the TDOA algorithm. We can see from Table I that the speed-of-propagation estimates from mTDOA and NTDOA are closer to 0.36 m/s. This leads to a significant estimation error when using TDOA.
Fig. 4: Localization results using NTDOA: (Error Quantification) To estimate the accuracy of the algorithm, the number of estimates within a circle of a given radius is counted. The percentage of estimates for each radius value is plotted; the faster a curve reaches 100%, the more accurate the estimator. The algorithm is tested by randomly placing 50 anchors in the coordinate plane. The process is repeated one thousand times, and the results are averaged. The results show that NTDOA performs better compared to the other methods.
Fig. 5: Localization error results using NTDOA: (Error Quantification) To estimate the accuracy of the algorithm, the mean absolute error is evaluated by varying the number of anchors in the search space from 10 to 50. The process is repeated one thousand times, and the results are averaged. The results show that the MAE is lowest for NTDOA compared to the other methods.
However, NTDOA accurately captured the signal's propagation dynamics, estimated the target location more accurately, and performed the best among the evaluated methods.
#### V-C2 Forest Fire
To estimate the source and flow of the forest fire, we used satellite imagery that captured the progress of the fire. In each frame, we take points on the boundary of the fire as indicators of progress and treat them as anchor locations, with the time of arrival given by the frame time-stamp. This simulates the placement of thermal sensors at these locations, which are activated as the fire reaches them.
Using these data points, we calculate the error between the estimated and actual location of the wildfire's origin. The results, shown in Figure 4 (middle), demonstrate that TDOA resulted in 100% of the estimates falling within a 35-pixel radius of the wildfire's origin, while mTDOA and NTDOA showed improved performance. With NTDOA, nearly 100% of the estimates were within a 15-pixel radius. Overall, the NTDOA algorithm reduced the error by 50% or more compared to TDOA. Similarly, as seen in Figure 5 (middle), the mTDOA method, which treated the speed of propagation as an unknown, resulted in a mean absolute error (MAE) that was 50% lower than the MAE of TDOA, which assumed a propagation speed of 10 km/h (\(\approx\) 6.2 miles per hour [39]). The NTDOA method, which continuously estimated the speed and direction of the signal as it calculated the target's location, had the lowest MAE among the evaluated methods across all experiments.
#### V-C3 Tsunami
Similar to the approach in Section V-C2, we use satellite imagery to simulate anchor locations in the experiments to evaluate the location of the tsunami's origin. The results presented in Figure 4 (right) indicate that TDOA had 100% of its estimates within a 60-pixel radius from the actual origin, while mTDOA and NTDOA demonstrated improved performance. NTDOA was able to place almost 100% of its estimates within a 20-pixel radius. The NTDOA algorithm showed a significant improvement of 50% or more compared to TDOA. In a complementary experiment (shown in Figure 5 (right)), it was observed that the TDOA method, which assumed a constant signal propagation speed of 1000 km/h [40], had the highest MAE. However, when the speed of propagation was treated as an unknown in mTDOA, the MAE decreased by nearly 50%. The NTDOA method, which continuously determines the speed and direction of the signal as it calculates the target's location, had the lowest MAE across experiments.
In this study, we proposed a novel approach to source localization with transmissions over non-homogeneous media, which lead to unknown and varying propagation speeds. Our approach involved reformulating the non-linear signal dynamics as an inverse problem and developing a nonlinear decomposition of the inhomogeneous velocity field. With our new algorithms, we were able to achieve more accurate source localization than with TDOA. This enhanced accuracy can have significant implications in various fields, enabling faster response, predictive monitoring, and management. Across all experiments and applications, the results suggest that mTDOA and NTDOA can be valuable tools for precisely determining source locations and propagation speeds, potentially enabling more targeted and timely interventions.
## VI Conclusions
In this paper, we presented two new algorithms for determining the origin and speed of propagation of a signal propagating over non-homogeneous media, namely, the modified time difference of arrival (mTDOA) and the non-linear TDOA (NTDOA) algorithms. The algorithms have applications in various settings, including healthcare (atrial fibrillation) and geo-hazards (forest fires, tsunamis). The algorithms were validated using simulated and real-world data, and the results indicate that the NTDOA algorithm was the most effective, particularly when compared to classical approaches such as TDOA.
In future work, we plan to explore the use of these algorithms in situations where the velocity of spiral waves is not constant, where there are obstacles that block or alter the waves, or where multiple cores are present. We also intend to adapt the assumptions and parameters of the NTDOA algorithm to better suit environmental conditions.
## Acknowledgment
We acknowledge the use of imagery from the NASA Worldview application ([https://worldview.earthdata.nasa.gov/](https://worldview.earthdata.nasa.gov/)), part of the NASA Earth Observing System Data and Information System (EOSDIS).
|
2304.08010 | Mock X-ray observations of hot gas with L-Galaxies semi-analytic models
of galaxy formation | We create mock X-ray observations of hot gas in galaxy clusters with a new
extension of L-Galaxies semi-analytic model of galaxy formation, which includes
the radial distribution of hot gas in each halo. Based on the model outputs, we
first build some mock light cones, then generate mock spectra with SOXS package
and derive the mock images in the light cones. Using the mock data, we simulate
the mock X-ray spectra for ROSAT all-sky survey, and compare the mock spectra
with the observational results. Then, we consider the design parameters of HUBS
mission and simulate the observation of the halo hot gas for HUBS as an
important application of our mock work. We find: (1) Our mock data match the
observations by current X-ray telescopes. (2) The survey of hot baryons in
resolved clusters by HUBS is effective below redshift 0.5, and the observations
of the emission lines in point-like sources at z>0.5 by HUBS help us understand
the hot baryons in the early universe. (3) By taking the advantage of the large
simulation box and flexibility in semi-analytic models, our mock X-ray
observations provide the opportunity to make target selection and observation
strategies for forthcoming X-ray facilities. | Wenxin Zhong, Jian Fu, Shiyin Shen, Feng Yuan | 2023-04-17T06:19:56Z | http://arxiv.org/abs/2304.08010v3 | # Mock X-ray observations of hot gas with L-Galaxies semi-analytic models of galaxy formation
###### Abstract
We create mock X-ray observations of hot gas in galaxy clusters with a new extension of L-Galaxies semi-analytic model of galaxy formation, which includes the radial distribution of hot gas in each halo. Based on the model outputs, we first build some mock light cones, then generate mock spectra with SOXS package and derive the mock images in the light cones. Using the mock data, we simulate the mock X-ray spectra for _ROSAT_ all-sky survey, and compare the mock spectra with the observational results. Then, we consider the design parameters of _HUBS_ mission and simulate the observation of the halo hot gas for _HUBS_ as an important application of our mock work. We find: (1) Our mock data match the observations by current X-ray telescopes. (2) The survey of hot baryons in resolved clusters by _HUBS_ is effective below redshift 0.5, and the observations of the emission lines in point-like sources at \(z>0.5\) by _HUBS_ help us understand the hot baryons in the early universe. (3) By taking the advantage of the large simulation box and flexibility in semi-analytic models, our mock X-ray observations provide the opportunity to make target selection and observation strategies for forthcoming X-ray facilities.
X-rays: galaxies: clusters - galaxies: clusters: intracluster medium - galaxies: groups: general - galaxies: haloes - (galaxies:) intergalactic medium
## 1 Introduction
According to the \(\Lambda\)CDM cosmological models and the results from _Planck_, baryonic matter contributes about 4.9% of the total mass in the universe (Planck Collaboration et al. 2020), which comprises cold baryons locked in galaxies (stars, interstellar medium (ISM), black holes, etc.; Kravtsov & Borgani 2012) and hot baryons in a diffuse and ionized phase in the circumgalactic medium (CGM) and intracluster medium (ICM). According to observations (Shull et al. 2012) and simulation work (e.g., Cen & Ostriker 2006), the cold baryons contribute less than 15% of the baryon budget, and hot gas dominates the baryon content in the low-redshift universe.
The X-rays emitted by hot baryons can test cosmological models, provide important information on the baryon and energy cycles of galaxies and clusters, and trace how dark matter structures assemble on large scales. In the past two decades, a number of surveys by the X-ray telescopes _XMM-Newton_ and _Chandra_ have detected the X-ray emission from hot haloes around galaxies (e.g., Li & Wang 2013; Li et al. 2017; Babyk et al. 2018). _ROSAT_ completed the first X-ray imaging all-sky survey in the soft X-ray band (RASS, Voges et al. 1999) and provided catalogues for thousands of galaxy clusters (e.g., Piffaretti et al. 2011; Finoguenov et al. 2020). The new X-ray telescope, _eROSITA_, completed the Final Equatorial-Depth Survey (eFEDS) by the end of 2019 (Brunner et al. 2022), which is a verification of the _eROSITA_ all-sky survey (eRASS). The catalogue from eFEDS, which includes 542 candidate galaxy clusters detected as extended X-ray sources in a 140 \(\rm deg^{2}\) sky area, helps in the study of CGM and ICM properties (Liu et al. 2022; Bahar et al. 2022).
A number of large-scale X-ray surveys are proposed to improve our understanding of the hot baryons in the foreseeable future. _eROSITA_ will complete eight all-sky surveys in the soft X-ray band by the end of 2023 (eRASS:8), yielding a sample of over \(10^{5}\) galaxy clusters (Merloni et al. 2012; Predehl et al. 2021). The X-ray survey by _Athena_ Phase B will extend the study of hot baryon distributions in the ICM by mapping the properties of low-mass groups up to \(z\sim 2\) (Ettori et al. 2013; Kaastra et al. 2013). The Wide
Field Imager (WFI) survey, during the first four years of operation, is predicted to detect over 10 000 groups and clusters with \(z>0.5\), including 20 groups with a mass of \(M_{500}\geq 5\times 10^{13}M_{\odot}\) at around \(z\sim 2\) (Zhang et al., 2020). The Chinese _HUBS_ mission (Cui et al., 2020) intends to conduct an all-sky survey of hot baryons in the WHIM and CGM with its large field of view and high spectral resolution (see Tab. 2 in Sec. 3.2 of this paper).
On the other hand, recent cosmological hydrodynamic simulations such as EAGLE (Crain et al., 2015; Schaye et al., 2015) and Illustris-TNG (Springel et al., 2018; Nelson et al., 2018) predict hot haloes around groups and clusters. Many papers study the X-ray emission from the ICM and CGM using these simulation results (e.g., Stevens et al., 2017; Kovacs et al., 2019; Martizzi et al., 2019; Truong et al., 2021), and some works further make mock X-ray observations based on the plans of X-ray surveys. For example, Oppenheimer et al. (2020) makes predictions of resolved X-ray images for _eROSITA_ with EAGLE and Illustris-TNG. Wijers & Schaye (2022) discusses the detection prospects of X-ray emission lines for the _Athena_ X-IFU and the Lynx Main Array (Gaskin et al., 2019) using EAGLE. Zhang et al. (2022) creates mock observations for _HUBS_ with Illustris-TNG and assesses the scientific capabilities in detecting extended X-ray emission from hot gas. Vijayan et al. (2022) generates X-ray emission of the ISM and CGM from the MACER code (Yuan et al., 2018) and simulates _HUBS_ observations of elliptical galaxies in four sets of simulations.
The outputs of semi-analytic models of galaxy formation (hereafter SAMs) offer another choice for building mock observations, such as the mock observatory by Overzier et al. (2013), the mock cones for SKA HI surveys by Obreschkow & Meyer (2014), and the mock galaxy catalogues in multiple bands by Merson et al. (2013). Owing to the low cost of running SAMs, the main advantage of SAMs outputs is the large size of the simulation box (e.g., the box size of the L-Galaxies SAMs is 500 Mpc \(h^{-1}\), based on the Millennium Simulation; Henriques et al., 2015). The large simulation box helps to construct mock observations over a very large sky area without the effect of cosmic variance, even at high redshift: the 500 Mpc \(h^{-1}\) box corresponds to over \(50\deg^{2}\) of sky at \(z\sim 2.0\). A mock catalogue based on the Millennium Simulation (Springel et al., 2005) can also contain the hot gas sample in very massive haloes (\(M_{200}\gtrsim 10^{15}M_{\odot}\)). On the other hand, the flexibility of SAMs makes it possible to generate multiple mock observations based on outputs with different model parameters and prescriptions, which allows one to investigate the effect of the physical processes and model parameters on the properties of the observational results (Somerville & Dave, 2015).
In our recent work (Zhong et al., 2023, hereafter Paper I), we developed a new extension of the L-Galaxies 2015 SAMs (Henriques et al., 2015) to study the ionized hot gas in haloes. In contrast to most previous SAMs work (e.g., L-Galaxies 2020 by Henriques et al., 2020; DARK SAGE by Stevens et al., 2016; Shark by Lagos et al., 2018), which mainly focuses on the stellar and cold gas components in galaxy disks and the ISM, Paper I concentrates on the properties and spatial distribution of the hot baryon components, as well as the corresponding X-ray emission from the hot gaseous halo. Our model results successfully reproduce a variety of X-ray observations, such as the radial profiles of hot gas temperature, the scaling relations of X-ray luminosity, and the baryon fraction in haloes of different mass.
In this paper, we will create mock X-ray observations of the halo hot gas based on the outputs of the SAMs in Paper I. First, we will build mock light cones using the spatial information in the model outputs, and then generate mock spectra and images in the soft X-ray band based on the physical properties. We will consider the device parameters of X-ray facilities to mimic the observations, particularly for the _HUBS_ mission. The mock results presented in this paper will aid in target selection and the optimization of observation strategies for future X-ray surveys of hot gas, and they can also be compared to the mock results from other simulations.
This paper is organized as follows. In Section 2, we will describe the methodology used to create mock X-ray observations for hot gas in the haloes, including the steps to build mock light cones, and the procedures to generate mock spectra and images. We will also show a few examples of the mock images and spectra of galaxy clusters. In Section 3, we will consider the device parameters of X-ray telescopes and simulate the observations based on the mock data. We will simulate mock spectra for _ROSAT_ as a benchmark and then focus on the mock observations for the _HUBS_ mission. In Section 4, we will summarize this paper and look ahead to future work.
## 2 Methods
In this section, we will describe how to create the mock X-ray observations of hot gas using the model outputs of the L-Galaxies SAMs. We will first describe the steps to build the mock light cones, and then the procedures to generate the mock spectra and images of galaxy clusters in the light cones. It should be noted that we do not distinguish between a galaxy "group" and a "cluster" in the following sections of this paper (both refer to a collection of galaxies embedded in the same dark matter halo), and we will use the term "cluster" for simplicity.
### Simulation and model samples
The mock observations in this paper are based on the outputs of the models in Paper I, in which we developed a new branch of the L-Galaxies 2015 (Henriques et al. 2015) SAMs to describe the radial distribution of hot ionized gas in the ICM and CGM. In Paper I, we use a physical model that takes into account the local instabilities and thermal equilibrium processes for hot gas in the haloes to replace the isothermal sphere in previous models. The model outputs include one-dimensional radial profiles of hot gas density, gas temperature and the bolometric X-ray luminosity around each dark matter halo. The model results successfully reproduce the X-ray observations, such as the radial profiles of hot gas density (e.g., the electron density profile from REXCESS by Croston et al. 2008 and the gas temperature profile from _XMM-Newton_ and _Chandra_ by Bartalucci et al. 2017), scaling relations of X-ray luminosity and temperature (Goulding et al. 2016 and Babyk et al. 2018 from _Chandra_, Mulchaey 2003 and Anderson et al. 2015 from _ROSAT_, Li et al. 2016 from _XMM-Newton_), and the baryon fraction in different haloes (Gonzalez et al. 2013 from _XMM-Newton_, Vikhlinin et al. 2006 and Sun et al. 2009 from _Chandra_).
In this paper, the SAMs results used to build the mock observations are based on the dark matter haloes of the Millennium Simulation (hereafter MS, Springel et al. 2005), rescaled to the Planck cosmological parameters (\(\Omega_{\Lambda}=0.685,\ \Omega_{m}=0.315,\ \Omega_{\rm baryon}=0.0487,\ \sigma_{8}=0.829\) and \(h=0.673\), Planck Collaboration et al. 2020). The comoving box size of the rescaled MS (Angulo & Hilbert 2015) is about 480 Mpc \(h^{-1}\), or 713 Mpc, on a side, which is several times larger than those of recent cosmological hydrodynamical simulations, such as EAGLE (in a box of 100 Mpc) and Illustris-TNG (in a box of 100 Mpc or 300 Mpc). The minimum halo mass is about \(2.9\times 10^{10}M_{\odot}\), which is the mass of 20 simulated particles. The resolution of MS is high enough to mock the observations of the hot gas component in most galaxy clusters, and the emission from hot gas in haloes below the resolution limit is usually undetectable in the soft X-ray band. Based on the model results in Paper I, the gas temperature in haloes smaller than \(10^{11}M_{\odot}\) tends to be lower than 0.1 keV.
The SAMs results are saved as halo and galaxy catalogues in a series of discrete snapshots, each of which corresponds to a certain redshift \(z\). Based on the halo merger trees of MS rescaled to the Planck cosmological parameters, the model outputs include 59 snapshots from redshift \(z\sim 56\) to \(z=0\). The catalogues include details of the spatial positions and physical properties of each halo and galaxy.
Based on the model prescriptions in Paper I, the properties of the hot gas in each halo, including the gas density \(\rho_{\rm hot}\) and the bolometric X-ray emission profile \(L_{\rm X}\), are stored in the form of "radial profiles", which correspond to the values in a set of spherically symmetric shells of given radii around the halo center. To mimic real observations, the X-ray luminosity profiles in concentric 3D shells are projected to the surface brightness in 2D rings with
\[I_{X,j}=\frac{1}{A_{j}}\sum_{i}f_{V,ij}L_{X,i}, \tag{1}\]
in which \(L_{X,i}\) (unit: erg s\({}^{-1}\)) is the bolometric X-ray luminosity in shell \(i\), and \(I_{X,j}\) (unit: erg s\({}^{-1}\) kpc\({}^{-2}\)) is the projected X-ray surface luminosity in ring \(j\). \(f_{V,ij}\) represents the volume fraction of shell \(i\) projected in ring \(j\), and \(A_{j}\) is the projected area of ring \(j\). The detailed formulae and discussions on the projection can be found in papers like McLaughlin (1999) and Ettori (2002).
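A sketch of this projection is given below: the volume fraction \(f_{V,ij}\) of shell \(i\) falling in ring \(j\) is computed analytically from the volume of a sphere of radius \(r\) inside a coaxial cylinder of radius \(R\), \(V(r,R)=\tfrac{4}{3}\pi\left[r^{3}-(r^{2}-R^{2})^{3/2}\right]\) for \(R<r\). The function names and edge arrays are illustrative.

```python
# Sketch of Eqn. (1): distribute shell luminosities into projected rings.
import numpy as np

def sphere_in_cylinder(r, R):
    """Volume of a sphere of radius r inside an infinite coaxial cylinder of radius R."""
    if R >= r:
        return 4.0 / 3.0 * np.pi * r ** 3
    return 4.0 / 3.0 * np.pi * (r ** 3 - (r ** 2 - R ** 2) ** 1.5)

def project_luminosity(r_edges, L_shell, R_edges):
    """Project shell luminosities L_shell (erg/s) onto surface brightness
    I_X,j (erg/s/kpc^2) in rings with projected-radius edges R_edges (kpc)."""
    I = np.zeros(len(R_edges) - 1)
    for i, L in enumerate(L_shell):
        r_in, r_out = r_edges[i], r_edges[i + 1]
        V_shell = 4.0 / 3.0 * np.pi * (r_out ** 3 - r_in ** 3)
        for j in range(len(I)):
            R_in, R_out = R_edges[j], R_edges[j + 1]
            # shell-annulus intersection volume by inclusion-exclusion
            V = (sphere_in_cylinder(r_out, R_out) - sphere_in_cylinder(r_out, R_in)
                 - sphere_in_cylinder(r_in, R_out) + sphere_in_cylinder(r_in, R_in))
            A_j = np.pi * (R_out ** 2 - R_in ** 2)      # projected ring area
            I[j] += (V / V_shell) * L / A_j             # f_V,ij * L_X,i / A_j
    return I
```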
In Fig. 1, we show an illustration of the hot gas component in the model outputs at \(z=0\), which is one of the snapshots used to construct the light cones and mock observations. The illustration is in a subbox of the MS volume with around 60 Mpc \(h^{-1}\) on a side. In this figure, each dot represents one hot gaseous halo, and the size and color of each dot represent the virial radius \(R_{200}\) and the bolometric X-ray luminosity of the halo, respectively.
In the framework of SAMs, the model results contain "halo hot gas" without distinguishing between the ionized hot gas in the ICM and the CGM, and in this paper we concentrate only on the X-ray emission from the hot gas components inside the virial radius of each halo.
Figure 1: The illustration of the hot gas component in the model outputs in a subbox with \(1/512\) of the MS volume at \(z=0\) (60 Mpc \(h^{-1}\) on a side), in which each dot represents a hot gaseous halo. The size represents the halo radius \(R_{200}\), and the color of each dot represents the bolometric X-ray luminosity.
We should also mention that SAMs do not consider the details of non-spherical structures such as filaments, knots and cosmic webs. The baryons in these structures are assumed to reside either in the hot gaseous halo or in the ejected reservoir outside the halo, depending on whether or not they are bound within the halo potential.
### Light cones
The model results of haloes and galaxies are in cubic simulation boxes at a finite number of redshifts. To mimic real observations, we convert the cubic boxes into a virtual sky using the spatial information (the 3D positions and 3D velocities). We follow the methods (MoMaF) developed by Blaizot et al. (2005) and Kitzbichler & White (2007) to create mock catalogues and light cones based on the outputs of SAMs; the details of the methods can be found in the original papers and subsequent works (e.g., Obreschkow et al. 2009; Zoldan et al. 2017). Here, we briefly describe the steps:
(i) We position the observer at the coordinate origin \((0,0,0)\) and replicate the simulation boxes in a 3D grid. First, we calculate the comoving distance from each box center to the observer and obtain the corresponding redshift; we then stack the box from the snapshot with the closest redshift.
Due to the relatively large size of the simulation box (\(L_{\rm box}\sim 710\) Mpc for MS), it is not necessary to use the model outputs of every snapshot at low redshift. We truncate the 3D grid at \(z\sim 2\), which includes \(8^{3}=512\) MS boxes. According to the forthcoming plans of X-ray telescopes, \(z\sim 2\) corresponds to the redshift limit of massive cluster surveys by eRASS (Merloni et al. 2012), and also to the redshift limit of the warm-hot baryon and cluster observations by _Athena_ (Nandra et al. 2013).
In our current work, we simply splice the boxes from different snapshots together to obtain a continuous cubic 3D grid and light cones, similar to the work by Zoldan et al. (2017) and Comparat et al. (2020). However, this simplified method may lead to discontinuities in the light cones because of the discrete redshift bins in the model outputs. In some mock observation work, the authors interpolate the positions and velocities of the haloes and galaxies between snapshots (e.g., Merson et al. 2013; Smith et al. 2022), and even the intrinsic properties (stellar mass, gas mass, SFR, etc.) of each galaxy (Barrera et al. 2022). According to the results and discussions in Merson et al. (2013) and Smith et al. (2022), the interpolation mainly affects the galaxy clustering and colour assignment. In this paper, our mock observations mainly focus on the X-ray images and spectra of hot gaseous haloes, so the clustering and large-scale distribution do not affect our mock results. On the other hand, at high redshift we adopt energy bands with continuous redshift when dealing with the mock images and spectra (see the details in Sec. 2.3 & 2.4), which avoids producing discrete colour distributions in the results.
(ii) To suppress spurious radial features caused by the repeated boxes, we apply "random tiling" to the 3D grid, which includes random shift, rotation and inversion operations on the 3D coordinates and velocities.
(iii) In the stacked 3D grid, we calculate the comoving coordinates \((r_{x},r_{y},r_{z})\) of each object relative to the observer, and convert them to spherical coordinates \((\alpha,\delta,z)\). The right ascension \(\alpha\) and declination \(\delta\) are calculated by
\[\begin{split}&\alpha=\arctan\left(r_{x}/r_{z}\right)\\ &\delta=\arctan\left(r_{y}/\sqrt{r_{x}^{2}+r_{z}^{2}}\right). \end{split} \tag{2}\]
Since the mock samples should have a continuous redshift distribution instead of the discrete redshifts of the model outputs, we calculate the redshift \(z\) of each source from its comoving distance \(d_{c}=\left(r_{x}^{2}+r_{y}^{2}+r_{z}^{2}\right)^{1/2}\) by the equation
\[d_{c}\left(z\right)=\frac{c}{H_{0}}\int_{0}^{z}\frac{dz^{\prime}}{\sqrt{ \Omega_{\Lambda}+\Omega_{m}(1+z^{\prime})^{3}}}, \tag{3}\]
in which \(\Omega_{\Lambda}\) and \(\Omega_{m}\) are the cosmological parameters. The apparent redshift \(z_{v}\), including the Doppler shift from the peculiar motion, is then calculated by
\[z_{v}=z_{\rm cos}+\frac{v_{r}}{c}\left(1+z_{\rm cos}\right), \tag{4}\]
in which \(v_{r}\) is the peculiar velocity projected along the line of sight, and \(z_{\rm cos}\) is the cosmological redshift from Eq. 3. A code sketch of this coordinate conversion is given after the list of steps below.
(iv) Based on the model outputs in spherical coordinates mentioned above, we create light cones to mimic real observations. Considering the fields of view of _HUBS_ (1 \(\deg^{2}\)) and _eROSITA_ (\(1.03^{\circ}\times 1.03^{\circ}\)), we choose \(1^{\circ}\times 1^{\circ}\) as the angular size of each light cone. The mock data are saved according to the light cones. We generate two sets of light cones: one deep light cone and several shallow light cones. The deep light cone is generated in a random direction up to \(z\sim 2\). The 10 shallow light cones are generated up to \(z\sim 0.2\)1, and the center of each shallow light cone is a nearby cluster. Furthermore, it is quite easy to generate more light cones for further statistical analysis.
Footnote 1: We choose \(z\) up to 0.2 for the shallow light cones, because the comoving distance of \(z=0.2\) is just a bit larger than the box size of MS.
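As a minimal sketch, the coordinate conversion of step (iii) can be implemented by tabulating Eq. 3 once and inverting it by interpolation; the grid limits and function names are our own choices, assuming the Planck parameters quoted above.

```python
# Sketch of step (iii): convert comoving position and peculiar velocity
# to (alpha, delta, z_v) following Eqs. (2)-(4).
import numpy as np
from scipy.integrate import quad

H0, OM, OL = 67.3, 0.315, 0.685   # km/s/Mpc and density parameters (Planck)
C_KMS = 299792.458                # speed of light in km/s

def comoving_distance(z):
    """Comoving distance in Mpc, Eq. (3)."""
    integrand = lambda zp: 1.0 / np.sqrt(OL + OM * (1.0 + zp) ** 3)
    return C_KMS / H0 * quad(integrand, 0.0, z)[0]

# tabulate once, then invert d_c(z) by interpolation
z_grid = np.linspace(0.0, 2.5, 2001)
d_grid = np.array([comoving_distance(z) for z in z_grid])

def sky_coordinates(r, v):
    """r: comoving position (Mpc); v: peculiar velocity (km/s)."""
    rx, ry, rz = r
    alpha = np.arctan2(rx, rz)                 # Eq. (2), quadrant-safe form
    delta = np.arctan2(ry, np.hypot(rx, rz))
    d_c = np.linalg.norm(r)
    z_cos = np.interp(d_c, d_grid, z_grid)     # invert Eq. (3)
    v_r = np.dot(v, r) / d_c                   # line-of-sight peculiar velocity
    return alpha, delta, z_cos + v_r / C_KMS * (1.0 + z_cos)   # Eq. (4)
```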
Based on the model results in Paper I, we focus on the mock data of haloes with \(M_{200}>10^{12}M_{\odot}\), since the hot gas temperature in haloes around \(10^{12}M_{\odot}\) is just above 0.1 keV, and the emission from lower-mass haloes is nearly invisible in the soft X-ray band. The deep light cone up to \(z\sim 2\) contains approximately 24 000 haloes above \(10^{12}M_{\odot}\), and the shallow light cones up to \(z\sim 0.2\) contain around 73 haloes above \(10^{12}M_{\odot}\) on average.
In Fig. 2, we show the redshift distribution per square degree of the haloes with \(M_{200}>10^{12}M_{\odot}\), averaged throughout the entire mock sky up to redshift \(z=2\). We can see that the halo number per square degree peaks at \(z\sim 1\) and changes little at higher redshift.
### Mock spectra
To generate the mock spectra of the X-ray emission from the halo hot gas, we use the package "Simulated Observations of X-ray Sources" (SOXS), whose details can be found on the SOXS webpage ([https://hea-www.cfa.harvard.edu/soxs](https://hea-www.cfa.harvard.edu/soxs)). From the SOXS package, we apply the APEC spectrum generator for hot plasmas in collisional ionization equilibrium (CIE) by Foster et al. (2012), and we also consider the Galactic foreground absorption in the spectrum.
In each mock light cone, we generate a wide-band spectrum for the hot gas in each halo and also narrow-band spectra around certain emission lines (e.g., the O vii and Fe xvii lines). To generate these spectra, the following three properties from the L-Galaxies model outputs are used as the input parameters for SOXS:
\(L_{X}/4\pi d_{c}^{2}\): bolometric X-ray flux of hot gas in a halo;
\(T_{X}\): luminosity-weighted mean gas temperature of a halo;
\(Z_{\rm gas}\): mean hot gas metallicity of a halo.
The gas metallicity \(Z_{\rm gas}\) is defined as the metallicity in hot gas relative to the solar value,
\[Z_{\rm gas}=\frac{1}{Z_{\odot}}\frac{M_{Z,{\rm hot}}}{M_{\rm hot}}, \tag{5}\]
in which \(M_{Z,{\rm hot}}\) is the mass of metal elements in the hot phase and \(M_{\rm hot}\) is the mass of the hot gaseous halo. The solar metallicity \(Z_{\odot}\) is set to 0.02. We should note that the SAMs adopted in this paper do not contain the abundances of individual elements but only a single value for the total metallicity of the hot gas.
For the haloes at high redshift, we apply a redshift correction to the mock spectra. Let \(f_{o}\) and \(f_{e}\) (unit: cnts s\({}^{-1}\) keV\({}^{-1}\) cm\({}^{-2}\)) be the spectra in the observed and emitted frames; the relation between \(f_{o}\) and \(f_{e}\) can be written as
\[f_{o}\left(\nu_{o}\right)=f_{e}\left(\nu_{o}\left(1+z\right)\right), \tag{6}\]
in which \(\nu_{o}\) is the frequency in the observed frame. Then, we obtain the spectra in the band of the observed frame.
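A minimal sketch of how the three model inputs could be turned into a mock spectrum with SOXS is given below. The function name, the absorbing column, and the choice to pin the 0.1-2 keV band flux (rather than the bolometric flux) are our own simplifications, and the exact call signatures should be checked against the installed SOXS version.

```python
# Sketch: build a foreground-absorbed APEC spectrum for one halo from
# (X-ray flux, luminosity-weighted temperature, mean metallicity, redshift).
import soxs

def halo_spectrum(flux, T_X, Z_gas, z, nH=0.018):
    """flux: erg/s/cm^2 in 0.1-2 keV; T_X: keV; Z_gas: solar units;
    nH: foreground column in 10^22 cm^-2 (illustrative value)."""
    agen = soxs.ApecGenerator(0.1, 2.0, 2000, broadening=True)
    spec = agen.get_spectrum(T_X, Z_gas, z, 1.0)   # arbitrary norm at first
    spec.rescale_flux(flux, emin=0.1, emax=2.0, flux_type="energy")
    spec.apply_foreground_absorption(nH)            # Galactic absorption
    return spec
```

Passing the halo redshift to the generator shifts the emitted-frame spectrum into the observed frame, consistent with Eq. 6.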
### Mock images
Generating mock images is another important task in mimicking the observations. Using the projected surface luminosity profile in Eq. 1 and the mock spectrum in Sec. 2.3, we obtain the mock X-ray image for each halo in the light cone.
For a nearby halo with comoving distance \(d_{c}\), the emissivity \(S_{\nu}\) (unit: erg s\({}^{-1}\) cm\({}^{-2}\) arcmin\({}^{-2}\)) in a given band \(\nu\) is
\[S_{\nu}=\frac{A_{i}}{4\pi d_{c}^{2}}\frac{E_{\nu}}{E_{\rm bol}}I_{X,i}, \tag{7}\]
in which \(E_{\nu}\) and \(E_{\rm bol}\) (unit: erg s\({}^{-1}\)) represent the X-ray emission energy in the given band \(\nu\) and the bolometric energy from the mock spectrum, respectively. \(I_{X,i}\) (unit: erg s\({}^{-1}\) kpc\({}^{-2}\)) from Eq. 1 is the projected surface brightness of the bolometric luminosity in ring \(i\) of the model halo, and \(A_{i}\) is the projected area of ring \(i\).
For high-redshift haloes in the deep light cone, the redshift correction is made in the calculation of the surface brightness. Similar to the \(K\)-correction of magnitudes (e.g., Hogg et al., 2002), the emissivity \(S_{\nu_{o}}\) in a given band \(\nu_{o}\) is
\[S_{\nu_{o}}=\frac{A_{i}}{4\pi d_{c}^{2}}\frac{E_{e,\nu_{o}(1+z)}}{E_{e,{\rm bol }}}\frac{I_{e,X,i}}{(1+z)^{4}}, \tag{8}\]
in which the subscripts \(e\) and \(o\) represent the quantities in the emitted and observed frames, respectively, and the term \((1+z)^{-4}\) represents the redshift correction of the surface brightness. On the other hand, due to the cosmological redshift of the emitter, the observer can detect the X-ray emission from gas with higher temperature in high-redshift clusters, i.e.,
\[T_{\rm gas,e}=(1+z)\,T_{\rm gas,o}. \tag{9}\]
With the distribution of \(S_{\nu}\), we obtain the emissivity image for a cluster (see the examples in Sec. 2.5). Considering the device parameters of a specific X-ray telescope, such as the ARF (ancillary response file), RMF (redistribution matrix file), PSF (point spread function) and exposure time, we can convert the emissivity to a photon-count density (unit: cnts arcmin\({}^{-2}\)) and generate mock images for each cluster in the light cones (details can be found in the following sections).
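A small sketch of Eqns. (7)-(8) is given below; the band-energy fraction is assumed to have been integrated from the mock spectrum beforehand, and the function name and arguments are illustrative.

```python
# Sketch of Eqns. (7)-(8): band emissivity of a projected ring, with the
# (1+z)^-4 cosmological dimming applied for high-redshift haloes.
import numpy as np

KPC_TO_CM = 3.0857e21   # kiloparsec in centimeters

def band_emissivity(I_X, A_ring, d_c_kpc, z, E_band_frac):
    """I_X: erg/s/kpc^2 from Eq. 1; A_ring: ring area in kpc^2;
    d_c_kpc: comoving distance; E_band_frac: E_nu/E_bol evaluated in the
    emitted frame for the observed band shifted by (1+z)."""
    d_c_cm = d_c_kpc * KPC_TO_CM
    flux = A_ring * I_X / (4.0 * np.pi * d_c_cm ** 2)   # erg/s/cm^2
    return flux * E_band_frac / (1.0 + z) ** 4          # Eq. (8); z=0 gives Eq. (7)
```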
Figure 2: The redshift distribution per redshift bin (\(\Delta z=0.2\)) per square degree of the haloes with \(M_{200}>10^{12}M_{\odot}\), averaged throughout the entire mock sky up to \(z=2\).
In summary, we adopt the model outputs from the SAMs in Paper I to create the mock X-ray observations of the hot gaseous haloes. In Fig. 3, we show a flowchart to describe the steps and procedures in this section. Here we briefly summarize the steps that we follow:
(i) We adopt the L-Galaxies model outputs running on MS halo merger trees, which are stored in cubic boxes in discrete redshift bins.
(ii) Based on the spatial information (3D positions and velocities) of each halo, we stack the simulation boxes in a 3D grid and apply "random tiling" to the grid to suppress spurious radial features. Then we convert the Cartesian coordinates of each halo to spherical coordinates with respect to the observer.
(iii) We generate light cones up to different redshift with the angular size of \(1^{\circ}\times 1^{\circ}\).
(iv) Using the physical properties (X-ray flux, gas temperature and gas metallicity) from the model outputs, we generate mock X-ray spectra of the hot gas in each halo with the SOXS package.
(v) We project the X-ray luminosity profiles in 3D shells to the 2D surface brightness \(I_{X,i}\) and derive the X-ray emissivity images with the mock spectra. For the haloes at high redshift, redshift corrections are made to the mock spectra and images.
(vi) Considering the device parameters, we simulate the observations for X-ray telescopes (see the following sections).
### Examples of mock images and spectra for clusters
In this subsection, we will show a few mock images and spectra of hot gas in clusters at different redshifts. Considering the methods of contamination removal and member identification for clusters by cross-matching the X-ray sources with multi-wavelength samples (e.g., Salvato et al., 2022), the mock images and spectra shown hereafter are based on the clusters in our mock data. We identify the members of a mock cluster through the halo merger tree in MS, i.e., all the central (Type 0&1) and satellite galaxies (Type 2) in the subhaloes (the sub-structures within a larger virialised halo) of a main FoF halo belong to one cluster; detailed definitions of the FoF halo and subhalo can be found in Springel et al. (2005) and Croton et al. (2006). We define the central galaxy of a cluster as the galaxy at the center of its FoF halo.
To mimic the observation of the hot gas in nearby and high-redshift clusters, we show examples of mock spectra and emissivity images for three model clusters at different redshifts in Fig. 4. In the left column, we select a cluster with a halo mass similar to that of the Milky Way (\(M_{200}\sim 4\times 10^{12}M_{\odot}\) at \(z\sim 0.03\)) from one of the shallow light cones to mimic the observation of a nearby cluster. For the results at higher redshift, the two mock clusters are from the deep light cone. The middle column shows a cluster with \(M_{200}\sim 4\times 10^{14}M_{\odot}\) at \(z=0.51\), representing a cluster close to the redshift limit of the _HUBS_ mission for observations of extended sources (see Sec. 3.2 for details). In the right column, we select a cluster with \(M_{200}\sim 1.6\times 10^{14}M_{\odot}\) at \(z=2.07\), which is around the redshift limit of cluster detection for _eROSITA_ and _Athena_. To show the satellite structures more clearly, we also select a cluster with \(M_{200}\sim 5\times 10^{14}M_{\odot}\) at \(z\sim 0.047\) and show its emissivity image in Fig. 5, representing a nearby rich cluster with many substructures and satellite galaxies around the central galaxy. In each panel of the emissivity images in Fig. 4 & 5, the largest source represents the X-ray emission from the hot gas around the central galaxy, and the other sources are from the satellite galaxies.
The X-ray emissivity images of these clusters are in the 0.1-2 keV band, and the field of view (hereafter FoV) of the images in Fig. 4 is \(0.5^{\circ}\times 0.5^{\circ}\), while that of the image in Fig. 5 is \(1^{\circ}\times 1^{\circ}\). We can see that all the X-ray sources are spherical in shape because the L-Galaxies model assumes a spherically symmetric profile for each hot gaseous halo. The outer boundary of each emission profile is located at the virial radius \(R_{200}\) of each subhalo, and a satellite beyond the outer boundary of the central galaxy belongs to another subhalo. The current version of the L-Galaxies SAMs includes the hot baryons beyond the halo potential of a cluster (a.k.a. the ejected reservoir). However, the model does not consider the structure and distribution of this unbound reservoir, so all the mock X-ray emission is from gas inside the halo boundary. Although the baryons outside the halo potential are significant, they are very difficult to probe (Nicastro et al., 2022; Walker et al., 2019), and future model work on the spatial distribution of the unbound gas in SAMs would be meaningful (Ayromlou et al., 2022).
The emissivity images of the nearby clusters in Fig. 4 and 5 indicate that many X-ray facilities are capable of detecting structures like the spatial distribution of satellites, as well as obtaining spatially-resolved spectra of the entire cluster. For the cluster at \(z\sim 2\) in the right column of Fig. 4, the angular size is around 2 arcmin, which is around the limit of _HUBS_ (\(1.0^{\prime}\) angular resolution, Cui et al., 2020), while _eROSITA_ (\(15^{\prime\prime}\) angular resolution, Merloni et al., 2012) and _Athena_ (\(5^{\prime\prime}\) angular resolution, Kaastra et al., 2013) have the ability to resolve the hot gas in the central and large satellite galaxies.
The bottom three panels of Fig. 4 present the mock X-ray spectra of the same clusters shown in the top panels.
In each panel, we stack all the spectra from the central and satellite galaxies together to obtain a single spectrum for the cluster. In the left panel, for the nearby cluster, we can see bumps in the 0.5-1.0 keV band of the spectrum, which are the emission lines of the elements O, Fe, Ne, Mg, etc.; we will show the details of these emission lines in the narrow-band spectra in Sec. 3.2. In the right panel, the relatively high gas temperature (mean \(T_{\rm gas}\sim 2.5\) keV) in this massive halo leads to a high ionization fraction for some elements and weak plasma emission lines in the spectrum.
On the other hand, the high-redshift clusters extend the emitted-frame spectrum into the band of high-energy processes, like AGN and black hole accretion. The current L-Galaxies SAMs do include prescriptions for gas accretion and AGN feedback by central black holes (a.k.a. radio-mode accretion), but the X-ray emission from AGN and black holes is not included. Some works suggest that AGN feedback affects the X-ray luminosity of haloes to some extent. Gaspari et al. (2014) shows that the action of purely AGN feedback is to lower the luminosity and heat the gas.
Figure 4: Top panels: The X-ray emissivity images in 0.1-2 keV band with \(0.5^{\circ}\times 0.5^{\circ}\) FoV for model clusters at different redshift. The halo mass and redshift of the 3 clusters are \(M_{200}=10^{12.6},10^{14.6},10^{14.2}M_{\odot}\) and \(z=0.03,0.51,2.07\) respectively. Bottom panels: The mock spectra of the same clusters in the top panels, generated by SOXS package.
Figure 3: The brief flowchart of the steps involved in creating the mock observations in this paper.
Puchwein et al. (2008) finds that AGN feedback significantly reduces the X-ray luminosities of poor clusters and groups. Thus, to obtain more accurate mock X-ray observations of high-redshift clusters, it is important to carry out future work on the prescriptions for the X-ray emission from black hole accretion and AGN feedback in SAMs.
## 3 Mock Observations for X-ray Telescopes
In this section, we will consider the device parameters of real X-ray facilities and simulate the observations of hot gas based on our mock data. As a benchmark, we will first simulate the X-ray spectra of the _ROSAT_ all-sky survey and compare the mock results with the observations. Then, we will focus on the mock observations for the future _HUBS_ mission.
### Mock spectra of clusters for all-sky survey
In this subsection, we will simulate the spectra of the clusters in the first X-ray all-sky survey (RASS) by _ROSAT_ (Voges et al., 1999) as an application of our mock spectra.
Following the procedures in Dai et al. (2007), we select clusters from the mock sky up to \(z\sim 0.2\) and place them at a common distance of 100 Mpc to normalize the apparent luminosity. In Dai et al. (2007), the clusters of RASS are divided into several groups according to their optical richness, and the richness parameter \(N_{*666}\) has a fitted relation with the bolometric X-ray luminosity \(L_{X}\) of a cluster:
\[N_{*666}=10^{0.43\pm 0.03}\bigg{(}\frac{L_{X}}{10^{43}{\rm erg~{}s^{-1}}h^{-2} }\bigg{)}^{0.63\pm 0.04}. \tag{10}\]
Similarly, our mock clusters are divided into four groups based on \(L_{X}\), and the parameters of each group are listed in Tab. 1.
Since the RASS images have already corrected the exposure times for the effects of vignetting, we use the on-axis effective area \(A_{\rm eff}\) from Table 5.3 of the _ROSAT_ handbook2 to generate mock spectra comparable with the RASS results; the power \(F(\nu)\) (unit: cnts s\({}^{-1}\) keV\({}^{-1}\)) received by _ROSAT_ at frequency \(\nu\) is then
Footnote 2: [https://heasarc.gsfc.nasa.gov/docs/rosat/ruh/handbook/node122.html](https://heasarc.gsfc.nasa.gov/docs/rosat/ruh/handbook/node122.html)
\[F(\nu)=f(\nu)*A_{\rm eff}(\nu), \tag{11}\]
in which \(f(\nu)\) (unit: cnts s\({}^{-1}\) cm\({}^{-2}\) keV\({}^{-1}\)) is the flux of a mock spectrum generated by the SOXS package. In addition, the Galactic foreground absorption is considered when we calculate \(f(\nu)\) in Eq. 11, and the column densities \(N_{\rm H}\) of the foreground absorption are listed in the right column of Tab. 1; they are the same values as used in Dai et al. (2007).
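A sketch of this folding step is shown below. The effective-area table here is a stand-in with made-up values; the actual numbers come from Table 5.3 of the _ROSAT_ handbook.

```python
# Sketch of Eq. (11): fold a mock spectrum f(nu) with the on-axis
# effective area A_eff(nu) to get the received power F(nu).
import numpy as np

# illustrative (energy [keV], A_eff [cm^2]) pairs -- NOT the real handbook table
e_tab = np.array([0.1, 0.28, 0.5, 1.0, 1.5, 2.0])
a_tab = np.array([90.0, 180.0, 80.0, 220.0, 150.0, 60.0])

def fold_with_area(energies, flux):
    """energies: keV grid of the mock spectrum; flux: cnts/s/cm^2/keV."""
    a_eff = np.interp(energies, e_tab, a_tab)   # interpolate A_eff at each energy
    return flux * a_eff                          # Eq. (11): cnts/s/keV
```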
Fig. 6 shows the X-ray spectra in the 0.1-2 keV band derived from the mock clusters together with the observational spectra from RASS by Dai et al. (2007); the samples are divided into four groups according to \(L_{X}\) and \(N_{*666}\) in Tab. 1. In each panel, the trough in the spectra at around 0.5 keV is caused by the drop in the sensitivity of _ROSAT_ between 0.3 and 0.6 keV, and the drop at \(E\lesssim 0.2\) keV is caused by the foreground absorption.
As shown in Fig. 6, the mock spectra roughly match the results from RASS in the 0.1-2 keV band, and the main difference appears in Group 4 (clusters with \(L_{X}\gtrsim 10^{43.5}~{\rm erg~s^{-1}}\)). In these bright clusters, the flux of the mock sample at \(E<0.5\) keV is slightly higher than that of RASS, which means the model predicts a lower gas temperature \(T_{X}\) than observed in massive haloes. For the clusters in Group 4, the average gas temperature from the RASS clusters is \(T_{X}=4.7^{+1.4}_{-0.7}\) keV, while \(T_{X}=2.7\) keV for the mock sample. According to the scaling relations of the hot gas in Paper I (detailed discussions on the scaling relations can be found in Section 3.2 of Paper I), the inconsistency in the bright clusters is primarily caused by the overly steep slope of the \(L_{X}-T_{X}\) relation of the mock clusters.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Group & Richness & X-ray luminosity & Galactic absorption \\ & \(N_{*666}\) & (\(L_{X}/10^{42}\) erg s\({}^{-1}\)) & (\(N_{\rm H}/10^{20}\)cm\({}^{-2}\)) \\ \hline
1 & 0.3-1.0 & 0.32-0.97 & 0.7 \\
2 & 1.0-3.0 & 0.97-5.57 & 1.4 \\
3 & 3.0-10 & 5.57-37.7 & 1.8 \\
4 & 10-50 & 37.7-484 & 2.8 \\ \hline \end{tabular}
\end{table}
Table 1: The richness parameter mentioned in Dai et al. (2007), the corresponding bolometric X-ray luminosity and the Galactic foreground absorption column density for each group of the clusters in Fig. 6.
Figure 5: The X-ray emissivity image in 0.1-2 keV band of a nearby rich cluster with \(M_{200}=10^{14.7}M_{\odot}\) at \(z=0.047\), and the FoV of this image is \(1^{\circ}\times 1^{\circ}\). The largest source in the center represents the hot gas around the cD galaxy in the main subhalo, and other emission sources represent the hot gas around the satellite galaxies in subhaloes.
type galaxies in the range \(L_{X}\sim 10^{38}\)-\(10^{43}\) erg s\({}^{-1}\) from _Chandra_ by Babyk et al. (2018), the model in Paper I gives \(L_{X,\rm bol}\sim T_{X}^{4.5}\), which is steeper than the slopes of the clusters in RASS (\(L_{X}\propto T_{X}{}^{2.7\pm 0.7}\), Dai et al. 2007) and eFEDS (\(L_{X,\rm bol}\propto T_{X}{}^{3.01}\), Bahar et al. 2022). On the other hand, according to the discussion at the end of Sec. 2.5, AGN feedback suppresses the X-ray luminosity to some extent. Since the current model does not contain the X-ray emission from AGN, this should be another cause of the discrepancy in the \(L_{X}-T_{X}\) relation in massive clusters. Future work is necessary to improve the model prescriptions for the bright clusters with \(L_{X}\gtrsim 10^{43}\) erg s\({}^{-1}\).
### Mock observations for _HUBS_
_HUBS_ (the Hot Universe Baryon Surveyor) is a mission scheduled to launch around 2030 in China. Thanks to its large \(1\) deg\({}^{2}\) FoV, _HUBS_ is at least an order of magnitude more capable than small-FoV X-ray telescopes of detecting the diffuse emission from hot gas, which is thought to hide in the CGM and IGM. According to the observing strategy (Cui et al. 2020), _HUBS_ plans to observe nearby galaxies and clusters with quite long exposure times (\(\sim 1\) Ms), so the selection of targets is a very important task for achieving the science objectives. On the other hand, the main advantages of SAMs are the large simulation box and the flexibility to investigate the effect of physical processes. In this section, we simulate the image and spectral observations of _HUBS_ using our SAM-based mock data of hot gas. This is an important application of our mock work, which may aid in optimizing future observations for the _HUBS_ mission.
To simulate the observations of _HUBS_ mission, we adopt the key design parameters in Cui et al. (2020), which are shown in Tab. 2. It should be noted that the effective area \(A_{\rm eff}\) in Tab. 2 is a function of energy from the ancil
\begin{table}
\begin{tabular}{c|c} \hline \hline Parameter & Value \\ \hline Effective area (cm\({}^{2}\)) & 0-500 \\ \hline Field of view (deg\({}^{2}\)) & 1.0 \\ \hline Spectral band (keV) & 0.1-2.0 \\ \hline Energy resolution (eV) & \\ Regular & 2.0 \\ Central & 0.6 \\ \hline Angular resolution (arcmin) & 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The key design parameters of _HUBS_ mission used to simulate the observations in Sec. 3.2.
Figure 6: The comparison of the X-ray spectra from the mock clusters and the results from RASS. The mock and observational samples are divided into four panels according to the values of \(L_{X}\) and \(N_{*666}\) in Tab. 1 respectively. In each panel, the shaded area is the mock spectrum within \(\pm 1\sigma\) deviations around the mean values for the mock samples, and the red curve is the average spectrum stacked from RASS data by Dai et al. (2007).
lary response file (ARF) by Zhang et al. (2022)3, which is convolved with the flux \(f(\nu)\) to get the photon counts. In each light cone, we generate wide-band and narrow-band mock spectra for each cluster, in which the wide-band spectra are in the 0.1-2 keV band with a regular energy resolution of 2 eV while the narrow-band spectra are in the bands around emission lines with a resolution of 0.6 eV. Considering the effective area, angular resolution, and exposure time of _HUBS_, we derive the photon-count images in both wide and narrow bands using the emissivity map and the spectra of each cluster.
Footnote 3: We obtain the ARF file from Zhang et al. through private communication, and the current mock work in Sec. 3.2 does not include the effect of the RMF (redistribution matrix file).
In the generation of mock images and spectra, the foreground and background are also included. According to the results from _XMM-Newton_ by Lumb et al. (2002), the cosmic unresolved X-ray background (hereafter XRB) emission is modeled with a power-law spectrum
\[S_{\rm b}=(9.03\pm 0.24)\left(\frac{E}{\rm keV}\right)^{-1.42\pm 0.03}, \tag{12}\]
in which the XRB flux density \(S_{\rm b}\) is in units of \(\rm cnts~{}cm^{-2}~{}s^{-1}~{}keV^{-1}~{}sr^{-1}\). Considering the effective area \(A_{\rm eff}\) of _HUBS_, the background count rate can be calculated by
\[n_{\rm b}=\int S_{\rm b}\left(E\right)A_{\rm eff}\left(E\right)dE \tag{13}\]
in which \(A_{\rm eff}\) is a function of energy from the ARF by Zhang et al. (2022). We then get the value of \(n_{\rm b}\) in the 0.1-2.0 keV band
\[n_{\rm b}=6.3\times 10^{-4}~{}\rm cnts~{}arcmin^{-2}~{}s^{-1}. \tag{14}\]
For the foreground, we assume a constant column density \(N_{\rm H}=2\times 10^{20}\rm cm^{-2}\)(Willingale et al., 2013) for the Galactic foreground absorption, which mainly affects the band below 0.3 keV.
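The background rate in Eq. (14) follows from integrating Eq. (12) against the effective area as in Eq. (13); a hedged numerical sketch is shown below, where the \(A_{\rm eff}\) curve is a placeholder standing in for the ARF of Zhang et al. (2022).

```python
import numpy as np

e = np.linspace(0.1, 2.0, 500)                       # keV
s_b = 9.03 * e**-1.42                                # Eq. (12), cnts/cm^2/s/keV/sr

# Placeholder effective-area curve; the real one comes from the HUBS ARF.
aeff = np.interp(e, [0.1, 0.5, 1.0, 2.0], [100.0, 400.0, 500.0, 300.0])

sr_per_arcmin2 = (np.pi / (180.0 * 60.0))**2         # steradian per arcmin^2
n_b = np.trapz(s_b * aeff, e) * sr_per_arcmin2       # Eq. (13), cnts/arcmin^2/s
```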
In Paper I, our model results predict that haloes between \(10^{12}\) and \(10^{13}M_{\odot}\) tend to contain a large fraction of hot gas with temperatures below 0.5 keV, which is hard to detect with many X-ray facilities (Paerels et al., 2008). Thus, we select a nearby cluster with \(M_{\rm 200}\sim 5\times 10^{12}M_{\odot}\) at \(z\sim 0.014\) from one of the shallow light cones, and show the X-ray mock images in Fig. 7. This is a typical cluster in our mock sample, representing a Local Group-sized halo in the nearby universe, which is similar to a potential target of the _HUBS_ mission (Cui et al. 2020).
The four panels of Fig. 7 show the emissivity image, the wide-band image, and the narrow-band images around the O viii and Mg xi emission lines, respectively. The emissivity and wide-band images are in the 0.1-2 keV band, and the wide-band and narrow-band images are calculated with an exposure time of \(10^{6}\) s. In addition to the cluster in the center of the light cone, the images also include the emission from other sources in the \(1\,\rm deg^{2}\) FoV. We should note that all the sources in the mock images only contain the X-ray photons from hot gas; our current SAM outputs do not contain X-ray emission from other sources, such as AGN and X-ray binaries.
To improve the contrast, the photons from XRB are not shown in Fig. 7. Considering the background count rate of _HUBS_ in Eq. 14, the images in Fig. 7 show that _HUBS_ is capable of detecting the X-ray emission from most of the hot gas inside the virial radius \(R_{\rm 200}\) of the nearby cluster with an exposure time of \(10^{6}\) s.
Fig. 8 shows the X-ray spectra of the cluster in the center of Fig. 7. To mimic the contamination sources in the spectra, we include the contributions of all the sources in the light cone within the solid angle subtended by \(R_{\rm 200}\) around the cluster center, and superpose the redshifted spectra from these contamination sources onto the spectrum of the central cluster. The top panel shows the wide-band spectrum in 0.1-2 keV and the bottom panels show the zoomed-in narrow-band spectra around the emission lines of C vi, O vii, O viii, Fe xvii, Ne x and Mg xi. In each panel, the dashed curve shows the spectrum of the XRB, which represents the background noise in the spectrum.
With the help of the central array with high spectral resolution (0.6 eV for the \(12\times 12\) small-pixel subarray in the center), Fig. 8 indicates that _HUBS_ has the ability to resolve the typical emission lines in nearby clusters, which can be used to study the properties of hot gas, such as the gas temperature and chemical abundance, and to trace the baryon cycle in the cluster environment.
Based on the mock data, we can also predict the number of sources detectable by _HUBS_ at different redshifts. We assume that most baryons in a cluster can be detected if the signal-to-noise ratio (\(\rm S/N\)) is greater than 10 inside the radius \(R_{\rm 500}\) of the halo (\(R_{\rm 500}\) is the radius within which the density of a halo is 500 times the cosmic critical density at the halo's redshift). Assuming \(n_{\rm s}\) and \(n_{\rm b}\) are the count rates of the source and the background, the signal-to-noise ratio can be calculated by
\[\rm S/N=\frac{n_{\rm s}}{\sqrt{n_{\rm s}+n_{\rm b}}}. \tag{15}\]
Considering the criterion \(\rm S/N>10\) and the background count rate of _HUBS_ in Eq. 14, most of the hot baryons of a cluster can be detected if the source count rate in 0.1-2 keV band at \(R_{\rm 500}\) meets
\[n_{\rm s}>3\times 10^{-4}~{}\rm cnts~{}arcmin^{-2}~{}s^{-1}. \tag{16}\]
On the other hand, before the PSF (point spread function) of _HUBS_ is finally determined, we assume that a cluster can be resolved as an extended source if it exhibits variation in the radial profile of the X-ray luminosity. Based on the \(1~{}\mathrm{arcmin^{2}}\) pixel size of _HUBS_ and the gas density profiles in the model results (see the results in Section 2.1 of Paper I and also in Sharma et al. 2012), we assume a cluster to be an extended source if the angular diameter of its \(R_{500}\) is greater than \(3~{}\mathrm{arcmin}\). Clusters with smaller angular size but \(\mathrm{S/N}>10\) inside \(R_{500}\) are point-like sources that _HUBS_ cannot resolve, but it is still possible for _HUBS_ to detect the hot baryons in these clusters with a long enough exposure time.
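The classification implied by Eqs. (15)-(16) and the 3-arcmin angular-size criterion can be summarized by the short sketch below; `n_s` and `theta_500` are hypothetical per-cluster inputs (the 0.1-2 keV count rate at \(R_{500}\) and the angular diameter of \(R_{500}\)).

```python
def classify(n_s, theta_500):
    """Label a mock cluster for the assumed HUBS exposure.

    n_s: 0.1-2 keV source count rate at R_500 (cnts/arcmin^2/s)
    theta_500: angular diameter of R_500 (arcmin)
    """
    # Eq. (16): count-rate threshold corresponding to S/N > 10 (Eq. 15)
    if n_s <= 3e-4:
        return "undetected"
    # angular-size criterion: resolved if R_500 subtends more than 3 arcmin
    return "resolved" if theta_500 > 3.0 else "point-like"
```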
In Fig. 9, we show the redshift distribution per FoV of the clusters with \(\mathrm{S/N}>10\) inside \(R_{500}\), averaged over the entire mock sky up to \(z=1\). The red curve is the number of extended sources, and the blue curve is the number of point-like sources unresolvable by _HUBS_. We can see that the number density of resolved clusters at \(z=0\) is about \(40~{}\mathrm{deg^{-2}}\) in a redshift bin of \(\Delta z=0.2\). The value peaks at \(z\sim 0.4\) with almost \(80~{}\mathrm{deg^{-2}}\) per \(\Delta z\) and drops rapidly at \(z>0.5\) owing to the decrease in angular size. Thus, because of the angular sizes of clusters in the soft X-ray band at different redshifts, the survey of hot baryons in resolved clusters by _HUBS_ should be effective below redshift 0.5.
To test the redshift limit of resolved sources for _HUBS_, we select a massive bright cluster with \(L_{X}>10^{45}~{}\mathrm{erg~{}s^{-1}}\) at \(z\sim 0.5\) and show its mock observations in Fig. 10. Comparing the emissivity map of the cluster in Fig. 4 and the photon-count image in the left panel of Fig. 10, we can see that the selected cluster is close to the angular resolution limit of _HUBS_. In the middle panel of Fig. 10, the \(\mathrm{S/N}\) map indicates that the hot gas in the cluster at redshift around 0.5 can still be detected with an exposure time of \(10^{6}\) s, which is consistent with the results in Zhang et al. (2022) that _HUBS_ can detect groups and clusters beyond \(z\sim 0.3\). In addition, the mock spectrum in the right panel indicates that it is also possible for _HUBS_ to resolve the strong emission lines in the bright cluster at \(z\sim 0.5\), and the count rate of the XRB photons is below \(10^{-2}\) cnts s\({}^{-1}\) keV\({}^{-1}\) (not plotted in Fig. 10).
Figure 7: The X-ray mock images of a light cone up to redshift 0.2 with a \(1^{\circ}\times 1^{\circ}\) FoV; the center of the light cone is a cluster with \(M_{200}\sim 5\times 10^{12}M_{\odot}\) at \(z\sim 0.014\). The top left panel is an emissivity image in the 0.1-2 keV band. The top right panel is a mock image for _HUBS_ in the 0.1-2 keV band with an exposure time of \(10^{6}\) s. The bottom two panels are the corresponding narrow-band images around the O viii and Mg xi emission lines with 5 eV bandwidth. The photons from the XRB are not included in these images to improve the contrast.
On the other hand, Fig. 9 shows that the number of unresolved sources is around zero at \(z=0\) and increases with redshift. It exceeds the number of resolved clusters at \(z>0.3\) and reaches around \(1000\ \,{\rm deg}^{-2}\) per \(\Delta z\) at \(z>0.8\). These unresolved sources are the clusters with angular size below the angular resolution limit. Because of the large number of these point-like sources, the hot gas in these clusters contributes a significant fraction of baryons at \(z\gtrsim 0.3\). It is interesting to test the mock observations of the unresolved clusters.
We select an unresolved cluster at \(z\sim 1\) with a high signal-to-noise ratio. The halo mass \(M_{200}\) of the cluster is around \(3\times 10^{13}M_{\odot}\), and the angular diameter of its \(R_{500}\) is around \(1.2\) arcmin. After a \(10^{6}\) s observation by _HUBS_, about \(8\times 10^{4}\) photons can be detected in the 0.1-2 keV band. The mock spectrum of this cluster is shown in Fig. 11. Although the point-like source at \(z\sim 1\) is below the angular resolution limit, _HUBS_ still has the ability to detect strong emission lines from such a source, like the O viii and Ne x lines around 0.3 and 0.5 keV in the observed frame. It should be valuable to observe some sky areas with long exposure times to capture the signals from point-like cluster sources at \(z>0.5\), which would help to study the properties and redshift evolution of hot baryons in the early universe.
In summary, by taking advantage of the large simulation box of SAMs, the mock observations for _HUBS_ will help with target selection and observation strategies for future surveys. Considering the angular sizes of the clusters, the survey of hot baryons in resolved clusters by _HUBS_ is effective below redshift 0.5. _HUBS_ has the ability to detect
Figure 8: The X-ray mock spectra of the cluster in the center of each panel in Fig. 7. The top panel is the spectrum in the 0.1-2 keV band with a spectral resolution of 2 eV, and the bottom panels are the narrow-band spectra around the C vi, O vii, O viii, Fe xvii, Ne x and Mg xi emission lines with a spectral resolution of 0.6 eV. In each panel, the dashed curve shows the spectrum of the photons from the XRB. The drop at the left end of the wide-band spectrum is caused by the Galactic foreground absorption and the decrease of the effective area below 0.3 keV.
the emission lines of hot gas in clusters at \(z>0.5\), and the observation of point-like sources with long exposure times can be used to study the hot baryons in the early universe.
## 4 Summary
In this paper, we create mock X-ray observations of hot gas in galaxy clusters based on the model outputs of a new extension of the L-Galaxies SAMs from our recent work in Paper I. First, we use the coordinates and velocities in the model outputs to build mock light cones out to both low and high redshifts. In each light cone, we use the bolometric X-ray flux, gas temperature and gas metallicity to generate mock X-ray spectra for galaxy clusters with the SOXS package, and then derive the mock X-ray images of each cluster based on the spectra and the projected X-ray luminosity profiles. Using the mock data, we simulate the X-ray spectra for the _ROSAT_ all-sky survey and compare them with the observational results. We then consider the design parameters of the _HUBS_ mission and simulate the observation of hot gas for _HUBS_ to evaluate the prospects of a future survey of hot baryons, which is an important application of our mock work.
The main conclusions of this paper are:
(i) Our mock X-ray observations of hot gas can approximately match the results from X-ray telescopes.
(ii) Due to the angular size of the clusters, the survey of hot baryons in resolved clusters by _HUBS_ is effective below redshift 0.5. _HUBS_ has the ability to detect the emission lines of hot gas in clusters at \(z>0.5\), and the observation of point-like sources with long exposure time can be used to study the hot baryons in the early universe.
(iii) The mock X-ray observations provide the opportunity to make target selections and optimize the observation strategies for forthcoming X-ray facilities by taking advantage of the large simulation box and flexibility of SAMs.
This paper demonstrates a few applications of our mock data of hot gas, and many follow-up studies can be carried out in the future. One possibility is an end-to-end simulation of the all-sky hot gas surveys of _HUBS_ and _eROSITA_ that considers various systematic and instrumental effects, such as AGN background sources, the point spread function, and the redistribution matrix file, which would provide the source selection and detection functions at different redshifts. Another is to create mock catalogues with SAM outputs based on ELUCID (Wang et al., 2016), a constrained N-body simulation capable of reproducing the spatial distribution of nearby galaxies and clusters in the real universe, and to simulate the X-ray observations of clusters at given positions on the real sky.
In future SAM work, it is also necessary to improve the physical prescriptions for the hot gas and X-ray emission, including the X-ray emission from AGN to improve the scaling relations in bright clusters, the cooling and feedback processes in inner haloes to improve the density profiles in the core regions of clusters, and also the distribution of hot baryons beyond the halo virial radius \(R_{200}\), which is proposed to be important for the missing baryons in hydrodynamic simulations (e.g., Martizzi et al., 2019; Ayromlou et al., 2022).
The authors thank the anonymous referee for the helpful suggestions. We acknowledge the support from the National SKA Program of China No. 2020SKA0110102, the fund for key programs of Shanghai Astronomical Observatory E195121009, and Shanghai Committee of Science and Technology grant No.19ZR1466700. FY is supported in part by the Natural Science Foundation of China (grants 12133008, 12192220, and 12192223). We thank Dr. Zheng Yunliang at Shanghai Jiao Tong University for his help with the eFEDS data. We thank Prof. Cui Wei at Tsinghua University for his suggestions on carrying out the work in this paper.
|
2305.17219 | GVdoc: Graph-based Visual Document Classification | The robustness of a model for real-world deployment is decided by how well it
performs on unseen data and distinguishes between in-domain and out-of-domain
samples. Visual document classifiers have shown impressive performance on
in-distribution test sets. However, they tend to have a hard time correctly
classifying and differentiating out-of-distribution examples. Image-based
classifiers lack the text component, whereas multi-modality transformer-based
models face the token serialization problem in visual documents due to their
diverse layouts. They also require a lot of computing power during inference,
making them impractical for many real-world applications. We propose, GVdoc, a
graph-based document classification model that addresses both of these
challenges. Our approach generates a document graph based on its layout, and
then trains a graph neural network to learn node and graph embeddings. Through
experiments, we show that our model, even with fewer parameters, outperforms
state-of-the-art models on out-of-distribution data while retaining comparable
performance on the in-distribution test set. | Fnu Mohbat, Mohammed J. Zaki, Catherine Finegan-Dollak, Ashish Verma | 2023-05-26T19:23:20Z | http://arxiv.org/abs/2305.17219v1 | # GVdoc: Graph-based Visual Document Classification
###### Abstract
The robustness of a model for real-world deployment is decided by how well it performs on unseen data and distinguishes between in-domain and out-of-domain samples. Visual document classifiers have shown impressive performance on in-distribution test sets. However, they tend to have a hard time correctly classifying and differentiating out-of-distribution examples. Image-based classifiers lack the text component, whereas multi-modality transformer-based models face the token serialization problem in visual documents due to their diverse layouts. They also require a lot of computing power during inference, making them impractical for many real-world applications. We propose, GVdoc, a graph-based document classification model that addresses both of these challenges. Our approach generates a document graph based on its layout, and then trains a graph neural network to learn node and graph embeddings. Through experiments, we show that our model, even with fewer parameters, outperforms state-of-the-art models on out-of-distribution data while retaining comparable performance on the in-distribution test set.
## 1 Introduction
The digitization of documents and their intelligent processing in industries such as finance, insurance, and medicine have driven the rapid development of structured document understanding methods, a.k.a. document AI. Document classification, i.e., labeling documents, is one of the essential tasks in document AI. A number of deep convolutional neural network (CNN) and Transformer-based models have achieved superior performance on many document-AI tasks (Xu et al., 2021; Lee et al., 2021, 2022). However, they tend to be large models with hundreds of millions of parameters, with a correspondingly high computational demand that can be a challenge in real-world applications. Moreover, many of them fail to perform well on out-of-distribution (OOD) data (Larson et al., 2021, 2022). This is because, in many cases, training and testing examples come from a fixed distribution, such as a particular language, time frame, or industry. However, the layout of documents evolves over time, and a model should perform well on such out-of-distribution data. Further, a model is expected to differentiate between known and unknown categories of documents, thus minimizing false-positive predictions during testing.
Initial work on document classification employed off-the-shelf image classifiers (Jain and Wigington, 2019; Bakkali et al., 2020) and models pre-trained on ImageNet (Deng et al., 2009) or similar datasets. These methods struggle to label documents that have similar layouts but different text content. Later, the focus shifted towards language models (Li et al., 2021; Lee et al., 2022) and multi-modality models (Bakkali et al., 2020; Xu et al., 2021; Lee et al., 2021; Wang et al., 2022). These models also incorporate layout information obtained from optical character recognition (OCR). Therefore, the performance of these methods, particularly transformer-like models, degrades with the imperfections of the OCR engine, such as errors in the parsed text or in the token sequence order. Almost all of these methods try to improve performance on the in-distribution test set, neglecting generalization to real-world applications. Confirming this, Larson et al. (2022) recently collected an OOD version of the RVLCDIP dataset (Harley et al., 2015) and evaluated several image-based and multi-modal classifiers; none of them performed well on the OOD dataset.
Our method, called GVdoc (for **G**raph-based **V**isual **DO**cument **C**lassification), studies document classification as a graph classification problem, where we take text words as nodes and the relationships between words as edges in a graph. We generate a document-level graph using the layout information from OCR (see Figure 1) and learn embeddings using graph neural networks (GNNs). GVdoc is more robust to changes in the test set; hence it shows improved performance on out-of-distribution data. We make the following contributions:
* We introduce graph-based document modeling that leverages both (potentially noisy) reading order and spatial layout in graph construction, and learns embeddings using GNNs.
* We empirically show that compared with other systems, our model is better able to generalize to test data drawn from a different distribution than the training data.
## 2 Related Work
Visual Document Classification: CNNs have achieved excellent performance on natural scene images, so they became the first obvious choice for visual document classification Das et al. (2018); Jain and Wigington (2019); Bakkali et al. (2020). However, documents have overlapping intra-class visual and structural characteristics Bakkali et al. (2020), which makes visual features less discriminative for classification. The semantics of the text in the document and the layout are essential to understand visual documents.
A second line of work studies document classification as a sequence classification problem Lee et al. (2022); Li et al. (2021); Wang et al. (2022). They follow language modeling strategies but, aside from text, also incorporate layout information. Such approaches parse text and layout information by applying OCR to document images and then train transformer-like models. StructuralLM Li et al. (2021) adds text and layout embeddings and trains a transformer model (similar to BERT Devlin et al. (2018)) on specialized pre-training tasks. Some recent works employ multi-modal features including visual, text and layout Xu et al. (2021); Peng et al. (2022); Lee et al. (2021). These models train a single transformer on concatenations of text and visual tokens Xu et al. (2021) or train a separate transformer branch for both text and visual modalities Peng et al. (2022). The methods that utilize text take serialized tokens from OCR as input, so their performance varies with the correctness of the OCR engine. For example, if we replace the proprietary Microsoft Azure OCR in LayoutLMv2 Xu et al. (2021) with Tesseract 1, an open source OCR, its performance drops for visual document classification Larson et al. (2022).
Footnote 1: [https://github.com/tesseract-ocr/tesseract](https://github.com/tesseract-ocr/tesseract)
Transformer-based models consume the input sequence in OCR reading order Xu et al. (2021); Li et al. (2021), which may not reflect the tokens' actual reading order Lee et al. (2021); Liu et al. (2021). Therefore, a few recent studies model the document as a graph, suggesting several possible edge types. Zhang et al. (2020) proposed k-Nearest Neighbors graphs, but these may contain connections to isolated tokens. Fully connected graphs employed by Liu et al. (2019); Yu et al. (2021) do not leverage the sparsity of the document, hence their approach is similar to transformers. On the other hand, Cheng et al. (2020) relied on a proprietary OCR technology to identify "text fields", then utilized a 360-degree line-of-sight (LoS) graph. We initially used LoS graphs but they did not show very good performance. FormNet Lee et al. (2022) models a document as a graph using a \(\beta\)-skeleton graph Kirkpatrick and Radke (1985) and tries to minimize the serialization error by learning localized Super-Token embeddings using graph convolutions before a transformer. However, it uses the ETC Transformer Ainslie et al. (2020) for schema learning from GCN-encoded structure-aware Super-Tokens.
Our approach differs from prior graph-based work in two important ways: graph generation and
Figure 1: Sample document graph where the bounding boxes of words are shown by black boxes, and paragraphs by blue boxes. Left side figure shows \(\beta\) skeleton edges with red lines and right side shows OCR-based paragraph-level edges with green color. The edges from left top corner connect the super node to some representative nodes. The final graph is combination of both of these graphs (see Figure 11 in Appendix).
learning embeddings. Our unique document-level sparse graph incorporates **both** spatial layout and OCR reading order, leveraging the document's sparsity and making our model less sensitive to common mistakes in OCR reading order. Moreover, we solely use a GNN to learn embeddings. Thus, we do not require a transformer component, making our approach more memory-efficient than models that incorporate a transformer Lee et al. (2022); Wei et al. (2020); Yu et al. (2021). Our approach also uses more expressive edge embeddings than that of Liu et al. (2019).
Feature fusion: Initial research simply added together the text and layout embeddings Xu et al. (2021); Hong et al. (2022), incorporated position bias in the attention mechanism Garncarek et al. (2021); Powalski et al. (2021), designed cross-modality attention layers Wang et al. (2022); Peng et al. (2022); Li et al. (2021), and explored \(1D\) position and \(2D\) layout aware attention weights using a disentangled matrix Peng et al. (2022). LiLT Wang et al. (2022) adds attention weights from layout and text embeddings and updates both types of embeddings through two separate transformers. However, adding attention weights does not fully leverage the cross-domain features. SelfDoc Li et al. (2021) took the Value (V) of one modality as the Key (K) for the other modality while computing cross-attention in transformer layers to learn dependencies between language and vision features. Finally, it added the features of both text and visual modalities.
## 3 GVdoc Document Graph
We now describe our approach for representing document using both textual and layout features. We represent a document \(D\) as a graph where each token is a node and edges reflect the spatial relationship between them.
Nodes: We define vertices for all tokens as \(V=\{v_{1},v_{2},...,v_{N}\}\) where the features of \(v_{i}\) are a fusion of the text and layout embeddings defined later in Equation (5). In addition, we define a virtual super node that summarizes the graph, similar to the \(CLS\) token in BERT.
Edges: Token sequence can be important in understanding text, but this information provided by OCR is often noisy. We therefore generate edges in the document graph reflecting two types of relationships between vertices: (a) "ball-of-sight" using a \(\beta\)-skeleton graph Kirkpatrick and Radke (1985) and (b) paragraph-based neighborhood.
A \(\beta\)**-skeleton graph**Kirkpatrick and Radke (1985) defines an edge between two bounding boxes if both intersect a circle that does not intersect any other bounding box; the resulting "ball-of-sight" graph is sparser than one using _line_-of-sight edges Wang et al. (2022). Lee et al. (2021, 2022) found this useful for message passing in GNNs.
The **paragraph-based neighborhood** connects tokens within the same paragraph and connects paragraphs based on OCR's reading order predictions. While we could fully connect all tokens in the same paragraph, we aim to reduce computation by increasing sparsity; therefore, we add edges for each token with the \(k\) nearest neighbors within the same paragraph. Then, for each pair of paragraphs that are adjacent in the OCR's reading order, we define an edge between the last token of the prior paragraph and the first token of the following paragraph. Finally, we define a super-node and connect it with the first and last token of each paragraph, considering them as representative tokens of the paragraph.
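A minimal sketch of this construction is given below; it is our reading of the text rather than released code, and `paragraphs` is a hypothetical list of token-index lists already sorted in OCR reading order.

```python
def paragraph_edges(paragraphs, k=10, super_node=-1):
    """Paragraph-based neighborhood: k nearest tokens (by reading order)
    within each paragraph, reading-order links between adjacent paragraphs,
    and a virtual super node tied to each paragraph's first/last token."""
    edges = set()
    for para in paragraphs:
        for i, u in enumerate(para):
            for v in para[max(0, i - k):i]:      # k preceding in-paragraph tokens
                edges.add((v, u))
        edges.add((super_node, para[0]))          # representative tokens
        edges.add((super_node, para[-1]))
    for prev, nxt in zip(paragraphs, paragraphs[1:]):
        edges.add((prev[-1], nxt[0]))             # adjacent-paragraph link
    return edges
```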
To construct the final graph, we take the union of the edges from the \(\beta\)-skeleton graph and the paragraph-based neighborhood as shown in Figure 1. Thus, we generate a graph that is sparse but also has enough connections for learning node embeddings through message passing in the GNN (as evident in Table 7). For the edge between connected vertices \(v_{i}\) and \(v_{j}\), we define edge features by concatenating (a) distance between all four corners and centers of token bounding boxes of \(v_{i}\) and \(v_{j}\), (b) absolute distance on horizontal and vertical axes, and (c) ratio of height and width.
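The edge features can be computed as in the sketch below; the model uses 21-dimensional features, and the exact decomposition here (corner/center distances, signed offsets, axis distances, and ratios) is an illustrative assumption.

```python
import numpy as np

def edge_features(box_i, box_j):
    """Geometric features for an edge between two (x1, y1, x2, y2) boxes."""
    def points(b):
        x1, y1, x2, y2 = b
        return np.array([(x1, y1), (x2, y1), (x1, y2), (x2, y2),
                         ((x1 + x2) / 2, (y1 + y2) / 2)])  # corners + center
    pi, pj = points(box_i), points(box_j)
    dists = np.linalg.norm(pj - pi, axis=1)        # 5 point-pair distances
    offsets = (pj - pi).ravel()                    # 10 signed offsets
    dx = abs(pj[-1, 0] - pi[-1, 0])                # absolute center offsets
    dy = abs(pj[-1, 1] - pi[-1, 1])
    ratios = [(box_j[3] - box_j[1]) / max(box_i[3] - box_i[1], 1e-6),   # height
              (box_j[2] - box_j[0]) / max(box_i[2] - box_i[0], 1e-6)]   # width
    return np.concatenate([dists, offsets, [dx, dy], ratios])
```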
## 4 GVdoc Model
Our GVdoc model, shown in Figure 2, consists of input embeddings, feature fusion, and task-specific prediction modules. We learn node embeddings in an end-to-end fashion through various unsupervised pre-training tasks. Then, we fine-tune the model for downstream tasks.
### Input embedding
Text embedding: Our text embedding module is similar to BERT's Devlin et al. (2018). To get embeddings of the text (T), we add token embeddings, token type embeddings, and position embeddings,
given as
\[e_{t}=e_{token}(T)+e_{type}(T)+e_{1p}(T) \tag{1}\]
where, \(e_{token}\), \(e_{type}\), \(e_{1p}\) are token, token type and position embedding layers, respectively, and \(e_{t}\in R^{d}\) are text embeddings.
Layout embedding: OCR provides text tokens (T), their bounding boxes \(T_{box}\), and paragraph-level bounding boxes \(P_{box}\). A bounding box contains the coordinates of the top left and bottom right corners, given as \([(x_{1},y_{1}),(x_{2},y_{2})]\), of a box that covers the token or paragraph. Most document AI models employ token-level bounding boxes for layout embeddings, which allows the models to localize the text in the layout. StructuralLM Li et al. (2021) divides the image into fixed-size grids and uses cell bounding boxes instead of token bounding boxes, showing that the model can encode better contextual information with cell bounding boxes. However, dividing the image into cells might put irrelevant tokens in the same cell or split a token across two cells. To improve reading order in layout-rich documents, some recent approaches Peng et al. (2022) first detect the different text components in the document image and then serialize the tokens from OCR per text component. Motivated by Peng et al. (2022), we employ text component (paragraph) level layout information for learning layout embeddings. We concatenate the embeddings of paragraph-level bounding boxes and token-level bounding boxes. Then, we use one fully connected layer to map back to the hidden dimension, given as:
\[e_{l}=\textit{fc}(e_{tl}(T_{box})\mid\mid e_{pl}(P_{box}),\theta) \tag{2}\]
where \(\mid\mid\) denotes concatenation, \(e_{tl}\) is a layout embedding layer that encodes token bounding boxes in \(R^{d}\), and \(e_{pl}\) is a layout embedding layer that encodes paragraph bounding boxes in \(R^{d}\). Both layout embeddings are concatenated to yield an \(R^{2d}\) embedding, which is mapped back into \(R^{d}\) through a fully connected layer. Thus, our layout embeddings \(e_{l}\) contain the coarse- and fine-grained locations of the tokens in the document layout.
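A minimal PyTorch sketch of Eqs. (1)-(2) is shown below; the vocabulary size, the number of position slots, and the bucketized coordinate embedding table are assumptions made for the example.

```python
import torch
import torch.nn as nn

d, vocab, max_pos, buckets = 768, 30522, 512, 1024   # assumed sizes

e_token = nn.Embedding(vocab, d)
e_type = nn.Embedding(2, d)
e_1p = nn.Embedding(max_pos, d)
e_coord = nn.Embedding(buckets, d // 4)   # shared coordinate table (assumption)
fc = nn.Linear(2 * d, d)                  # the fc(., theta) of Eq. (2)

def embed(tokens, types, pos, t_box, p_box):      # boxes: (N, 4) integer coords
    e_t = e_token(tokens) + e_type(types) + e_1p(pos)        # Eq. (1)
    e_tl = e_coord(t_box).flatten(-2)                         # (N, d)
    e_pl = e_coord(p_box).flatten(-2)                         # (N, d)
    e_l = fc(torch.cat([e_tl, e_pl], dim=-1))                 # Eq. (2)
    return e_t, e_l
```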
### Feature Fusion Module
Our cross-attention module is similar to the cross-attention layer in Li et al. (2021), except that we explicitly compute the value representation (V) for both modalities (text and layout) by linear mappings, as shown in Figure 3. Thus, our cross-attention module tries to find the most relevant layout embeddings based on text attention weights and vice versa. Formally we define our cross-attention module in Equation (5).
\[\alpha_{t}^{ij} =(e_{t}^{i}W_{t}^{hQ})(e_{t}^{j}W_{t}^{hK})/\sqrt{d_{k}} \tag{3}\] \[\alpha_{l}^{ij} =(e_{l}^{i}W_{l}^{hQ})(e_{l}^{j}W_{l}^{hK})/\sqrt{d_{k}}\] (4) \[v_{i}^{h} =\sum_{j\in N_{i}}\alpha_{t}^{ij}(e_{l}^{i}W_{l}^{hV})+\alpha_{l} ^{ij}(e_{t}^{i}W_{t}^{hV}) \tag{5}\]
where the superscript \(h\) represents an attention head, \(d_{k}=d/H\) is the projection dimension (with \(H\) being the number of the attention heads), \(e_{t}^{i}\) and
Figure 3: Feature fusion module: Computes cross attention between text embeddings and layout embeddings.
Figure 2: GVdoc overview: OCR returns text tokens, their bounding boxes, and paragraph-level bounding boxes, which are then fed into respective embedding layers. Token and paragraph bounding box embeddings are merged by a fully connected layer and fused with token embeddings through a fusion module. The model is pre-trained on Masked Language Modeling (MLM), Masked Position Modeling (MPM), and Cell Position Prediction (CPP) tasks. Finally, the pre-trained model is fine-tuned for the classification task.
\(e_{l}^{i}\) are the text and layout embedding vectors fused into node embeddings \(v_{i}^{h}\in R^{d_{k}}\) for head \(h\). \(W^{hQ}\), \(W^{hK}\), and \(W^{hV}\) are \(R^{d\times d_{k}}\) learnable weights that linearly transform embeddings into queries (Q), keys (K) and values (V), respectively. Node embeddings from all attention heads are concatenated to yield the final node embeddings of dimension \(d\).
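A single-head sketch of Eqs. (3)-(5) is given below; the softmax over the attention weights and the use of full (rather than neighborhood-restricted) attention are simplifying assumptions, and the multi-head version concatenates the per-head outputs.

```python
import torch
import torch.nn as nn

class CrossFusion(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.qt, self.kt, self.vt = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.ql, self.kl, self.vl = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
        self.scale = d ** 0.5

    def forward(self, e_t, e_l):                          # (N, d) each
        a_t = (self.qt(e_t) @ self.kt(e_t).T) / self.scale    # Eq. (3)
        a_l = (self.ql(e_l) @ self.kl(e_l).T) / self.scale    # Eq. (4)
        # Eq. (5): text attention gathers layout values and vice versa
        return a_t.softmax(-1) @ self.vl(e_l) + a_l.softmax(-1) @ self.vt(e_t)
```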
### Graph Learning
The document-graph generation described in Section 3 yields node features, an adjacency matrix, and edge features. We chose the Graph Attention Network (GAT) (Velickovic et al., 2017) as the message passing network for learning node embeddings. The super node is used to predict the graph (document) label. Our model is first pre-trained in a fashion similar to most transformer-based document AI models, on the following three tasks.
#### 4.3.1 Masked Language Modeling (MLM)
Mask Language Modeling (MLM) is a widely adopted pre-training task in language modeling, involving the masking of random tokens in a text with the special token \(MASK\), which the model then aims to predict. Consistent with previous studies (Xu et al., 2021; Li et al., 2021; Lee et al., 2022), we adopt a masking strategy in which \(15\%\) of the tokens are masked. Subsequently, the model learns to estimate the masked tokens based on the information provided by their neighboring tokens.
#### 4.3.2 Masked Position Modeling (MPM)
Each token in the document has associated location information, represented by a bounding box, which aids in understanding the document's layout. Inspired by the approach presented in Saha et al. (2021), we randomly replace \(15\%\) of the bounding boxes with a fixed bounding box \([0,0,0,0]\). Subsequently, the model is tasked with predicting the masked token-level bounding boxes through a regression task. It is important to note that we do not mask the bounding boxes at the paragraph level, allowing the model to retain access to coarse-grained layout information. As a result, the model's predictions focus solely on the fine-grained layout details while utilizing the provided coarse-grained layout information.
#### 4.3.3 Cell Position Prediction (CPP)
Motivated by (Li et al., 2021), we divide the document image into a \(K\times K\) grid. A token is assigned the number of the cell in which the center of its bounding box lies. Then, for each token, the model is trained to predict the specific cell within the grid to which it belongs. This task helps the model narrow down the location of tokens within the layout.
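The CPP target can be computed as in the small sketch below; the page size and grid size \(K\) are assumed values.

```python
def cell_index(box, page_w, page_h, k=7):
    """Map a token box (x1, y1, x2, y2) to its K x K grid cell label."""
    cx = (box[0] + box[2]) / 2                 # bounding-box center
    cy = (box[1] + box[3]) / 2
    col = min(int(cx / page_w * k), k - 1)
    row = min(int(cy / page_h * k), k - 1)
    return row * k + col                       # label in [0, k*k)
```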
## 5 Experiments
We hypothesize that our GVdoc model will be more robust to changes in the test distribution than other models. We therefore designed experiments to measure how our model performed on two tasks: (a) classifying in-domain but out-of-distribution documents, and (b) distinguishing out-of-domain documents from in-domain documents.
### Baseline methods
For baseline comparison, we chose models that cover different architectures including CNNs (VGG-16 (Simonyan and Zisserman, 2015), GoogLeNet (Szegedy et al., 2015)), image transformers (DiT) (Li et al., 2022), and models that use language modeling (LayoutLMv2 (Xu et al., 2021), LayoutLMv3 (Huang et al., 2022)). Following (Larson et al., 2022), we compare GVdoc with above mentioned models.
### Datasets
We use the RVLCDIP (Harley et al., 2015) dataset as our in-distribution and in-domain data, then use RN and RO (Larson et al., 2022) as our out-of-distribution and out-of-domain datasets, respectively.
**RVLCDIP** (Harley et al., 2015) is a subset of IIT-CDIP (Lewis et al., 2006), consisting of scanned and noisy document images from litigation involving the American tobacco industry. The images are labeled with 16 categories including _forms_, _newspaper_, _scientific publication_ and so on. The dataset has \(320,000\) training samples, and \(40,000\) validation and testing examples each. We fine-tune all models in this work on RVLCDIP's training set. We use RT to refer to RVLCDIP's test set.
**RVLCDIP-N (RN)** (Larson et al., 2022) is an out-of-distribution but in-domain set. It contains \(1,002\) documents belonging to 12 categories of the RVLCDIP dataset, making it in-domain. However, the documents are not taken from the American tobacco industry or IIT-CDIP, so the samples come from a different distribution.
**RVLCDIP-O (RO)** (Larson et al., 2022) was collected from Google and Bing searches and the public Document Cloud 2 repository. It has \(3,415\) samples, and those documents do not match any class in RVLCDIP, i.e., they are both out-of-distribution and out-of-domain.
Footnote 2: [https://www.documentcloud.org](https://www.documentcloud.org)
### Metrics
Robustness to out-of-distribution data. To test how robust each model is to a change in distribution, we compare the model's accuracy on the RVLCDIP test set (RT) and the OOD but in-domain RN. We report both micro-accuracy, calculated as the ratio of true positives to the total number of samples, and macro-accuracy, calculated by averaging per-class accuracies. A robust model will maintain micro- and macro-accuracy on RN close to what it achieved on RT.
Identifying out-of-domain data. To test models' effectiveness at identifying out-of-domain data, we follow Larson et al. (2022) in using metrics that describe the separability of confidence scores for in- and out-of-domain examples. A classifier that is good at identifying out-of-domain data should assign high confidence scores to its predictions for in-domain data and low confidence scores to its predictions for out-of-domain data. If we chose a confidence threshold \(t\), we could make a binary classifier that labels all examples with confidence \(\geq t\) in-domain and all examples with confidence \(<t\) out-of-domain; we could then calculate its accuracy, but that accuracy would depend upon the choice of \(t\). False positive rate at \(95\%\) true positive rate (FPR95) sets \(t\) at a level that gives \(95\%\) true positives and then measures how many negative examples (out-of-distribution) are classified as positive (in-distribution). A model with a lower FPR95 value is better at differentiating in- versus out-of-distribution data.
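A hedged sketch of FPR95 under this definition is shown below; `conf_in` and `conf_out` are hypothetical arrays of confidence scores for in-domain and out-of-domain samples.

```python
import numpy as np

def fpr_at_95_tpr(conf_in, conf_out):
    """Threshold so that 95% of in-domain scores pass, then measure the
    fraction of out-of-domain scores that also pass (false positives)."""
    t = np.percentile(conf_in, 5)          # 95% of in-domain scores are >= t
    return float(np.mean(conf_out >= t))
```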
Area under the ROC curve (AUC), similarly, describes how different the confidences are for the in- and out-of-domain examples, but, as a threshold-free measure, is considered as a better option Larson et al. (2022). A high AUC score (close to 1.0) means the model assigns a higher confidence score to in-domain data and a lower confidence score to out-of-domain data. An AUC score of 0.5 means the model assigns similar confidence scores to in- and out-of-domain samples.
We calculate FPR95 and AUC using two confidence measures: maximum softmax probability and energy score.
Maximum Softmax Probability (MSP): Given a model, we compute logits for an example \(x\) as \(z=f(x)\) and then apply softmax to compute the confidence score per class. For the \(i^{th}\) class, the confidence score \(c_{i}\) can be calculated as: \(c_{i}=\frac{e^{z_{i}}}{\sum_{j=1}^{C}e^{z_{j}}}\), where \(C\) is the total number of classes. MSP is the maximum of these \(C\) confidence scores: \(MSP=\max_{i}\{c_{i}\}\).
Energy Score: The energy score (Liu et al., 2020) is defined as: \(E(z,T)=-T\log\sum_{j=1}^{C}e^{(z_{j}/T)}\) where \(T\) is a temperature parameter. For fairness, following (Larson et al., 2022), we use \(T=1\).
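Both confidence measures follow directly from a model's logits, as in the minimal sketch below (with \(T=1\)); higher values indicate more in-domain-like inputs for both measures.

```python
import torch

def confidences(logits, T=1.0):                         # logits: (N, C)
    msp = logits.softmax(dim=-1).max(dim=-1).values     # max softmax prob
    energy = -T * torch.logsumexp(logits / T, dim=-1)   # E(z, T)
    return msp, -energy      # negated energy: higher = more in-domain
```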
### Experimental Setup
Given a document, we use OCR to extract text tokens, their bounding boxes, and paragraph (text entity) level bounding boxes. Proprietary OCR engines such as Microsoft Azure OCR used by LayoutLMv2 (Xu et al., 2021), or the CLOVA OCR API3 used by BROS (Hong et al., 2022), are meticulous, but not all users have access to these tools. Thus, following (Larson et al., 2022), we use Tesseract 4, an open source OCR engine, for parsing words and their locations from document images, and then tokenize them using the BERT tokenizer. To warm-start training, we initialize the text embedding layers with weights from pre-trained BERT.
Footnote 3: [https://clova.ai/ocr](https://clova.ai/ocr)
Footnote 4: [https://github.com/tesseract-ocr/tesseract](https://github.com/tesseract-ocr/tesseract)
GVdoc uses an embedding dimension \(d=768\). That is, the dimension for our token embeddings, token bounding-box embeddings and paragraph bounding-box embeddings is \(d=768\). Token and paragraph bounding-box embeddings are concatenated and mapped to final layout embeddings of dimension \(d=768\). Similarly, text and layout embeddings are fused using feature fusion module to result in node embeddings of dimension \(d=768\). Our feature fusion module contains 4 attention heads. We use input edge features of dimension \(21\), which are also linearly transformed to \(d=768\). We use Graph Attention Network (GAT) (Velickovic et al., 2017) with 4 layers and 4 heads. We normalized edge features and input them to GAT.
In our implementation of the \(\beta\)-skeleton graph (Kirkpatrick and Radke, 1985), we set \(\beta=1\)
and consider a maximum of \(25\) neighbors. For the paragraph-level graph, we connect each node to a maximum of \(10\) nearest neighbors within the same paragraph or text entity, using OCR reading order as the distance metric. We experimented with different numbers of neighbors per text entity, including \(5,10,15\), and \(20\), and found that selecting \(10\) neighbors yielded the best performance in terms of accuracy and computational efficiency. Therefore, for all our experiments, we randomly select between \(2\) and \(10\) neighbors for each token during training, while during testing, we fix the number of neighbors to \(10\). The code for GVdoc is publicly available at [https://github.com/mohbattharani/GVdoc](https://github.com/mohbattharani/GVdoc).
### OOD but in-domain performance on RN
Table 1 compares the number of parameters, the accuracy on RT as reported by the original papers and as reproduced by (Larson et al., 2022), and the accuracy on RN (the OOD but in-domain dataset). As shown in Table 1, almost all previous works reported more than \(90\%\) accuracy on RT except GoogLeNet. More importantly, when these models were tested on the out-of-distribution, in-domain dataset (RN), all of them dropped substantially in accuracy. The original LayoutLMv2 (Xu et al., 2021) utilized the proprietary Microsoft Azure OCR. As a result, when it was evaluated on text parsed using Tesseract OCR, its accuracy on the test set decreased by almost \(7\%\). Furthermore, it performed poorly on the out-of-distribution (OOD) dataset, experiencing a drop of \(33\%\) on RN. Notably, the more recent LayoutLMv3 (Huang et al., 2022) exhibited improved performance compared to LayoutLMv2, but it still experienced a drop of nearly \(10\%\) on the OOD dataset. DiT achieves the highest accuracy on RT among the baselines, yet fails to generalize. The drop in accuracy on RN implies that these models might be overfitting to in-distribution data.
Compared to the top-performing models on the test set, our GVdoc model demonstrates robust performance on RN, indicating its ability to generalize well to out-of-distribution data. Table 2 showcases the per-class accuracy on RN, where GVdoc consistently achieves higher accuracy and correctly categorizes the majority of examples. Notably, our model exhibits high consistency, outperforming or matching the leading results across all classes. In contrast, the other models show inconsistency, with accuracy dropping below \(50\%\) on at least one class. Specifically, for the "Specification" class, our model outperforms all models except LayoutLMv3 (Huang et al., 2022). Moreover, our model achieves nearly \(20\%\) higher accuracy than DiT, despite DiT having almost twice the number of parameters as GVdoc. This highlights the effectiveness and efficiency of our model.
### OOD and out-of-domain results on RO
Here, we compare AUC scores on RT versus RO (T-O) and RN versus RO (N-O) using three metrics: (1) AUC using the Maximum Softmax Probability (MSP), (2) AUC using the energy function, and (3) FPR95. These metrics investigate the ability of a model to differentiate between in-distribution and out-of-distribution data.
RN vs RO (N-O): Table 3 compares AUC scores on the out-of-distribution dataset RN versus RO using MSP and energy metrics. The models are trained on the RVLCDIP training set and tested on the out-of-distribution datasets RN and RO. Then their maximum softmax probability (MSP) and energy-function-based AUC scores are compared. Ideally, N-O should be more challenging as it compares in-distribution and out-of-distribution datasets (Larson et al., 2022). Among previous approaches, DiT (Li et al., 2022) has the highest test accuracy, and its micro and macro AUC scores using MSP are higher than those of VGG-16, GoogLeNet, and LayoutLMv2. However, our GVdoc model outperforms DiT by \(24\) points on micro AUC and almost \(17\) points on macro AUC with MSP. Furthermore, although LayoutLMv3 (Huang et al., 2022) exhibits a test accuracy similar to that of DiT, our model surpasses it. Specifically, GVdoc outperforms LayoutLMv3 by almost \(13\) points on micro AUC and \(9\) points on macro AUC with MSP.
Micro- and macro-AUC scores using the En-
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\# param} & \multicolumn{2}{c|}{RT} & \multicolumn{2}{c|}{RN} & \\ & & Reported & Achieved & Micro & Macro & \(\Delta\) RT-RN \\ \hline VGG-16 & 138M & 91.0 & 90.5* & 66.8 & 69.1 & -23.7 \\ GoogLeNet & 60 M & 88.4 & 87.1* & 60.2 & 61.3 & -26.9 \\ DiT & 87 M & 92.1 & 93.3* & 78.6 & 80.5 & -14.7 \\ LayoutLMv2 & 200 M & 95.3 & 88.7* & 55.6 & 60.0 & -33.1 \\ LayoutLMv3 & 133 M & 95.93 & 93.11 & 82.45 & 83.85 & -10.66 \\ \hline GVdoc & 34 M & - & 87.6 & **89.90** & **89.12** & + 2.3 \\ \hline \end{tabular}
\end{table}
Table 1: Classification accuracy scores on RT (test data) as reported by the original papers and as achieved by (Larson et al., 2022) (indicated by *), compared to RN. \(\Delta\) RT-RN is the difference in accuracy between RT and RN.
ergy function do not follow the same trend. GoogLeNet has the lowest test accuracy and the lowest energy AUC scores. Although VGG-16 has higher test accuracy than LayoutLMv2, it is almost \(2\) points lower on the micro AUC energy score. Nevertheless, VGG-16 is almost \(2\) points better on the macro AUC energy score. DiT and LayoutLMv3 have similar micro and macro scores. GVdoc achieves the highest micro- and macro-AUC scores using energy, suggesting that it can effectively differentiate between the in-distribution and out-of-distribution datasets.
Table 4 compares FPR95 scores, where a model with a lower score is considered better. Micro FPR95 with MSP is in the low \(0.90\)s for all the models except LayoutLMv3, DiT, and ours. Unlike the rest of the models, the energy-based FPR95 scores for our model are almost perfect, i.e., close to zero. This is evident from the distribution of energy scores in Figure 8 (see Appendix). Overall, GVdoc has lower FPR95 scores than the other models. Furthermore, the ROC curves in Figure 5 (see Appendix) confirm that our model can effectively differentiate negative (out-of-distribution) from positive (in-distribution) data. More details are discussed in Appendix A.3.
RT vs RO (T-O): Table 5 analyzes the AUC scores of RT versus the out-of-domain RO data. All models in the study have MSP-based AUC scores ranging from 0.8 to 0.9. While DiT has the highest test accuracy among the baselines, its MSP AUC scores are slightly lower than our model's. Additionally, DiT falls behind in terms of energy-based AUC scores. Although LayoutLMv3 outperforms its predecessor, LayoutLMv2, in terms of macro MSP and energy scores, it is still unable to surpass DiT. However, GVdoc consistently outperforms all the others in the study.
Table 6 presents the FPR95 scores on RT versus RO. In terms of MSP-based FPR95, there is no fixed trend, yet our GVdoc model achieves the second-best score based on macro MSP. In terms of energy-based FPR95, GVdoc outper-
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{MSP} & \multicolumn{2}{c|}{Energy} \\ & Micro & Macro & Micro & Macro \\ \hline VGG-16 & 0.649 & 0.706 & 0.648 & 0.707 \\ GoogLeNet & 0.592 & 0.679 & 0.587 & 0.689 \\ DiT & 0.728 & 0.780 & 0.753 & 0.792 \\ LayoutLMv2 & 0.620 & 0.717 & 0.643 & 0.716 \\ LayoutLMv3 & 0.755 & 0.807 & 0.755 & 0.807 \\ \hline GVdoc & **0.865** & **0.888** & **0.997** & **0.999** \\ \hline \end{tabular}
\end{table}
Table 3: AUC scores (higher better): RN versus RO.

\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{MSP} & \multicolumn{2}{c|}{Energy} \\ & Micro & Macro & Micro & Macro \\ \hline VGG-16 & 0.649 & 0.533 & 0.465 & 0.391 \\ GoogLeNet & 0.748 & 0.620 & 0.665 & 0.560 \\ DiT & 0.587 & **0.463** & 0.499 & 0.417 \\ LayoutLMv2 & 0.717 & 0.592 & 0.753 & 0.574 \\ LayoutLMv3 & **0.578** & 0.531 & 0.576 & 0.528 \\ \hline GVdoc & 0.593 & 0.488 & **0.250** & **0.233** \\ \hline \end{tabular}
\end{table}
Table 6: FPR95 scores (lower better): RT versus RO.

Table 2: The per-class accuracy scores on RN (OOD but in-domain dataset) for each document classification model demonstrate the superior performance of GVdoc across various classes. Our model consistently achieves higher accuracy, outperforming or matching the best model on 10 classes and ranking as the second-best on 3 classes.

Table 4: FPR95 scores (lower better): RN versus RO.

Table 5: AUC scores (higher better): RT versus RO.
forms the rest. VGG-16 achieves a better micro FPR95 score, whereas GVdoc is \(0.146\) points better than VGG-16 in terms of macro FPR95. Although VGG-16 has lower test accuracy than DiT, its energy-based AUC and FPR95 scores are better than DiT's. Overall, GVdoc consistently performs the best in terms of AUC scores and energy-based FPR95, and it is the second-best in MSP-based macro FPR95.
To further investigate this, we plot the MSP scores on RO for the different models in Figure 4. We can see that our GVdoc model predicts lower confidence scores for out-of-domain data samples. Figure 6 (see Appendix) demonstrates that the predicted confidence scores for RN and RT are close to \(1.0\) for most examples. By selecting a proper threshold on the confidence scores, we can correctly differentiate between in-domain and out-of-domain, and in-distribution and out-of-distribution data with our model. The ROC curves in Figure 5 (see Appendix A.3) show that GVdoc is comparable to or even better than the other models.
### Ablation Study
Effect of graph generation methods: As an ablation study, we compare the effect of different graph generation methods for visual documents. Table 7 demonstrates the importance of the \(\beta\)-skeleton graph for document classification. Regardless of the graph generation method, classification accuracy on RT is almost the same. However, using only paragraph-level graphs (based on OCR reading order), the model struggles to perform well on RN. Our global graph, which combines both the \(\beta\)-skeleton and the paragraph-level graph, achieves the best accuracy on RT and RN.
**Number of maximum neighbors per token in the graph.** As discussed in Section 5.4, we discard neighbors from the paragraph-level graph to make it sparse. We constrain the maximum degree per node during training. For testing, we select a fixed number of neighbors per token (degree per node). Table 8 demonstrates that reducing the edges during training makes the model robust to the number of neighbors per token (a generic sketch of such degree capping is given below). Therefore, our GVdoc model shows the best performance on OOD data.
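The paper does not spell out the exact pruning rule, but capping the per-token degree of an edge list is straightforward. The sketch below (all names and the scoring convention are ours) keeps the `max_degree` highest-scoring neighbors of every source node; the score could be, for instance, negative spatial distance between tokens.

```python
import numpy as np

def cap_neighbors(edge_index, scores, max_degree):
    """Keep at most max_degree highest-scoring outgoing edges per source node.

    edge_index: int array of shape (2, E) holding (source, target) pairs.
    scores:     float array of shape (E,); larger means a more important edge.
    """
    keep = []
    for src in np.unique(edge_index[0]):
        idx = np.where(edge_index[0] == src)[0]              # edges leaving src
        keep.extend(idx[np.argsort(-scores[idx])[:max_degree]])
    return edge_index[:, np.sort(np.array(keep))]

# Node 0 has three neighbors; with max_degree=2 its weakest edge is dropped.
edges = np.array([[0, 0, 0, 1], [1, 2, 3, 0]])
print(cap_neighbors(edges, np.array([0.9, 0.1, 0.5, 1.0]), 2))
```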
## 6 Conclusion
In this paper, we address the limitations of existing visual document classification models by modeling a document as a graph and learning its embeddings using a graph attention network. By defining two types of edges (\(\beta\) skeleton and paragraph-based), we leverage the benefits of layout information while minimizing the effects of errors in the OCR reading order. Thus, by effectively embracing coarse and fine-grained layout information, GVdoc generalizes better across different layouts. While most visual document classifiers tend to perform well on in-distribution data, they fail or struggle on out-of-distribution data; our model maintains its performance on OOD data. Through experiments, we demonstrate the generalization of our model on out-of-distribution data.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{RT Acc} & \multicolumn{2}{c|}{RN Acc} \\ & Micro & Macro & Micro & Macro \\ \hline \(\beta\) skeleton & 87.40 & 87.36 & 87.07 & 86.67 \\ Paragraph-level & 87.15 & 87.11 & 84.90 & 85.53 \\ Both & 87.54 & 87.50 & 89.90 & 89.12 \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of graph generation methods: all models achieve almost the same classification accuracy on RT. On RN, most of the learning comes from the \(\beta\) skeleton graph, but the OCR-based paragraph-level graph helps to improve performance on the out-of-domain RN.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Dataset} & \multicolumn{4}{c|}{Number of neighbors} \\ & 5 & 10 & 15 & 20 \\ \hline RN & 89.47 & **89.90** & 89.25 & 89.25 \\ RT & 87.30 & 87.60 & 87.26 & 87.32 \\ \hline \end{tabular}
\end{table}
Table 8: The accuracy of the model varies with the maximum number of neighbors per token during testing.
Figure 4: Distribution of confidence scores on RO for different models. GVdoc consistently demonstrates lower confidence scores on out-of-domain data points, indicating its cautious approach towards assigning class labels to unseen classes.
## 7 Limitations
* We employed Tesseract OCR, an open-source OCR system, which can sometimes make errors in text detection and recognition. In contrast, commercially available OCR engines such as Microsoft Azure OCR are more proficient in detecting text and layout from visual documents. OCR errors can propagate during training and affect the model's performance. For instance, we observed that when Tesseract OCR was used instead of Microsoft Azure OCR, LayoutLMv2 Xu et al. (2021) experienced a \(7\%\) decrease in performance.
* Our model relies on textual and layout features, neglecting the visual component. Various works Li et al. (2021); Xu et al. (2021) have already demonstrated improvements by utilizing visual features along with textual and layout features. We plan to investigate the integration of visual features.
|
2308.11694 | Modular curves $X_0(N)$ with infinitely many quartic points | We determine all modular curves $X_0(N)$ with infinitely many quartic points.
To do this, we define a pairing that induces a quadratic form representing all
possible degrees of a rational morphism from $X_0(N)$ to a positive rank
elliptic curve. | Maarten Derickx, Petar Orlić | 2023-08-22T17:11:03Z | http://arxiv.org/abs/2308.11694v3 | # Modular curves \(X_{0}(N)\) with infinitely many quartic points
###### Abstract.
We determine all modular curves \(X_{0}(N)\) with infinitely many quartic points. To do this, we define a pairing that induces a quadratic form representing all possible degrees of a morphism from \(X_{0}(N)\) to a positive rank elliptic curve.
Key words and phrases: Modular curves, Tetragonal, Tetraelliptic, Quartic point. 2020 Mathematics Subject Classification: 11G18, 14G35, 14K02. The second author was supported by the QuantiXLie Centre of Excellence, a project co-financed by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (Grant KK.01.1.1.01.0004), and by the Croatian Science Foundation under project no. IP-2018-01-1313.
which they proved for \(d=2,3\). However, Debarre and Fahlaoui constructed counterexamples for \(d\geq 4\) [7]. One way to characterize when there are infinitely many points of degree \(d\) on \(C\) is the following theorem.
**Theorem 1.3** ([5, Theorem 4.2. (1)]).: _Let \(C\) be a curve over a number field. There are infinitely many degree \(d\) points on \(C\) if and only if there exists a map \(C\mapsto\mathbb{P}^{1}\) of degree \(d\) or the image of \(C^{(d)}\) in \(\operatorname{Pic}^{d}C\) contains a translate of a positive rank abelian variety._
It can be hard to check, however, whether the image of \(C^{(d)}\) in \(\operatorname{Pic}^{d}C\) contains a translate of a positive rank abelian variety. Kadets and Vogt gave a simpler characterization for \(d=2,3\), which encompasses the previous results of Harris-Silverman [13] and Abramovich-Harris [2].
**Theorem 1.4** ([20], Theorem 1.3).: _Suppose \(X/k\) is a nice curve. Then the following statements hold:_
1. _If_ \(\operatorname{a.irr}_{k}X=2\)_, then_ \(X\) _is a double cover of_ \(\mathbb{P}^{1}\) _or an elliptic curve of positive rank over_ \(k\)_._
2. _If_ \(\operatorname{a.irr}_{k}X=3\)_, then one of the following three cases holds:_ 1. \(X\) _is a triple cover of_ \(\mathbb{P}^{1}\) _or an elliptic curve of positive rank over_ \(k\)_._ 2. \(X\) _is a smooth plane quartic with no rational points, positive rank Jacobian, and at least one cubic point._ 3. \(X\) _is a genus_ \(4\) _Debarre-Fahlaoui curve._
3. _If_ \(\operatorname{a.irr}_{\overline{k}}X=d\leq 3\)_, then_ \(X_{\overline{k}}\) _is a degree_ \(d\) _cover of_ \(\mathbb{P}^{1}\) _or an elliptic curve._
4. _If_ \(\operatorname{a.irr}_{\overline{k}}X=d=4,5\)_, then either_ \(X_{\overline{k}}\) _is a Debarre-Fahlaoui curve, or_ \(X_{\overline{k}}\) _is a degree_ \(d\) _cover of_ \(\mathbb{P}^{1}\) _or an elliptic curve._
**Definition 1.5**.: For a curve \(C\) defined over a field \(k\), the \(k\)-gonality \(\operatorname{gon}_{k}C\) is the smallest integer \(d\) such that there exists a morphism of degree \(d\) from \(C\) to \(\mathbb{P}^{1}\) defined over \(k\).
The question of determining \(\operatorname{a.irr}_{k}C\) is closely related to the \(k\)-gonality of \(C\) and to low degree maps from \(C\) to elliptic curves. Frey [12] proved that if a curve \(C\) defined over a number field \(k\) has infinitely many points of degree \(\leq d\) over \(k\), then \(\operatorname{gon}_{k}C\leq 2d\).
Regarding the curves \(C=X_{1}(M,N)\), all cases when \(C\) has infinitely many points of degree \(d\leq 6\) were determined by Mazur [29] (for \(d=1\)), Kenku, Momose, and Kamienny [26, 21] (for \(d=2\)), Jeon, Kim, and Schweizer [19] (for \(d=3\)), Jeon, Kim, and Park [18] (for \(d=4\)), and Derickx and Sutherland [8] (for \(d=5,6\)). Additionally, Derickx and van Hoeij [9] determined all curves \(X_{1}(N)\) which have infinitely many points of degree \(d=7,8\) and Jeon determined all trielliptic [16] and tetraelliptic [17] curves \(X_{1}(N)\) over \(\mathbb{Q}\).
In this paper, we will study the curve \(C=X_{0}(N)\) and \(k=\mathbb{Q}\). The curve \(X_{0}(N)\) has infinitely many rational points if and only if \(N\in\{1-10,12,13,16,18,25\}\) (i.e. when \(g(X_{0}(N))=0\)). This was proved by Mazur [30] and Kenku [22, 24, 23, 25].
Ogg [34] determined all hyperelliptic curves \(X_{0}(N)\), Bars [3] determined all bielliptic curves \(X_{0}(N)\), as well as all curves \(X_{0}(N)\) with infinitely many quadratic points, and Jeon [15] determined all curves \(X_{0}(N)\) with infinitely many cubic points.
**Theorem 1.6** (Bars).: _The modular curve \(X_{0}(N)\) has infinitely many points of degree \(2\) over \(\mathbb{Q}\) if and only if_
\[N\in\{1-33,35-37,39-41,43,46-50,53,59,61,65,71,79,83,89,101,131\}.\]
**Theorem 1.7** (Jeon).: _The modular curve \(X_{0}(N)\) has infinitely many points of degree \(3\) over \(\mathbb{Q}\) if and only if_
\[N\in\{1-29,31,32,34,36,37,43,45,49,50,54,64,81\}.\]
We here determine all curves \(X_{0}(N)\) with infinitely many quartic points. Our main result is the following theorem.
**Theorem 1.8**.: _The modular curve \(X_{0}(N)\) has infinitely many points of degree \(4\) over \(\mathbb{Q}\) if and only if_
\[N\in\{1-75,77-83,85-89,91,92,94-96,98-101,103,104,107,111,\] \[\qquad\qquad 118,119,121,123,125,128,131,141-143,145,155,159,167,191\}.\]
For \(N\) in the above set, we prove in Section 3 that \(X_{0}(N)\) has infinitely many quartic points. The harder part of the proof is proving that for the other \(N\), there are only finitely many quartic points on \(X_{0}(N)\).
Interestingly, Jeon's methods ([15], Lemmas 1.3-1.5) can be used to solve all \(N\) except \(122\), \(129\), \(158\), and \(166\). Furthermore, we can solve \(158\) and \(166\) via the Castelnuovo-Severi inequality (together with [4], used to determine that the quotients \(X_{0}(158)/\left\langle w_{79}\right\rangle\) and \(X_{0}(166)/\left\langle w_{83}\right\rangle\) are not bielliptic), leaving only \(122\) and \(129\) unsolved.
In Section 2, we prove that any curve \(C/\mathbb{Q}\) of genus \(g\geq 8\) with infinitely many quartic points and finitely many cubic points has a degree \(4\) morphism to \(\mathbb{P}^{1}\) or a positive rank elliptic curve. Using this result for \(C=X_{0}(N)\), along with the fact that all curves \(X_{0}(N)\) with infinitely many cubic points are tetragonal over \(\mathbb{Q}\) (since all of them either have genus \(0\) or \(1\), or are hyper or bielliptic), we get that any \(X_{0}(N)\) of genus \(g\geq 8\) with infinitely many quartic points must admit a degree \(4\) morphism to a positive rank elliptic curve (we will call such curves positive rank tetraelliptic). Therefore, only finitely many \(N\) need to be solved separately. In Section 4, we solve the case \(N=97\), the only \(N\) that needs to be solved separately.
Section 5 contains the technical results used in Section 6, where we determine all positive rank tetraelliptic curves \(X_{0}(N)\). Our main tool for proving that a curve \(C\) over \(\mathbb{Q}\) does not admit a degree \(4\) morphism to an elliptic curve \(E\) over \(\mathbb{Q}\) is the representation of rational morphisms from \(J(C)\) to \(E\) by a quadratic form.
**Proposition 1.9** (Part of the proof of Theorem 6.15).: _Let \(C\) be a curve over \(\mathbb{Q}\) with at least one rational point and \(E\) an elliptic curve over \(\mathbb{Q}\) that occurs as an isogeny factor of \(J(C)\) with multiplicity \(n\geq 1\). Then the degree map \(\deg:\operatorname{Hom}_{\mathbb{Q}}(C,E)\to\mathbb{Z}\) can be extended to a positive definite quadratic form on \(\operatorname{Hom}_{\mathbb{Q}}(J(C),E)\cong\mathbb{Z}^{n}\)._
This statement is a generalization of [35, Corollary III.6.3], which deals with the case when \(C\) is an elliptic curve. The proof uses optimal \(E\)-isogenous quotients defined in Section 6 to prove that \(\operatorname{Hom}_{\mathbb{Q}}(J(C),E)\cong\operatorname{Hom}_{\mathbb{Q}}(E^{n},E)\cong\mathbb{Z}^{n}\). We do not give the proof here because we do not need this generalized statement to prove Theorem 6.15, but are instead able to manually construct the desired quadratic form.
In Section 5.2 we define a pairing on \(\operatorname{Hom}_{\mathbb{Q}}(J(C),E)\) which is an extension of the degree map. Proposition 5.5 tells us that it is a positive definite symmetric bilinear form. It turns out that, in all our cases, the degeneracy maps \(\iota_{d,N,M}\) (defined in Section 5.3) give rise to a basis for \(\operatorname{Hom}_{\mathbb{Q}}(J(C),E)\). More precisely, we have the following result.
**Proposition 1.10**.: _Let \(E\) be an elliptic curve of positive \(\mathbb{Q}\)-rank and conductor \(M\mid N<408\), and let \(f:X_{0}(M)\to E\) be a modular parametrization of \(E\). Then (with the natural embedding \(X_{0}(N)\to J_{0}(N)\)), the maps \(f\circ\iota_{d,N,M}\) form a basis for \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\), where \(d\) ranges over all divisors of \(\frac{N}{M}\)._
Therefore, the coefficients of the quadratic form are the values of the pairing on the basis elements \(f\circ\iota_{d,N,M}\). The main result of Section 5, Theorem 5.13, allows us to explicitly compute these coefficients in terms of the \(q\)-expansion of the modular form associated with \(E\). Finally, we show that these quadratic forms do not take \(4\) as a value and conclude that there are no degree \(4\) rational morphisms from \(X_{0}(N)\) to \(E\). The quadratic forms considered in the proof of Theorem 6.15 are listed in Table 1.
The reason why we only solved the case \(d=4\) is the following. Although we could get a similar result as in Theorem 6.15 for \(d\geq 5\), there is a large number of small genus curves \(X_{0}(N)\) for which we cannot use Theorem 2.2 to connect the degree \(d\) maps with the points of degree \(\leq d\).
For example, when \(d=5\), Theorem 2.2 can only be used for curves of genus \(g\geq 12\). Although we can use the Jacobi inversion theorem to deal with the cases \(g\leq 5\), there are \(34\) curves \(X_{0}(N)\) with genus \(g\in[6,11]\) such that \(J_{0}(N)\) has positive rank over \(\mathbb{Q}\) (for \(d=4\) we were lucky to have only one such small genus case, \(N=97\)). Furthermore, there exists only one pentagonal curve \(X_{0}(N)\), namely \(X_{0}(109)\)[33, Theorem 1.3], and we did not find any degree \(5\) maps from \(X_{0}(N)\) to an elliptic curve. Therefore, we expect that in most \(g\geq 6\) cases the curve \(X_{0}(N)\) will have only finitely many degree \(5\) points.
The Sage codes used in the proofs can be found on: [https://github.com/koffie/mdsage/tree/develop](https://github.com/koffie/mdsage/tree/develop).
## Acknowledgements
We are grateful to Filip Najman for his helpful comments and Kenneth A. Ribet for providing some very useful references to the literature.
## 2. Preliminaries
In this paper, we will use two methods for obtaining quartic points on a curve \(C\) defined over a number field \(k\). Both methods obtain quartic points as pullbacks via rational maps from \(C\). The first method uses a degree \(4\) map to a curve \(C^{\prime}\) with infinitely many rational points (recall that Faltings' theorem implies that the only such curves \(C^{\prime}\) are of genus \(0\) or genus \(1\) with positive \(k\)-rank), and the second method uses a degree \(2\) map to a curve \(C^{\prime}\) with infinitely many quadratic points. The following proposition justifies these methods.
**Proposition 2.1**.: _Let \(k\) be a number field and let \(d\) be a positive integer. Suppose \(C\) and \(C^{\prime}\) are nice curves defined over \(k\) and let \(f:C\mapsto C^{\prime}\) be a morphism of degree \(d^{\prime}\mid d\) defined over \(k\). If \(C^{\prime}\) has infinitely many points of degree \(\frac{d}{d^{\prime}}\) over \(k\), then \(C\) has infinitely many points of degree \(\leq d\) over \(k\)._
Proof.: Let \(P\) be a point on \(C^{\prime}\) of degree \(\frac{d}{d^{\prime}}\) over \(k\) and let \(K\supset k\) be its field of definition. Then the preimage \(f^{-1}(P)\) has size \(\leq d^{\prime}\). Let \(Q\in C(\overline{K})\) be an element of \(f^{-1}(P)\). For every automorphism \(\sigma\in G_{K}\), where \(G_{K}\) is the absolute Galois group of \(K\), we have
\[f(\sigma(Q))=\sigma(f(Q))=\sigma(P)=P=f(Q).\]
Therefore, \(\sigma(Q)\in f^{-1}(P)\) for every \(\sigma\in G_{K}\). This means that, since \(\#f^{-1}(P)\leq d^{\prime}\), \(Q\) must be defined over some field \(L\) such that \([L:K]\leq d^{\prime}\), or equivalently \([L:k]\leq d\).
As we can see, this pullback method gives points of degree \(\leq d\) over \(\mathbb{Q}\). Therefore, if there are infinitely many points of degree \(\leq d-1\) on \(C\), we cannot immediately conclude that \(C\) has infinitely many points of degree \(d\). This can be resolved, however, using Theorem 4.2 of [5] which tells us that, as soon as one of the points in the pullback has degree \(d\), there will be infinitely many points of degree \(d\) on \(C\).
We will use this proposition to find infinitely many quartic points on \(X_{0}(N)\) by taking \(d=4\) and \(d^{\prime}=1\) or \(2\). Interestingly, from Theorems 1.6 and 1.7 it follows that if \(X_{0}(N)\) has infinitely many cubic points, then \(X_{0}(N)\) also has infinitely many quadratic points. This means that, in our case, we do not need to use Theorem 4.2 of [5].
When the genus of \(C\) is high enough, the following theorem by Kadets and Vogt tells us that this pullback method is the only way to obtain points of a certain degree.
**Theorem 2.2** ([20], Theorem 1.4).: _Suppose \(X/k\) is a curve of genus \(g\) and \(\mathrm{a.irr}_{k}X=d\). Let \(m:=\lceil d/2\rceil-1\) and let \(\epsilon:=3d-1-6m<6\). Then one of the following holds:_
1. _There exists a nonconstant morphism of curves_ \(\phi:X\mapsto Y\) _of degree at least_ \(2\) _such that_ \(d=\mathrm{a.irr}_{k}Y\cdot\mathrm{deg}\phi\)_._
2. \(g\leq\max\Big{(}\frac{d(d-1)}{2}+1,3m(m-1)+m\epsilon\Big{)}\)_._
**Corollary 2.3**.: _Suppose \(C/\mathbb{Q}\) is a curve of genus \(g\geq 8\) and \(\mathrm{a.irr}_{\mathbb{Q}}C=4\). Then there exists a nonconstant morphism of degree \(4\) from \(C\) to \(\mathbb{P}^{1}\) or an elliptic curve defined over \(\mathbb{Q}\) with a positive \(\mathbb{Q}\)-rank._
Proof.: We compute \(m=1\) and \(\epsilon=5\). Therefore, case (2) of the previous theorem is impossible and we have a morphism \(f:C\mapsto Y\) of degree \(2\) or \(4\).
If the degree of \(f\) is \(2\), then we have \(\mathrm{a.irr}_{\mathbb{Q}}Y=2\) and \(Y\) is a double cover of \(\mathbb{P}^{1}\) or an elliptic curve with a positive \(\mathbb{Q}\)-rank by Theorem 1.4. If the degree of \(f\) is \(4\), then we have \(\mathrm{a.irr}_{\mathbb{Q}}Y=1\) and \(Y\) is isomorphic to \(\mathbb{P}^{1}\) or an elliptic curve with a positive \(\mathbb{Q}\)-rank by Faltings' theorem.
This means that for \(N\) such that the genus of the curve \(X_{0}(N)\) is at least \(8\), the existence of infinitely many quartic points is equivalent to the existence of a degree \(4\) map to \(\mathbb{P}^{1}\) or to an elliptic curve with a positive \(\mathbb{Q}\)-rank.
## 3. Curves \(X_{0}(N)\) with infinitely many quartic points
In this section, we will prove that for \(N\) listed in Theorem 1.8 the curve \(X_{0}(N)\) has infinitely many quartic points. When \(X_{0}(N)\) already has infinitely many quadratic points (these \(N\) are listed in Theorem 1.6), this is trivial. Now we consider the other cases.
**Proposition 3.1**.: _The curve \(X_{0}(N)\) has infinitely many quartic points for_
\[N\in\{34,43,45,54,64,81\}.\]
Proof.: For each of these \(N\), the quotient \(X_{0}(N)/\left\langle w_{N}\right\rangle\) is an elliptic curve and therefore has infinitely many quadratic points. Now we use Proposition 2.1 for the degree \(2\) quotient map from \(X_{0}(N)\) to \(X_{0}(N)/\left\langle w_{N}\right\rangle\).
**Proposition 3.2**.: _The curve \(X_{0}(N)\) has infinitely many quartic points for_
\[N\in\{38,42,44,51-53,55-58,60-63,65-70,72-75,77-80,83,85,87-89,\] \[\qquad 91,92,94-96,98,100,101,103,104,107,111,119,121,125,131,142,143,167,191\}.\]
Proof.: For each of these \(N\) the curve \(X_{0}(N)\) has \(\mathbb{Q}\)-gonality equal to \(4\) by [33, Tables 1,2,3]. Using Proposition 2.1 we now conclude that there are infinitely many points of degree \(\leq 4\) on \(X_{0}(N)\) for these \(N\). Furthermore, all of these curves \(X_{0}(N)\) have only finitely many cubic points by [15] which means that they have infinitely many quartic points.
**Proposition 3.3**.: _The curve \(X_{0}(N)\) has infinitely many quartic points for_
\[N\in\{57,58,65,74,77,82,86,91,99,111,118,123,141,142,143,145,155,159\}.\]
Proof.: For each of these \(N\) the quotient \(X_{0}^{*}(N)\) is an elliptic curve of rank \(1\) over \(\mathbb{Q}\). Now we use Proposition 2.1 for the degree \(4\) quotient map from \(X_{0}(N)\) to \(X_{0}^{*}(N)\).
**Proposition 3.4**.: _The curve \(X_{0}(121)\) has infinitely many quartic points._
Proof.: The curve \(X_{ns}^{+}(11)\) is an elliptic curve of conductor \(121\), modular degree \(4\), and rank \(1\) over \(\mathbb{Q}\). This curve has Cremona label 121b1 [6] and LMFDB label 121.b2. Now we use Proposition 2.1 for the degree \(4\) map from \(X_{0}(121)\) to this elliptic curve.
**Proposition 3.5**.: _The curve \(X_{0}(128)\) has infinitely many quartic points._
Proof.: The elliptic curve \(y^{2}=x^{3}+x^{2}+x+1\) has conductor \(128\), modular degree \(4\), and rank \(1\) over \(\mathbb{Q}\). This curve has Cremona label 128a1 [6] and LMFDB label 128.a2. Now we use Proposition 2.1 for the degree \(4\) map from \(X_{0}(128)\) to this elliptic curve.
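Both proofs rest on three invariants of a single elliptic curve: its conductor, its modular degree, and its rank. These are quickly double-checked in Sage (a sketch of ours, assuming a Sage session; the labels are the Cremona labels cited above):

```python
# Sage sketch: verify conductor, modular degree and rank for the curves
# used in Propositions 3.4 and 3.5.
for label in ['121b1', '128a1']:
    E = EllipticCurve(label)
    print(label, E.conductor(), E.modular_degree(), E.rank())
# expected: 121b1 121 4 1  and  128a1 128 4 1
```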
## 4. Curves \(X_{0}(N)\) with finitely many quartic points
In this section, we will prove that for \(N\) not listed in Theorem 1.8 the curve \(X_{0}(N)\) has only finitely many quartic points. The first step is to reduce this to a finite problem by giving an upper bound on the \(N\) for which \(X_{0}(N)\) can have infinitely many quartic points.
As we mentioned in the Introduction, Frey's result [12] gives us that any curve defined over \(\mathbb{Q}\) with infinitely many quartic points must have \(\mathbb{Q}\)-gonality \(\leq 8\). Furthermore, the theorem of Abramovich [1] gives a lower bound on the \(\mathbb{C}\)-gonality of any modular curve. In our case, we get \(\mathrm{gon}_{\mathbb{C}}X_{0}(N)\geq\frac{7}{800}N\), which means that for \(N>\frac{8\cdot 800}{7}\) the curve \(X_{0}(N)\) has only finitely many quartic points (here we used the trivial fact that \(\mathrm{gon}_{\mathbb{Q}}C\geq\mathrm{gon}_{\mathbb{C}}C\) for any curve \(C\) defined over \(\mathbb{Q}\)). However, this bound is impractical.
Since for all \(N\) not listed in Theorem 1.8 the curve \(X_{0}(N)\) has \(\mathbb{Q}\)-gonality \(>4\), and since \(g(X_{0}(N))>7\) for all \(N>100\), Corollary 2.3 gives us that any potential \(X_{0}(N)\) with infinitely many quartic points must be tetraelliptic. Now we can get a much better bound for \(N\) using Ogg's inequality.
**Proposition 4.1** ([14], Lemma 3.1, original source [34]).: _For a prime \(p\nmid N\), put_
\[L_{p}(N):=\frac{p-1}{12}\psi(N)+2^{\omega(N)},\]
_where \(\psi(N)=N\prod_{q\mid N}(1+\frac{1}{q})\) and \(\omega(N)\) is the number of distinct prime divisors of \(N\). Then_
\[\#X_{0}(N)(\mathbb{F}_{p^{2}})\geq L_{p}(N).\]
**Corollary 4.2**.: _If the curve \(X_{0}(N)\) is tetraelliptic, then for every prime \(p\nmid N\) we must have_
\[4(p+1)^{2}\geq L_{p}(N).\]
Proof.: Let us fix a prime \(p\nmid N\). We have a morphism \(f:X_{0}(N)\mapsto E\) of degree \(4\), where \(E\) is an elliptic curve. We know that the curve \(X_{0}(N)\) has good reduction at \(p\) since \(p\nmid N\). The curve \(E\) also has good reduction at \(p\) because the only primes of bad reduction for \(E\) are those dividing the conductor of \(E\) (we will denote it as \(\operatorname{Cond}(E)\)) which in turn divides \(N\). Therefore, we have a morphism \(\tilde{f}:\tilde{X}_{0}(N)\mapsto E_{p}\) of degree \(4\) defined over \(\mathbb{F}_{p}\).
Hasse's theorem gives us that \(\#E_{p}(\mathbb{F}_{p^{2}})\leq(p+1)^{2}\). Moreover, every point in \(\tilde{X}_{0}(N)(\mathbb{F}_{p^{2}})\) maps to \(E_{p}(\mathbb{F}_{p^{2}})\) meaning that \(\#\tilde{X}_{0}(N)(\mathbb{F}_{p^{2}})\leq 4(p+1)^{2}\) which gives us the desired result.
Now, applying Corollary 4.2 in the same way as in Lemma 3.2 of [14], we get
**Corollary 4.3**.: _The curve \(X_{0}(N)\) is not tetraelliptic for all \(N>407\) and_
\[N\in \{154,174,190,198,202,204,212,222,224,228,231,232,234,236,244,246,248,\] \[256,258,260,262,270,272,273,276,279,282,284-287,290,296,301,\] \[303-306,308,310,312,316,318,320-322,324-328,330,332-336,338-340,\] \[342,344-346,348,350-352,354-358,360,362-366,368-372,\] \[374-378,380-382,384-388,390-396,398-400,402-407\}.\]
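The inequality of Corollary 4.2 is elementary to check by machine. The following pure Python sketch (ours; any computer algebra system would do) tests whether some small prime \(p\nmid N\) already violates the bound, clearing denominators by comparing \(48(p+1)^{2}\) with \((p-1)\psi(N)+12\cdot 2^{\omega(N)}\). It rules out, for example, \(N=190\) already at \(p=3\); the levels it misses are handled by the refinements of [14], Lemma 3.2 used above.

```python
from sympy import primefactors, nextprime

def psi(n):
    """psi(n) = n * prod_{q | n} (1 + 1/q), computed exactly."""
    res = n
    for q in primefactors(n):
        res = res // q * (q + 1)
    return res

def violates_ogg_bound(N, p_max=50):
    """True if 4(p+1)^2 < L_p(N) for some prime p < p_max not dividing N."""
    w = len(primefactors(N))  # omega(N), the number of prime divisors
    p = 2
    while p < p_max:
        if N % p and 48 * (p + 1) ** 2 < (p - 1) * psi(N) + 12 * 2 ** w:
            return True
        p = nextprime(p)
    return False

print([N for N in (154, 174, 190, 198, 204) if violates_ogg_bound(N)])
# prints the sublist caught by the raw bound, e.g. [190, 198, 204]
```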
This means that we only need to check a reasonably small number of \(N\) for tetraellipticity. First, though, we separately solve the cases when \(g(X_{0}(N))\leq 7\), where we cannot use Corollary 2.3. The only \(N\) not discussed already for which \(g(X_{0}(N))\leq 7\) is \(N=97\).
**Proposition 4.4**.: _The curve \(X_{0}(97)\) has only finitely many quartic points._
Proof.: Suppose that there are infinitely many quartic points on \(X_{0}(97)\). Since the curve \(X_{0}(97)\) has \(\mathbb{Q}\)-gonality \(6\) by [33, Table 2] and genus \(7\), we can apply ([18], Proposition 1.7) and get that the Jacobian \(J_{0}(97)\) must contain an elliptic curve with a positive \(\mathbb{Q}\)-rank. However, up to isogeny, \(J_{0}(97)\) only contains simple abelian varieties of dimensions \(3\) and \(4\), and we get a contradiction.
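The decomposition fact used in this proof can be checked directly in Sage; the following one-liner (ours, in a Sage session) should print only the dimensions \(3\) and \(4\):

```python
# Sage sketch: up to isogeny, J_0(97) decomposes into simple abelian
# varieties of dimensions 3 and 4, so it has no elliptic curve factor.
print([A.dimension() for A in J0(97).decomposition()])  # expected: [3, 4]
```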
It is worth mentioning here that for \(N\) in the following table there exist morphisms of degree \(4\) to elliptic curves. For \(N\neq 109\), these morphisms are quotient maps which are, for \(N\) divisible by \(4\), composed with degree \(2\) degeneracy maps from \(X_{0}(N)\) to \(X_{0}(\frac{N}{2})\). However, these elliptic curves are of rank \(0\) over \(\mathbb{Q}\) and therefore generate only finitely many quartic points.
\begin{tabular}{|c|c|} \hline \(N\) & \(E\) \\ \hline \(76\) & \(X_{0}^{+}(38)\), \(X_{0}(38)/\left\langle w_{19}\right\rangle\) \\ \(105\) & \(X_{0}(105)/\left\langle w_{3},w_{35}\right\rangle\) \\ \(108\) & \(X_{0}^{+}(54)\), \(X_{0}(54)/\left\langle w_{27}\right\rangle\) \\ \(109\) & \(y^{2}+xy=x^{3}-x^{2}-8x-7\) (LMFDB label 109.a1) \\ \(110\) & \(X_{0}(110)/\left\langle w_{2},w_{55}\right\rangle\) \\ \(112\) & \(X_{0}^{+}(56)\), \(X_{0}(56)/\left\langle w_{7}\right\rangle\) \\ \(124\) & \(X_{0}(62)/\left\langle w_{31}\right\rangle\) \\ \(184\) & \(X_{0}(92)/\left\langle w_{23}\right\rangle\) \\ \(188\) & \(X_{0}(94)/\left\langle w_{47}\right\rangle\) \\ \hline \end{tabular}
## 5. Jacobians
### Notation and definitions
Let \(C,C^{\prime}\) be curves over a field \(k\). A morphism \(f:C\to C^{\prime}\) induces maps \(f_{*}:J(C)\to J(C^{\prime})\) and \(f^{*}:J(C^{\prime})\to J(C)\) which are defined as follows. If \(D=\sum n_{i}P_{i}\) and \(D^{\prime}=\sum n_{i}^{\prime}P_{i}^{\prime}\) are divisors on \(C\) and \(C^{\prime}\) respectively then \(f_{*}([D])=[\sum n_{i}f(P_{i})]\) and \(f^{*}([D^{\prime}])=[\sum n_{i}^{\prime}f^{-1}(P_{i}^{\prime})]\). When seeing \(J(C)\) not as divisors modulo principal divisors but as \(\operatorname{Pic}^{0}(C)\), the map \(f^{*}\) is sometimes also denoted by \(f^{\vee}\) or \(\operatorname{Pic}(f)\).
**Lemma 5.1**.: \(f_{*}\circ f^{*}=[\deg f]\)_._
Proof.: \(f_{*}(f^{*}([D^{\prime}]))=f_{*}(f^{*}([\sum n_{i}^{\prime}P_{i}^{\prime}]))= f_{*}([\sum n_{i}^{\prime}f^{-1}(P_{i}^{\prime})])=[\sum n_{i}^{\prime}\cdot( \deg f)P_{i}^{\prime}]=[(\deg f)D^{\prime}]\).
By [32], Theorem 6.6, the abelian variety \(J(C)\) comes with a canonical principal polarization
\[\phi_{\Theta_{C}}:J(C)\to J(C)^{\vee}\]
induced by the theta divisor of \(C\). This map is an isomorphism.
If \(P\in C(k)\) then we can define the embedding morphism
\[f_{P}:C \to J(C),\] \[x \mapsto[x-P].\]
If \(A\) is an abelian variety over \(k\) we can use this point to define
\[\operatorname{Hom}_{P}(C,A):=\left\{f\in\operatorname{Hom}(C,A)\mid f(P)=0 \right\}.\]
With this definition, the universal property of the Jacobian ([32], Theorem 6.1) states that the map
\[\iota_{P}:\operatorname{Hom}(J(C),A) \to\operatorname{Hom}_{P}(C,A),\] \[h \mapsto h\circ f_{P}\]
is an isomorphism.
The map
\[s_{P}:\operatorname{Hom}(C,A) \to\operatorname{Hom}_{P}(C,A),\] \[f \mapsto t_{-f(P)}\circ f,\]
where \(t_{-f(P)}\) denotes the translation by \(-f(P)\) map, is a retraction of the canonical inclusion \(\operatorname{Hom}_{P}(C,A)\to\operatorname{Hom}(C,A)\) whose kernel consists of the constant maps. Since the constant maps can be identified with \(A(k)\), we have a direct sum decomposition
\[\operatorname{Hom}(C,A)\cong\operatorname{Hom}_{P}(C,A)\times A(k).\]
If \(A\) is an elliptic curve then \(f\) and \(s_{P}(f)\) have the same degree because \(t_{-f(P)}\) is an isomorphism. In particular, if one wants to study the possible degrees that occur for elements in \(\operatorname{Hom}(C,A)\), it suffices to restrict to those in \(\operatorname{Hom}_{P}(C,A)\).
Note that the maps \(f_{P}^{\vee}:J(C)^{\vee}\to J(C)\) and \(\phi_{\Theta_{C}}:J(C)\to J(C)^{\vee}\) are closely related to each other, namely \(f_{P}^{\vee}\circ\phi_{\Theta_{C}}=-\operatorname{Id}_{J(C)}\). For elliptic curves, one often takes \(P=0_{E}\) to be the zero section of the elliptic curve, and then the map \(f_{0_{E}}:E\to J(E)\) is used to identify \(E\) with its Jacobian/dual. So the above means that this identification differs from the one coming from the polarization \(\phi_{\Theta_{E}}:J(E)\to J(E)^{\vee}=E^{\vee\vee}\cong E\) by a minus sign.
### Degree pairing
We already saw in the previous section that if \(f:C\to C^{\prime}\) is a map of curves then \(f_{*}\circ f^{*}=[\deg f]\). This motivates the following definition:
**Definition 5.2**.: Let \(C,E\) be curves over a field \(k\) with \(E\) being an elliptic curve. The degree pairing is defined on \(\operatorname{Hom}(C,E)\) as
\[\langle\underline{\,\,\,},\underline{\,\,\,}\rangle:\operatorname{Hom}(C,E)\times\operatorname{Hom}(C,E)\to\operatorname{End}(J(E)),\qquad f,g\mapsto f_{*}\circ g^{*}.\]
If \(P\in C(k)\) then we can define the degree pairing on \(\operatorname{Hom}(J(C),E)\) as
\[\langle\underline{\,\,\,},\underline{\,\,\,}\rangle:\operatorname{Hom}(J(C),E)\times\operatorname{Hom}(J(C),E)\to\operatorname{End}(J(E)),\qquad f,g\mapsto(f\circ f_{P})_{*}\circ(g\circ f_{P})^{*}.\]
We will also write \(\langle f,g\rangle:=f_{*}\circ g^{*}\) for \(f,g\in\operatorname{Hom}(C,C^{\prime})\) (this is not a pairing when \(C^{\prime}\) is not elliptic since \(\operatorname{Hom}(C,C^{\prime})\) is not an abelian group in that case). With this notation we have \(\langle f,f\rangle=[\deg f]\) for \(f\in\operatorname{Hom}(C,C^{\prime})\).
Note that the definition on \(\operatorname{Hom}(J(C),E)\) is slightly unsatisfactory since a priori it seems to depend on the base point \(P\). Additionally, it is not defined in terms of intrinsic properties of the abelian variety \(J(C)\), but instead just defined by using \(f_{P}:C\to J(C)\) to transport the definition on \(\operatorname{Hom}(C,E)\) to that on \(\operatorname{Hom}(J(C),E)\). So let's try to give a more intrinsic definition.
Let \(A\) and \(B\) be two polarized abelian varieties over \(k\) with polarizations \(\phi_{A}\) and \(\phi_{B}\) respectively and assume the polarization \(\phi_{A}\) is principal. Then one can define the map
\[\underline{\,\,\,}^{\dagger}:\operatorname{Hom}(A,B) \to\operatorname{Hom}(B,A),\] \[f \mapsto\phi_{A}^{-1}f^{\vee}\phi_{B}.\]
When \(A=B\) this is just the Rosati involution, defined in Section 17 of [31].
**Definition 5.3**.: Let \((A,\phi_{A})\) and \((B,\phi_{B})\) be two polarized abelian varieties over \(k\) with \(\phi_{A}\) a principal polarization then the dagger pairing on \(\operatorname{Hom}(A,B)\) is defined as
\[\langle\underline{\,\,\,},\underline{\,\,\,}\rangle_{\dagger}:\operatorname{Hom}(A,B)\times\operatorname{Hom}(A,B)\to\operatorname{End}(B),\qquad f,g\mapsto f\circ g^{\dagger}.\]
The following lemma shows how the dagger pairing relates to the degree pairing.
**Lemma 5.4**.: _Let \(C,E\) be curves over a field \(k\) with \(E\) being an elliptic curve, and let \(P\in C(k)\). Then for \(f,g\in\operatorname{Hom}(J(C),J(E))\) we have_
\[\langle f,g\rangle_{\dagger}=\langle\,f_{0_{E}}^{-1}\circ f\circ f_{P},\ f_{0_{E}}^{-1} \circ g\circ f_{P}\,\rangle,\]
_where the principal polarizations on \(J(C)\) and \(J(E)\) needed for the definition of \(\langle\underline{\,\,\,},\underline{\,\,\,}\rangle_{\dagger}\) are taken to be those coming from the theta divisors on \(C\) and \(E\)._
Proof.: We prove this by showing \((f_{0_{E}}^{-1}\circ f\circ f_{P})_{*}=f\) and \((f_{0_{E}}^{-1}\circ g\circ f_{P})^{*}=g^{\dagger}\).
For the equality \((f_{0_{E}}^{-1}\circ f\circ f_{P})_{*}=f\) it suffices to show equality on \(\overline{k}\) points. So let \(D=\sum n_{i}P_{i}\) be a degree zero divisor representing a point in \(J(C)(\overline{k})\). Then
\[(f_{0_{E}}^{-1}\circ f\circ f_{P})_{*}(\sum n_{i}P_{i})=\sum n_{i }(f_{0_{E}}^{-1}\circ f)(P_{i}-P)=(f_{0_{E}}^{-1}\circ f)(\sum n_{i}(P_{i}-P)) =\ldots\] \[\ldots=f_{0_{E}}^{-1}(f(\sum n_{i}P_{i})-f(\sum n_{i}P))=f_{0_{E} }^{-1}(f(\sum n_{i}P_{i})-f(0))=f(\sum n_{i}P_{i}).\]
The equality \((f_{0_{E}}^{-1}\circ g\circ f_{P})^{*}=g^{\dagger}\) follows since \({}^{*}\) and \({}^{\vee}\) denote the same operation and
\[(f_{0_{E}}^{-1}\circ g\circ f_{P})^{\vee}=f_{P}^{\vee}\circ g^{\vee}\circ(f_{0 _{E}}^{-1})^{\vee}=(-\phi_{\Theta_{C}})^{-1}\circ g^{\vee}\circ(-\phi_{\Theta_ {E}})=\phi_{\Theta_{C}}^{-1}\circ g^{\vee}\circ\phi_{\Theta_{E}}=g^{\dagger},\]
where the second equality follows by applying Lemma 6.8 of [32] twice.
**Proposition 5.5**.: _Let \(C,E\) be curves over a field \(k\) with \(E\) being an elliptic curve. Then the dagger pairing is a positive definite symmetric bilinear form on \(\operatorname{Hom}(J(C),E)\)._
Proof.: The dagger pairing is obviously bilinear. It is symmetric because
\[\left\langle f,g\right\rangle_{\dagger}=f\circ g^{\dagger}=(g\circ f^{\dagger} )^{\dagger}=g\circ f^{\dagger}.\]
Here the last equality holds because \(g\circ f^{\dagger}\in\operatorname{End}(E)\) is of the form \([n]\) for some \(n\) and \([n]^{\dagger}=[n]\). The positive definiteness follows from Lemma 5.4 since we can compute \(\left\langle f,f\right\rangle_{\dagger}\) over \(\overline{k}\) by choosing a \(P\in C(\overline{k})\) as follows:
\[\left\langle f,f\right\rangle_{\dagger}=\left\langle\,f_{0_{E}}^{-1}\circ f \circ f_{P},\ f_{0_{E}}^{-1}\circ f\circ f_{P}\,\right\rangle=[\deg f_{0_{E}} ^{-1}\circ f\circ f_{P}],\]
and \(\deg f_{0_{E}}^{-1}\circ f\circ f_{P}>0\) if \(f\neq 0\).
**Remark 5.6**.: If \(E\) is a CM elliptic curve over \(\mathbb{C}\), we could also consider the degree pairing \(\operatorname{Hom}_{\mathbb{C}}(J(C),E)\times\operatorname{Hom}_{\mathbb{C}}( J(C),E)\to\operatorname{End}_{\mathbb{C}}(E)\). This is a positive definite _hermitian_ form instead of a symmetric one since the Rosati involution \({}^{\dagger}\) acts as complex conjugation on \(\operatorname{End}_{\mathbb{C}}(E)\).
### Degeneracy maps
Let \(M\) and \(N\) be positive integers such that \(M\mid N\). For every divisor \(d\) of \(\frac{N}{M}\) there exists a degeneracy map
\[\iota_{d,N,M}:X_{0}(N)\to X_{0}(M),\qquad(E,G)\mapsto(E/G[d],(G/G[d])[M]).\]
The degeneracy map acts on \(\tau\in\mathbb{H}^{*}\) in the extended upper half plane as
\[\iota_{d,N,M}(\tau)=d\tau.\]
From this or directly from the definition, we can easily see that when \(dM\mid N\) and \(d^{\prime}N\mid N^{\prime}\) then
\[\iota_{d,N,M}\circ\iota_{d^{\prime},N^{\prime},N}=\iota_{dd^{\prime},N^{ \prime},M}. \tag{1}\]
We want to describe \(\left\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\right\rangle\) for different divisors \(d_{1}\), \(d_{2}\) of \(\frac{N}{M}\) in terms of Hecke operators on \(J_{0}(M)\) (the case \(d_{1}=d_{2}\) is solved by Lemma 5.1). We recall from Section 7.3 of [10] that Hecke operators \(T_{n}\) act on \(Y_{0}(M)\) as
\[T_{n}(E,G)=\sum_{\begin{subarray}{c}\#C=n\\ C\cap G=\{0\}\end{subarray}}(E/C,(G+C)/C)\]
and have the following properties:
\[T_{p^{r}} =T_{p^{r-1}}T_{p}-pT_{p^{r-2}}\text{ if }p\nmid M,\] \[T_{p^{r}} =T_{p}^{r} \text{ if }p\mid M,\] \[T_{mn} =T_{m}T_{n} \text{ if }(m,n)=1.\]
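Since the Hecke operators will act on an elliptic curve factor through the Fourier coefficients \(a_{n}\) of its newform (see the proof of Theorem 5.13 below), these identities translate into the relations \(a_{p^{r}}=a_{p^{r-1}}a_{p}-pa_{p^{r-2}}\) for \(p\nmid M\), \(a_{p^{r}}=a_{p}^{r}\) for \(p\mid M\), and \(a_{mn}=a_{m}a_{n}\) for \((m,n)=1\). A quick Sage sanity check of ours, on the curve 11a1 of level \(11\):

```python
# Sage sketch: numerically verify the coefficient relations for 11a1.
E = EllipticCurve('11a1')
a = E.anlist(130)               # a[n] is the n-th coefficient of the newform
assert a[9] == a[3] ** 2 - 3    # a_{p^2} = a_p^2 - p for p = 3 not dividing 11
assert a[6] == a[2] * a[3]      # multiplicativity for coprime indices
assert a[121] == a[11] ** 2     # a_{p^r} = a_p^r for p = 11 dividing the level
```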
We want to determine \(\left\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\right\rangle\). When \(N=Mp\) for a prime \(p\), we already know from Section 7.3 of [10] that \(\left\langle\iota_{1,N,M},\iota_{p,N,M}\right\rangle=T_{p}\). The remaining case is when \(\frac{N}{M}\) is a composite number. Before we consider that case, we prove a technical group theory lemma.
**Lemma 5.7**.: _Let \(G\) be an abelian group of order \(N\) that has a cyclic subgroup \(G^{\prime}\) of order \(d\). If \(dG\cong\mathbb{Z}/(N/d)\mathbb{Z}\), then \(G\) is cyclic._
Proof.: We know that \(G\cong(\mathbb{Z}/(d_{1}\mathbb{Z}))\times\ldots\times(\mathbb{Z}/(d_{k} \mathbb{Z}))\), where \(d_{i}\) are integers such that \(d_{1}\mid\ldots\mid d_{k}\), \(d_{1}\ldots d_{k}=N\) and \(d\mid d_{k}\). Thus,
\[dG\leq(\mathbb{Z}/(d_{1}\mathbb{Z}))\times\ldots\times\left(\mathbb{Z}/\left( \frac{d_{k}}{d}\mathbb{Z}\right)\right).\]
However, \(dG\cong\mathbb{Z}/\left(\frac{N}{d}\mathbb{Z}\right)\) implying that \(d_{1}=\ldots=d_{k-1}=1\) and \(d_{k}=N\).
**Lemma 5.8**.: _If \(\frac{N}{M}\) is square-free, then \(\left\langle\iota_{1,N,M},\iota_{N/M,N,M}\right\rangle=T_{N/M}\)._
Proof.: Suppose that \((E,G)\) represents a point on \(Y_{0}(M)\). We compute
\[\left\langle\iota_{1,N,M},\iota_{N/M,N,M}\right\rangle(E,G) =\sum_{\begin{subarray}{c}E^{\prime}/G^{\prime}[N/M]=E\\ G^{\prime}/G^{\prime}[N/M]=G\end{subarray}}(E^{\prime},G^{\prime}[M]),\] \[T_{N/M}(E,G) =\sum_{\begin{subarray}{c}\#C=N/M\\ C\cap G=\{0\}\end{subarray}}(E/C,(G+C)/C)\]
In order to prove that these sums are equal, it is enough to find a bijection between the summands. We will now construct a map that sends \((E^{\prime},G^{\prime}[M])\) to \((E/C,(G+C)/C)\) (i.e. define \(C\) in terms of \(E^{\prime}\) and \(G^{\prime}\)) and prove that it is a bijection.
By definition, there is a map \(f:E^{\prime}\mapsto E\) such that \(\ker f=G^{\prime}[N/M]\). We set
\[C:=\ker f^{\vee}.\]
This means that for a map \(f^{\vee}:E\mapsto E^{\prime}\) we have \(E^{\prime}=E/C\). Further,
\[(G+C)/C=f^{\vee}(G)=f^{\vee}(f(G^{\prime}))=\frac{N}{M}G^{\prime}=G^{\prime}[M],\]
meaning that \(G\cap C=\{0\}\) (since \(f^{\vee}(G)\) is a group of order \(M\)) and \((E/C,(G+C)/C)=(E^{\prime},G^{\prime}[M])\). To prove bijectivity, we define the inverse map, i.e. we define \(E^{\prime}\) and \(G^{\prime}\) in terms of \(C\).
By definition, there is a map \(g:E\mapsto E/C\). We set
\[E^{\prime} :=E/C,\] \[G^{\prime} :=(g^{\vee})^{-1}(G).\]
First, we need to prove that \(G^{\prime}\) is a cyclic subgroup of order \(N\). It is obviously a group of order \(N\). We have
\[\frac{N}{M}G^{\prime}=g\circ g^{\vee}\circ(g^{\vee})^{-1}(G)=g(G)=(G+C)/C.\]
Also, \(\mathbb{Z}/\left(\frac{N}{M}\mathbb{Z}\right)\cong\ker g^{\vee}\leq G^{\prime}\) so we can use Lemma 5.7 to conclude that \(G^{\prime}\) is a cyclic subgroup of order \(N\). This further implies that \(G^{\prime}[M]=\frac{N}{M}G^{\prime}=(G+C)/C\).
To prove that these two maps are inverse to each other it is enough to prove \(g=f^{\vee}\). This holds because
\[\ker f=G^{\prime}[N/M]=(g^{\vee})^{-1}(G)[N/M]=M(g^{\vee})^{-1}(G)=(g^{\vee})^{-1 }(MG)=(g^{\vee})^{-1}(0)=\ker g^{\vee}.\]
**Lemma 5.9**.: _If \(\frac{N}{M}\) is square-free, then \(\left\langle\iota_{N/M,N,M},\iota_{1,N,M}\right\rangle=w_{M}\circ T_{N/M} \circ w_{M}\)._
Proof.: We will prove the equivalent statement \(w_{M}\circ\left\langle\iota_{N/M,N,M},\iota_{1,N,M}\right\rangle=T_{N/M}\circ w_{M}\). Suppose that \((E,G)\) represents a point on \(Y_{0}(M)\). We compute (similarly as in the previous lemma)
\[\left\langle\iota_{N/M,N,M},\iota_{1,N,M}\right\rangle(E,G) =\sum_{\begin{subarray}{c}\#G^{\prime}=N\\ G^{\prime}[M]=G\end{subarray}}(E/G^{\prime}[N/M],G^{\prime}/G^{\prime}[N/M])= \sum(E^{\prime},G^{\prime\prime}),\] \[w_{M}\circ\left\langle\iota_{N/M,N,M},\iota_{1,N,M}\right\rangle (E,G) =\sum_{\begin{subarray}{c}\#G^{\prime}=N\\ G^{\prime}[M]=G\end{subarray}}(E^{\prime}/G^{\prime\prime},E^{\prime}[M]/G^{ \prime\prime}),\] \[T_{N/M}\circ w_{M}(E,G) =\sum_{\begin{subarray}{c}\#H=N/M\\ (E[M]/G)\cap H=\{0\}\end{subarray}}(E/G/H,((E[M]/G)+H)/H).\]
It remains to prove that there is a bijection between the subgroups \(G^{\prime}\) and the subgroups \(H\) appearing in these sums. We have the following situation:
\[E\mathop{\rightarrow}\limits^{f_{1}}E^{\prime}\mathop{\rightarrow}\limits^{f _{2}}E^{\prime}/G^{\prime\prime},\]
\[E\mathop{\rightarrow}\limits^{g_{1}}E/G\mathop{\rightarrow}\limits^{g_{2}}E /G/H\]
where we know that \(G^{\prime}=\ker(f_{2}\circ f_{1})\) because \(\ker f_{2}=G^{\prime}/G^{\prime}[N/M]=G^{\prime}/\ker f_{1}\).
We can express \(G^{\prime}\) in terms of \(H\) as \(G^{\prime}:=g_{1}^{-1}(H)=\ker(g_{2}\circ g_{1})\). By Lemma 5.7, this is a cyclic group of order \(N\) because \(\mathbb{Z}/(M\mathbb{Z})\cong G\leq G^{\prime}\) and
\[MG^{\prime}=g_{1}^{\vee}\circ g_{1}\circ g_{1}^{-1}(H)=g_{1}^{\vee}(H)\cong H \cong\mathbb{Z}/\left(\frac{N}{M}\mathbb{Z}\right).\]
Here the third equality holds because \((E[M]/G)\cap H=\{0\}\). Now, since \(G^{\prime}=g_{1}^{-1}(H)=\ker(g_{2}\circ g_{1})\), we get \(f_{2}\circ f_{1}=g_{2}\circ g_{1}\). Further, \(G=\ker g_{1}\subset G^{\prime}\) implying that \(G^{\prime}[M]=G\).
Let us now express \(H\) in terms of \(G^{\prime}\). Since \(G\) is a subgroup of \(G^{\prime}=\ker(f_{2}\circ f_{1})\), there exist isogenies \(g_{1}\) and \(g_{2}\) such that \(g_{2}\circ g_{1}=f_{2}\circ f_{1}\) and \(G=\ker g_{1}\). We set \(H:=\ker g_{2}\). It remains to prove that \((E[M]/G)\cap H=\{0\}\). This holds because
\[g_{2}(E[M]/G)=g_{2}(g_{1}(E[M]))=f_{2}(f_{1}(E[M]))=E[M]/(E[M]\cap G^{\prime}) \cong E[M]/G.\]
**Remark 5.10**.: When \(\frac{N}{M}\) is not square-free, in Lemma 5.8 and Lemma 5.9 we get, instead of \(T_{N/M}\), a slightly different result via the Möbius inversion formula. Namely,
\[\left\langle\iota_{1,N,M},\iota_{N/M,N,M}\right\rangle=\sum_{m^{2}|N/M}\mu(m) T_{N/(Mm^{2})}\]
where \(\mu\) denotes the Möbius function.
**Proposition 5.11**.: _Let \(M,d_{1},d_{2}\) be integers with \(\gcd(d_{1},d_{2})=1\). Then_
\[\iota_{1,d_{1}d_{2}M,d_{1}M,*}\circ\iota_{1,d_{1}d_{2}M,d_{2}M}^{*} =\iota_{1,d_{1}M,M}^{*}\circ\iota_{1,d_{2}M,M,*}\quad\text{and}\] \[\left\langle\iota_{d_{1},d_{1}d_{2}M,M},\iota_{d_{2},d_{1}d_{2}M, M}\right\rangle =\left\langle\iota_{d_{1},d_{1}M,M},\iota_{1,d_{1}M,M}\right\rangle \circ\left\langle\iota_{1,d_{2}M,M},\iota_{d_{2},d_{2}M,M}\right\rangle.\]
Proof.: Let \(E\) be an elliptic curve with a cyclic subgroup \(G\) of order \(d_{2}M\). The first equality can be verified on a pair \((E,G)\) since
\[\iota_{1,d_{1}d_{2}M,d_{1}M,*}\circ\iota_{1,d_{1}d_{2}M,d_{2}M}^{*} (E,G) =\sum_{\begin{subarray}{c}H_{1}\supseteq G\text{ cyclic}\\ \#H_{1}=d_{1}d_{2}M\end{subarray}}(E,H_{1}[d_{1}M])\] \[=\sum_{\begin{subarray}{c}H_{2}\supseteq G[M]\text{ cyclic}\\ \#H_{2}=d_{1}M\end{subarray}}(E,H_{2})\] \[=\iota_{1,d_{1}M,M}^{*}\circ\iota_{1,d_{2}M,M,*}(E,G).\]
Furthermore, \(H_{1}\) and \(H_{2}\) are related to each other via \(H_{2}=H_{1}[d_{1}M]\) and \(H_{1}=H_{2}+G\). The second equality follows from the first because
\[\left\langle\iota_{d_{1},d_{1}d_{2}M,M},\iota_{d_{2},d_{1}d_{2}M,M}\right\rangle =\iota_{d_{1},d_{1}d_{2}M,M,*}\circ\iota_{d_{2},d_{1}d_{2}M,M}^{*}\] \[=\iota_{d_{1},d_{1}M,M,*}\circ\iota_{1,d_{1}d_{2}M,d_{1}M,*}\circ \iota_{1,d_{1}d_{2}M,d_{2}M}^{*}\circ\iota_{d_{2},d_{2}M,M}^{*}\] \[=\iota_{d_{1},d_{1}M,M,*}\circ\iota_{1,d_{1}M,M}^{*}\circ\iota_{1,d_{2}M,M,*}\circ\iota_{d_{2},d_{2}M,M}^{*}\] \[=\left\langle\iota_{d_{1},d_{1}M,M},\iota_{1,d_{1}M,M}\right\rangle \circ\left\langle\iota_{1,d_{2}M,M},\iota_{d_{2},d_{2}M,M}\right\rangle.\]
Combining the previous results we get the following proposition.
**Proposition 5.12**.: _Assume that \(\frac{N}{M}\) is squarefree and let \(d_{1}\) and \(d_{2}\) be divisors of \(\frac{N}{M}\). We write just \(\gcd\) instead of \(\gcd(d_{1},d_{2})\) and \(\operatorname{lcm}\) instead of \(\operatorname{lcm}(d_{1},d_{2})\) for simplicity. Then_
\[\left\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\right\rangle=w_{M}\circ T_{d_{ 1}/\gcd}\circ w_{M}\circ T_{d_{2}/\gcd}\circ[\deg\iota_{\gcd,N,M\operatorname {lcm}/\gcd}].\]
Proof.: Note that
\[\iota_{d_{1},N,M}=\iota_{d_{1}/\gcd,M\operatorname{lcm}/\gcd,M}\circ\iota_{ \gcd,N,M\operatorname{lcm}/\gcd}\]
and similarly for \(d_{2}\). This shows that \(\iota_{d_{1},N,M}\) and \(\iota_{d_{2},N,M}\) both factor through the map \(\iota_{\gcd,N,M\operatorname{lcm}/\gcd}\) allowing us to write
\[\left\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\right\rangle =\iota_{d_{1}/\gcd,M\operatorname{lcm}/\gcd,M,*}\circ\iota_{ \gcd,N,M\operatorname{lcm}/\gcd,*}\circ\iota_{\gcd,N,M\operatorname{lcm}/ \gcd}^{*}\circ\iota_{d_{2}/\gcd,M\operatorname{lcm}/\gcd,M}^{*}\] \[=\iota_{d_{1}/\gcd,M\operatorname{lcm}/\gcd,M,*}\circ[\deg\iota_{ \gcd,N,M\operatorname{lcm}/\gcd}]\circ\iota_{d_{2}/\gcd,M\operatorname{lcm}/ \gcd,M}^{*}\] \[=\left\langle\iota_{d_{1}/\gcd,M\operatorname{lcm}/\gcd,M},\iota_ {d_{2}/\gcd,M\operatorname{lcm}/\gcd,M}\right\rangle\circ[\deg\iota_{\gcd,N,M \operatorname{lcm}/\gcd}].\]
Further, by Proposition 5.11 we have
\[\left\langle\iota_{d_{1}/\gcd,M\operatorname{lcm}/\gcd,M},\iota_{d_{2}/\gcd,M \operatorname{lcm}/\gcd,M}\right\rangle=\left\langle\iota_{d_{1}/\gcd,Md_{1}/ \gcd,M},\iota_{1,Md_{1}/\gcd,M}\right\rangle\circ\left\langle\iota_{1,Md_{2}/ \gcd,M},\iota_{d_{2}/\gcd,Md_{2}/\gcd,M}\right\rangle.\]
Now we get the desired result by applying Lemma 5.8 and Lemma 5.9.
**Theorem 5.13**.: _Using the assumptions and notation of Proposition 5.12, let \(E\) be an elliptic curve of conductor \(M\) with corresponding newform \(\sum_{n=1}^{\infty}a_{n}q^{n}\) and let \(f:X_{0}(M)\mapsto E\) be the modular parametrization of \(E\). If we define_
\[a=\sum_{m^{2}\mid(d_{1}d_{2}/\gcd^{2})}\mu(m)\,a_{d_{1}d_{2}/(\gcd^{2}m^{2})}\]
_(when \(\frac{d_{1}d_{2}}{\gcd^{2}}\) is squarefree, \(a\) is equal to \(a_{d_{1}d_{2}/\gcd^{2}}\), similarly as in Remark 5.10), then_
\[\langle f\circ\iota_{d_{1},N,M},f\circ\iota_{d_{2},N,M}\rangle=\left[a\cdot \psi\left(\frac{N\gcd}{M{\rm lcm}}\right)\cdot\deg f\right].\]
Proof.: For the sake of simplicity, we will assume that \(\frac{d_{1}d_{2}}{\gcd^{2}}\) is squarefree. We have
\[\langle f\circ\iota_{d_{1},N,M},f\circ\iota_{d_{2},N,M}\rangle=f_{*}\circ \iota_{d_{1},N,M,*}\circ\iota_{d_{2},N,M}^{*}\circ f^{*}=f_{*}\circ\langle \iota_{d_{1},N,M},\iota_{d_{2},N,M}\rangle\circ f^{*}.\]
Let \(E^{\prime}\) be \(f^{*}(E)\subset J_{0}(M)\). Then \(E^{\prime}\) is an elliptic curve isogenous to \(E\). Since, up to isogeny, \(E\) occurs with multiplicity one in the factorization of \(J_{0}(M)\) (because \(\operatorname{cond}(E)=M\)), it follows that \(\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\rangle\) is a rational endomorphism of \(E^{\prime}\). Therefore, \(\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\rangle\) is of the form \([k]\) for some \(k\in\mathbb{Z}\) and we get that
\[f_{*}\circ\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\rangle\circ f^{*}=f_{*} \circ f^{*}\circ\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\rangle=[\deg f] \circ\langle\iota_{d_{1},N,M},\iota_{d_{2},N,M}\rangle\,.\]
Proposition 5.12 now tells us that
\[\langle f\circ\iota_{d_{1},N,M},f\circ\iota_{d_{2},N,M}\rangle=w_{M}\circ T_{d_{1}/\gcd}\circ w_{M}\circ T_{d_{2}/\gcd}\circ[\deg\iota_{\gcd,N,M{\rm lcm}/\gcd}]\circ[\deg f]. \tag{2}\]
We see that here both the Atkin-Lehner involution \(w_{M}\) and the Hecke operators act on \(E\). As \(w_{M}\) acts as \(\pm 1\) on \(E\), its two occurrences cancel each other. Furthermore, the Hecke operators \(T_{n}\) act on \(E\) as multiplication by \(a_{n}\) (the corresponding newform coefficient). Therefore,
\[\langle f\circ\iota_{d_{1},N,M},f\circ\iota_{d_{2},N,M}\rangle =[a_{d_{1}/\gcd}]\circ[a_{d_{2}/\gcd}]\circ[\deg\iota_{\gcd,N,M{\rm lcm}/\gcd}]\circ[\deg f]\] \[=[a_{d_{1}d_{2}/\gcd^{2}}\cdot\deg\iota_{\gcd,N,M{\rm lcm}/\gcd}\cdot\deg f].\]
The last equality holds due to the fact that \(a_{n}a_{m}=a_{nm}\) for relatively prime \(m,n\). Finally, since the degrees of all degeneracy maps from \(X_{0}(N)\) to \(X_{0}(M{\rm lcm}/\gcd)\) are equal to \(\psi(\frac{N\gcd}{M{\rm lcm}})\), we get the desired formula.
If \(\frac{d_{1}d_{2}}{\gcd^{2}}\) is not squarefree, in (2) we will get the Möbius sums from Remark 5.10 instead of \(T_{d_{i}/\gcd}\). We can then use the same argument to get the desired result, since the sums \(\sum_{m^{2}\mid(d_{i}/\gcd)}\mu(m)T_{d_{i}/(\gcd m^{2})}\) act on \(E\) as multiplication by \(\sum_{m^{2}\mid(d_{i}/\gcd)}\mu(m)a_{d_{i}/(\gcd m^{2})}\).
This result is useful because all quantities on the right-hand side are easily computable (\(\deg f\) is the modular degree of \(E\)) and have in fact already been computed for all elliptic curves of conductor \(\leq 500{,}000\) and \(\operatorname{lcm}(d_{1},d_{2})/\gcd(d_{1},d_{2})\leq 1{,}000\). This data is available in the LMFDB [28].
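To illustrate, here is a minimal Sage sketch (ours, independent of the authors' published code) that assembles the Gram matrix of the pairing on the basis \(f\circ\iota_{d,N,M}\) directly from Theorem 5.13, for the rank \(1\) curve 43a1 and \(N=129\). One finds the form \(8x^{2}-8xy+8y^{2}\), which does not represent \(4\), consistent with \(129\) being absent from Theorem 1.8.

```python
# Sage sketch: Gram matrix of the degree pairing on the maps f∘iota_{d,N,M}.
def pairing(E, N, d1, d2):
    M = E.conductor()
    g, l = gcd(d1, d2), lcm(d1, d2)
    k = d1 * d2 // g ** 2
    a = sum(moebius(m) * E.an(k // m ** 2) for m in divisors(k) if k % m ** 2 == 0)
    return a * Gamma0(N * g // (M * l)).index() * E.modular_degree()

E, N = EllipticCurve('43a1'), 129                   # N/M = 3 is squarefree
ds = divisors(N // E.conductor())                   # divisors of N/M: [1, 3]
G = matrix([[pairing(E, N, d1, d2) for d2 in ds] for d1 in ds])
q = lambda x, y: G[0, 0] * x ** 2 + 2 * G[0, 1] * x * y + G[1, 1] * y ** 2
print(G)                                            # expected [[8, -4], [-4, 8]]
print(4 in {q(x, y) for x in range(-5, 6) for y in range(-5, 6)})  # False
```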
**Remark 5.14**.: Alternatively, we can compute \(\langle f\circ\iota_{d_{1},N,M},f\circ\iota_{d_{2},N,M}\rangle\) using either Sage or Magma, since \(\iota_{d_{1},N,M,*}\) and \(\iota_{d_{2},N,M}^{*}\) are explicitly computable on modular symbols; see Proposition 8.26 of [36].
## 6. \(d\)-elliptic modular curves
**Definition 6.1**.: Let \(d\) be a positive integer. We call a curve \(C\) over a field \(k\) \(d\)-elliptic if there exists an elliptic curve \(E\) over \(k\) and a morphism \(C\to E\) of degree \(d\) defined over \(k\). If in addition \(k\) is a number field and \(E\) has positive Mordell-Weil rank, then we call \(C\) positive rank \(d\)-elliptic.
In this section, we will describe some ideas that allow one to determine for given integers \(N\) and \(d\) whether \(X_{0}(N)\) is \(d\)-elliptic over \(\mathbb{Q}\).
If we fix a point \(P\in X_{0}(N)(\mathbb{Q})\), then, as we have seen in Section 5.2, there exists an element of \(\operatorname{Hom}_{\mathbb{Q}}(X_{0}(N),E)\) of degree \(d\) if and only if there exists an element of \(\operatorname{Hom}_{\mathbb{Q},P}(X_{0}(N),E)\) of degree \(d\). Furthermore, by the universal property of \(J_{0}(N)\), every \(f\in\operatorname{Hom}_{\mathbb{Q},P}(X_{0}(N),E)\) factors uniquely through \(J_{0}(N)\) via the map \(f_{P}\).
We define a map \(\operatorname{Hom}_{\mathbb{Q}}(X_{0}(N),E)\to\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\) as follows:
\(f\mapsto t_{-f(P)}\circ f\mapsto\) the homomorphism induced from \(t_{-f(P)}\circ f\) by the universal property of \(J_{0}(N)\).
In this section, to make the text more readable, we sometimes use a slight abuse of notation. We will sometimes work with maps defined on \(X_{0}(N)\) as if they were defined on \(J_{0}(N)\). For example, in the proof of Theorem 6.15, we will say that the maps \(f\circ\iota_{d,N,M}:X_{0}(N)\to E\) form a basis for \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\), but this will actually hold for their images under the above map.
**Definition 6.2**.: Let \(A\) and \(B\) be abelian varieties over a field \(k\) with \(B\) simple. An abelian variety \(A^{\prime}\) together with a quotient map \(\pi:A\to A^{\prime}\) is an optimal \(B\)-isogenous quotient if \(A^{\prime}\) is isogenous to \(B^{n}\) for some integer \(n\) and every morphism \(A\to B^{\prime}\) with \(B^{\prime}\) isogenous to \(B^{m}\) for some integer \(m\) uniquely factors via \(\pi\).
**Proposition 6.3**.: _Optimal \(B\)-isogenous quotients exist, and are unique up to a unique isomorphism._
Proof.: By the Poincaré reducibility theorem ([32], Proposition 10.1) there exist an integer \(s\) and simple abelian subvarieties \(A_{1},\dots,A_{s}\) of \(A\) such that the sum map \(A_{1}\times\dots\times A_{s}\to A\) is an isogeny. By reordering the \(A_{i}\) if necessary we can let \(n\leq s\) be the integer such that \(A_{1},\dots,A_{n}\) are isogenous to \(B\) while \(A_{n+1},\dots,A_{s}\) are not. Define \(A^{\prime}=A/(A_{n+1}+\dots+A_{s})\); then \(A^{\prime}\) is isogenous to \(B^{n}\) since the composition of the maps \(A_{1}\times\dots\times A_{n}\to A\to A^{\prime}\) is an isogeny.
To show that the quotient \(\pi:A\to A^{\prime}\) is an optimal \(B\)-isogenous quotient, let \(B^{\prime}\) be an abelian variety isogenous to \(B^{m}\) and let \(f:A\to B^{\prime}\) be a morphism. Since \(B^{\prime}\) is isogenous to \(B^{m}\) while none of the \(A_{i}\) for \(i>n\) is isogenous to \(B\), we have \(A_{i}\subset\ker f\) for \(i>n\). However, \(A^{\prime}\) was obtained by quotienting out the \(A_{i}\) with \(i>n\), meaning that \(f\) factors uniquely via \(\pi\), which is what we needed to prove.
The uniqueness up to unique isomorphism follows formally because optimal \(B\)-isogenous quotients are defined using a universal property.
**Remark 6.4**.: When \(E\) is the strong Weil curve over \(\mathbb{Q}\) of conductor \(M\) then the optimal \(E\)-isogenous quotient of \(J_{0}(M)\) is just the strong Weil parameterization of \(E\).
The dual notion of optimal \(B\)-isogenous quotient is the following:
**Definition 6.5**.: Let \(A\) and \(B\) be abelian varieties over a field \(k\) with \(B\) simple. An abelian variety \(A^{\prime}\) together with an isogeny \(\iota:A^{\prime}\to A\) is a maximal \(B\)-isogenous subvariety if \(A^{\prime}\) is isogenous to \(B^{n}\) for some integer \(n\) and every morphism \(B^{\prime}\to A\) with \(B^{\prime}\) isogenous to \(B^{m}\) for some integer \(m\) uniquely factors via \(\iota\).
The next proposition follows formally from duality, since we can just take \(\iota=\pi^{\vee}\), where \(\pi\) is an optimal \(B\)-isogenous quotient of \(A^{\vee}\). The reason for calling \(A^{\prime}\) a subvariety is that, by the universal property, \(\iota\) actually induces an isomorphism between \(A^{\prime}\) and \(\iota(A^{\prime})\).
**Proposition 6.6**.: _Maximal \(B\)-isogenous subvarieties exist, and are unique up to a unique isomorphism._
**Remark 6.7**.: The above proposition can also be proved constructively. Namely, let \(A_{1},\ldots,A_{s}\) be simple abelian subvarieties of \(A\) such that the sum map \(A_{1}\times\cdots\times A_{s}\to A\) is an isogeny and, after reordering, \(A_{1},\ldots,A_{n}\) are isogenous to \(B\) while \(A_{n+1},\ldots,A_{s}\) are not. Then \(A_{1}+\cdots+A_{n}\subseteq A\) is a maximal \(B\)-isogenous subvariety.
**Definition 6.8**.: Let \(N\) and \(M\) be positive integers with \(M\mid N\) and let \(n\) denote the number of divisors of \(N/M\). Then we define the map \(\tau_{N,M}:J_{0}(N)\to J_{0}(M)^{n}\) as
\[\tau_{N,M}:=(\iota_{1,N,M,*},\ldots,\iota_{N/M,N,M,*})\]
where the first subscript of \(\iota\) runs over all divisors of \(N/M\). Further, let \(A\) be an abelian variety and \(f:J_{0}(M)\to A\) a morphism. Then we define the map \(\xi_{A,N}:J_{0}(N)\to A^{n}\) as
\[\xi_{A,N}:=f^{n}\circ\tau_{N,M}.\]
With the above notation, taking \(f\) to be the identity, we have \(\tau_{N,M}=\xi_{J_{0}(M),N}\).
**Proposition 6.9**.: _Suppose \(N<408\) and let \(E\) be a strong Weil curve over \(\mathbb{Q}\) of positive rank and conductor \(M\mid N\). Then \(\xi_{E,N}^{\vee}:E^{n}\to J_{0}(N)\) has a trivial kernel. Hence \(\xi_{E,N}^{\vee}:E^{n}\to J_{0}(N)\) is a maximal \(E\)-isogenous abelian subvariety and \(\xi_{E,N}:J_{0}(N)\to E^{n}\) is an optimal \(E\)-isogenous quotient of \(J_{0}(N)\)._
Proof.: The claim that \(\xi_{E,N}^{\vee}:E^{n}\to J_{0}(N)\) is injective for \(N<408\) was verified computationally using Sage. It is a finite computation since the restriction on \(N\) means there are only finitely many pairs \((N,E)\) for which we need to verify that \(\xi_{E,N}^{\vee}:E^{n}\to J_{0}(N)\) is injective.
The second part follows from Atkin-Lehner-Li Theory. The decomposition
\[S_{2}(\Gamma_{0}(N))=\bigoplus_{M\mid N}\bigoplus_{d\mid N/M}\iota_{d,N,M}^{*} (S_{2}(\Gamma_{0}(M))_{new})\]
from Theorem 9.4 of [36] yields the isogeny decomposition
\[J_{0}(N)=\bigoplus_{M\mid N}\bigoplus_{d\mid N/M}\iota_{d,N,M}^{*}(J_{0}(M)_{ new}).\]
If \(E/\mathbb{Q}\) is an elliptic curve of conductor \(M\), then \(M\) is the only integer such that \(E\) occurs as an isogeny factor of \(J_{0}(M)_{new}\), and does so with multiplicity one. In particular, this decomposition implies that if \(E\) is an elliptic curve of conductor \(M\) and \(f:J_{0}(M)\to E\) is its modular parametrization, then the maps \((f\circ\iota_{d,N,M})^{\vee}:E\to J_{0}(N)\) give all the isogeny factors of \(J_{0}(N)\) that are isogenous to \(E\), where \(d\) ranges over all divisors of \(N/M\). From Remark 6.7 it then follows that the image of \(\xi_{E,N}^{\vee}\) inside \(J_{0}(N)\) is a maximal \(E\)-isogenous
subvariety of \(J_{0}(N)\). However, since we already verified that \(\xi^{\vee}_{E,N}\) has a trivial kernel, we have that \(\xi^{\vee}_{E,N}\) is an isomorphism onto its image. In particular, \(\xi^{\vee}_{E,N}\) is also a maximal \(E\)-isogenous subvariety of \(J_{0}(N)\).
The fact that the map \(\xi^{\vee}_{E,N}\) has a trivial kernel for the cases in which the above proposition is applicable makes it significantly easier to determine the positive rank tetraelliptic \(X_{0}(N)\). All elliptic curves of positive \(\mathbb{Q}\)-rank and conductor at most \(408\) have rank \(1\), with the exception of the curve \(389.a1\), which has rank \(2\). The following proposition is not needed for the classification of the positive rank tetraelliptic curves \(X_{0}(N)\) in Theorem 6.15. Instead, it is an attempt to explain why we observed that the kernel of \(\xi^{\vee}_{E,N}\) was always trivial in Proposition 6.9.
**Proposition 6.10**.: _Let \(E\) be a strong Weil curve over \(\mathbb{Q}\) of conductor \(M\mid N\) and let us suppose that \(\frac{N}{M}\) is squarefree and coprime to \(M\). If \(E\) has an odd analytic rank, then the kernel of \(\xi^{\vee}_{E,N}:E^{n}\to J_{0}(N)\) is a \(2\)-group._
The main ingredient in the proof of this proposition is Theorem 6.12.
**Definition 6.11**.: Let \(M\) be a positive integer and let \(\pi:X_{1}(M)\to X_{0}(M)\) be the natural map \((E,P)\mapsto(E,\langle P\rangle)\). The Shimura subgroup \(\Sigma(M)\) is the kernel of the map \(\pi^{*}:J_{0}(M)\to J_{1}(M)\). For an abelian subvariety \(A\subseteq J_{0}(M)\) we define the Shimura subgroup of \(A\) to be \(A\cap\Sigma(M)\).
**Theorem 6.12** (Theorem 4 from [27]).: _Let \(N\) be a positive integer, and let \(M\) be a divisor of \(N\) such that \(\frac{N}{M}=q_{1}\dots q_{t}\) (distinct primes) and \(\left(M,\frac{N}{M}\right)=1\). We define_
\[\Sigma(M)_{0}^{2^{t}}:=\left\{(x_{1},\ldots,x_{2^{t}}):x_{i}\in\Sigma(M),\ \sum_{i=1}^{2^{t}}x_{i}=0\right\}.\]
_We recall from Definition 6.8 the degeneracy maps \(\iota_{d,N,M}\) and set \(\iota_{N,M}:=(\iota_{1,N,M},\dots,\iota_{N/M,N,M}):X_{0}(N)\to X_{0}(M)^{2^{t}}\)._
1. _If_ \(N\) _is odd or_ \(\frac{N}{M}\) _is a prime, then_ \(\ker\iota_{N,M}^{*}=\Sigma(M)_{0}^{2^{t}}\)_._
2. _If_ \(N\) _is even and_ \(\frac{N}{M}\) _is not a prime, then_ \(\ker\iota_{N,M}^{*}\) _and_ \(\Sigma(M)_{0}^{2^{t}}\) _are equal up to a_ \(2\)_-group._
Proof of Proposition 6.10.: Theorem 6.12 tells us that the kernel of \(\iota_{N,M}^{*}\) is equal to \(\Sigma(M)_{0}^{n}\) up to a \(2\)-group (here \(n=2^{t}\) is the number of divisors of the squarefree integer \(N/M\)). Since \(E\) is a strong Weil curve, we have that \(f^{\vee}:E\to J_{0}(M)\) actually turns \(E\) into a subvariety of \(J_{0}(M)\). Therefore, we have \(\ker\xi^{\vee}_{E,N}=\ker\iota_{N,M}^{*}\cap E^{n}\). This is, up to a \(2\)-group, equal to \(\Sigma(M)_{0}^{n}\cap E^{n}\), which is isomorphic to \((\Sigma(M)\cap E)^{n-1}\). Now it is enough to prove that \(\Sigma(M)\cap E\) is a \(2\)-group.
By ([29], Chapter II, Proposition 11.7), the Atkin-Lehner involution \(w_{M}\) acts as \(-1\) on \(\Sigma(M)\). Further, since \(E\) has odd analytic rank, it follows by looking at the functional equation for \(L(E,s)\) that \(w_{M}\) acts as \(1\) on \(E\). Therefore, \(-1=1\) on \(\Sigma(M)\cap E\), meaning that \(\Sigma(M)\cap E\) must be a \(2\)-group.
Theorem 6.12 is not enough to prove that \(\xi^{\vee}_{E,N}\) is always injective for strong Weil curves of odd analytic rank. However, the computational evidence of Proposition 6.9 seems to indicate that the \(2\)-group allowed by Theorem 6.12 cannot actually occur. We therefore make the following conjecture.
**Conjecture 6.13**.: _Let \(E\) be a strong Weil curve over \(\mathbb{Q}\) of conductor \(M\mid N\). If \(E\) has an odd analytic rank, then \(\xi^{\vee}_{E,N}\) is injective._
All but one of the strong Weil curves \(E\) considered in the proof of Proposition 6.9 have analytic rank \(1\), the exception being the curve \(389.a1\) with analytic rank \(2\). Therefore, if this conjecture turns out to be correct, in the first part of Proposition 6.9, Sage will only be needed to prove that \(\xi^{\vee}_{E,N}\) has a trivial kernel for the elliptic curve \(389.a1\).
Note that the above conjecture, if true, makes the determination of all positive rank \(d\)-elliptic \(X_{0}(N)\) significantly easier, since together with Theorem 5.13 it implies the following:
**Corollary 6.14**.: _Assume Conjecture 6.13. Let \(E\) be a strong Weil curve over \(\mathbb{Q}\) of odd analytic rank, conductor \(M\), and parametrization \(f:X_{0}(M)\to E\). If \(N\) is a multiple of \(M\) and \(g:X_{0}(N)\to E^{\prime}\) is a map with \(E^{\prime}\) isogenous to \(E\), then \(\deg f\mid\deg g\)._
This would give us a lower bound on \(\deg g\), allowing us to consider significantly fewer elliptic curves \(E\) in the determination of positive rank \(d\)-elliptic \(X_{0}(N)\).
**Theorem 6.15**.: _The curve \(X_{0}(N)\) is positive rank tetraelliptic over \(\mathbb{Q}\) if and only if_
\[N\in\{57,58,65,74,77,82,86,91,99,111,118,121,123,128,141,142,143,145,155,159\}\]
Proof.: The proofs of Propositions 3.3, 3.4, 3.5 tell us that \(X_{0}(N)\) is positive rank tetraelliptic for all \(N\) in the above set. Now we prove that for the other \(N\) the curve \(X_{0}(N)\) is not positive rank tetraelliptic. We only need to consider \(N<408\) not already eliminated in Corollary 4.3 for which there exists an elliptic curve of conductor \(M\mid N\) of positive \(\mathbb{Q}\)-rank. Further, if \(M=N\), then any morphism from \(X_{0}(N)\) factors through the modular parametrization of a strong Weil curve in the corresponding isogeny class. However, the modular degree is strictly greater than \(4\) in all such cases. Therefore, we may suppose \(M<N\).
Let \(E\) be a strong Weil curve of conductor \(M\mid N\) and positive \(\mathbb{Q}\)-rank, and \(f:X_{0}(M)\to E\) its modular parametrization. Since \(\xi_{E,N}:J_{0}(N)\to E^{n}\) for \(N<408\) is an \(E\)-isogenous optimal quotient by Proposition 6.9, every map from \(J_{0}(N)\) to \(E\) uniquely factors through \(E^{n}\). Therefore, we get that the maps \(f\circ d_{i}\), where \(d_{i}\) runs over the degeneracy maps \(X_{0}(N)\to X_{0}(M)\), form a basis for \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\cong\operatorname{Hom}_{\mathbb{Q }}(E^{n},E)\cong\mathbb{Z}^{n}\). Theorem 5.13 and Remark 5.14 allow us to compute the degree pairing on this basis. Now, the degree of a map \(\sum_{i\mid N/M}x_{i}(f\circ d_{i})\) is given by a positive definite quadratic form
\[\sum_{i\mid N/M}\sum_{j\mid N/M}x_{i}x_{j}\left\langle f\circ d_{i},f\circ d_{ j}\right\rangle.\]
All \(N\) and strong Weil curves \(E\) which were considered in this proof are given in Table 1.
Using the norm induced by this quadratic form, we can use the Fincke-Pohst algorithm for enumerating integer vectors of small norm [11] to determine that there are no nonconstant elements of \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\) of degree \(\leq 4\), and hence no elements of \(\operatorname{Hom}_{\mathbb{Q}}(X_{0}(N),E)\) of degree \(\leq 4\). So this proves the statement for strong Weil curves.
If \(E\) is not a strong Weil curve, let \(E^{\prime}\) be a strong Weil curve in the isogeny class of \(E\). Then by the \(E\)-isogenous optimality of \(\xi_{E^{\prime},N}:J_{0}(N)\to(E^{\prime})^{n}\) we have that any \(g\in\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\) factors as \(h\circ\xi_{E^{\prime},N}\) for some \(h\in\operatorname{Hom}_{\mathbb{Q}}((E^{\prime})^{n},E)\).
Also, the map
\[\pi:\operatorname{Hom}_{\mathbb{Q}}((E^{\prime})^{n},E)\mapsto\operatorname{ Hom}_{\mathbb{Q}}(E^{\prime},E)^{n},\ \pi(f)=(f\upharpoonright_{E^{\prime}_{1}},\ldots,f\upharpoonright_{E^{\prime}_{n}}),\]
where \(E^{\prime}_{i}\) is the \(i\)-th component of \((E^{\prime})^{n}\), is an isomorphism with an inverse map
\[\pi^{-1}((f_{1},\ldots,f_{n}))(x_{1},\ldots,x_{n})=f_{1}(x_{1})+\ldots+f_{n}(x_{n}).\]
Furthermore, we have that \(\operatorname{Hom}_{\mathbb{Q}}(E^{\prime},E)\) is a free \(\operatorname{Hom}_{\mathbb{Q}}(E^{\prime},E^{\prime})(\cong\mathbb{Z})\)-module of rank \(1\), generated by a single element \(g_{2}\). In particular, any \(f\in\operatorname{Hom}_{\mathbb{Q}}(E^{\prime},E)\) can be written as \(g_{2}\circ[m]\) for some \(m\in\mathbb{Z}\).
Therefore, we have \(\pi(h)=(f_{1},\ldots,f_{n})=g_{2}\circ([m_{1}],\ldots,[m_{n}])\) and \(h(x_{1},\ldots,x_{n})=g_{2}(m_{1}x_{1}+\ldots+m_{n}x_{n})\). This means that \(h=g_{2}\circ m\) for some \(m\in\operatorname{Hom}_{\mathbb{Q}}((E^{\prime})^{n},E^{\prime})\). Returning to our \(g\in\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\), we see that it factors as \(g_{2}\circ m\circ\xi_{E^{\prime},N}\). It follows that
\[\deg g=\deg g_{2}\cdot\deg(m\circ\xi_{E^{\prime},N})\geq\deg(m\circ\xi_{E^{ \prime},N})>4\]
since \(E^{\prime}\) is a strong Weil curve and \(m\circ\xi_{E^{\prime},N}\) is a rational map.
In most cases, especially when we have only \(2\) degeneracy maps, we do not actually need the Fincke-Pohst algorithm to prove that there are no nonconstant elements of \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(N),E)\) of degree \(\leq 4\). We give two examples where we prove this with elementary methods.
**Example 1**.: We take \(N=122\). There exists only one elliptic curve \(E\) of positive \(\mathbb{Q}\)-rank and \(\operatorname{cond}(E)\mid N\), namely \(X_{0}^{+}(61)\). Its modular parametrization \(f\) is the quotient map \(X_{0}(61)\mapsto X_{0}^{+}(61)\).
By the proof of Theorem 6.15, the basis for \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(122),E)\) is \(\{f\circ d_{1},f\circ d_{2}\}\) and both of these maps have degree \(2\cdot 3=6\). Further, by Theorem 5.13, we have
\[\langle f\circ d_{1},f\circ d_{2}\rangle=[a_{2}\cdot 1\cdot 2]=[-2].\]
This means that any map \(J_{0}(122)\mapsto E\) must have degree equal to \(6x^{2}-4xy+6y^{2}\) for some integers \(x,y\). It remains to prove that this expression can never be equal to \(4\).
Let us suppose the contrary. If both \(x\) and \(y\) are nonzero, then \(6x^{2}-4xy+6y^{2}=4x^{2}+2(x-y)^{2}+4y^{2}\geq 8\). Therefore, we may without loss of generality set \(y=0\). However, the expression now becomes \(6x^{2}\), which can never be equal to \(4\), a contradiction.
**Example 2**.: We take \(N=129\). There exists only one elliptic curve \(E\) of positive \(\mathbb{Q}\)-rank and \(\operatorname{cond}(E)\mid N\), namely \(X_{0}^{+}(43)\). Its modular parametrization \(f\) is the quotient map \(X_{0}(43)\mapsto X_{0}^{+}(43)\).
By the proof of Theorem 6.15, the basis for \(\operatorname{Hom}_{\mathbb{Q}}(J_{0}(129),E)\) is \(\{f\circ d_{1},f\circ d_{3}\}\) and both of these maps have degree \(2\cdot 4=8\). Further, by Theorem 5.13, we have
\[\langle f\circ d_{1},f\circ d_{3}\rangle=[a_{3}\cdot 1\cdot 2]=[-4].\]
This means that any map \(J_{0}(129)\mapsto E\) must have degree equal to \(8x^{2}-8xy+8y^{2}\) for some integers \(x,y\). This expression is divisible by \(8\) and can therefore never be equal to \(4\).
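The two elementary arguments above can also be checked mechanically. The following minimal sketch (an illustration, not the Sage code used in Proposition 6.9) replaces the Fincke-Pohst algorithm by brute-force enumeration over the box guaranteed by the bound \(Q(x)\geq\lambda_{\min}\|x\|^{2}\); it confirms that neither of the two degree forms represents a nonzero value \(\leq 4\).

```python
import itertools
import numpy as np

def small_vectors(G, bound):
    """Enumerate nonzero integer vectors x with x^T G x <= bound,
    for a positive definite Gram matrix G (brute force over a box)."""
    G = np.asarray(G, dtype=float)
    lam_min = np.linalg.eigvalsh(G).min()      # Q(x) >= lam_min * |x|^2
    box = int(np.sqrt(bound / lam_min))
    hits = []
    for x in itertools.product(range(-box, box + 1), repeat=G.shape[0]):
        if any(x) and np.array(x) @ G @ np.array(x) <= bound:
            hits.append(x)
    return hits

# Gram matrices of the degree forms from Examples 1 and 2:
# 6x^2 - 4xy + 6y^2 (N = 122) and 8x^2 - 8xy + 8y^2 (N = 129).
for label, G in [("N=122", [[6, -2], [-2, 6]]), ("N=129", [[8, -4], [-4, 8]])]:
    print(label, small_vectors(G, 4))          # both lists come out empty
```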
Proof of Theorem 1.8.: The results in Section 3 give us the cases when \(X_{0}(N)\) has infinitely many quartic points and Proposition 4.4 tells us that \(X_{0}(97)\) has only finitely many quartic points.
For the other \(N\), we have \(g(X_{0}(N))\geq 8\), \(\operatorname{a.irr}_{\mathbb{Q}}(X_{0}(N))>3\), \(\operatorname{gon}_{\mathbb{Q}}(X_{0}(N))>4\) by [33, Tables 1, 2, 3], and \(X_{0}(N)\) is not positive rank tetraelliptic over \(\mathbb{Q}\) by Theorem 6.15. Therefore, Corollary 2.3 tells us that \(X_{0}(N)\) has only finitely many quartic points for these \(N\). |
2307.16018 | Moment indeterminateness: the Marcel Riesz variational principle | The discrete data encoded in the power moments of a positive measure, fast
decaying at infinity on euclidean space, is incomplete for recovery, leading to
the concept of moment indeterminateness. On the other hand, classical integral
transforms (Fourier-Laplace, Fantappi\`e, Poisson) of such measures are
complete, often invertible via an effective inverse operation. The gap between
the two non-uniqueness/ uniqueness phenomena is manifest in the dual picture,
when trying to extend the measure, regarded as a positive linear functional,
from the polynomial algebra to the full space of continuous functions. This
point of view was advocated by Marcel Riesz a century ago, in the single real
variable setting. Notable advances in functional analysis have root in Riesz'
celebrated four notes devoted to the moment problem. A key technical ingredient
being there the monotone approximation by polynomials of kernels of integral
transforms. With inherent new obstacles we reappraise in the context of several
real variables M. Riesz' variational principle. The result is an array of
necessary and sufficient moment indeterminateness criteria, some raising real
algebra questions, others involving intriguing analytic problems, all
gravitating around the concept of moment separating function. | David P. Kimsey, Mihai Putinar | 2023-07-29T16:15:28Z | http://arxiv.org/abs/2307.16018v1 | # Moment indeterminateness: The Marcel Riesz variational principle
###### Abstract.
The discrete data encoded in the power moments of a positive measure, fast decaying at infinity on Euclidean space, is incomplete for recovery, leading to the concept of moment indeterminateness. On the other hand, classical integral transforms (Fourier-Laplace, Fantappie, Poisson) of such measures are complete, often invertible via an effective inverse operation. The gap between the two non-uniqueness/uniqueness phenomena is manifest in the dual picture, when trying to extend the measure, regarded as a positive linear functional, from the polynomial algebra to the full space of continuous functions. This point of view was advocated by Marcel Riesz a century ago, in the single real variable setting. Notable advances in functional analysis have roots in Riesz' celebrated four notes devoted to the moment problem. A key technical ingredient there is the monotone approximation by polynomials of kernels of integral transforms. With inherent new obstacles we reappraise, in the context of several real variables, M. Riesz' variational principle. The result is an array of necessary and sufficient moment indeterminateness criteria, some raising real algebra questions, others involving intriguing analytic problems, all gravitating around the concept of moment separating function.
Key words and phrases: Moment problem, determinacy, indeterminacy, integral transform, weighted polynomial approximation, analytic bounded point evaluations, real algebraic curve. 2020 Mathematics Subject Classification: 44A60, 32A26, 14P05
## 1. Introduction
Traditionally, the inverse problem of estimating a mass distribution from power moment data on the real line emerged during the second half of the XIX-th century from best uniform approximation questions and continued fraction expansions. The advance of integration theory and functional analysis, both partially motivated by the very moment problem we refer to, revealed new facets of it and, more importantly, forged novel mathematical tools. In this respect, the four notes Marcel Riesz wrote between 1922 and 1923, devoted to the moment problem on the line, burst with original ideas and far-reaching new results [24]. One of the widely circulated innovations contained in these notes is M. Riesz' extension lemma for positive linear functionals, leading to the construction of the integral of discontinuous (measurable) functions.

Throughout, we call a positive Borel measure \(\mu\) on a closed set \(X\subseteq\mathbb{R}^{d}\)_admissible_ if every continuous function of polynomial growth at infinity is \(\mu\)-integrable; the space of such test functions is denoted \(C_{p}(X)\). Two admissible measures \(\mu_{1}\) and \(\mu_{2}\) supported on \(X\) are called _moment equivalent_ if \(\int pd\mu_{1}=\int pd\mu_{2}\) for every polynomial \(p\in\mathbb{R}[x]=\mathbb{R}[x_{1},\ldots,x_{d}]\).
In this case, we shall write \(\mu_{1}\sim_{X}\mu_{2}\), or more simply \(\mu_{1}\sim\mu_{2}\) when no confusion can possibly arise. If there exist two distinct measures \(\mu_{1}\) and \(\mu_{2}\) such that \(\mu_{1}\sim_{X}\mu_{2}\), then we will say that \(\mu_{1}\) is \(X\)_-indeterminate_. If \(X=\mathbb{R}^{d}\), then we will simply say that \(\mu_{1}\) is _indeterminate_.
We translate below, in the modern setting, the core of section 6 ("La question d'unicité. Critères préliminaires") of [23].
**Theorem 1.1** (M. Riesz, 1923).: _Let \(X\) denote a closed subset of \(\mathbb{R}^{d}\) and let \(L:\mathbb{R}[x]\longrightarrow\mathbb{R}\) be a linear functional. Then_
_1) The functional \(L\) is representable by an admissible measure \(\mu\) supported on \(X\) if and only if:_
\[(f\in\mathbb{R}[x],\ \ f\geq 0)\ \ \Longrightarrow\ (L(f)\geq 0). \tag{1.1}\]
_2) There exists another admissible measure \(\nu\), moment equivalent and distinct to the representing measure \(\mu\) if and only if there exists a function \(\phi\in C_{p}(X)\setminus\mathbb{R}[x]\) satisfying_
\[\sup_{p\leq\phi}L(p)<\inf_{q\geq\phi}L(q). \tag{1.2}\]
The proof relies on the cited extension lemma. Namely, start with a continuous function of polynomial growth on \(X\), \(\phi\in C_{p}(X)\). There are non-trivial polynomial functions \(p,q\in\mathbb{R}[x]\), such that \(p\leq\phi\leq q\) on \(X\). In particular, if the functional \(L\) is positive, then \(L(p)\leq L(q)\). Choose _any_ value \(t\in[\sup_{p\leq\phi}L(p),\inf_{q\geq\phi}L(q)].\) The linear extension of the functional
\[\tilde{L}(p+\lambda\phi)=L(p)+\lambda t,\ \ \lambda\in\mathbb{R},\]
turns out to be positive on the vector space \(\mathbb{R}[x]+\mathbb{R}\phi\). A maximal element in a chain of such extensions, ordered by inclusion, exists by Zorn's Lemma, and its domain is necessarily equal to the full space \(C_{p}(X)\). If the interval \([\sup_{p\leq\phi}L(p),\inf_{q\geq\phi}L(q)]\) does not reduce to a point, then one can choose different extensions of the functional \(L\), and hence different representing measures. Conversely, if there are two distinct measures \(\mu\) and \(\nu\) possessing the same values on polynomial functions, then there exists a continuous function \(\phi\in C_{p}(X)\) satisfying
\[\int_{X}\phi d\nu\neq\int_{X}\phi d\mu.\]
Then these two distinct values belong to the interval \([\sup_{p\leq\phi}L(p),\inf_{q\geq\phi}L(q)]\).
Marcel Riesz' extension technique was exploited (and sometimes reinvented over decades) in relation to the construction of various integrals and measures. Notable in this respect is Daniell's integral [10, 8]. A general account on extensions of positive functionals and relations to specific integrals is contained in Metzler's articles [16, 17]. For applications to moment problems we refer to Akhiezer's book [3] Chapter 6, Section 6, and also [4], pg. 137, or [5, 9].
**Definition 1.2**.: A function \(\phi\) appearing in condition (1.2) is called _separating_ with respect to the positive functional \(L\).
_Remark 1.3_.: The subtitle of section 6 in [23] very accurately labels the above theorem as only containing "preliminary criteria" of existence and uniqueness. A central object, well studied in connection with moment problems on the real line (\(d=1\)) by at least two generations of mathematicians before Riesz is the so called Christoffel function; in our notation
\[\rho(\alpha)=\inf_{p(\alpha)=1}L(|p|^{2}),\ \ \alpha\in\mathbb{C},\]
where the infimum is taken over polynomials \(p\). The determinateness criterion \(\rho(\alpha)=0\) for at least one (and then, all) non-real value \(\alpha\) appeared implicitly in the works of Stieltjes and Hamburger, see [3] for details. By a true tour de force, exposed in section 24 ("Étude approfondie de la fonction \(\rho(\alpha)\)") of [23], M. Riesz proves, in the indeterminate case on the line, that the function \(\frac{1}{\rho(\alpha)}\) has sub-exponential growth, and the integral
\[\int_{-\infty}^{\infty}\frac{\log\frac{1}{\rho(t)}}{1+t^{2}}dt\]
is absolutely convergent. His deep results preview a full solution to S. Bernstein's weighted, uniform approximation of continuous functions on the entire line, obtained only a few decades later, cf. [2]. Returning to the indeterminate case of the moment problem on \(\mathbb{R}\), a corollary of Riesz' calculations is that _any_ non-polynomial function is separating for the respective functional. Based on this observation, effective determinateness criteria for the moment problem on the line are deduced, including the well known Carleman criterion, cf. [23] pg. 48.
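For readers wishing to experiment with \(\rho\), the following numerical sketch may help; it rests on the classical identity \(\rho_{n}(\alpha)=1/K_{n}(\alpha,\alpha)\), where \(K_{n}\) is the Christoffel-Darboux kernel built from the truncated Hankel moment matrix, the degree-\(n\) truncations \(\rho_{n}\) decreasing to \(\rho\). The choice of test measures (the indeterminate lognormal versus the determinate Gaussian), of the point \(\alpha=i\), and of the mpmath precision are illustrative assumptions.

```python
from mpmath import mp, mpf, matrix, lu_solve, exp

mp.dps = 80  # the lognormal Hankel matrices are extremely ill-conditioned

def christoffel_at_i(moments, n):
    # rho_n(i) = 1 / K_n(i, i), with K_n(z, z) = v(z)^* H^{-1} v(z),
    # H = (m_{j+k})_{j,k=0..n} the Hankel matrix, v(z) = (1, z, ..., z^n).
    # At z = i write v = a + i b with real a, b; H symmetric gives
    # K_n(i, i) = a^T H^{-1} a + b^T H^{-1} b.
    H = matrix([[moments[j + k] for k in range(n + 1)] for j in range(n + 1)])
    re, im = [1, 0, -1, 0], [0, 1, 0, -1]          # real/imag parts of i^j
    a = matrix([mpf(re[j % 4]) for j in range(n + 1)])
    b = matrix([mpf(im[j % 4]) for j in range(n + 1)])
    xa, xb = lu_solve(H, a), lu_solve(H, b)
    K = sum(a[j, 0] * xa[j, 0] + b[j, 0] * xb[j, 0] for j in range(n + 1))
    return 1 / K

kmax = 17
lognormal = [exp(mpf(k) ** 2 / 2) for k in range(kmax)]   # indeterminate
gauss = [mpf(1)]                                          # determinate
for k in range(1, kmax):
    gauss.append(mpf(0) if k % 2 else (k - 1) * gauss[k - 2])

for n in range(2, 9):
    print(n, mp.nstr(christoffel_at_i(lognormal, n), 5),
          mp.nstr(christoffel_at_i(gauss, n), 5))
# lognormal: rho_n(i) stabilizes at a positive value; Gaussian: rho_n(i) -> 0
```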
Needless to say, allowing the extension of the positive functional \(L:\mathbb{R}[x]\longrightarrow\mathbb{R}\) to more general functions, such as semi-continuous ones, will enlarge the pool of separating elements in condition (1.2), and possibly simplify it. We will return to this observation in the next section.
The main theme of this article is to extract from various faithful transforms of measures some elementary separating functions for indeterminate, multivariate moment problems. Positivity certificates making our necessary and sufficient conditions for moment indeterminateness more effective are not available at this stage.
A traditional approach, complementary to Riesz' variational principle, relates the multivariate moment problem to the spectral decomposition of strongly commuting tuples of symmetric, generally unbounded, Hilbert space operators. While the construction in this setting of the joint spectral measure goes hand in hand with the extension of positive linear functionals, the monotonic approximation of separating functions we propose opens a new vista towards real algebra. For the Hilbert space interpretation of the moment problem we refer to Akhiezer's monograph [3] and the recent book by Schmüdgen [31]. Equally relevant, and neglected in our article, is the link between moment indeterminateness and the still mysterious topic of quasi-analytic classes in several variables, see for instance [26].
The contents of the article are as follows. Section 2 brings trigonometric separating functions to the forefront via the Fourier-Laplace transform. Section 3 deals with discontinuous separating functions. Section 4 relies on Poisson's transform to render a quantitative criterion for indeterminateness. Various operations preserving indeterminateness are analyzed in Section 5. In Section 6, we relate indeterminateness criteria to the existence of bounded point evaluations, a traditional theme in the one variable setting. In Section 7 we focus on moment problems supported by real algebraic, affine curves, with special emphasis on rational curves.
The present article does not refer to the spectral analysis interpretation of the multivariate moment problem. This subject is amply exposed in the monograph [31].
## 2. Fourier-Laplace transform
We start with a full space scenario. Let \(\mu\) be an admissible measure on \(\mathbb{R}^{d}\). For \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) and \(\xi=(\xi_{1},\ldots,\xi_{d})\in\mathbb{R}^{d}\), we denote \(x\cdot\xi=\sum_{j=1}^{d}x_{j}\xi_{j}\) and \(\|x\|=\sqrt{x\cdot x}\). The _Fourier transform of \(\mu\)_,
\[\hat{\mu}(\xi):=\int_{\mathbb{R}^{d}}e^{-ix\cdot\xi}d\mu(x)\qquad\text{for} \quad\xi\in\mathbb{R}^{d}\]
is a smooth function. Indeed, since all power moments of \(\mu\) are finite,
\[\left|\left(\frac{\partial^{\lambda}}{\partial\xi^{\lambda}}\,\hat{\mu}\right)(\xi)\right|=\left|\int_{\mathbb{R}^{d}}x^{\lambda}e^{-ix\cdot\xi}d\mu(x)\right|\leq\int_{\mathbb{R}^{d}}|x^{\lambda}|d\mu(x)<\infty.\]
In addition, according to Bochner's theorem the function \(\hat{\mu}\) is positive definite, i.e., the continuous kernel function is positive semi-definite:
\[(\hat{\mu}(\xi_{i}-\xi_{j}))_{i,j=1}^{n}\succeq 0\]
for any choice of \(\xi_{1},\ldots,\xi_{n}\in\mathbb{R}^{d}\).
**Definition 2.1**.: For \(\xi\in\mathbb{R}^{d}\setminus\{0\}\), we define the _push-forward measure_\(\mu_{\xi}\) as
\[\int_{\mathbb{R}}\varphi(t)d\mu_{\xi}(t)=\int_{\mathbb{R}^{d}}\varphi(x\cdot \xi)d\mu(x)\]
for every continuous function \(\varphi\) of polynomial growth at infinity.
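A minimal Monte Carlo illustration of Definition 2.1 (the choices below, \(\mu\) a standard Gaussian on \(\mathbb{R}^{3}\) and a particular test function \(\varphi\), are assumptions made purely for the example): for this \(\mu\) the push-forward \(\mu_{\xi}\) is the one-dimensional Gaussian \(N(0,\|\xi\|^{2})\), and the two empirical estimates below agree accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 400_000
xi = np.array([1.0, -2.0, 0.5])
phi = lambda t: t ** 2 + np.cos(t)      # a test function of polynomial growth

X = rng.standard_normal((n, d))         # samples of mu = N(0, I) on R^d
lhs = phi(X @ xi).mean()                # int phi(x . xi) dmu(x)

sigma = np.linalg.norm(xi)              # here mu_xi = N(0, ||xi||^2)
rhs = phi(sigma * rng.standard_normal(n)).mean()
print(lhs, rhs)                         # the two estimates agree
```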
Assume \(\mu\) and \(\nu\) are two distinct, moment equivalent measures, in which case the Fourier transforms of \(\mu\) and \(\nu\) are distinct, see for instance [20]. Therefore there exists \(\xi\neq 0\) with the property
\[\int e^{-ix\cdot\xi}d\mu(x)\neq\int e^{-ix\cdot\xi}d\nu(x).\]
Moreover, the continuity of the Fourier transform implies that there exists \(\delta>0\) so that
\[\int e^{-ix\cdot\eta}d\mu(x)\neq\int e^{-ix\cdot\eta}d\nu(x),\]
for all \(\eta,|\eta-\xi|<\delta\). Riesz' Theorem 1.1 implies the following theorem.
**Theorem 2.2**.: _Let \(\mu\) be a positive measure on \(\mathbb{R}^{d}\) with finite moments of any order. There exists a different positive measure on \(\mathbb{R}^{d}\) with the same power moments if and only if there exists \(\xi\in\mathbb{R}^{d}\setminus\{0\}\) and \(\epsilon\in\{0,-\pi/2\}\) with the property_
\[\sup_{p(x)\leq\cos(x\cdot\xi+\epsilon)}\int pd\mu<\inf_{q(x)\geq\cos(x\cdot\xi+ \epsilon)}\int qd\mu,\]
_where \(p,q\in\mathbb{R}[x_{1},\dots,x_{d}]\) are polynomials._
_Moreover, the above separation condition is open in \((\xi,\epsilon)\)._
_Remark 2.3_.: By pushing forward the competing polynomials in the above variational inequality on the line spanned by the vector \(\xi\), with \(\|\xi\|=1\), we find a much weaker condition
\[\sup_{r(x\cdot\xi)\leq\cos(x\cdot\xi+\epsilon)}\int rd\mu<\inf_{s(x\cdot\xi) \geq\cos(x\cdot\xi+\epsilon)}\int sd\mu,\]
with \(r,s\in\mathbb{R}[t]\) univariate polynomials. In both cases, we encounter the largely open task of certifying inequalities involving polynomials and trigonometric functions.
Potentially a more accessible context is offered by a bilateral, or traditional, Laplace transform, with complex output variables. To this aim we recall the _Fantappie transform_ of an admissible measure \(\mu\):
\[F_{\mu}(z,\xi)=\int_{\mathbb{R}^{d}}\frac{d\mu(x)}{x\cdot\xi-z},\ \ z\in\mathbb{C},\ \mathrm{Im}\,z>0,\xi\in\mathbb{R}^{d}. \tag{2.1}\]
Note that the above integral is convergent since \(|x\cdot\xi-z|\geq\mathrm{Im}\,z>0\). Fantappie's transform is an iterated Fourier-Laplace transform, hence invertible. Indeed,
\[\int_{0}^{\infty}e^{-ip[x\cdot\xi-z]}dp=\frac{1}{i(x\cdot\xi-z)},\]
and Fubini's theorem yield
\[F_{\mu}(z,\xi)=i\int_{0}^{\infty}\int_{-\infty}^{\infty}e^{-ipx\cdot\xi}e^{ipz }d\mu(x)dp=i\int_{0}^{\infty}e^{ipz}\hat{\mu}(p\xi)dp.\]
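The inversion identity just derived is easy to test numerically. In the sketch below we assume, for illustration only, that \(\mu\) is the standard Gaussian on \(\mathbb{R}^{2}\), so that \(\hat{\mu}(\eta)=e^{-\|\eta\|^{2}/2}\) and the push-forward of \(\mu\) along \(\xi\) is \(N(0,\sigma^{2})\) with \(\sigma=\|\xi\|\); both sides of the identity then become one-dimensional quadratures.

```python
import numpy as np
from scipy.integrate import quad

xi = np.array([1.0, 0.5])
z = 0.3 + 0.7j                          # Im z > 0
sigma = np.linalg.norm(xi)

# Left-hand side: F_mu(z, xi) as the Cauchy transform of N(0, sigma^2).
dens = lambda t: np.exp(-t ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
f = lambda t: dens(t) / (t - z)
lhs = quad(lambda t: f(t).real, -np.inf, np.inf)[0] \
    + 1j * quad(lambda t: f(t).imag, -np.inf, np.inf)[0]

# Right-hand side: i * int_0^infty e^{ipz} mu_hat(p xi) dp.
g = lambda p: 1j * np.exp(1j * p * z) * np.exp(-(p * sigma) ** 2 / 2)
rhs = quad(lambda p: g(p).real, 0, np.inf)[0] \
    + 1j * quad(lambda p: g(p).imag, 0, np.inf)[0]

print(lhs, rhs)    # the two complex numbers agree to quadrature accuracy
```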
The Fantappie transform \(F_{\mu}(z,\xi)\) is homogeneous of degree \(-1\), i.e.,
\[F_{\mu}(tz,t\xi)=t^{-1}F_{\mu}(z,\xi)\qquad\text{for}\quad t>0.\]
Therefore, the values \(F_{\mu}(z,w)\), where \(\|w\|=1\) and \(\mathrm{Im}\,z>0\), determine \(F_{\mu}\), and hence \(\mu\). In complete analogy to Theorem 2.2 we state the following indeterminateness criterion.
**Theorem 2.4**.: _Let \(\mu\) be an admissible measure on \(\mathbb{R}^{d}\). There exists a moment equivalent, admissible measure, distinct of \(\mu\) if and only if there exists
\(w\in\mathbb{R}^{d},\ \|w\|=1,\) and \(\epsilon\in\{0,1\}\), such that_
\[\sup_{p(x)\leq\frac{(1+\epsilon w\cdot x)}{(w\cdot x)^{2}+1}}\int pd\mu<\inf_{q( x)\geq\frac{(1+\epsilon w\cdot x)}{(w\cdot x)^{2}+1}}\int qd\mu, \tag{2.2}\]
_where \(p,q\) are polynomials._
Proof.: Assume \(\mu\sim\nu\). Then \(\mu\neq\nu\) if and only if there exist \(w\in S^{d-1}\) and \(z\in\mathbb{C}\) with \(\operatorname{Im}z>0\) such that
\[\int_{\mathbb{R}^{d}}\frac{d\mu(x)}{x\cdot w-z}\neq\int_{\mathbb{R}^{d}}\frac{d\nu(x)}{x\cdot w-z}.\]
In other terms, the push-forward measures have distinct Cauchy transforms at the specific point \(z\):
\[\int_{\mathbb{R}}\frac{d\mu_{w}(t)}{t-z}\neq\int_{\mathbb{R}}\frac{d\nu_{w}(t )}{t-z}.\]
But the measures \(\mu_{w},\nu_{w}\) are moment equivalent on the line. By well known results of one variable theory (specifically the parametrization of Weyl's circle by values of Cauchy transforms of representing measures), one also finds
\[\int_{\mathbb{R}}\frac{d\mu_{w}(t)}{t-i}\neq\int_{\mathbb{R}}\frac{d\nu_{w}(t )}{t-i},\]
see Section 2.1.2 in [3].
By taking real, respectively imaginary, parts of \(\frac{1}{w\cdot x-i}\) one recovers the separating functions in the statement.
Fantappie's transform is particularly useful for measures supported on a convex cone. In this context, complex analyticity and complete monotonicity properties enhance the characterization of the range of the transform and provide efficient inversion formulae. We refer to [13] for full details. Next we extract a few relevant observations from the theory of the Fantappie transform on convex cones. We can regard the analysis below as an analogue of the Stieltjes moment problem on the real semi-axis.
Let \(\Gamma\subseteq\mathbb{R}^{d}\) be an acute, convex and solid cone and \(\Gamma^{*}=\{\eta\in\mathbb{R}^{d}:\eta\cdot x\geq 0\ \ \ \text{for}\ \ \text{all}\ \ \ x\in\Gamma\}\) be the dual cone of \(\Gamma\). Let \(\mu\) be an admissible measure supported on \(\Gamma^{*}\). Note that in this case the Fantappie transform
\[F_{\mu}(z,w)=\int_{\Gamma^{*}}\frac{d\mu(x)}{w\cdot x-z}\]
admits a complex analytic extension to the domain
\[\operatorname{Re}w\in\Gamma\ \ \ \text{and}\ \ \ \operatorname{Re}z<0.\]
In particular the range of real values \(w\in\Gamma,\ z<0\), is a uniqueness set for the complex analytic function \(F_{\mu}\) defined on the tube domain over this convex set. In short, due to homogeneity, the values
\[F_{\mu}(-1,a)=\int_{\Gamma^{*}}\frac{d\mu(x)}{a\cdot x+1},\ \ a\in\Gamma,\]
determine the measure \(\mu\). Moreover, since the above function is real analytic in the variable \(a\), a non-trivial zero set of a difference \(F_{\mu}(-1,a)-F_{\nu}(-1,a)\) is a proper analytic subset of \(\mathrm{int}\Gamma\).
Mutatis mutandis, the following result is proved.
**Theorem 2.5**.: _Let \(\Gamma\subseteq\mathbb{R}^{d}\) be an acute, convex and solid cone and let \(\mu\) be an admissible measure supported by the dual cone \(\Gamma^{*}\). There exists a different, admissible measure supported on \(\Gamma^{*}\) and moment equivalent to \(\mu\) if and only if there exists \(a\in\mathrm{int}\Gamma\), such that_
\[\sup_{\begin{subarray}{c}p(x)\leq\frac{1}{a\cdot x+1}\\ x\in\Gamma^{*}\end{subarray}}\int pd\mu<\inf_{\begin{subarray}{c}q(x)\geq\frac{1}{a\cdot x+1}\\ x\in\Gamma^{*}\end{subarray}}\int qd\mu, \tag{2.3}\]
_where \(p,q\) are polynomials functions on \(\Gamma^{*}\)._
_Moreover, the range of values of \(a\) above is an open, everywhere dense subset of \(\mathrm{int}\,\Gamma\)._
## 3. Discontinuous separating functions
Let \(X\subset\mathbb{R}^{d}\) be a closed set. Working with test functions of polynomials growth at infinity imposes the following adaptation of the class of Baire-1 functions. We refer to [35] for the classical setting.
**Definition 3.1**.: A function \(f:X\longrightarrow\mathbb{R}\) is called a _Baire function of the first category and of polynomial growth_, in short \(f\in\mathcal{BP}_{1}(X)\), if there exists a sequence \(\phi_{n}\in C_{p}(X)\) subject to a uniform bound (\(M>0,N\geq 0\)):
\[|\phi_{n}(x)|\leq M(1+\|x\|^{2})^{N},\ \ n\geq 1,\ x\in X, \tag{3.1}\]
such that, pointwisely,
\[f(x)=\lim_{n\to\infty}\phi_{n}(x),\ \ x\in X.\]
In particular a semi-continuous function \(f\) defined on \(X\) and of polynomial growth satisfies \(f\in\mathcal{BP}_{1}(X)\). Indeed, assume \(f\) is lower semi-continuous and
\[|f(x)|\leq\rho(x):=M(1+\|x\|^{2})^{N},x\in X,\]
for some constants \(M>0,N\geq 0\). There exists a monotonically increasing sequence of continuous functions \(\phi_{n}\) converging pointwisely to \(f\). Then the sequence \(\phi_{n}^{\prime}=\min(\phi_{n},\rho)\) is uniformly bounded from above by the weight \(\rho\) and converges pointwisely to \(f\). Similarly, the operation \(\phi_{n}=\max(\phi_{n}^{\prime},-2\rho)\) provides a lower bound of polynomial growth at infinity, without altering the convergence behavior.
**Theorem 3.2**.: _Let \(X\subset\mathbb{R}^{d}\) be a closed subset and let \(\mu\) be an admissible measure on \(X\). \(\mu\) is \(X\)-indeterminate if and only if \(\mu\) has a non-polynomial separating function \(f\in\mathcal{BP}_{1}(X)\)._
Proof.: The essence of the construction of Daniell's integral is: a linear, positive functional on \(C_{p}(X)\) admits a _unique_ extension to a linear, positive functional on \(\mathcal{BP}_{1}(X)\). Indeed, given the admissible Radon measure \(\mu\), if a sequence of continuous functions \(\phi_{n}\) converges to \(f\) as stated in (3.1), then Lebesgue dominated convergence theorem implies \(\int fd\mu=\lim_{n}\int\phi_{n}d\mu\). A detailed analysis of this uniqueness of extension phenomenon is contained in [10, 17].
In other terms, two admissible measures \(\mu,\nu\), moment equivalent on \(X\), are separated by the function \(f\) above:
\[\int fd\mu\neq\int fd\nu\]
if and only if
\[\int\phi_{n}d\mu\neq\int\phi_{n}d\nu,\]
for \(n\) large. Translating this observation into M. Riesz' framework, we infer: \(\mu\) is \(X\)-indeterminate if and only if there exists a non-polynomial function \(f\in\mathcal{BP}_{1}(X)\) subject to:
\[\sup_{p\leq f}\int pd\mu<\inf_{q\geq f}\int qd\mu,\]
where \(p,q\) are polynomials, and the inequalities are restricted to points of \(X\).
Note that the uniqueness of extension of a positive linear functional does not extend beyond Baire first class. Indeed, Dirichlet's characteristic function \(\chi=\chi_{\mathbb{Q}}\) of rational numbers in the interval \([0,1]\) yields, for Riemann's integral:
\[\sup_{p\leq\chi}\int_{0}^{1}p(x)dx\leq 0<1\leq\inf_{q\geq\chi}\int_{0}^{1}q(x)dx\]
and the measure \(dx\) is determined on \([0,1]\).
A simple application of this enlarged class of test (separating) functions is given by the multivariate distribution function associated to a measure. To this aim, for a point \(a\in\mathbb{R}^{d}\), we define the semi-bounded orthant
\[\Pi_{a}=\{x\in\mathbb{R}^{d}:\ x_{j}\leq a_{j}\quad\text{for}\quad j=1,\ldots,d\}.\]
We denote by \(\chi_{a}=\chi_{\Pi_{a}}\) the characteristic function of this set.
**Corollary 3.3**.: _Let \(X\subset\mathbb{R}^{d}\) be a closed subset and let \(\mu\) be an admissible measure on \(X\). The measure \(\mu\) is \(X\)-indeterminate, if and only if there exists \(a\in\mathbb{R}^{d}\) such that_
\[\sup_{\begin{subarray}{c}p(x)\leq\chi_{a}(x)\\ x\in X\end{subarray}}\int pd\mu<\inf_{\begin{subarray}{c}q(x)\geq\chi_{a}(x) \\ x\in X\end{subarray}}\int qd\mu.\]
Proof.: The linear subspace of \(\mathcal{BP}_{1}(X)\) spanned by the functions \(\chi_{a}\) contains the characteristic functions of all rectangles of the form \((b_{1},a_{1}]\times(b_{2},a_{2}]\times\ldots\times(b_{d},a_{d}]\). Moreover, it is well known that the associated step functions are a set of uniqueness for any Radon measure on \(X\), see, e.g., [10].
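The inclusion-exclusion hidden in this proof is worth making explicit: in dimension \(d=2\) the mass of a half-open rectangle is a signed sum of four values of the distribution function \(F(a)=\int\chi_{a}d\mu=\mu(\Pi_{a})\). A toy sketch, with an atomic measure chosen purely for illustration:

```python
import numpy as np

# A toy atomic measure on R^2 (atoms and weights are illustrative choices).
atoms = np.array([[0.2, 0.7], [1.5, -0.3], [-1.0, 2.0], [0.6, 0.6]])
w = np.array([0.5, 1.0, 0.25, 2.0])

def F(a1, a2):
    """Distribution function F(a) = mu(Pi_a)."""
    inside = (atoms[:, 0] <= a1) & (atoms[:, 1] <= a2)
    return w[inside].sum()

def rect_mass(b1, a1, b2, a2):
    """mu((b1, a1] x (b2, a2]) by inclusion-exclusion on four orthants."""
    return F(a1, a2) - F(b1, a2) - F(a1, b2) + F(b1, b2)

print(rect_mass(0.0, 1.0, 0.0, 1.0))   # recovers the mass 0.5 + 2.0 = 2.5
```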
Modifications of separating functions become more flexible in the class of Baire-1 functions. We state only a simple observation in this direction.
**Proposition 3.4**.: _Let \(X\subset\mathbb{R}^{d}\) be a closed subset and let \(\mu\) be an admissible, \(X\)-indeterminate measure. Denote by \(S\subset X\) the closed support of \(\mu\). Let \(\phi\in C(X)\) be a continuous, separating function from the left, with respect to \(\mu\), specifically:_
\[\inf_{\begin{subarray}{c}p(x)\leq\phi(x)\\ x\in X\end{subarray}}\int(\phi-p)d\mu>0.\]
_Let \(\psi\in C(X)\) be a continuous function satisfying \(\psi(x)\leq\phi(x),\ x\in X\setminus S\). Then the Baire-1 function_
\[\Phi(x)=\left\{\begin{array}{ll}\phi(x),\ x\in S,\\ \psi(x),\ x\in X\setminus S,\end{array}\right.\]
_is left separating for \(\mu\)._
Proof.: Note that the function \(\Phi\) is upper-semicontinuous, therefore \(\Phi\) is Baire-1. Let \(p\) be a polynomial function satisfying \(p(x)\leq\Phi(x),\ x\in X\). Then \(p(x)\leq\phi(x),\ x\in X\), hence
\[\inf_{\begin{subarray}{c}p(x)\leq\Phi(x)\\ x\in X\end{subarray}}\int_{X}(\Phi-p)d\mu =\inf_{\begin{subarray}{c}p(x)\leq\Phi(x)\\ x\in X\end{subarray}}\int_{S}(\phi-p)d\mu\] \[\geq\inf_{\begin{subarray}{c}p(x)\leq\phi(x)\\ x\in X\end{subarray}}\int(\phi-p)d\mu>0.\]
The restriction of separation from the left in the statement is not essential, since either
\[\inf_{\begin{subarray}{c}p(x)\leq\phi(x)\\ x\in X\end{subarray}}\int(\phi-p)d\mu>0,\]
or
\[\inf_{\begin{subarray}{c}q(x)\geq\phi(x)\\ x\in X\end{subarray}}\int(q-\phi)d\mu>0.\]
Moreover, adding a polynomial to a function of polynomial growth will not change its \(\mu\) separating status. In particular, assuming \(\phi\geq 0\) in the statement above, one can take \(\psi=0\).
We elaborate below the important case of characteristic functions of coordinate orthants, seen as separating functions. Suppose \(h\in\mathbb{R}[t]\) satisfies
\[\begin{cases}h(t)\geq 1\quad\text{for}\quad t\geq 0,\\ h(t)\geq 0\quad\text{for}\quad t\in\mathbb{R}.\end{cases} \tag{3.2}\]
Such polynomials are usually obtained via quadrature formulas, see for instance [11] Section II.3.
Let \(Q=\{(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}:x_{j}\geq 0\quad\text{for}\quad j=1, \ldots,d\}\) and consider
\[H(x_{1},\ldots,x_{d})=\prod_{j=1}^{d}h(x_{j})\]
and notice that \(H\geq\chi_{Q}\) on \(\mathbb{R}^{d}\), i.e.,
\[\begin{cases}H(x)\geq 1\quad\text{for}\quad x\in Q,\\ H(x)\geq 0\quad\text{for}\quad x\in\mathbb{R}^{d}\setminus Q.\end{cases}\]
For a subset \(\mathcal{I}\subseteq\{1,\ldots,d\}\), we let
\[Q_{\mathcal{I}}=\{(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}:x_{j}\geq 0\text{ for }j \notin\mathcal{I}\text{ and }x_{k}\leq 0\text{ for }k\in\mathcal{I}\}.\]
In a similar fashion, for a function \(\varphi\) we let
\[\varphi_{\mathcal{I}}(x_{1},\ldots,x_{d})=\varphi(\epsilon_{1}x_{1},\ldots,\epsilon_{d}x_{d}),\qquad\text{where}\quad\epsilon_{k}=-1\ \text{for}\ k\in\mathcal{I}\quad\text{and}\quad\epsilon_{k}=1\ \text{otherwise}.\]
Note that, the polynomial \(H_{\mathcal{I}}\) satisfies
\[\begin{cases}H_{\mathcal{I}}(x)\geq 1\quad\text{for}\quad x\in Q_{\mathcal{I}}, \\ H_{\mathcal{I}}(x)\geq 0\quad\text{for}\quad x\in\mathbb{R}^{d}\setminus Q_{ \mathcal{I}}.\end{cases}\]
and \(H_{1}:=\sum_{\mathcal{I}\neq\emptyset}H_{\mathcal{I}}\), where the sum is taken over all nonempty subsets of \(\{1,\ldots,d\}\), satisfies
\[\begin{cases}H_{1}(x)\geq 1\quad\text{for}\quad x\in\mathbb{R}^{d}\setminus Q,\\ H_{1}(x)\geq 0\quad\text{for}\quad x\in Q.\end{cases}\]
Therefore, when testing \(\chi_{Q}\) as a separating function, we have the polynomial sandwich
\[1-H_{1}\leq\chi_{Q}\leq H.\]
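A quick numerical sanity check of this polynomial sandwich, with the illustrative choice \(h(t)=t^{2}+t+1\), which satisfies (3.2), in dimension \(d=2\):

```python
import itertools
import numpy as np

h = lambda t: t ** 2 + t + 1     # h >= 1 on [0, inf) and h >= 0 on R
d = 2
rng = np.random.default_rng(1)

def H_I(x, I):
    """Product of h over all coordinates, with those in I flipped."""
    s = np.ones(d)
    for j in I:
        s[j] = -1.0
    return np.prod([h(s[j] * x[j]) for j in range(d)])

subsets = [I for r in range(d + 1) for I in itertools.combinations(range(d), r)]
for x in rng.uniform(-3, 3, size=(10_000, d)):
    chi_Q = float(all(x >= 0))
    H = H_I(x, ())                               # empty I: H itself
    H1 = sum(H_I(x, I) for I in subsets if I)    # sum over nonempty I
    assert 1 - H1 <= chi_Q <= H                  # the sandwich holds
print("sandwich verified on 10000 random points")
```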
Let \(\mu\) be an admissible measure on \(\mathbb{R}^{d}\) with \(\chi_{Q}\) as a separating function, i.e.,
\[\inf_{g\leq\chi_{Q}\leq f}\int(f-g)\,d\mu\geq\gamma>0,\]
where \(f,g\in\mathbb{R}[x_{1},\ldots,x_{d}]\). We infer
\[\int(H-(1-H_{1}))\,d\mu\geq\gamma,\]
i.e.,
\[\sum_{\mathcal{I}\subseteq\{1,\ldots,d\}}\int H_{\mathcal{I}}d\mu-\mu(\mathbb{1})\geq\gamma>0,\]
where \(\mu(\mathbb{1}):=\int\,d\mu\).
We can, of course, translate \(Q\) via \(Q\mapsto a+Q\), for \(a\in\mathbb{R}^{d}\), where
\[a+Q:=\{y\in\mathbb{R}^{d}:y=a+(x_{1},\ldots,x_{d})\text{ for some }(x_{1},\ldots,x_{d})\in Q\},\]
which amounts to considering the function
\[(\tau_{a}\,\varphi)(x)=\varphi(x-a).\]
All in all, we can state the following necessary condition for indeterminateness.
**Proposition 3.5**.: _Let \(h(t)\) be a polynomial satisfying (3.2) and let \(H(x_{1},\dots,x_{d})=\prod_{j=1}^{d}h(x_{j})\). Assume \(\mu\) is an indeterminate, admissible measure defined on \(\mathbb{R}^{d}\). Then there exists \(a\in\mathbb{R}^{d}\) and a constant \(\gamma_{a}>0\), such that_
\[\sum_{\mathcal{I}\subseteq\{1,\dots,d\}}\int\tau_{a}(H_{\mathcal{I}})d\mu\geq\mu(\mathbb{1})+\gamma_{a}.\]
The condition in the statement is obviously open with respect to \(a\).
## 4. Poisson transform
Arguably the closest multivariate integral transform to the classical Cauchy transform in 1D is the Poisson transform. We elaborate below a few details, with [33] as a basic reference.
Let \(P(x,t)=c_{d}\left(\frac{t}{[t^{2}+\|x\|^{2}]^{\frac{d+1}{2}}}\right)\) be Poisson's kernel in \(\mathbb{R}^{d}\). We denote
\[P_{\mu}(x,t)=\int_{\mathbb{R}^{d}}P(x-u,t)d\mu(u),\ \ x\in\mathbb{R}^{d},t>0,\]
the Poisson transform of the admissible measure \(\mu\). It is a harmonic function in the upper-half space of \(\mathbb{R}^{d+1}\), which determines \(\mu\) by non-tangential limits (\(t\to 0\)) in the distribution sense. Accordingly, given an admissible, indeterminate measure \(\mu\) on \(\mathbb{R}^{d}\), there exists a value \((x_{0},t_{0})\) with the property that the continuous function \(u\mapsto P(x_{0}-u,t_{0})\) is separating with respect to \(\mu\). In analogy to the 1D situation, we prove that almost any pair \((x_{0},t_{0})\in\mathbb{R}^{d}\times(0,\infty)\) has this property. Averaging the defect of the respective monotone approximation, we obtain an everywhere criterion of indeterminateness.
**Theorem 4.1**.: _Let \(\mu\) be an admissible measure on \(\mathbb{R}^{d}\). The following are equivalent:_
_a) The measure \(\mu\) is moment indeterminate;_
_b) There exists \((x_{0},t_{0})\in\mathbb{R}^{d}\times(0,\infty)\) such that_
\[\kappa_{\mu}(x_{0},t_{0}):=\inf_{p(u)\leq P(x_{0}-u,t_{0})\leq q(u)}\int(q-p)d \mu>0, \tag{4.1}\]
_where \(p(u),q(u)\) are polynomial functions,_
_c) For any point \((x_{0},t_{0})\in\mathbb{R}^{d}\times(0,\infty)\) and radius \(0<r<t_{0}\):_
\[\int_{S((x_{0},t_{0}),r)}\kappa_{\mu}(x,t)d\sigma(x,t)>0,\]
_where \(S((x_{0},t_{0}),r)\) denotes the sphere and \(d\sigma\) is its surface area measure._
Proof.: The equivalence between a) and b) follows from the fact that the Poisson transform \(P_{\mu}\) determines \(\mu\). It remains to prove that a) implies c). Assume that the measure \(\mu\) is indeterminate, that is, there exists a moment equivalent, admissible measure \(\nu\), different than \(\mu\).
Suppose that there exists \((x_{0},t_{0})\in\mathbb{R}^{d}\times(0,\infty)\) such that
\[\inf_{p(u)\leq P(x_{0}-u,t_{0})\leq q(u)}\int(q-p)d\mu=0.\]
Then
\[P_{\mu}(x_{0},t_{0})=P_{\nu}(x_{0},t_{0}).\]
The harmonic function \(u(x,t)=P_{\mu}(x,t)-P_{\nu}(x,t)\) is not identically zero, but vanishes at the point \((x_{0},t_{0})\). The zero set \(V\) of \(u(x,t)\) is a real analytic hypersurface of \(\mathbb{R}^{d}\times(0,\infty)\), which by the maximum principle cannot contain any Euclidean sphere.
Note that the function \(\kappa_{\mu}\) is Borel measurable. Its zero set is included in \(V\), hence \(\kappa_{\mu}>0\) on an open subset of any Euclidean sphere, in particular \(S((x_{0},t_{0}),r)\). So, its average on any sphere is non-zero if and only if the measure \(\mu\) is indeterminate.
**Corollary 4.2**.: _An admissible measure \(\mu\) is indeterminate on \(\mathbb{R}^{d}\) if and only if_
\[\int_{S((0,1),r)}\kappa_{\mu}(x,t)d\sigma(x,t)>0 \tag{4.2}\]
_for some \(0<r<1\)._
Given an admissible measure \(\mu\) it is natural to consider the set of Poisson transforms of equivalent measures:
\[\Delta_{\mu}(x,t)=\{P_{\nu}(x,t):\ \ \nu\sim\mu\}.\]
This is a closed, convex subset of \(\mathbb{R}\), hence an interval. According to the proof above, \(\Delta_{\mu}(x,t)\) reduces to a point if and only if \(\kappa_{\mu}(x,t)=0\).
**Lemma 4.3**.: _The length of the interval \(\Delta_{\mu}(x,t)\) is \(\kappa_{\mu}(x,t)\)._
Proof.: Riesz' extension theorem (see Theorem 1.1) shows that we can populate the interval
\[[\sup_{p(u)\leq P(x-u,t)}\int pd\mu,\inf_{q(u)\geq P(x-u,t)}\int qd\mu]\]
by values \(\int P(x-u,t)d\nu(u)\), where \(\nu\) is a measure, moment equivalent to the prescribed measure \(\mu\).
We have just proved that either \(\Delta_{\mu}(x,t)\) consists of a single point for every \((x,t)\in\mathbb{R}^{d}\times(0,\infty)\), or this happens only on an exceptional locus contained in the zero set of a non-trivial harmonic function.
_Remark 4.4_.: Notes on the exceptional set \(E=\{(x,t),\ \kappa_{\mu}(x,t)=0\}\) attached to an indeterminate, admissible measure \(\mu\) on \(\mathbb{R}^{d}.\)
If \(\nu\) is a moment equivalent measure to \(\mu,\) then \(E\) is contained in the zero set of the harmonic function \(P_{\mu}-P_{\nu},\) in itself a real analytic subset of \(\mathbb{R}^{d}\times(0,\infty)\). On the other hand, if \(\kappa_{\mu}(x,t)\neq 0,\) then Riesz' extension theorem implies that there exists a moment equivalent measure \(\nu,\) such that \(P_{\mu}(x,t)\neq P_{\nu}(x,t).\) In other terms,
\[E=\bigcap_{\nu\sim\mu}\{(x,t):\ P_{\mu}(x,t)=P_{\nu}(x,t)\}.\]
But an arbitrary intersection of real analytic sets is real analytic, see for instance Cor. 2 on pg. 100 in [18]. All in all we have proved the following statement.
**Proposition 4.5**.: _Let \(\mu\) be an indeterminate, admissible measure on \(\mathbb{R}^{d}\). Then the exceptional set \(E=\{(x,t),\ \kappa_{\mu}(x,t)=0\}\) is (a proper, closed) real analytic subset of \(\mathbb{R}^{d}\times(0,\infty).\)_
In particular, the complement of \(E\) in \(\mathbb{R}^{d}\times(0,\infty)\) cannot have relatively compact connected components. Indeed, otherwise for any measure \(\nu\), moment equivalent to \(\mu\), the harmonic function \(P_{\mu}-P_{\nu}\) would be identically zero on that component, and hence everywhere. A little more can be said in this respect.
**Proposition 4.6**.: _Let \(\delta>0\) and let \(H\) denote an irreducible, real analytic set of real dimension \(d\), contained in \(\mathbb{R}^{d}\times(\delta,\infty)\). Let \(E\) denote the exceptional set attached to an indeterminate measure on \(\mathbb{R}^{d}\). The germ of \(E\) at any point cannot contain the germ of \(H\)._
Proof.: If the germ of \(E\) contains that of \(H\), then \(E\) contains \(H\) due to the irreducibility assumption. First we prove that
\[\lim_{t^{2}+\|x\|^{2}\to\infty}P_{\mu}(x,t)=0,\]
uniformly on the set \(x\in\mathbb{R}^{d},t\geq\delta.\) Indeed, let \(M\) be a positive number, so that
\[\int_{\|u\|\geq M}\frac{t}{[t^{2}+\|x-u\|^{2}]^{\frac{d+1}{2}}}d\mu(u)\leq\frac{1}{t^{d}}\int_{\|u\|\geq M}d\mu(u)\leq\frac{1}{\delta^{d}}\int_{\|u\|\geq M}d\mu(u)\]
can be made arbitrarily small, uniformly for \(t\geq\delta\), by letting \(M\) tend to infinity. On the other hand, if \(t^{2}+\|x\|^{2}=R^{2},\) then
\[\int_{\|u\|\leq M}\frac{t}{[t^{2}+\|x-u\|^{2}]^{\frac{d+1}{2}}}d \mu(u)\] \[\leq \int_{\|u\|\leq M}\frac{R}{[t^{2}+\|x\|^{2}-2\langle x,u\rangle+ \|u\|^{2}]^{\frac{d+1}{2}}}d\mu(u)\] \[\leq \int_{\|u\|\leq M}\frac{R}{[R^{2}-2RM]^{\frac{d+1}{2}}}d\mu(u)\]
and the latter integral converges uniformly to zero for \(R\to\infty\).
Assume \(H\subset E\) and let \(\nu\) be another admissible measure, moment equivalent to \(\mu\). For \(R>0\) large, the complement of \(H\cup S((0,0),R)\) contains a connected component, relatively compact in \(\mathbb{R}^{d}\times(\delta,\infty)\). The harmonic function \(P_{\mu}-P_{\nu}\) vanishes on \(E\), hence on \(H\), and it is arbitrarily small on the sphere. The maximum principle implies \(P_{\mu}=P_{\nu}\) everywhere, a contradiction.
## 5. Preservers of indeterminate measures
### Push forward by projections
Both the Fourier transform and Fantappie transform criteria, respectively Theorems 2.2 and 2.5, contain a separating function of the form \(x\mapsto h(\xi\cdot x)\) associated to a privileged vector \(\xi\in\mathbb{R}^{d}\). Consequently, an orthogonal push forward measure on linear varieties \(V\) containing \(\xi\) will preserve the indeterminateness.
Indeed, let \(\pi:\mathbb{R}^{d}\longrightarrow V\) be the orthogonal projection and assume \(\mu\) is an admissible measure on \(\mathbb{R}^{d}\), respectively supported by a closed cone \(\Gamma^{*}\). Assume, on the respective supports, and with running polynomials \(p\) and \(q\):
\[\sup_{p(x)\leq h(\xi\cdot x)}\int pd\mu<\inf_{q(x)\geq h(\xi\cdot x)}\int qd\mu. \tag{5.1}\]
The function \(h(\xi\cdot x)\) is constant along the fibers of \(\pi\), hence \(h(\xi\cdot x)=(\pi^{*}g)(x)\), where \(g:V\longrightarrow\mathbb{R}\) is a polynomial function. Denoting by \(r,s\) polynomials on \(V\) we infer:
\[\sup_{r\leq g}\int rd\pi_{*}\mu<\inf_{s\geq g}\int sd\pi_{*}\mu,\]
or, equivalently,
\[\sup_{r(\pi(x))\leq h(\xi\cdot x)}\int r(\pi(x))d\mu<\inf_{s(\pi(x))\geq h(\xi\cdot x)}\int s(\pi(x))d\mu\]
which is true in view of (5.1).
A celebrated theorem of Petersen [21] relates multivariate moment determinateness to the same property of the marginals. The proof was obtained via a natural weighted approximation scheme. Noting the invariance under linear changes of coordinates, we offer the following partial complementary picture.
**Proposition 5.1**.: _Let \(\mu\) be an admissible positive measure. Then the following statements hold_:
1. _If there exists a basis of_ \(\mathbb{R}^{d}\) _with the property that the_ \(1D\) _marginals of_ \(\mu\) _along parallel projections with respect to this basis are all determinate, then_ \(\mu\) _is determinate._
2. _Assuming the measure_ \(\mu\) _is indeterminate, there exists a coordinate, possibly non-orthogonal, frame with the property that all marginals of_ \(\mu\) _are indeterminate._
Proof.: Part a) is Petersen's theorem, translated to an arbitrary linear basis. To prove part b) we remark that the unit vectors along which the projection of \(\mu\) is indeterminate form a non-void open set on the sphere; any basis chosen within this set works.
### Small perturbations
Both separating functions \(e^{i\xi\cdot x}\) and \(\frac{1}{1+a\cdot x}\) appearing in the previous sections are uniformly bounded on the respective supporting sets. As a matter of fact, a bounded separating function always exists. This simple remark shows that a perturbation of admissible measures, small in total variation, preserves indeterminateness.
**Lemma 5.2**.: _Let \(X\subset\mathbb{R}^{d}\) be a closed subset. The space \(C_{0}(X)\) of continuous functions of compact support contains separating functions for any admissible, indeterminate measure supported by \(X\)._
Proof.: Let \(\mu,\nu\) be two admissible measures on \(X\), moment equivalent, but distinct. That is, there exists a function \(\phi\in C_{p}(X)\) with the property \(\int\phi d\mu\neq\int\phi d\nu\). Let \(K_{n}\subset\mathrm{int}K_{n+1}\) be an exhaustion of \(X\) with compact sets, with an attached system of continuous functions \(\kappa_{n},\ 0\leq\kappa_{n}\leq 1\), satisfying
\[\mathrm{supp}(\kappa_{n})\subset K_{n+1},\ \ \kappa_{n}(x)=1,\ x\in K_{n}.\]
Lebesgue dominated convergence theorem implies
\[\lim_{n}\int\kappa_{n}\phi d\mu=\int\phi d\mu,\]
and similarly for the measure \(\nu\). That is, for \(n\) sufficiently large, the functions \(\kappa_{n}\phi\) are separating the measures \(\mu\) and \(\nu\).
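The dominated convergence step of this proof is easy to visualize in one dimension; in the sketch below (\(\mu\) the standard Gaussian, \(\phi(x)=x^{4}\), and piecewise linear cutoffs \(\kappa_{n}\) are all illustrative choices) the truncated integrals increase to \(\int\phi d\mu=3\).

```python
import numpy as np
from scipy.integrate import quad

dens = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # mu = N(0, 1)
phi = lambda x: x ** 4                                       # polynomial growth

def kappa(n, x):
    """Continuous cutoff: 1 on [-n, n], 0 off [-(n+1), n+1], linear between."""
    return np.clip(n + 1 - np.abs(x), 0, 1)

exact = quad(lambda x: phi(x) * dens(x), -np.inf, np.inf)[0]   # = 3
for n in (1, 2, 4, 8):
    approx = quad(lambda x: kappa(n, x) * phi(x) * dens(x), -np.inf, np.inf)[0]
    print(n, approx, exact)    # approx increases to the exact value
```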
In particular, on any support and any indeterminate measure there are uniformly bounded separating functions.
**Proposition 5.3**.: _Let \(X\subset\mathbb{R}^{d}\) be a closed set and let \(\mu\) be an admissible, \(X\)-indeterminate measure. There exists \(\epsilon>0\) with the property that for all admissible measures \(\sigma\) supported on \(X\), of total mass less than \(\epsilon\), the measure \(\mu+\sigma\) is admissible and \(X\)-indeterminate._
Proof.: According to the Lemma above, if \(\nu\) is an admissible measure on \(X\), moment equivalent to \(\mu\), there exists a separating function of compact support \(\phi\in C_{0}(X)\). We can assume \(\|\phi\|_{\infty}=1\). Choose
\[\epsilon<|\int\phi d\mu-\int\phi d\nu|.\]
The measure \(\sigma\) in the statement has the property that \(\mu+\sigma\) and \(\nu+\sigma\) are moment equivalent, yet
\[|\int\phi d(\mu+\sigma)-\int\phi d(\nu+\sigma)|>0.\]
### Convolution
Convolution transforms change in general the moment data, preserving however the indeterminateness feature, and possibly improving the regularity of the original measure.
**Proposition 5.4**.: _Let \(\mu\) be an indeterminate, admissible measure on \(\mathbb{R}^{d}\) and let \(\tau\neq 0\) be a positive measure of compact support. Then the convolution \(\tau*\mu\) is admissible and indeterminate._
Proof.: Since \(\tau\) has compact support, the convolution \(\tau*\mu\) is well defined in the sense of distributions, and it is a measure. Let \(p(x)\) be a polynomial. By definition,
\[\int pd(\tau*\mu)=\int\int p(x+y)d\tau(y)d\mu(x).\]
The iterated integral is convergent due to the assumption that the measure \(\mu\) is admissible, that is all continuous functions of polynomial growth are \(\mu\)-integrable.
Taylor's expansion yields the finite sum
\[p(x+y)=\sum_{k}\frac{p^{(k)}(y)}{k!}x^{k}.\]
We deduce that, if \(\nu\) is an admissible measure, moment equivalent to \(\mu\), then
\[\int\int p(x+y)d\tau(y)d\mu(x)=\int\int p(x+y)d\tau(y)d\nu(x).\]
And that is true for all polynomials \(p\).
Assume that the two moment equivalent measures \(\mu\) and \(\nu\) are distinct. We have to prove that \(\tau*\mu\) is different than \(\tau*\nu\). Passing to Fourier transforms we find
\[\widehat{\tau*\mu}=\widehat{\tau}\hat{\mu}.\]
But \(\hat{\tau}\) is an entire function on \(\mathbb{C}^{d}\), while \(\hat{\mu}\) is a function of class \(\mathcal{C}^{\infty}(\mathbb{R}^{d})\). Since \(\tau\neq 0\), the zeros of the Fourier transform \(\hat{\tau}\) are contained in an analytic hypersurface in \(\mathbb{C}^{d}\), hence their real points are nowhere dense in \(\mathbb{R}^{d}\).
Assuming by contradiction \(\hat{\tau}\hat{\mu}=\hat{\tau}\hat{\nu}\) we find \(\hat{\mu}=\hat{\nu}\), that is \(\mu=\nu\).
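The Taylor-expansion step above says that the moments of \(\tau*\mu\) are universal polynomial expressions in the moments of \(\tau\) and \(\mu\); in one variable this is the binomial identity \(m_{k}(\tau*\mu)=\sum_{j}\binom{k}{j}m_{j}(\tau)m_{k-j}(\mu)\). A sketch with illustrative choices, \(\tau\) the uniform measure on \([0,1]\) and \(\mu\) the standard Gaussian, for which the convolution has the explicit density \(\Phi(s)-\Phi(s-1)\):

```python
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.stats import norm

m_tau = [1 / (k + 1) for k in range(7)]        # moments of Unif[0, 1]
m_mu = [quad(lambda x, k=k: x ** k * norm.pdf(x), -np.inf, np.inf)[0]
        for k in range(7)]                     # moments of N(0, 1)

f = lambda s: norm.cdf(s) - norm.cdf(s - 1)    # density of tau * mu

for k in range(5):
    direct = quad(lambda s: s ** k * f(s), -np.inf, np.inf)[0]
    binom = sum(comb(k, j) * m_tau[j] * m_mu[k - j] for j in range(k + 1))
    print(k, direct, binom)                    # the two columns agree
```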
The preceding proof can be adapted to more general measures \(\tau\), ensuring the convergence of the convolution integral. Letting \(\tau=\phi dx\), where \(\phi\) is a continuous function of compact support, one can produce new indeterminate measures \(\phi*\mu\) which are absolutely continuous with respect to Lebesgue measure. The well known support inclusion
\[\text{supp}(\tau*\mu)\subset\text{supp}\tau\ +\ \text{supp}\mu\]
indicates how to adapt the convolution transform to prescribed supports.
A natural convolution transform of an admissible measure \(\mu\) is its Newtonian potential \(U^{\mu}(x)=\int E(x-y)d\mu(y)\), where \(E(x)=-\log\|x\|\) in dimension \(d=2\) and \(E(x)=\|x\|^{2-d}\) for \(d>2\). The fine properties of the functions \(U^{\mu}(x)\) are well studied, as illustrated by the very recent contribution [36]. Since \(\Delta U^{\mu}=c_{d}\,\mu\) for a dimensional constant \(c_{d}\), one can also regard translates \(E(x-y)\) of the fundamental solution as separating functions for measures. We do not pursue this path here.
### Positive weights
Multiplying an indeterminate measure by well adapted polynomial weights does not transform it into a determinate measure. We present such an observation.
**Proposition 5.5**.: _Let \(X\subset\mathbb{R}^{d}\) be a closed set and let \(\mu\) be an \(X\)-indeterminate measure. If \(w\) is a non-negative polynomial on \(X\), with finitely many zeros on \(X\), then the measure \(w\mu\) is still indeterminate._
Proof.: Let \(\nu\) be an admissible measure supported on \(X\), moment equivalent to \(\mu\), but different than \(\mu\). Therefore there exists a continuous function \(\phi\) on \(X\), of polynomial growth, such that \(\int\phi d\mu\neq\int\phi d\nu\). Let \(q\) be a polynomial which interpolates \(\phi\) on the zero set \(V\) of \(w\). Since \(\int qd\mu=\int qd\nu\) we infer
\[\int(\phi-q)d\mu\neq\int(\phi-q)d\nu.\]
Remark that the indeterminate measures \(\mu,\nu\) are not supported on the finite set \(V\).
Choose for every \(\delta>0\) a continuous function \(\chi_{\delta}\) with the properties \(0\leq\chi_{\delta}\leq 1\), \(\chi_{\delta}(x)=0\) for \(\operatorname{dist}(x,V)<\delta\), and \(\chi_{\delta}(x)=1\) for \(\operatorname{dist}(x,V)>2\delta\). By virtue of Lebesgue's dominated convergence theorem, there exists \(\epsilon>0\) satisfying
\[\int\chi_{\epsilon}(\phi-q)d\mu\neq\int\chi_{\epsilon}(\phi-q)d\nu.\]
For \(R>0\) the Baire-1 function
\[\tilde{w}_{R}(x)=\left\{\begin{array}{c}w(x),\ |x|<R,\\ \max(1,w(x)),\ |x|\geq R,\end{array}\right.\]
has the property \(\frac{w}{\tilde{w}_{R}}(x)\leq 1\), \(|x|\geq R\).
The measure \(\mu\) decays fast at infinity. In particular, for every continuous function \(\xi\) of polynomial growth one finds
\[\lim_{R\to\infty}\int\frac{w}{\tilde{w}_{R}}\xi d\mu=\int\xi d\mu,\]
and similarly for \(\nu\).
Then the Baire-1 function \(\Phi=\frac{\chi_{\epsilon}(\phi-q)}{\tilde{w}_{R}}\) has polynomial growth and for \(R\) large enough \(\int\Phi wd\mu\neq\int\Phi wd\nu.\) On the other hand, the measures \(w\mu\) and \(w\nu\) are moment equivalent.
A typical choice of weight, for a prescribed finite set \(V\), is \(w(x)=\prod_{\lambda\in V}\|x-\lambda\|^{2}\).
### Equivariant setting
Let \(X\subset\mathbb{R}^{d}\) be a semialgebraic set and let \(G\) be a compact group acting on \(X\). The Haar measure on \(G\) is denoted \(dH\).
**Lemma 5.6**.: _If two \(G\)-invariant, admissible measures on \(X\) are moment equivalent, but distinct, then there exists a \(G\)-invariant separating function._
Proof.: Let \(\phi\in C(X)\) be a continuous separating function for the two measures. The invariance of the measures under the action of an element \(g\) of \(G\) yields
\[\int\phi(gx)d\mu(x)=\int\phi(x)d\mu(x)\neq\int\phi(x)d\nu(x)=\int\phi(gx)d\nu( x).\]
Consequently the \(G\)-invariant average \((\phi^{G})(x)=\int\phi(gx)dH(g)\) satisfies
\[\int\phi^{G}d\mu=\int\phi d\mu\neq\int\phi d\nu=\int\phi^{G}d\nu.\]
The above lemma shows that the push-forward measures on the quotient space \(X/G\) are distinct. In case \(X/G\) can be identified with a semi-algebraic set of Euclidean space, and the projection map is induced by polynomials, then the two push-forward measures are moment equivalent, but distinct. This is the case of the action of the rotation group on \(\mathbb{R}^{d}\). A detailed analysis of the latter scenario was carried out by Berg and Thill [7].
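For the rotation group acting on \(\mathbb{R}^{2}\), the averaging operator \(\phi\mapsto\phi^{G}\) is a single quadrature over the angle. A minimal sketch (the test function and sample points are illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

phi = lambda x: x[0] ** 3 + x[0] * x[1] + 2.0   # a non-invariant function

def rotate(theta, x):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

def phi_G(x):
    """Average of phi over G = SO(2) with normalized Haar measure."""
    val, _ = quad(lambda th: phi(rotate(th, x)), 0, 2 * np.pi)
    return val / (2 * np.pi)

x = np.array([1.2, -0.7])
print(phi_G(x), phi_G(rotate(0.9, x)))   # equal: phi^G is G-invariant
```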
### Completely monotonic functions
The integral transforms we invoked in producing separating functions for indeterminate measures are derived from completely monotonic functions of a single variable. We provide below a general framework for producing lower and upper polynomial bounds (in fact, MacLaurin polynomials) of completely monotonic functions. Although the determinateness criteria obtained this way may not be sharp, they open a natural path towards a deeper study of multivariate moment problems.
A smooth function \(\phi:[0,\infty)\longrightarrow\mathbb{R}\) is called _completely monotonic_ if
\[(-1)^{n}\phi^{(n)}(x)\geq 0,\ \ x\geq 0.\]
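A standard example: for any \(s>0\), the function \(\phi(x)=e^{-sx}\) is completely monotonic, since

\[(-1)^{n}\phi^{(n)}(x)=s^{n}e^{-sx}\geq 0,\ \ x\geq 0.\]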
A classical characterization due to Bernstein relates these functions to Laplace transforms of positive measures on the semi-axis, [13]. The MacLaurin polynomial of degree \(n\) of a smooth function \(\phi\) is denoted, in short, by
\[M_{n}(\phi)(x)=\sum_{j=0}^{n}\frac{\phi^{(j)}(0)}{j!}x^{j}.\]
**Lemma 5.7**.: _Let \(\phi:[0,\infty)\longrightarrow\mathbb{R}\) be a completely monotonic function. Then_
\[M_{2n-1}(\phi)(x)\leq\phi(x)\leq M_{2n}(\phi)(x),\ \ n\geq 1,x\geq 0.\]
Proof.: Taylor's formula with a remainder in integral form
\[\phi(x)=M_{n}(\phi)(x)+\int_{0}^{x}\frac{(x-t)^{n}}{n!}\phi^{(n+1)}(t)dt,\ \ n\geq 0,x\geq 0,\]
implies the result.
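As a quick numerical illustration of Lemma 5.7 (not part of the original argument), the following Python snippet checks the sandwich inequality for the completely monotonic function \(\phi(x)=e^{-x}\), for which \(\phi^{(j)}(0)=(-1)^{j}\):

```python
import math

def maclaurin_exp_neg(x, n):
    # M_n(phi)(x) for phi(t) = exp(-t), using phi^(j)(0) = (-1)^j.
    return sum((-1) ** j * x ** j / math.factorial(j) for j in range(n + 1))

for x in (0.5, 1.0, 3.0):
    for n in (1, 2, 3):
        lo, hi = maclaurin_exp_neg(x, 2 * n - 1), maclaurin_exp_neg(x, 2 * n)
        assert lo <= math.exp(-x) <= hi  # M_{2n-1} <= phi <= M_{2n}
```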
Let \(\omega:\mathbb{R}^{d}\longrightarrow[0,\infty)\) be a continuous weight. Passing now to several variables and assuming \(\phi(\omega(x))\) is a separating function for an indeterminate admissible measure \(\mu\) on \(\mathbb{R}^{d}\), we infer
\[\inf_{n}\int[M_{2n}(\phi)(\omega(x))-M_{2n-1}(\phi)(\omega(x))]d\mu(x)>0,\]
or, equivalently,
\[\inf_{n}\left(\frac{\phi^{(2n)}(0)}{(2n)!}\int\omega(x)^{2n}d\mu(x)\right)>0.\]
Similarly, the monotonic approximation of the function \(\cos t\) by its MacLaurin polynomials led Riesz to an effective 1D moment indeterminateness condition, see pg. 47 of [23]. An alternate proof with some variations of the same criterion appears in Freud's monograph [11] section II.5. Both criteria are however slightly weaker than the well-known Carleman condition. Needless to say, in the multivariate setting, Riesz' determinateness criterion imposed on the marginals of a measure assures its joint determinateness.
## 6. Bounded point evaluations
The guiding light of univariate moment indeterminateness is the existence of bounded point evaluations at non-real values. This is well encoded in the non-vanishing of the Christoffel function associated to the moment data. Much less is known in the multivariable setting. Below we derive a few existence results of bounded point evaluations, or rather bounded hyperplane evaluations, from the observations contained in the previous section.
As before, let \(\Gamma\subseteq\mathbb{R}^{d}\) be an acute, convex and solid cone and let \(\Gamma^{*}\) be the dual cone. For a point \(a\in\mathbb{R}^{d}\) we denote
\[H_{a}=\{x\in\mathbb{R}^{d}:a\cdot x+1=0\}.\]
Notice that a polynomial \(R\in\mathbb{R}[x_{1},\ldots,x_{d}]\) vanishes on \(H_{a}\) if and only if \(R\) factors through \(a\cdot x+1\).
**Proposition 6.1**.: _If an admissible measure \(\mu\) supported on \(\Gamma^{*}\) is moment indeterminate on \(\Gamma^{*}\), then there exists \(a\in\operatorname{int}\Gamma\) such that at least one of the
following quantities is positive, i.e.,_
\[\inf\left\{\int_{\Gamma^{*}}r\,d\mu:r|_{H_{a}}=1\text{ and }r|_{\Gamma^{*}}\geq 0 \right\}>0\]
_or_
\[\inf\left\{\int_{\Gamma^{*}}r\,d\mu:r|_{H_{a}}=-1\text{ and }r|_{\Gamma^{*}} \geq 0\right\}>0,\]
_where \(r\) is a polynomial._
Proof.: Notice that \(r|_{H_{a}}=1\) implies that \(r(x)=1-(a\cdot x+1)p(x)\) for some \(p\in\mathbb{R}[x]\) and
\[\int[1-(a\cdot x+1)p(x)]d\mu\geq\int\left[\frac{1}{a\cdot x+1}-p(x)\right]d\mu.\]
Similarly, if \(r(x)=-1+(a\cdot x+1)q(x)\), then
\[\int[-1+(a\cdot x+1)q(x)]d\mu\geq\int\left[\frac{-1}{a\cdot x+1}+q(x)\right]d\mu.\]
The necessity of the positivity of at least one of the two infima now follows immediately from Theorem 2.5.
In the same spirit, allowing now a positive weight against the indeterminate measure, one finds a necessary and sufficient condition.
**Theorem 6.2**.: _Let \(\Gamma\subseteq\mathbb{R}^{d}\) be an acute, convex and solid cone and let \(\Gamma^{*}\) be the dual cone. Let \(\mu\) be an admissible measure supported on \(\Gamma^{*}\). The measure \(\mu\) is moment indeterminate with respect to the support \(\Gamma^{*}\) if and only if there exists \(a\in\operatorname{int}\Gamma\) with the property that at least one of the conditions_
\[\inf\{\int_{\Gamma^{*}}\frac{r(x)d\mu(x)}{1+\|x\|}:\ \ r|_{H_{a}}=\pm 1,\ r|_{ \Gamma^{*}}\geq 0,\ r\in\mathbb{R}[x]\}>0\]
_is satisfied._
Proof.: If \(a\) is an interior point of the convex cone \(\Gamma\), then the angle between \(a\) and any element of the dual cone is bounded from above by a constant less than \(\pi/2\). Consequently, there exists \(\gamma>0\) with the property:
\[a\cdot x\geq\gamma\|x\|,\ \ x\in\Gamma^{*}.\]
Then there are positive constants \(C_{1},C_{2}\) so that
\[\frac{C_{1}}{1+\|x\|}\geq\frac{1}{a\cdot x+1}\geq\frac{C_{2}}{1+\|x\|},\ \ x\in\Gamma^{*}.\]
In particular, any polynomial \(r\) which is non-negative on \(\Gamma^{*}\) satisfies
\[C_{1}\frac{r(x)}{1+\|x\|}\geq\frac{r(x)}{a\cdot x+1}\geq C_{2}\frac{r(x)}{1+\| x\|},\ \ x\in\Gamma^{*}.\]
Theorem 2.5 then completes the proof.
A natural base change of rational curves provides a generalization of the moment determinateness in terms of the existence of bounded point evaluations outside the real locus. Specifically, we focus on a real, affine, algebraic curve \(X\subset\mathbb{R}^{d}\) which admits a proper parametrization
\[u:\mathbb{R}\longrightarrow X,\]
where \(u\) is a map consisting of real polynomials, such that \(u(\mathbb{R})\) omits at most finitely many points of \(X\) and which is one-to-one except at finitely many points of \(\mathbb{R}\). We call such an object a _real, polynomial curve_. We refer the reader to Chapter 7 of [32], which is fully dedicated to real parametrizations of real, rational curves. The algebra of regular functions on \(X\) is denoted \(\mathbb{R}[X]=\mathbb{R}[x_{1},x_{2},\ldots,x_{d}]/I_{X}\), where \(I_{X}\) is the reduced ideal associated to the variety \(X\). The complexification of \(X\) is denoted \(X_{\mathbb{C}}\), as defined by the ideal \(I_{X}\otimes_{\mathbb{R}}\mathbb{C}\subset\mathbb{C}[x]\). The polynomial parametrization \(u\) extends to \(U:\mathbb{C}\longrightarrow X_{\mathbb{C}}\), which remains a proper map with finite fibres. Moreover, the curves \(\mathbb{C}\) and \(X_{\mathbb{C}}\) are birationally equivalent, that is the pull-back \(U^{*}:\mathbb{C}(X_{\mathbb{C}})\longrightarrow\mathbb{C}(t)\) is an isomorphism of algebras of rational functions.
We recall the definition of a basic concept in approximation theory. An admissible measure \(\nu\) defined on an affine, complex algebraic variety \(X_{\mathbb{C}}\) admits _analytic bounded point evaluations_ if there exists a non-trivial open subset \(V\subset X_{\mathbb{C}}\) and a positive constant \(C\), with the property
\[|p(\lambda)|\leq C\|p\|_{2,\nu},\ \ \lambda\in V,\ p\in\mathbb{C}[X_{\mathbb{C}}].\]
One can replace the point evaluations by a Bergman or Hardy space norm. For instance, a prescribed point \(\lambda\in V\) and a radius \(r>0\) such that the disk \(D=D(\lambda,r)\) is fully contained with its closure in \(V\) give rise to an estimate of the form:
\[\|p\|_{2,D}\leq C^{\prime}\|p\|_{2,\nu},\]
where \(C^{\prime}\) is a positive constant. And vice-versa, such a Bergman space inequality assures the existence of analytic bounded point evaluations inside the disk \(D\).
The above estimates refer to polynomial functions, but they can be replaced for instance by rational function or analytic functions subject to growth conditions. A landmark result of J. Thomson establishes the existence of analytic bounded point evaluations for compactly supported measures on \(\mathbb{C}\) in terms of the non-density of complex polynomials in Lebesgue space [34]. In our context, M. Riesz' theorem (see Section 22 in [23]) asserts that an admissible measure supported by the real line is indeterminate if and only if it admits analytic bounded point evaluations in the complex plane. We aim at extending this phenomenon to real polynomial curves.
**Theorem 6.3**.: _Let \(X\subset\mathbb{R}^{d}\) be a real, polynomial curve and let \(\mu\) be an admissible, positive measure supported on \(X\). The measure \(\mu\) is \(X\)-indeterminate if and only if there exist bounded analytic point evaluations for \(\mu\), supported on the complexification \(X_{\mathbb{C}}\)._
Proof.: Let \(u:\mathbb{R}\longrightarrow X\) denote a proper, polynomial parametrization of the affine curve \(X\). Let \(F\subset X_{\mathbb{C}}\) be a finite set with the property that the restricted maps
\[u:\mathbb{R}\setminus u^{-1}(F)\longrightarrow X\setminus F\]
and
\[U:\mathbb{C}\setminus U^{-1}(F)\longrightarrow X_{\mathbb{C}}\setminus F\]
are bijective. The minimal set \(F\) with this property will be called by abuse of terminology _the ramification locus_ of the curve \(X\). Note that, in general \(F\) depends on the parametrization.
Denote \(G=U^{-1}F\subset\mathbb{C}\). Any polynomial \(h\in\mathbb{C}[t]\) vanishing on the finite set \(G\) is of the form \(h=U^{*}q=q\circ U,\) where \(q\in\mathbb{C}[X_{\mathbb{C}}]\) and \(q|_{F}=0\). Indeed, there exists a rational function \(r\in\mathbb{C}(X_{\mathbb{C}})\) with the property \(h=U^{*}r\). The local structure of the finite map \(U\) implies that \(r\) is a regular function at all points of \(X_{\mathbb{C}}\). Lemma 2.1 in [12] implies \(r\in\mathbb{C}[X_{\mathbb{C}}]\). If the polynomial \(h\) is real, and vanishes on the finite set \(G\), then one can choose a real element \(q\in\mathbb{R}[X]\) satisfying \(h=U^{*}q=u^{*}q\). Denote by \(w\in\mathbb{R}[t]\) the polynomial with simple zeros at \(G\). Accordingly, \(w=U^{*}\omega\), where \(\omega\in\mathbb{R}[X]\). We note in passing that the class of smooth, complete intersection curves allowing a polynomial parametrization is distinguished by an empty ramification locus, [6].
Assume two admissible measures \(\mu_{1},\mu_{2}\) supported by \(X\) are moment equivalent, but distinct. According to Proposition 5.5 the measures \(\omega^{2}\mu_{1}\) and \(\omega^{2}\mu_{2}\) are moment equivalent and distinct, and similarly for \(\omega^{4}\mu_{j},\ j=1,2.\)
Since the restricted map \(u|_{\mathbb{R}\setminus u^{-1}F}\) is bijective and the measures \(\omega^{2}\mu_{1},\omega^{2}\mu_{2}\) do not possess atoms on \(F\), one can define by push-forward positive measures \(\sigma_{1},\sigma_{2}\) on \(\mathbb{R}\setminus u^{-1}F\) with the properties
\[\int h(u(t))\sigma_{j}(dt)=\int_{X\setminus F}h\omega^{2}d\mu_{j},\ \ h\in \mathbb{R}[X],\ j=1,2.\]
One step further, one considers the measures \(w^{2}\sigma_{j}\) and their push-forwards under \(u\), namely the measures \(\omega^{4}\mu_{j},\ j=1,2.\) Since every element \(h\in\mathbb{R}[t]\) "descends" to \(X\) after multiplication by \(w^{2}\): \(hw^{2}=u^{*}q\), with \(q\in\mathbb{R}[X]\), we infer that the measures \(\sigma_{1},\sigma_{2}\) are moment equivalent. Indeed, for every \(h\in\mathbb{R}[t]\) one finds:
\[\int h(t)w^{2}(t)\sigma_{1}(dt)=\int q(u(t))\sigma_{1}(dt)=\int q\omega^{2}d \mu_{1}=\]
\[\int q\omega^{2}d\mu_{2}=\int q(u(t))\sigma_{2}(dt)=\int h(t)w^{2}(t)\sigma_{2}(dt).\]
Assume by contradiction that \(w^{2}\sigma_{1}=w^{2}\sigma_{2}\) as Radon measures on the line. Then \(\sigma_{1}-\sigma_{2}\) is a sum of point masses concentrated on the set \(G\cap\mathbb{R}\). By construction, \(\sigma_{1},\sigma_{2}\) do not have point masses on \(G\cap\mathbb{R}\), hence \(\sigma_{1}=\sigma_{2}\). Therefore \(\omega^{2}\mu_{1}=\omega^{2}\mu_{2}\), a contradiction.
In view of M. Riesz' theorem there exist analytic bounded point evaluations on (relatively compact subsets of) \(\mathbb{C}\setminus\mathbb{R}\), for \(w^{2}\sigma_{1},w^{2}\sigma_{2}\). As a matter of fact every point \(\alpha\in\mathbb{C}\setminus\mathbb{R}\) satisfies
\[|p(\alpha)|^{2}\leq C\int p^{2}w^{2}d\sigma_{j},\ \ j=1,2, \tag{6.1}\]
where \(C=C(\alpha)>0\) is locally bounded and \(p\in\mathbb{R}[t].\) On the base \(X\) of the map \(u\) we infer:
\[\left|f(\beta)\right|^{2}\leq C(\beta)\int f^{2}\omega^{4}d\mu_{1}=C(\beta) \int f^{2}\omega^{4}d\mu_{2},\ f\in\mathbb{R}[X],\]
where \(\beta=U(\alpha)\in X_{\mathbb{C}}\setminus X\). The constant \(C(\beta)\) is still locally bounded as a function of \(\beta\). Choose a point \(\lambda\in X_{\mathbb{C}}\setminus X\) and a sufficiently small radius \(r\), so that the function \(|\omega|\) does not vanish on the closure of the disk \(D=D(\lambda,r)\). Consequently there are constants \(M,M^{\prime}>0\) with the property:
\[\|f\|_{2,D}\leq M^{\prime}\left\|\frac{f}{|\omega|^{2}}\right\|_{2,D}\leq M\left\|\frac{f}{|\omega|^{2}}\right\|_{2,\omega^{4}\mu}=M\|f\|_{2,\mu},\ \ f\in\mathbb{R}[X].\]
To prove the other implication we assume that the admissible measure \(\mu\) defined on \(X\) admits analytic bounded point evaluations. If the measure \(\mu\) does not possess point masses on the ramification locus \(F\), then one can define without ambiguity an admissible measure \(\sigma\) on \(\mathbb{R}\), with the property \(\mu=u_{*}\sigma\). Even if the measure \(\mu\) has atoms on \(F\) one can choose, with some degree of freedom, such a lift \(\sigma\) by selecting point masses on \(G=u^{-1}F\) which satisfy
\[\int_{\mathbb{R}}f\circ ud\sigma=\int_{X}fd\mu,\ \ f\in\mathbb{R}[X].\]
The assumption on the existence of analytic bounded point evaluations imposed on the measure \(\mu\) implies that there exist \(\lambda\in\mathbb{C}\setminus\mathbb{R}\) and \(r>0\) carrying the estimate
\[\|h\|_{2,D}\leq M\|h\|_{2,\sigma},\ \ h\in V, \tag{6.2}\]
where \(V=I_{G}\subset\mathbb{C}[t]\) is a finite codimensional subspace.
Next we prove that a bound of the form (6.2) is true for all polynomials \(h\in\mathbb{C}[t]\). To this aim, let \(f_{1},\ldots,f_{n}\in\mathbb{C}[t]\) be a linearly independent system which spans \(\mathbb{C}[t]/V\). Denote by \(P^{2}(\mu)\) the closure of polynomials in \(L^{2}(\mu)\) and respectively by \(L^{2}_{a}(D)\) the closure of polynomials in \(L^{2}(D,\mathrm{dArea})\). The entire algebra \(\mathbb{C}[t]\), and a fortiori \(V\), is a subspace of both \(P^{2}(\mu)\) and \(L^{2}_{a}(D)\). Denote by \(H\) the closure of \(V\) in \(P^{2}(\mu)\) and by \(K\) the closure of \(V\) in the Bergman space \(L^{2}_{a}(D)\). The restriction map
\[R:H\longrightarrow K,\ \ R(h)=h|_{D},\ \ h\in V,\]
is linear and continuous by (6.2). Denote by \(P_{H}\) the orthogonal projection of \(P^{2}(\mu)\) onto \(H\). The elements \(f_{j}-P_{H}(f_{j}),\ 1\leq j\leq n\), span the finite dimensional orthogonal complement \(H_{1}=P^{2}(\mu)\ominus H\).
In particular every polynomial \(h\in\mathbb{C}[t]\) can be decomposed as
\[h=h_{1}+h_{2},\ \ h_{1}\in H_{1},h_{2}\in H.\]
The restriction of polynomials to the disk \(D\) extends by continuity to the operation
\[(f_{j}-P_{H}(f_{j}))|_{D}:=(f_{j})|_{D}-RP_{H}(f_{j}),\ \ 1\leq j\leq n,\]
hence one can define
\[h|_{D}=h_{1}|_{D}+R(h_{2})\]
to the effect
\[\|h\|_{2,\mu}^{2}=\|h_{1}\|_{2,\mu}^{2}+\|h_{2}\|_{2,\mu}^{2}.\]
On the other hand, \(H_{1}\) is a finite dimensional space, endowed with a continuous restriction map to the Bergman space \(L_{a}^{2}(D)\). Hence there exists a constant \(M_{1}\) with the property:
\[\|h_{1}\|_{2,D}\leq M_{1}\|h_{1}\|_{2,\mu},\ h_{1}\in H_{1}.\]
In conclusion, the estimate (6.2) holds for all polynomials \(h\in\mathbb{C}[t]\). According to M. Riesz' theorem, the measure \(\sigma\) is indeterminate.
Let \(\tilde{\sigma}\) be an admissible measure on the real line, moment equivalent to \(\sigma\), but different from \(\sigma\). Clearly the measure \(\tilde{\mu}=u_{*}\tilde{\sigma}\) is moment equivalent to \(\mu\). If \(\mu=\tilde{\mu}\), then the measures \(\sigma,\tilde{\sigma}\) coincide on the space of continuous functions on \(\mathbb{R}\), vanishing on the finite fibre \(G\). The proof of Proposition 5.5 implies \(\sigma=\tilde{\sigma}\), a contradiction.
One may naturally ask what happens on rational curves. The first observation is that push forward via rational parametrizations may alter indeterminateness. A simple example is the embedding
\[F(t)=\left(t,\frac{1}{1+t^{2}}\right),\ t\in\mathbb{R}.\]
Any admissible measure in \(\mathbb{R}^{2}\) supported by the image curve \(\Gamma\) of equation \(y(1+x^{2})=1\) is determinate, due to the fact that bounded polynomials on \(\Gamma\) are separating points.
The class of real algebraic curves on which a positive polynomial function is a sum of squares was thoroughly studied by Plaumann [22] and Scheiderer [27], [28], [29] and [30]. Similarly to the real line, on such curves, the mere existence of bounded point evaluations has moment determinateness implications.
**Theorem 6.4**.: _Let \(X\subset\mathbb{R}^{d}\) be a real algebraic curve on which all positive polynomials are sums of squares. Let \(\mu\) be an admissible measure supported by \(X\) which admits analytic bounded point evaluations on \(X_{\mathbb{C}}\). Then the measure \(\mu\) is indeterminate._
Proof.: Assume first that \(\beta\in X_{\mathbb{C}}\setminus X\) is a \(P^{2}(\mu)\)-bounded point evaluation. That is:
\[|p(\beta)|^{2}\leq C\|p\|_{2,\mu}^{2},\ \ p\in\mathbb{R}[X].\]
Note that
\[\inf_{x\in X}\|x-\beta\|\geq\delta>0.\]
We prove that the function \(\frac{1}{\|x-\beta\|^{2}}\) is separating for the measure \(\|x-\beta\|^{2}d\mu(x)\).
Indeed, let \(q\in\mathbb{R}[X]\) satisfy
\[\frac{1}{\|x-\beta\|^{2}}-q(x)>0,\ \ x\in X.\]
By assumption, the positive polynomial \(1-\|x-\beta\|^{2}q(x)\) is a finite sum of squares of elements \(p_{k}\in\mathbb{R}[X]\):
\[1-\|x-\beta\|^{2}q(x)=\sum_{k=1}^{m}p_{k}(x)^{2}.\]
Note that \(\sum_{k=1}^{m}|p_{k}(\beta)|^{2}=1\). Consequently,
\[\sum_{k=1}^{m}\int|p_{k}(x)|^{2}d\mu(x)\geq\frac{1}{C}\sum_{k=1}^{m}|p_{k}( \beta)|^{2}=\frac{1}{C}>0.\]
Therefore
\[\inf_{q(x)<\frac{1}{\|x-\beta\|^{2}}}\int[\frac{1}{\|x-\beta\|^{2}}-q(x)]\|x- \beta\|^{2}d\mu(x)>0,\]
that is the continuous function of polynomial growth \(\frac{1}{\|x-\beta\|^{2}}\) is separating for the measure \(\|x-\beta\|^{2}d\mu(x)\).
The assumption in the statement was stronger. In particular, we can assume that a full neighborhood of \(\beta\) consists of \(P^{2}(\mu)\) bounded point evaluations with respect to the same constant. For instance
\[|p(\alpha)|^{2}\,\|\alpha-\beta\|^{4}=\left|p(\alpha)\,\|\alpha-\beta\|^{2}\right|^{2}\leq C\int|p(x)|^{2}\,\|x-\beta\|^{4}\,\frac{d\mu(x)}{\|x-\beta\|^{4}},\ \ p\in\mathbb{C}[X].\]
By choosing \(\alpha\) in an open ball \(B=B(\lambda,r)\) with \(\|\beta-\alpha\|>r\) we infer
\[\int_{B}|p(z)|^{2}\,d\mathrm{vol}\leq M\int_{X}|p(x)|^{2}\,\|x-\beta\|^{4}\,\frac{d\mu(x)}{\|x-\beta\|^{4}},\ \ p\in\mathbb{C}[z].\]
That is, the measure \(\frac{\mu}{\|x-\beta\|^{4}}\) carries analytic bounded point evaluations on a finite codimensional subspace of \(P^{2}(\mu)\) (and of Bergman's space \(L^{2}_{a}(B)\)). Indeed, \(X_{\mathbb{C}}\) is an algebraic curve and the ideal generated by the function \(\|z-\beta\|^{2}\) in \(\mathbb{C}[X_{\mathbb{C}}]\) has finite codimension. The proof of the preceding theorem implies that \(\frac{\mu}{\|x-\beta\|^{4}}\) admits analytic bounded point evaluations.
Then the first part of the proof takes over, implying that the measure \(\tilde{\mu}=\|x-\beta\|^{2}\frac{\mu}{\|x-\beta\|^{4}}\) is indeterminate. Finally, Proposition 5.5 completes the proof.
## 7. Examples
### Finite maps of real algebraic varieties
It is well known that there exists an admissible measure \(\sigma\) supported on \([0,\infty)\) which is \(\mathbb{R}\)-indeterminate, but \([0,\infty)\)-determinate. The stark difference between Stieltjes, respectively Hamburger, determinateness is well analyzed in Berg and Thill [7].
We put this pathology on algebraic curves, showing that even finite morphisms of curves are not expected to preserve moment indeterminateness. More precisely, let \(\Gamma\) be the parabola of equation \(x=y^{2}\) in \(\mathbb{R}^{2}\) and define the measure \(\mu\), supported on \(\Gamma\), as follows:
\[\int f(x,y)d\mu=\int_{\mathbb{R}}f(t^{2},t)d\sigma(t),\ f\in\mathbb{R}[x,y].\]
The projection onto the \(y\)-axis is an isomorphism of smooth curves, therefore the measure \(\mu\) is \(\Gamma\)-indeterminate. On the other hand, consider the projection \(\pi\) of \(\mathbb{R}^{2}\) onto the \(x\)-axis. The image \(\pi\Gamma=[0,\infty)\) of the curve \(\Gamma\) is not a smooth variety, but only, as expected over the real field, simply a semi-algebraic set. The measure \(\pi_{*}\mu\) acts as follows:
\[\int hd\pi_{*}\mu=\int h(t^{2})d\sigma(t),\ \ h\in\mathbb{R}[t].\]
But this measure is moment-determined on the line.
It is worth mentioning that the above exceptional measure \(\mu\) has a point mass on the ramification locus of the projection map \(\pi\).
### Real Polynomial Curves
The affine curves allowing a polynomial parametrization referred to in our study were known forever, with a changing perspective over centuries: from descriptive geometric or mechanical features, to illustrations of progress in algebraic geometry, or more recently as computational/algorithmic objects of interest. We merely give below a glimpse into the subject, with a few bibliographical indications.
The simplest class of real polynomial curves is described by graphs of polynomial maps \(p:\mathbb{R}\longrightarrow\mathbb{R}^{d-1}\), as for instance the _parabola_
\[y=p(x),\ \ p(x)=ax^{2}+b,\ \ a\neq 0.\]
In the above example and throughout this subsection, \((x,y)\) stand for the coordinates in affine space \(\mathbb{R}^{2}\), or if necessary to pass to complex coordinates, even in \(\mathbb{C}^{2}\).
A celebrated theorem of Abhyankar and Moh [1] asserts that a smooth, rational curve \(X\subset\mathbb{C}^{2}\) admits a polynomial parametrization if and only if it can be transformed into a straight line by invertible linear transforms and automorphisms of the form
\[x_{1}=x+h(y),\ \ y_{1}=y,\ \ h\in\mathbb{C}[y].\]
One step further, Zaidenberg and Lin [37] proved that any simply connected, irreducible polynomial curve in \(\mathbb{C}^{2}\) is equivalent, in the above sense, to a _basic cusp_ curve:
\[x^{k}=y^{\ell},\]
where \(k,\ell\) are relatively prime positive integers. The simple, real polynomial parametrization of this curve is \(x=t^{\ell},y=t^{k}\). As a consequence, it is interesting to note that all 2D simply connected polynomial curves have at most one singular point. In this simple situation, the ramification locus of the curve reduces to a single point \(F=\{(0,0)\}\).
The cited landmark article by Abhyankar and Moh [1] contains the following characterization: a rational curve \(X\subset\mathbb{C}^{2}\) admits a polynomial parametrization if and only if its compactification in projective space contains a single place at infinity. That means that the polynomial equation describing the curve
\[X=\{(x,y)\in\mathbb{C}^{2};\ F(x,y)=0\}\]
starts with an exact power of a linear function, plus a remainder:
\[F(x,y)=(ax+by)^{d}+G(x,y),\ \ |a|+|b|>0,\ \ \deg G<d.\]
A few low degree examples are in order. First, a _cubic with a nodal singular point_ admits a polynomial parametrization:
\[y^{2}=x^{2}(x+1);\ \ x=t^{2}-1,\ y=t^{3}-t.\]
Again, the ramification locus is \(\{(0,0)\}\), with the associated weight \(\omega(t)=t\). Among 2D quartics, the _Kampyle of Eudoxus_ is a polynomial curve:
\[x^{4}=a^{2}(x^{2}+y^{2}),\ \ a>0,\]
or better, in polar coordinates
\[\rho=\frac{a}{\cos^{2}\theta},\]
can be rationally parametrized as a function of \(t=\tan\frac{\theta}{2}\). Since the place at infinity is reduced to a point, we deduce from the Abhyankar-Moh Theorem that a proper, polynomial parametrization exists.
Another quartic in two dimensions exhibits a _Ramphoid cusp_ (that is both branches at the singular point are tangent to the same semi-axis):
\[y^{4}-2axy^{2}-4ax^{2}y-ax^{3}+a^{2}x^{2}=0,\ \ a>0,\]
with parametrization
\[x=at^{4},\ y=a(t^{2}+t^{3}).\]
Finally, _l'Hospital quintic_ provides another example:
\[64y^{5}=a(25x^{2}+20y^{2}-20ay+4a^{2})^{2},\ \ a>0,\]
with parametrization
\[x=\frac{a}{2}(t-\frac{t^{5}}{5}),\ \ y=\frac{a}{4}(1+t^{2})^{2}.\]
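The parametrizations above can be verified symbolically. The following short script (a convenience check using the sympy library, not part of the original text) confirms the nodal cubic, the Ramphoid-cusp quartic, and l'Hospital's quintic:

```python
import sympy as sp

t, a = sp.symbols('t a', positive=True)

# Nodal cubic y^2 = x^2 (x + 1) with x = t^2 - 1, y = t^3 - t.
x, y = t**2 - 1, t**3 - t
assert sp.expand(y**2 - x**2 * (x + 1)) == 0

# Ramphoid-cusp quartic with x = a t^4, y = a (t^2 + t^3).
x, y = a * t**4, a * (t**2 + t**3)
assert sp.expand(y**4 - 2*a*x*y**2 - 4*a*x**2*y - a*x**3 + a**2*x**2) == 0

# l'Hospital quintic with x = (a/2)(t - t^5/5), y = (a/4)(1 + t^2)^2.
x, y = a / 2 * (t - t**5 / 5), a / 4 * (1 + t**2)**2
assert sp.expand(64 * y**5 - a * (25*x**2 + 20*y**2 - 20*a*y + 4*a**2)**2) == 0
```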
The botany of algebraic curves is quite substantial, hiding small and big wonders. For more details we refer to the award-winning website
**[https://mathcurve.com](https://mathcurve.com)** built and maintained by Robert Ferreol.
The qualitative studies pertaining to (real) polynomial curves are also quite numerous, motivated by applications. The basic reference is the algorithmically oriented book [32] which treats in detail real rational curves and polynomial curves. Re-parametrizations of rational curves with one place at infinity are treated in [15]. A constructive approach to transforming smooth, polynomial curves into lines is taken in [14], while [6] constructs a polynomial parametrization of the same class of curves by integrating an appropriate vector field.
|
2303.13497 | TriPlaneNet: An Encoder for EG3D Inversion | Recent progress in NeRF-based GANs has introduced a number of approaches for
high-resolution and high-fidelity generative modeling of human heads with a
possibility for novel view rendering. At the same time, one must solve an
inverse problem to be able to re-render or modify an existing image or video.
Despite the success of universal optimization-based methods for 2D GAN
inversion, those applied to 3D GANs may fail to extrapolate the result onto the
novel view, whereas optimization-based 3D GAN inversion methods are
time-consuming and can require at least several minutes per image. Fast
encoder-based techniques, such as those developed for StyleGAN, may also be
less appealing due to the lack of identity preservation. Our work introduces a
fast technique that bridges the gap between the two approaches by directly
utilizing the tri-plane representation presented for the EG3D generative model.
In particular, we build upon a feed-forward convolutional encoder for the
latent code and extend it with a fully-convolutional predictor of tri-plane
numerical offsets. The renderings are similar in quality to the ones produced
by optimization-based techniques and outperform the ones by encoder-based
methods. As we empirically prove, this is a consequence of directly operating
in the tri-plane space, not in the GAN parameter space, while making use of an
encoder-based trainable approach. Finally, we demonstrate significantly more
correct embedding of a face image in 3D than for all the baselines, further
strengthened by a probably symmetric prior enabled during training. | Ananta R. Bhattarai, Matthias Nießner, Artem Sevastopolsky | 2023-03-23T17:56:20Z | http://arxiv.org/abs/2303.13497v2 | # TriPlaneNet: An Encoder for EG3D Inversion
###### Abstract
Recent progress in NeRF-based GANs has introduced a number of approaches for high-resolution and high-fidelity generative modeling of human heads with a possibility for novel view rendering. At the same time, one must solve an inverse problem to be able to re-render or modify an existing image or video. Despite the success of universal optimization-based methods for 2D GAN inversion, those applied to 3D GANs may fail to produce 3D-consistent renderings. Fast encoder-based techniques, such as those developed for StyleGAN, may also be less appealing due to the lack of identity preservation. In our work, we introduce a real-time method that bridges the gap between the two approaches by directly utilizing the tri-plane representation introduced for the EG3D generative model. In particular, we build upon a feed-forward convolutional encoder for the latent code and extend it with a fully-convolutional predictor of tri-plane numerical offsets. As shown in our work, the renderings are similar in quality to optimization-based techniques and significantly outperform the baselines for novel view rendering. As we empirically prove, this is a consequence of directly operating in the tri-plane space, not in the GAN parameter space, while making use of an encoder-based trainable approach.
## 1 Introduction
Generative adversarial networks (GANs) have gained popularity due to their ability to generate images of high resolution and variation. In recent years, numerous works [11, 10] have tackled the problem of multi-view consistent image synthesis with 3D-aware GANs. Nowadays, these typically include a neural rendering engine and volumetrically integrate internal 3D representations. Such methods make generators aware of a 3D structure by modeling it with explicit voxel grids [20, 51, 38] or neural implicit representations [11, 41]. Most notably, EG3D [10] introduced a 3D GAN framework based on a tri-plane 3D representation that is both efficient and expressive to enable high-resolution 3D-aware image synthesis. Moreover, they demonstrate state-of-the-art results for unconditional geometry-aware image synthesis.
Main applications of 3D GANs include human face inversion with head tracking, reenactment, facial manipulation, and novel view synthesis of a given image or video. Oftentimes, the classical GAN formulation does not support trivial inversion, i.e. finding the appropriate code in the learned GAN space for a given sample. A straightforward way to achieve this is by obtaining the latent code of the input image via optimization-based or encoder-based approaches, i.e. applying 2D GAN inversion techniques. An existing branch of research studies 2D GAN inversion in high detail [1, 45, 4, 2, 62, 52], but nevertheless, the problem remains underexplored in 3D.
Optimization-based inversion methods are often superior to encoder-based approaches in terms of reconstruction quality. However, encoder-based techniques are orders of
Figure 1: For a given picture, our method predicts the appropriate latent code and the tri-plane offsets for EG3D generator in a feed-forward manner. This way, both the frontal view and the novel view rendering can be obtained with high fidelity and in real-time.
magnitude faster as they map a given image to the latent space of the GAN in a single forward pass. Compared to 2D GAN inversion, 3D GAN inversion is a more challenging task as the inversion needs to both preserve the identity of an input image and satisfy the 3D consistency constraint in generated novel views. In particular, optimization-based GAN inversion methods that have no knowledge of the specific GAN architecture can yield a high-quality rendering of the desired image from the same camera view, but the lack of any geometry information in the image may produce broken or flattened geometry when rendered from a novel camera. We improve these shortcomings in two separate ways. First, by predicting an input latent code for the EG3D generator with a convolutional encoder, we observe that the geometry is preserved better. This can be attributed to the fact that the encoder, trained for the inversion task, is exposed to thousands of multi-view images and this way learns to be more 3D-aware. Second, we utilize the knowledge about the model and improve the fidelity and consistency by predicting offsets to the tri-planes that constitute the 3D representation in EG3D. Unlike voxel grids or implicit representations, tri-planes can be naturally estimated by 2D convnets and, as demonstrated by our experiments, can realistically express object features beyond the capabilities of an input latent code, e.g. hands and long hair, without breaking 3D consistency (see Fig. 1). This advantage is achieved by recovering the object representation directly in the world space. Since the tri-plane offsets are fully predicted by convolutional layers, our inversion can run in real-time on modern GPUs.
We propose the EG3D-specific inversion scheme in two stages. In the first stage, the initial inversion is obtained using the pSp encoder that directly embeds the input image into the \(\mathcal{W}+\) space of EG3D. In the second stage, we introduce an encoder, TriPlaneNet, that learns to refine the reconstruction given the original image and initial inversion. Our second stage encoder takes the initial inversion, and the difference between initial inversion and the original image as the inputs and predicts a numerical offset to the initial tri-plane representation. Some recent works [5, 15, 46] for 2D inversion have attempted to improve the reconstruction quality in two stages. First, they obtain the latent code for the input image. Then, they fine-tune generator weights with respect to the initial inversion. However, we demonstrate that TriPlaneNet preserves identity and ensures 3D consistency in novel views better compared to other approaches.
To summarize, our contributions are the following:
* We propose a novel, real-time inversion framework for EG3D that enables high-quality reconstruction while maintaining multi-view consistency by directly utilizing the tri-plane representation.
* We demonstrate that our method achieves on par reconstruction quality compared to optimization-based inversion methods in the original view, while strongly outperforming them for novel view rendering. Our method is also more resilient towards harder cases such as when a hat or accessories are featured.
## 2 Related Work
**3D Generative Models for Human Faces.** Representing and generating diverse 3D human faces and heads attracted increasing attention over the last decade [37, 12, 23], while the appearance of NeRF [35] has sparked additional
Figure 2: Overview of the proposed method. TriPlaneNet consists of two branches. The first branch (above) comprises the predictor \(\hat{w}=\mathbf{\phi}(x)\) of the pivotal latent code \(\hat{w}\in\mathcal{W}+\), which results in an RGB image \(\hat{y}=\mathcal{R}(\mathbf{G}(\hat{w}))\) after passing it through the EG3D tri-plane generator \(\mathbf{G}(\cdot)\) and renderer block \(\mathcal{R}(\cdot)\) containing super-resolution module. The second branch (below) uses the first-stage approximation \(\hat{y}\) and its difference with the target \((x-\hat{y})\) to predict the numerical offsets to the tri-planes \(\Delta\mathbf{T}\) by a convolutional autoencoder \(\Delta\mathbf{T}=\mathbf{\psi}(\hat{y},x-\hat{y})\), which yields the final prediction \(y=\mathcal{R}(\mathbf{G}(\hat{w})+\Delta\mathbf{T})\).
interest in that topic. The first generative models built upon NeRF-style volumetric integration [48, 39] achieved generalization by conditioning the multi-layer perceptron on latent codes, representing the object's shape and appearance. The more recent \(\pi\)-GAN [11] and StyleNeRF [17] condition the generative network on the output of a StyleGAN-like generator [26], which led to higher-quality rendering of faces and arbitrary objects with subtle details. As a next major improvement step, authors of EG3D [10] propose a tri-plane 3D representation that serves as a bridge between expressive implicit representations and spatially-restricting explicit representations. As a byproduct, methods such as EG3D and StyleSDF [41] allow for the extraction of explicit, highly detailed geometry of the human faces, while being trained without any volumetric supervision. Some modifications of NeRFs [16, 42, 6] and NeRF-based generative models [8, 21] allow for explicit expression and appearance control of the rendered faces. Further, recently demonstrated abilities of diffusion models to generate highly accurate 2D images are currently being transferred onto 3D objects [59, 36] and 3D human heads [54].
**GAN Inversion.** Unlike other kinds of generative models, such as VAE or normalizing flows, inverting a GAN (finding the appropriate latent code for a given image) is oftentimes a more tricky and computationally-demanding task. Early attempts focused on the tuning of the latent code with the optimization-based approaches [13, 30, 26]. Various approaches exploited the idea of predicting latent representation by an encoder [18, 34, 44, 63, 43]. In [46], a universal PTI method is introduced which comprises the optimization of a latent code and, consequently, fine-tuning parameters of the generator. A recent survey on GAN inversion [56] compares multiple generic techniques introduced since the appearance of GANs.
**Inversion of 2D GANs.** For StyleGAN, an important observation was made by the authors of [1] that operating in the extended \(\mathcal{W}+\) space is significantly more expressive than in the restrictive \(\mathcal{W}\) generator input space. The latter idea has been strengthened and better adapted for face editing with the appearance of pSp [45] and e4e [53], as well as of their cascaded variant ReStyle [4] and other works [2, 62, 52]. Similarly to PTI but in an encoder-based setting, HyperStyle [5] and HyperInverter [15] predict offsets to the StyleGAN generator weights in a lightweight manner in order to represent the target picture in a broader space of parameters.
**Inversion of 3D GANs.** Unlike the 2D case, the inversion of a 3D GAN is a significantly more advanced problem due to the arising ambiguity: the latent code must be both compliant with the target image and correspond to its plausible 3D representation. While PTI remains a universal method that solves this problem for an arbitrary generator, recent work demonstrates that the quality rapidly declines when the PTI inversion result is rendered from a novel view. The suggested ways of resolving this fidelity-consistency tradeoff for an arbitrary 3D GAN include incorporating multi-view consistency regularizers [29], augmenting training with surrogate mirrored images [57], or using depth information when available [28]. Our work employs a convolutional encoder and utilizes the EG3D tri-plane structure to improve the 3D consistency of the inverted rendering without additional multi-view constraints required during training.
**Applications of GAN encoders.** While being developed for inversion, convolutional encoders for GANs can compress a lot of information about the domain distribution and be used as a proxy. For instance, some of the recent works employ them for pre-training for semantic segmentation [7], face recognition [49], and as a generic prior [58, 40].
## 3 Method
### Preliminaries
**GAN inversion.** Given a target image \(x\), the goal of GAN inversion is to find a latent code that minimizes the reconstruction loss between the synthesized image and the target image:
\[\hat{w}=\operatorname*{argmin}_{w}\mathcal{L}(x,G(w;\theta)) \tag{1}\]
where \(G(w;\theta)\) is the image generated by pre-trained generator \(G\) parameterized by weights \(\theta\), over the latent \(w\). The loss objective \(\mathcal{L}\) usually employs \(L_{2}\) or LPIPS [61]. The problem in (1) can be solved via optimization or encoder-based approaches. Encoder-based approaches utilize an encoder network \(E\) to map real images into a latent code. The training of an encoder network is performed over a large set of images \(\{x^{i}\}_{i=1}^{N}\) to minimize:
\[\min_{E}\sum_{i=1}^{N}\mathcal{L}(x^{i},G(E(x^{i});\theta)) \tag{2}\]
During inference, an input image is inverted by \(G(E(x);\theta)\). In the recent works [46, 5, 15], a number of approaches are proposed to additionally estimate image-specific generator parameters \(\theta(x)\) by a convolutional network.
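For concreteness, a minimal optimization-based inversion loop in the spirit of Eq. (1) might look as follows. This is a generic sketch, not the exact procedure of any cited method; `G`, `lpips_fn`, and `w_init` stand in for a pretrained generator, a perceptual loss (e.g., from the `lpips` package), and an initial latent code. An encoder-based method amortizes this loop into a single forward pass \(\hat{w}=E(x)\).

```python
import torch

def invert(G, x, lpips_fn, w_init, steps=1000, lr=1e-2):
    # Optimize the latent w so that G(w) reconstructs the target image x.
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        y = G(w)
        loss = torch.mean((y - x) ** 2) + lpips_fn(y, x).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```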
**EG3D.** EG3D [10] uses tri-plane 3D representation for geometry-aware image synthesis from 2D images. EG3D image generation pipeline consists of several modules: a StyleGAN2-based feature generator, a tri-plane representation, a lightweight neural decoder, a volume renderer, and a super-resolution module. To synthesize an image, a random latent code \(z\in\mathbb{R}^{D}\) (typically, \(D=512\)) and camera parameters are first mapped to a pivotal latent code \(w\in\mathcal{W}+\) using a mapping network. Then, \(w\) is fed into the StyleGAN2 CNN generator \(\mathbf{G}(\cdot)\) to generate a \(H\times W\times 96\) feature map. This feature map is reshaped to form three 32-channel planes, thus forming a tri-plane feature representation \(\mathbf{T}\) of the corresponding object. To
sample from the tri-plane features, a position \(p\in\mathbb{R}^{3}\) is first projected onto the three feature planes. Then, corresponding feature vectors (\(F_{xy}(p),F_{xz}(p),F_{yz}(p)\)) are retrieved using bilinear interpolation and aggregated. These aggregated features are processed by a lightweight neural decoder to transform the feature into the estimated color and density at the location \(p\). Volume rendering is then performed to project the 3D feature volume into a feature image. Finally, a super-resolution module is utilized to upsample the feature image to the final image size. For simplicity, we will later refer to the lightweight neural decoder, renderer, and the super-resolution block, all combined, as the rendering block \(\mathcal{R}(\cdot)\). The high efficiency and expressiveness of EG3D, as well as the ability to work with tri-planes directly, motivate the development of our model-specific inversion algorithm.
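The tri-plane lookup described above can be sketched in a few lines of PyTorch. The snippet below is a simplified illustration rather than the EG3D implementation: features are aggregated here by summation, and query points are assumed to be normalized to \([-1,1]^{3}\).

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, points):
    # planes: (3, C, H, W) feature planes for the xy, xz and yz planes.
    # points: (N, 3) query positions, assumed normalized to [-1, 1]^3.
    C = planes.shape[1]
    projections = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    feats = planes.new_zeros(points.shape[0], C)
    for plane, uv in zip(planes, projections):
        grid = uv.view(1, 1, -1, 2)                       # (1, 1, N, 2)
        f = F.grid_sample(plane.unsqueeze(0), grid,
                          mode='bilinear', align_corners=False)
        feats = feats + f.view(C, -1).t()                 # (N, C)
    return feats
```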
**pSp.** Richardson _et al_. [45] proposed a pSp framework based on an encoder that can directly map real images into the \(\mathcal{W}+\) latent space of StyleGAN. In pSp, an encoder backbone with a feature pyramid generates three levels of feature maps. The extracted feature maps are processed by a map2style network to extract styles. The styles are then fed into the generator network to synthesize an image \(\hat{y}\):
\[\hat{y}=G(E(x)+\bar{w}), \tag{3}\]
where \(G(\cdot)\) and \(E(\cdot)\) denote the generator and encoder networks respectively and \(\bar{w}\) is the average style vector of the pretrained generator.
### TriPlaneNet
Our TriPlaneNet inversion framework comprises two branches (see Figure 2 for the overview). The first branch employs a pSp encoder to embed an input image into \(\mathcal{W}+\) space of EG3D. Specifically, given an input image \(x\), we train pSp encoder \(\phi\) to predict the pivotal latent \(\hat{w}\in\mathcal{W}+\):
\[\hat{w}=\phi(x)+\bar{w} \tag{4}\]
where the dimension of \(\hat{w}\) is \(K\times D\) (for the output image resolution of 128, \(K=14\), and \(D=512\)).
Figure 3: Qualitative comparison for image reconstruction. Compared to other approaches, our method can reconstruct a face in the same view in more detail, especially introducing more detail for features such as hats, hair, and background.
The pivotal code is then fed into the StyleGAN2 generator \(\mathbf{G}(\cdot)\) in the EG3D pipeline to obtain initial tri-plane features \(\mathbf{T}\). Then, the tri-plane representation is processed by the rendering block \(\mathcal{R}(\cdot)\) to generate initial reconstruction \(\hat{y}\):
\[\hat{y}=\mathcal{R}(\mathbf{G}(\hat{w})) \tag{5}\]
The second branch consists of a convolutional auto-encoder \(\psi\) that learns to predict numerical offsets to the initial tri-plane features. The input to the autoencoder network is the channel-wise concatenation of initial reconstruction \(\hat{y}\) and the difference between an input image and initial reconstruction (\(x-\hat{y}\)). Given this extended input, the autoencoder is tasked with computing tri-plane offsets \(\Delta\mathbf{T}\) with respect to tri-plane features obtained in the first branch:
\[\Delta\mathbf{T}=\mathbf{\psi}(\hat{y},x-\hat{y}) \tag{6}\]
The new tri-plane features corresponding to the inversion of the input image \(x\) are then computed as an element-wise addition of tri-plane offsets \(\Delta\mathbf{T}\) with initial tri-plane features \(\mathbf{T}=G(\hat{w})\). This new tri-plane representation is similarly processed by the rendering block \(\mathcal{R}(\cdot)\) to obtain the final reconstructed image:
\[y=\mathcal{R}(\mathbf{T}+\Delta\mathbf{T}) \tag{7}\]
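Putting Eqs. (4)-(7) together, the full forward pass can be sketched as follows; here `phi`, `psi`, `G`, and `R` are assumed handles to the trained modules with the interfaces used in the text, and `w_avg` is the average latent of the generator.

```python
import torch

def triplanenet_forward(x, phi, psi, G, R, w_avg):
    w_hat = phi(x) + w_avg                                # Eq. (4)
    T = G(w_hat)                                          # initial tri-planes
    y_hat = R(T)                                          # Eq. (5)
    delta_T = psi(torch.cat([y_hat, x - y_hat], dim=1))   # Eq. (6)
    y = R(T + delta_T)                                    # Eq. (7)
    return y, y_hat
```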
The autoencoder follows the typical U-Net [47] architecture, consisting of a contracting path and an expansive path. For our encoder backbone, we use the pre-trained IR-SE-50 architecture from [14]. In the decoder network, instead of using typical nearest neighbor upsampling, we use sub-pixel convolutional layers [50] in order to efficiently upscale the extracted features. The decoder architecture is similar to that of RUNet [22] with some minor modifications. A detailed overview of the architecture is presented in Appendix B.
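A sub-pixel upsampling block of the kind used in the decoder can be written as below; channel counts and the activation are illustrative assumptions, not the exact configuration of our network.

```python
import torch.nn as nn

def subpixel_up(in_ch, out_ch, r=2):
    # A 3x3 convolution expands channels by r^2; PixelShuffle rearranges
    # them into an r-times larger spatial grid (sub-pixel convolution [50]).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch * r * r, kernel_size=3, padding=1),
        nn.PixelShuffle(r),
        nn.LeakyReLU(0.2, inplace=True),
    )
```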
### Loss Functions
The pipeline is trained by minimizing the loss function that decomposes into the separate loss expressions for two branches:
\[\mathcal{L}_{\phi,\psi}(x,y,\hat{y})=\mathcal{L}_{\phi}(x,\hat{y})+\mathcal{L }_{\psi}(x,y) \tag{8}\]
For training the encoder \(\phi(\cdot)\) in the first branch, we employ pixel-wise \(\mathcal{L}_{2}\) loss, LPIPS loss [61], and ID loss [14], closely following the proposed loss for pSp [45]. Therefore, the total loss formulation is given by
\[\mathcal{L}_{\phi}(x,\hat{y})=\lambda_{1}\mathcal{L}_{2}(x,\hat{y})+\lambda_{2 }\mathcal{L}_{\text{LPIPS}}(x,\hat{y})+\lambda_{3}\mathcal{L}_{id}(x,\hat{y}) \tag{9}\]
In order to train the auto-encoder \(\psi(\cdot)\) in the second branch, we replace \(\mathcal{L}_{2}\) in (9) with \(\mathcal{L}_{1}\) smooth loss. Then, the loss objective becomes:
\[\mathcal{L}_{\psi}(x,y)=\lambda_{4}\mathcal{L}_{1}(x,y)+\lambda_{5}\mathcal{ L}_{\text{LPIPS}}(x,y)+\lambda_{6}\mathcal{L}_{id}(x,y) \tag{10}\]
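For illustration, the second-branch objective of Eq. (10) can be assembled as in the sketch below, with `lpips_fn` and `id_fn` standing in for the pretrained LPIPS and identity-loss networks; the default weights follow the values given in the Experiments section.

```python
import torch.nn.functional as F

def branch2_loss(x, y, lpips_fn, id_fn, l4=1.0, l5=0.1, l6=0.1):
    return (l4 * F.smooth_l1_loss(y, x)
            + l5 * lpips_fn(y, x).mean()
            + l6 * id_fn(y, x))
```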
## 4 Experiments
### Training procedure
**Datasets.** Since our focus is on the human facial domain, we use FFHQ [25] dataset and 100,000 synthesized images from EG3D pre-trained on FFHQ for training and perform the evaluation on the 2824 CelebA-HQ [33, 24] test set. We extract the camera pose and pre-process the data in the same
Figure 4: Qualitative evaluation on novel view rendering of yaw angle -0.6, -0.3, and 0.6 radians (full and zoom-in). In comparison to others, our method preserves identity and multi-view consistency better when rendered from a novel view.
way as in [10]. Since the pre-processing technique could not identify the camera poses of 4 images, we skip those images in the quantitative evaluation for all the methods presented in the paper. Instead of using ground truth camera poses for synthesized images, we extract the camera pose following the same procedure as for other datasets to ensure consistency. We also augment the FFHQ dataset by mirroring it.
**Training details.** We conduct all our experiments in the human facial domain. Therefore, our pre-trained EG3D generator is also trained on the FFHQ dataset [25]. For training our models, we adopt the same training configuration from [45] except for some minor modifications, which will be explicitly mentioned here. We start the training of the second branch only after 20K steps and train both branches until 500K steps. Afterwards, we freeze the first branch and fine-tune the second branch until 1M steps. The model and the ablations are trained with a batch size of 3. We operate in the resolution of \(256\times 256\) and set the loss weights as follows: \(\lambda_{1}=\lambda_{2}=1\), \(\lambda_{3}=0.1\), \(\lambda_{4}=1\), \(\lambda_{5}=\lambda_{6}=0.1\).
**Baselines.** We compare our approach with both optimization- and encoder-based inversion methods. For optimization-based methods, we select \(\mathcal{W}+\) optimization from [26] and PTI from [46]. For encoder-based methods, we choose pSp from [45]. For \(\mathcal{W}+\) optimization, we optimize the latent code for 1K steps. For PTI, we first optimize the latent code \(\hat{w}\in\mathcal{W}+\) for 1K steps and then fine-tune the generator for 1K steps. For pSp, we employ the original training configuration from [45] with a batch size of 4. In addition to FFHQ and mirrored FFHQ, we include 100K EG3D synthesized examples for training the pSp encoder.
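Schematically, the PTI baseline as configured above reduces to the following two-stage sketch, reusing the hypothetical `invert` loop from the related-work section; the learning rate is an assumption.

```python
import torch

def pti_baseline(G, x, lpips_fn, w_avg, steps=1000, lr=3e-4):
    # Stage 1: obtain the pivot latent code by optimization.
    w_pivot = invert(G, x, lpips_fn, w_init=w_avg, steps=steps)
    # Stage 2: fine-tune the generator weights around the pivot.
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    for _ in range(steps):
        y = G(w_pivot)
        loss = torch.mean((y - x) ** 2) + lpips_fn(y, x).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w_pivot, G
```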
### Results
**Comparison to the state-of-the-art.** We present the evaluation of our approach w.r.t. the baselines in Fig. 3 and Table 2. Commonly used metrics L\({}_{2}\), LPIPS [61], MS-SSIM [55], and ID similarity [14] have been selected to analyze various aspects of perceptual similarity between inputs and corresponding reconstructions. Among others, PTI achieves the best results according to L\({}_{2}\), LPIPS, and MS-SSIM when the input image is re-rendered from the same view. However, our method outperforms the baselines according to the ID similarity, while being an order of magnitude faster than optimization-based approaches. From a visual standpoint, our approach preserves finer details of the input image such as background objects, hair, and hat better than the other methods.
Furthermore, we demonstrate the identity preservation quality of input image re-rendering from a novel view in Table 3 and in Fig. 4. Our method significantly outperforms other inversion approaches w.r.t. ID similarity and extrapolates fine details to novel views in a more precise way. The ID score declines faster above a certain value of the yaw angle due to the limitation of EG3D to represent angles broader than in its training distribution.
**Ablation study.** In Fig. 5 and Table 4, we ablate the design choices of our model, such as loss functions and changes introduced to the pSp [45] model. As some of those were introduced to mitigate checkerboard-style artifacts, we demonstrate visually how the LPIPS loss weight and the incorporated pre-trained dual discriminator for EG3D affect the visual quality of the renderings. All models in this ablation are trained for 500K steps. We observed that setting the LPIPS loss weight \(\lambda_{5}\) to 1 introduces artifacts. As can be seen in Table 4, synthesized examples from EG3D for training improve both pSp and TriPlaneNet.
As demonstrated qualitatively and quantitatively, the best result is achieved by setting the LPIPS loss weight to 0.1, including EG3D-generated samples in the training set, using sub-pixel convolutional layers, and disabling the discriminator.
**Evaluation for a multi-view sequence.** In Fig. 6, we evaluate the multi-view consistency of the reconstruction on a short smartphone-captured multi-view video of a person asked to stand still. The video consists of 106 frames and includes left-to-right sweeps approx. from \(-90^{\circ}\) to \(90^{\circ}\) yaw angle. We evaluate the novel view rendering capability for a single video frame by comparing it to the nearest neighbor video frame and the consistency of the marching cubes reconstructions from EG3D. Similarly, we measure the per-point distance to the 3D model estimated by SfM software [3] for the whole sequence. Although the shoulders and invisible head parts are hard for the method to reconstruct, we notice that the facial part and frontal hair are highly view-consistent and close to the ground truth.
**Novel view synthesis for a talking head video.** We demonstrate an application of our method to render in-the-wild videos from a novel view. In Fig. 7, frames of a video with a person talking and their rendering from a fixed novel view in the EG3D space are presented. The background in the video was removed by a matting network [27].
The encoder is capable of representing tiny details of in-the-wild portrait imagery in 3D and supports complex facial expressions.
### PTI and tri-plane offsets behavior
Both our method and optimization- and encoder-based baselines can be decomposed into two stages: estimating the latent code and the delta for the generator parameters. In Fig. 8, we show how combining these steps, each performed either by optimization (opt.) or an encoder (pred.),
[Table: per-method quantitative comparison by \(\mathrm{L}_{2}\downarrow\), LPIPS \(\downarrow\), and MS-SSIM \(\uparrow\).]
influences the inversion behavior.
\(\mathcal{W}+\)\(\mathrm{opt.}\) is performed per single image and, while it is capable of reconstructing the face in the same view, it does not account for 3D geometry due to the lack of supervision from other views. As a result, we observe skewed rendering from a novel view. Accordingly, this is also the main reason why the consistency breaks with the PTI = (\(\mathcal{W}+\)\(\mathrm{opt.}\) + EG3D params opt.) method. Interestingly, tri-plane prediction, performed on top of \(\mathcal{W}+\)\(\mathrm{opt.}\), can alleviate the damage to the geometry caused by \(\mathcal{W}+\)\(\mathrm{opt.}\)
\(\mathcal{W}+\)\(\mathrm{pred.}\) by a pSp encoder, on the contrary, introduces more consistent geometry, since it was trained to reconstruct images from arbitrary views. At the same time, the rendering quality in the same view is marginally worse than PTI. Applying PTI's second step (\(\mathrm{EG3D}\) params opt.) helps improve it significantly; however, it incorrectly modifies head proportions, similar to the \(\mathcal{W}+\)\(\mathrm{opt.}\) behavior. To investigate this effect further, instead of optimizing EG3D parameters after \(\mathcal{W}+\)\(\mathrm{pred.}\), we try optimizing the tri-plane offsets directly, and this fully cancels the deterioration of geometry while preserving high fidelity in the same view. Since both EG3D params opt. and tri-plane opt. are performed for a single image (i.e. without multi-view supervision during training), this may indicate that offsetting the tri-planes is more spatially restrictive and thus stable. Our method reduces the artifacts observed with \(\mathcal{W}+\)\(\mathrm{pred.}\) + tri-plane opt. by utilizing an encoder for tri-plane offsets.
## 5 Conclusion
We present a novel approach for EG3D inversion that achieves high-quality reconstructions with view consistency and can be run in real time on modern GPUs. We also show that directly utilizing tri-plane representation better preserves 3D structure and identity in novel views compared to other approaches. Although our method achieves compelling results and is on par with optimization-based approaches for frontal face inversion, both visually and quantitatively, it has certain limitations. For instance, it is limited by the range of yaw angles shown to EG3D during training and cannot model the background depth. In addition, similarly to PTI and others, our approach may introduce artifacts for challenging examples in CelebA-HQ where the prediction of the first branch differs significantly from the input. We show a detailed analysis of such cases in Appendix C and propose a _cascaded test-time refinement_ (CTTR) approach for improving the method's robustness.
Figure 8: Comparison of hybrid approaches on CelebA-HQ test dataset. We refer to the computation done via optimization as \(\mathrm{opt.}\) and via an encoder (pSp or TriPlaneNet) as pred. We observe that the methods starting from \(\mathcal{W}+\)\(\mathrm{opt.}\) have issues with consistency, whereas subsequent tri-plane pred. can partially alleviate it. Experiments starting from \(\mathcal{W}+\)\(\mathrm{pred.}\) demonstrate that the tri-plane space is more spatially restrictive than the EG3D parameters space. _Electronic zoom-in recommended_.
## Acknowledgments
This work was supported by the ERC Starting Grant Scan2CAD (804724) and the German Research Foundation (DFG) Research Unit "Learning and Simulation in Visual Computing." We also thank Yawar Siddiqui, Guy Gafni, Shivangi Aneja, and Simon Giebenhain for participating in the sample videos and helpful discussions.
|
2308.07948 | Leveraging Symmetries in Pick and Place | Robotic pick and place tasks are symmetric under translations and rotations
of both the object to be picked and the desired place pose. For example, if the
pick object is rotated or translated, then the optimal pick action should also
rotate or translate. The same is true for the place pose; if the desired place
pose changes, then the place action should also transform accordingly. A
recently proposed pick and place framework known as Transporter Net captures
some of these symmetries, but not all. This paper analytically studies the
symmetries present in planar robotic pick and place and proposes a method of
incorporating equivariant neural models into Transporter Net in a way that
captures all symmetries. The new model, which we call Equivariant Transporter
Net, is equivariant to both pick and place symmetries and can immediately
generalize pick and place knowledge to different pick and place poses. We
evaluate the new model empirically and show that it is much more sample
efficient than the non-symmetric version, resulting in a system that can
imitate demonstrated pick and place behavior using very few human
demonstrations on a variety of imitation learning tasks. | Haojie Huang, Dian Wang, Arsh Tangri, Robin Walters, Robert Platt | 2023-08-15T16:16:29Z | http://arxiv.org/abs/2308.07948v2 | # Leveraging Symmetries in Pick and Place
###### Abstract
Robotic pick and place tasks are symmetric under translations and rotations of both the object to be picked and the desired place pose. For example, if the pick object is rotated or translated, then the optimal pick action should also rotate or translate. The same is true for the place pose; if the desired place pose changes, then the place action should also transform accordingly. A recently proposed pick and place framework known as Transporter Net Zeng, Florence, Tompson, Welker, Chien, Attarian, Armstrong, Krasin, Duong, Sindhwani et al. (2021) captures some of these symmetries, but not all. This paper analytically studies the symmetries present in planar robotic pick and place and proposes a method of incorporating equivariant neural models into Transporter Net in a way that captures all symmetries. The new model, which we call _Equivariant Transporter Net_, is equivariant to both pick and place symmetries and can immediately generalize pick and place knowledge to different pick and place poses. We evaluate the new model empirically and show that it is much more sample efficient than the non-symmetric version, resulting in a system that can imitate demonstrated pick and place behavior using very few human demonstrations on a variety of imitation learning tasks.
Deep Learning, Manipulation, Vision
## Introduction
Pick and place is an important paradigm in robotic manipulation where a complex manipulation problem can be decomposed into a sequence of grasp (pick) and place operations. Recently, multiple learning approaches have been proposed to solve this problem, including Zeng, Florence, Tompson, Welker, Chien, Attarian, Armstrong, Krasin, Duong, Sindhwani et al. (2021); Wang, Kohler and Platt (2021). These methods focus on a simple version of the planar pick and place problem where the method looks at the scene and outputs a single pick and a single place pose. This problem has an important structure in the form of symmetries in \(\mathrm{SE}(2)\) that can be expressed with respect to the pick and place pose. The pick symmetry is easiest to see. If the object to be grasped is rotated (in the plane), then the optimal grasp pose clearly must also rotate. A similar symmetry exists in place pose. If an object is to be placed into an environment in a particular way, then if the environment rotates, the desired place pose must also rotate.
If we are to design a robotic learning system for pick and place, it should ideally encode the symmetries described above. This structure exists in the problem itself, so encoding it into our learned solutions can simplify learning. The question is how to accomplish this. This paper examines the symmetries that exist in the pick and place problem by identifying invariant and equivariant equations that we would expect to be preserved. Then, we consider existing pick and place models and find that those architectures express only some, but not all, of the problem symmetries. Finally, we propose a novel pick and place model that we call _Equivariant Transporter Net_, which encodes all of these symmetries, and show that it outperforms models that do not preserve the relevant symmetries.
### Symmetries in Transporter Net
This paper builds on top of the _Transporter Net_ model Zeng, Florence, Tompson, Welker, Chien, Attarian, Armstrong, Krasin, Duong, Sindhwani et al. (2021). Transporter Net is a sample-efficient model for learning planar pick and place behaviors through imitation learning. Compared to many other approaches Qureshi, Mousavian, Paxton and Fox (2021); Curtis, Fang, Kaelbling, Lozano-Perez and Garrett (2022), it does not need to be pre-trained on the involved objects - it only needs to be trained on the given demonstrations. Transporter Net achieves sample efficiency in this setting by encoding the symmetry of the picked object into the model. Once the model learns to pick and place an object presented in one orientation, that knowledge immediately generalizes to a finite set of other pick poses. This is illustrated in Figure 1(a). The left side of Figure 1(a) shows a pick-place problem where the robot must pick the orange object and place it inside the green outline. Because the model encodes the symmetry of the picked object, the ability to solve the place task on the left side of Figure 1(a) immediately implies an ability to solve the place task on the right side of Figure 1(a), where the object to be picked has been rotated. We will refer to this as an \(\mathrm{SO}(2)\)-place symmetry. Since _Transporter Net_ uses a set of discrete rotations, it actually achieves \(C_{n}\)-place symmetry, where \(C_{n}\) is the finite cyclic subgroup of \(\mathrm{SO}(2)\) that contains a set of \(n\) rotations.
### Equivariant Transporter Net
This paper analyzes the symmetries present in the pick and place problem and expands Transporter Net in the following ways. First, we constrain the pick model to be equivariant (an expression of symmetry) with respect to the \(\mathrm{SO}(2)\) group by incorporating equivariant convolutional layers into the pick model. That is, if there is a rotation of the object to be picked, the pick pose will also rotate. We will refer to this as an \(\mathrm{SO}(2)\)-pick symmetry. The second way we extend Transporter Net is by making it equivariant with respect to changes in place orientation. That is, if the place model learns how to place an object in one orientation, that knowledge generalizes immediately to different place orientations. Our resulting placing model is equivariant both to changes in pick and place orientation; its symmetry group can be viewed as a direct product of two groups, \(\mathrm{SO}(2)\times\mathrm{SO}(2)\), as illustrated in Figure 1(b). This expanded symmetry improves the sample efficiency of our model by enabling it to generalize over a larger set of problems. Finally, we also propose a goal-conditioned version of Equivariant Transporter Net where the desired place pose is provided to the system in the form of an image, as shown in Figure 8.
### Contributions
Our specific contributions are as follows. 1) We systematically analyze the symmetries present in the planar pick and place problem. 2) We propose Equivariant Transporter Net, a novel version of Transporter Net that has \(C_{n}\)-equivariant pick symmetry and \(C_{n}\times C_{n}\)-equivariant place symmetry.* 3) We propose a variation of Equivariant Transporter Net that can be used with standard grippers rather than just suction cups. 4) We propose a goal-conditioned version of Equivariant Transporter Net. 5) We evaluate the approach both in simulation tasks and on physical robot versions of three of the gripper tasks. Our results indicate that our approach is more sample efficient than the baselines and therefore learns better policies from a small number of demonstrations. Video and code are available at [https://haojhuang.github.io/etp_page/](https://haojhuang.github.io/etp_page/).
Footnote *: Our implementation uses the discrete \(C_{n}\) group instead of the continuous \(\mathrm{SO}(2)\) group in order to compare with the baseline _Transporter Net_ Zeng et al. (2021). The \(\mathrm{SO}(2)\) version of our model could be easily achieved with the irreducible representations based on our implementation.
This paper extends our recent work Huang, Wang, Walters and Platt (2022a) in the following ways. First, we cover the concepts, algorithms, and results in a more comprehensive way. Second, we generalize our proofs of equivariance from \(C_{n}\) to any subgroup of \(\mathrm{SO}(2)\). We also analyze the extension to \(\mathrm{SO}(3)\) mathematically and provide intuition. Third, we propose a goal-conditioned extension of the work and show that the new method outperforms on the benchmark of goal-conditioned tasks. Finally, we add an ablation study that characterizes the model for differently sized cyclic groups, \(C_{n}\).
### Comparison to related works
**Pick and Place.** Pick and place is an important topic in manipulation. Many fundamental skills like packing, kitting, and stacking require inferring both the pick and the place action. Traditional assembly methods in factories use customized workstations so that fixed pick and place actions can be manually predefined. Recently, considerable research has focused on vision-based manipulation. Some work Narayan and Likhachev (2016); Chen, Chen, Sui, Ye, Liu, Bahar and Jenkins (2019); Gualtieri and Platt (2021) assumes that object mesh models are available in order to run ICP Besl and McKay (1992) and align the object model with segmented observations or completions Yuan, Khot, Held, Mertz and Hebert (2018); Huang, Yang and Platt (2021). Other work learns a category-level pose estimator Yoon, DeSouza and Kak (2003); Deng, Xiang, Mousavian, Eppner, Bretl and Fox (2020) or a key-point detector Nagabandi, Konolige, Levine and Kumar (2020); Liu, Jonschkowski, Angelova and Konolige (2020); Manuelli, Gao, Florence and Tedrake (2019) from training on a large dataset. Recently, Wen, Lian, Bekris and Schaal (2022) realizes a closed-loop intra-category policy by mimicking the extracted pose trajectory from a few video demonstrations. However, these methods often require expensive object-specific labels or pre-training, making them difficult to use widely. Recent advances in deep learning have provided other ways to rearrange objects from perceptual data. Qureshi, Mousavian, Paxton and Fox (2021) represent the scene as a graph over segmented objects to do goal-conditioned planning; Curtis, Fang, Kaelbling, Lozano-Perez and Garrett (2022) propose a general system consisting of a perception module, grasp module, and robot control module to solve multi-step manipulation tasks. These approaches often require prior knowledge such as a reliable segmentation module and a human-designed task hierarchy. End-to-end models Zakka, Zeng, Lee and Song (2020); Khansari, Kappler, Luo, Bingham and Kalakrishnan (2020); Devin, Rowghanian, Vigorito,
Figure 1: Visual explanation of \(\mathrm{SO}(2)\)-equivariance (left figure) vs. \(\mathrm{SO}(2)\times\mathrm{SO}(2)\)-equivariance of the place model (right figure)
Richards and Rohanimanesh (2020); Berscheid, Meissner and Kroger (2020) that directly map input observations to actions can learn quickly and generalize well. Shridhar, Manuelli and Fox (2022) learn one multi-task policy with language-conditioned imitation learning. Wu, Yan, Kurutach, Pinto and Abbeel (2020) achieve fast learning on deformable object manipulation tasks with reinforcement learning. However, most methods need to be trained on large datasets. For example, Khansari, Kappler, Luo, Bingham and Kalakrishnan (2020) collects a dataset with 7.2 million samples. Devin, Rowghanian, Vigorito, Richards and Rohanimanesh (2020) collects \(40K\) grasps and places per task. Zakka, Zeng, Lee and Song (2020) collects 500 disassembly sequences for each kit. The focus of this paper is on improving the sample efficiency of this class of methods on various manipulation tasks.
**Equivariance Learning in Manipulation.** Fully Convolutional Networks (FCN) are translationally equivariant and have been shown to improve learning efficiency in many manipulation tasks Zeng, Song, Yu, Donlon, Hogan, Bauza, Ma, Taylor, Liu, Romo et al. (2018); Morrison, Leitner and Corke (2018). The idea of encoding SE(2) symmetries in the structure of neural networks was first introduced with the G-Convolution Cohen and Welling (2016). A follow-up work proposes an alternative architecture, the Steerable CNN Cohen and Welling (2017). Weiler and Cesa (2019) propose a general framework for implementing E(2)-Steerable CNNs. Weiler, Geiger, Welling, Boomsma and Cohen (2018) first investigated SE(3)-steerable convolution kernels for volumetric data with a vectorization trick. Cesa, Lang and Weiler (2021) parameterize filters with a band-limited basis to build E(3)-steerable kernels. Thomas, Smidt, Kearnes, Yang, Li, Kohlhoff and Riley (2018) and Fuchs, Worrall, Fischer and Welling (2020) extend equivariance to graph neural networks.
In the context of robotic learning, Zhu, Wang, Biza, Su, Walters and Platt (2022) decouple rotation and translation symmetries to enable the robot to learn a planar grasp policy online within \(1.5\) hours. Compared with Zhu, Wang, Biza, Su, Walters and Platt (2022), which formulated the planar grasping task as a bandit problem, our work focuses on pick-place tasks and learns from demonstrations. Wang, Walters, Zhu and Platt (2022) use SE(2) equivariance in Q-learning to solve multi-step sequential manipulation pick-place tasks. Compared with Wang, Walters, Zhu and Platt (2022), our work leverages the larger \(\mathrm{SO}(2)\times\mathrm{SO}(2)\) symmetry group for the pick-conditioned place policy and tackles rearrangement tasks through imitation learning Hussein, Gaber, Elyan and Jayne (2017); Hester, Vecerik, Pietquin, Lanctot, Schaul, Piot, Horgan, Quan, Sendonaris, Osband et al. (2018); Vecerik, Hester, Scholz, Wang, Pietquin, Piot, Heess, Rothörl, Lampe and Riedmiller (2017). Recently, various SE(3)-equivariant architectures Thomas, Smidt, Kearnes, Yang, Li, Kohlhoff and Riley (2018); Fuchs, Worrall, Fischer and Welling (2020); Chen, Liu, Chen, Li and Hill (2021); Deng, Litany, Duan, Poulenard, Tagliasacchi and Guibas (2021) have been proposed and applied to solve manipulation problems. Simeonov, Du, Tagliasacchi, Tenenbaum, Rodriguez, Agrawal and Sitzmann (2022) use Vector Neurons Deng, Litany, Duan, Poulenard, Tagliasacchi and Guibas (2021) to get SE(3)-invariant object representations so that the model can manipulate objects in the same category with a few training demonstrations. Huang, Wang, Zhu, Walters and Platt (2022) leverage the SE(3) invariance of the grasping evaluation function to enable better grasping performance. Xue, Yuan, Wang, Gao and Xu (2022) use SE(3)-equivariant key points to infer the object's pose for pick and place. However, most SE(3)-equivariant pick-place methods Simeonov, Du, Tagliasacchi, Tenenbaum, Rodriguez, Agrawal and Sitzmann (2022); Xue, Yuan, Wang, Gao and Xu (2022) require a segmentation model and a pre-trained point descriptor for each category, which limits their adaptation to various tasks. Although our proposed pick-place symmetry is defined on SE(2) in this work, we briefly discuss how to extend the idea to SE(3) pick-place problems with Proposition 3.
## Background on Symmetry Groups
### The Groups \(\mathrm{SO}(2)\) and \(C_{n}\)
In this work, we primarily focus on rotations expressed by the group \(\mathrm{SO}(2)\) and its cyclic subgroups \(C_{n}\subseteq\mathrm{SO}(2)\). \(\mathrm{SO}(2)\) contains the continuous planar rotations \(\{\mathrm{Rot}_{\theta}:0\leq\theta<2\pi\}\). The discrete subgroup \(C_{n}=\{\mathrm{Rot}_{\theta}:\theta\in\{\frac{2\pi i}{n}\mid 0\leq i<n\}\}\) contains only rotations by angles which are multiples of \(2\pi/n\). The special Euclidean group \(\mathrm{SE}(2)=\mathrm{SO}(2)\times\mathbb{R}^{2}\) describes all translations and rotations of \(\mathbb{R}^{2}\).
### Representation of a Group
A \(d\)-dimensional _representation_\(\rho\colon G\to\mathrm{GL}_{d}\) of a group \(G\) assigns to each element \(g\in G\) an invertible \(d\times d\)-matrix \(\rho(g)\). Different representations of \(\mathrm{SO}(2)\) or \(C_{n}\) help to describe how different signals are transformed under rotations.
1. **The trivial representation \(\rho_{0}\colon\mathrm{SO}(2)\to\mathrm{GL}_{1}\)** assigns \(\rho_{0}(g)=1\) for all \(g\in G\), i.e. no transformation under rotation.
2. **The standard representation \(\rho_{1}\colon\mathrm{SO}(2)\to\mathrm{GL}_{2}\)** represents each group element \(\mathrm{Rot}_{\theta}\) by its standard rotation matrix \(\left(\begin{smallmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{smallmatrix}\right)\). Notice that \(\rho_{0}\) and \(\rho_{1}\) can be used to represent elements from either \(\mathrm{SO}(2)\) or \(C_{n}\).
3. **The regular representation \(\rho_{\mathrm{reg}}\)** of \(C_{n}\) acts on a vector in \(\mathbb{R}^{n}\) by cyclically permuting its coordinates: \(\rho_{\mathrm{reg}}(\mathrm{Rot}_{2\pi/n})(x_{0},x_{1},\dots,x_{n-2},x_{n-1})=(x_{n-1},x_{0},x_{1},\dots,x_{n-2})\). We can rotate by multiples of \(2\pi/n\) via \(\rho_{\mathrm{reg}}(\mathrm{Rot}_{2\pi i/n})=\rho_{\mathrm{reg}}(\mathrm{Rot}_{2\pi/n})^{i}\) (see the sketch after this list).
4. **The quotient representation** of \(C_{n}\), for \(k\) dividing \(n\), is denoted \(\rho_{\mathrm{quot}}^{C_{n}/C_{k}}\) and acts on \(\mathbb{R}^{n/k}\) by permuting the \(|C_{n}|/|C_{k}|\) channels: \(\rho_{\mathrm{quot}}^{C_{n}/C_{k}}(\mathrm{Rot}_{2\pi i/n})(\mathbf{x})_{j}=(\mathbf{x})_{j+i\bmod(n/k)}\), which yields features that are invariant under the action of \(C_{k}\).
5. **The irreducible representation \(\rho_{\mathrm{irrep}}^{i}\)** can be considered as the basis function of order/frequency \(i\), such that any representation \(\rho\) of \(G\) can be decomposed as a _direct sum_ of irreducible representations: \(\rho(g)=Q^{\top}\left(\bigoplus_{i}\rho_{\mathrm{irrep}}^{i}(g)\right)Q\), where \(Q\) is an orthogonal matrix.
For more details, we refer interested readers to Serre (1977), Weiler and Cesa (2019), Lang and Weiler (2020) and Cesa, Lang and Weiler (2021).
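To make the regular representation concrete, here is a minimal NumPy sketch (our illustration; all names are hypothetical) that builds \(\rho_{\mathrm{reg}}\) for \(C_{4}\) as permutation matrices and checks the cyclic-shift action and the power rule stated in the list above.

```python
import numpy as np

def rho_reg(i, n):
    """Regular representation of Rot_{2*pi*i/n} in C_n: the permutation
    matrix that cyclically shifts an n-dimensional coordinate vector."""
    P = np.zeros((n, n))
    for j in range(n):
        P[(j + i) % n, j] = 1.0
    return P

n = 4
x = np.arange(n, dtype=float)                     # (x_0, x_1, x_2, x_3)
# One generator step sends (x_0, ..., x_{n-1}) to (x_{n-1}, x_0, ..., x_{n-2}).
assert np.allclose(rho_reg(1, n) @ x, np.roll(x, 1))
# Rotating by 2*pi*i/n is the generator raised to the i-th power.
assert np.allclose(rho_reg(3, n), np.linalg.matrix_power(rho_reg(1, n), 3))
```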
### Feature Map Transformations
We formalize images and 2D feature maps as feature vector fields, i.e. functions \(f\colon\mathbb{R}^{2}\to\mathbb{R}^{c}\), which assign a feature vector \(f(\mathbf{x})\in\mathbb{R}^{c}\) to each position \(\mathbf{x}\in\mathbb{R}^{2}\). While in practice we discretize and truncate the domain of \(f\) to \(\{(i,j):1\leq i\leq H,\ 1\leq j\leq W\}\), here we will consider it to be continuous for the purpose of analysis. The action of an element \(g\in\mathrm{SO}(2)\) on \(f\) is a combination of a rotation in the domain of \(f\) via \(\rho_{1}\) (this rotates the pixel positions) and a transformation in the channel space \(\mathbb{R}^{c}\) (a.k.a. the fiber space) by \(\rho\in\left\{\rho_{0},\rho_{1},\rho_{\mathrm{reg}},\rho_{\mathrm{irrep}}\right\}\). If \(\rho=\rho_{\mathrm{reg}}\), then the channels cyclically permute according to the rotation. If \(\rho=\rho_{0}\), the channels do not change. We denote this action (the action of \(g\) on \(f\) via \(\rho\)) by \(T_{g}^{\rho}(f)\):
\[[T_{g}^{\rho}(f)](\mathbf{x})=\rho(g)\cdot f(\rho_{1}(g)^{-1}\mathbf{x}). \tag{1}\]
For example, the action of \(T_{g}^{\rho_{\mathrm{reg}}}(f)\) is illustrated in Figure 2 for a rotation of \(g=\pi/2\) on a \(2\times 2\) image \(f\) that uses \(\rho_{\mathrm{reg}}\). The expression \(\rho_{1}(g)^{-1}\mathbf{x}\) rotates the pixels via the standard representation. Multiplication by \(\rho(g)=\rho_{\mathrm{reg}}(g)\) permutes the channels. For brevity, we will denote \(T_{g}^{\mathrm{reg}}=T_{g}^{\rho_{\mathrm{reg}}}\) and \(T_{g}^{0}=T_{g}^{\rho_{0}}\).
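As an illustrative sketch (not the paper's code), the action \(T_{g}^{\rho_{\mathrm{reg}}}\) for \(C_{4}\) can be implemented with a rot90 on the pixel grid and a cyclic roll over channels; the assertion checks that the action composes like the group:

```python
import torch

def T_reg(f, i):
    """T^{reg}_g for g = Rot_{2*pi*i/4} on a C_4-regular feature map f of
    shape (4, H, W): rotate the pixel positions, then permute the channels."""
    f = torch.rot90(f, k=i, dims=(1, 2))     # samples f at rho_1(g)^{-1} x
    return torch.roll(f, shifts=i, dims=0)   # rho_reg(g) on the channels

f = torch.randn(4, 2, 2)
g1, g2 = 1, 2
# Group action property: T_{g2} T_{g1} = T_{g1 g2}.
assert torch.allclose(T_reg(T_reg(f, g1), g2), T_reg(f, (g1 + g2) % 4))
```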
### Equivariant Mappings and Steerable Kernels
A function \(F\) is equivariant if it commutes with the action of the group,
\[T_{g}^{\mathrm{out}}[F(f)]=F(T_{g}^{\mathrm{in}}[f]) \tag{2}\]
where \(T_{g}^{\mathrm{in}}\) transforms the input to \(F\) by the group element \(g\) while \(T_{g}^{\mathrm{out}}\) transforms the output of \(F\) by \(g\). For example, if \(f\) is an image, then \(\mathrm{SO}(2)\)-equivariance of \(F\) implies that it acts on \(f\) in the same way regardless of the orientation in which \(f\) is presented. That is, if \(F\) takes an image \(f\) rotated by \(g\) (RHS of Equation 2), then it is possible to recover the same output by evaluating \(F\) for the un-rotated image \(f\) and rotating its output (LHS of Equation 2). The most general equivariant mappings between spaces of feature fields are _convolutions with G-steerable kernels_ Weiler et al. (2018); Jenner and Weiler (2021). Denote the input field type as \(\rho_{\mathrm{in}}\colon G\to\mathbb{R}^{d_{\mathrm{in}}\times d_{\mathrm{in}}}\) and the output field type as \(\rho_{\mathrm{out}}\colon G\to\mathbb{R}^{d_{\mathrm{out}}\times d_{\mathrm{out}}}\). The G-steerable kernels are convolution kernels \(K\colon\mathbb{R}^{n}\to\mathbb{R}^{d_{\mathrm{out}}\times d_{\mathrm{in}}}\), where \(n\) is the dimensionality of the base space, satisfying the _steerability constraint_
\[K(g\cdot x)=\rho_{\mathrm{out}}(g)K(x)\rho_{\mathrm{in}}(g)^{-1} \tag{3}\]
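As a concrete instance of Equation 3 (our example, not from the paper), take \(\rho_{\mathrm{in}}=\rho_{0}\) and \(\rho_{\mathrm{out}}=\rho_{1}\): an isotropic Gaussian times the coordinate function then satisfies the steerability constraint, which the NumPy check below verifies numerically.

```python
import numpy as np

def K(x):
    """Steerable kernel R^2 -> R^{2x1}: isotropic Gaussian times the
    coordinate, with trivial input type and standard (rho_1) output type."""
    return np.exp(-x @ x / 2.0) * x.reshape(2, 1)

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta, x = 0.7, np.array([0.3, -1.2])
# Equation 3 with rho_in = rho_0 (identity): K(g.x) = rho_1(g) K(x),
# because |g.x| = |x| leaves the Gaussian factor unchanged.
assert np.allclose(K(rot(theta) @ x), rot(theta) @ K(x))
```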
## Problem Statement
This paper considers behavior cloning for planar pick and place problems. These problems are planar in the sense that the observation is a top-down image and the pick and place actions are motions to coordinates in the plane. Given a set of demonstrations, where each demonstration contains a sequence of one or more observation-action pairs \((o_{t},a_{t})\), the objective is to infer a policy \(p(a_{t}|o_{t})\) where the action \(a_{t}=(a_{\mathrm{pick}},a_{\mathrm{place}})\) describes both the pick and place components of the action, and the observation \(o_{t}\) describes the current state in terms of a top-down image of the workspace.
Our model will encode this policy by factoring it into \(p(a_{\mathrm{pick}}|o_{t})\) and \(p(a_{\mathrm{place}}|o_{t},a_{\mathrm{pick}})\) and representing them as two separate neural networks. This policy can be used to solve tasks that are solvable in a single time step (i.e. a single pick and place action) as well as tasks that require multiple pick and place actions to solve. \(a_{\mathrm{pick}}\) and \(a_{\mathrm{place}}\) are parameterized in terms of \(\mathrm{SE}(2)\) coordinates \((u,v,\theta)\), where \(u,v\) denote the pixel coordinates of the gripper position and \(\theta\) denotes the gripper orientation. \(\theta_{\mathrm{pick}}\) is defined with respect to the world frame and \(\theta_{\mathrm{place}}\) is the delta rotation between the pick pose and the place pose.
Before describing Equivariant Transporter Net, we analyze the original Transporter Net Zeng et al. (2021) architecture from a different perspective.
### Description of Transporter Net
Transporter Network Zeng et al. (2021) solves the planar pick and place problem using the architecture shown in Figure 3. The pick network \(f_{\mathrm{pick}}\colon o_{t}\mapsto p(u,v)\) maps an image \(o_{t}\) onto a probability distribution \(p(u,v)\) over pick positions \((u,v)\in\mathbb{R}^{2}\). The output pick position \(a_{\mathrm{pick}}^{*}\) is calculated by maximizing \(f_{\mathrm{pick}}(o_{t})\) over \((u,v)\). (Since Zeng et al. (2021) uses suction cups to pick, that work ignores pick orientation.) The place position and orientation are calculated as follows. First, an image patch \(c\) centered on \(a_{\mathrm{pick}}^{*}\) is cropped from \(o_{t}\) to represent the pick action as well as the object. Then, the crop \(c\) is rotated \(n\) times to produce a stack of \(n\) rotated crops. We denote this stack of crops as
Figure 3: The Architecture of Transporter Net.
\[\mathcal{R}_{n}(c)=(T_{2\pi i/n}^{0}(c))_{i=0}^{n-1}, \tag{4}\]
where we refer to \(\mathcal{R}_{n}\) as the "lifting" operator of \(C_{n}\). Then, \(\mathcal{R}_{n}(c)\) is encoded using a neural network \(\psi\). The original image, \(o_{t}\), is encoded by a separate neural network \(\phi\). The distribution over place location is evaluated by taking the cross correlation between \(\psi\) and \(\phi\),
\[f_{\rm place}(o_{t},c)=\psi(\mathcal{R}_{n}(c))\star\phi(o_{t}), \tag{5}\]
where \(\psi\) is applied independently to each of the rotated channels in \(\mathcal{R}_{n}(c)\). Place position and orientation are calculated by maximizing \(f_{\rm place}\) over the pixel position (for position) and the orientation channel (for orientation).
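The computation in Equations 4 and 5 can be sketched in a few lines of PyTorch (a toy illustration with single-channel stand-ins for \(\psi\) and \(\phi\), and \(n=4\) so that the lift uses exact rot90 rotations; the real encoders are deep networks):

```python
import numpy as np
import torch
import torch.nn.functional as F

# Toy stand-ins for the encoders psi and phi (the real ones are deep CNNs).
psi = torch.nn.Conv2d(4, 1, kernel_size=3, padding=1)
phi = torch.nn.Conv2d(4, 1, kernel_size=3, padding=1)

def lift(c, n=4):
    """R_n(c): a stack of n rotated copies of the crop (C_4 via rot90)."""
    return torch.stack([torch.rot90(c, k=i, dims=(1, 2)) for i in range(n)])

o_t = torch.randn(1, 4, 64, 64)       # RGBD scene image
c = torch.randn(4, 16, 16)            # crop centered on the chosen pick
kernels = psi(lift(c))                # (n, 1, 16, 16): one template per angle
scene = phi(o_t)                      # (1, 1, 64, 64)
# Equation 5: cross-correlate every rotated template with the scene features;
# the output channel indexes the place angle, the pixel the place position.
logits = F.conv2d(scene, kernels, padding=8)          # (1, n, 65, 65)
angle, v, u = np.unravel_index(int(logits.argmax()), logits.shape[1:])
```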
### Analysis of Transporter Net
The model architecture described above gives Transporter Network the following equivariance property.
**Proposition 1**.: _The Transporter Net place network \(f_{\rm place}\) is \(C_{n}\)-equivariant. That is, given \(g\in C_{n}\), object image crop \(c\) and scene image \(o_{t}\),_
\[f_{\rm place}(o_{t},T_{g}^{0}(c))=\rho_{\rm reg}(-g)f_{\rm place}(o_{t},c). \tag{6}\]
Proposition 1 expresses the following intuition. A rotation of \(g\) applied to the orientation of the object to be picked results in a \(-g\) change in the placing angle, which is represented by a permutation along the channel axis of the placing feature maps. We denote the permutation in the channel space as \(\rho_{\rm reg}(-g)\). This is a symmetry over the cyclic group \(C_{n}\leq{\rm SO}(2)\) which is encoded directly into the model. It enables the model to immediately generalize over different orientations of the object to be picked and thereby improves sample efficiency.
To prove Proposition 1, we start with some common lemmas. In order to understand continuous rotations of image data, it is helpful to consider a \(k\)-channel image as a mapping \(f\colon\mathbb{R}^{2}\to\mathbb{R}^{k}\) where the input \(\mathbb{R}^{2}\) defines the pixel space. We consider images centered at \((0,0)\) and for non-integer values \((x,y)\) we consider \(f(x,y)\) to be the interpolated pixel value. Similarly, let \(K\colon\mathbb{R}^{2}\to\mathbb{R}^{l\times k}\) be a convolutional kernel where \(k\) is the number of input channels and \(l\) is the number of output channels. Although the input space is \(\mathbb{R}^{2}\), we assume the kernel is \(r\times r\) pixels and \(K(x,y)\) is zero outside this set. The convolution can then be expressed by \((K\star f)(\vec{v})=\sum_{\vec{w}\in\mathbb{Z}^{2}}f(\vec{v}+\vec{w})K(\vec{w})\), where \(\vec{v}=(i,j)\in\mathbb{R}^{2}\).
Without loss of generality, assume that \(f\colon\mathbb{R}^{2}\to\mathbb{R}\) and define \(\hat{f}\colon\mathbb{R}^{2}\to\mathbb{R}^{n}\) to be the \(n\)-fold duplication of \(f\) such that \(\hat{f}(\vec{v})=(f(\vec{v}),\dots,f(\vec{v}))\). Consider a diagonal kernel \(\tilde{K}\colon\mathbb{R}^{2}\to\mathbb{R}^{n\times n}\) where \(\tilde{K}(\vec{v})\) is a diagonal \(n\times n\) matrix \(\mathrm{Diag}(K_{1},\dots,K_{n})\) and each \(K_{i}\colon\mathbb{R}^{2}\to\mathbb{R}^{1\times 1}\). For such inputs and kernels, we have the following permutation equivariance.
**Lemma 1**.: \[(\rho_{\rm reg}(g)\tilde{K})\star\hat{f}=\rho_{\rm reg}(g)(\tilde{K}\star\hat {f})\]
Proof.: By definition \(h_{i}=(\tilde{K}\star\hat{f})_{i}=K_{i}\star f\). Define \(h=(h_{1},\cdots,h_{n})\); it is clear that permuting the \(1\times 1\) kernels \(K_{i}\) also permutes the \(h_{i}\), so \(\rho_{\rm reg}(g)h=(\rho_{\rm reg}(g)\tilde{K})\star\hat{f}\) as desired.
We require one more lemma on the equivariance of the lifting operator \(\mathcal{R}_{n}\).
**Lemma 2**.: \[\mathcal{R}_{n}(T_{g}^{0}f)=\rho_{\rm reg}(-g)\mathcal{R}_{n}(f)\]
Proof.: First, we compute
\[\mathcal{R}_{n}(f)(\vec{x})=(f(\vec{x}),f(g^{-1}\vec{x}),\dots,f(g^{-(n-1)} \vec{x})).\]
Then both \(\mathcal{R}_{n}(T_{g}^{0}f)\) and \(\rho_{\rm reg}(-g)\mathcal{R}_{n}(f)\) are equal to
\[(f(g^{-1}\vec{x}),\dots,f(g^{-(n-1)}\vec{x}),f(\vec{x})).\]
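Lemma 2 can be verified numerically for \(C_{4}\), where the lifting operator is exact on the pixel grid; in this illustrative sketch, channel \(i\) of the lift holds the input rotated by \(i\cdot 90^{\circ}\):

```python
import torch

def lift(f, n=4):
    """R_n(f): channel i holds T^0_{g^i} f, i.e. f rotated by 2*pi*i/n."""
    return torch.stack([torch.rot90(f, k=i, dims=(0, 1)) for i in range(n)])

f = torch.randn(8, 8)
g = 1                                           # g = Rot_{pi/2} in C_4
lhs = lift(torch.rot90(f, k=g, dims=(0, 1)))    # R_n(T_g^0 f)
rhs = torch.roll(lift(f), shifts=-g, dims=0)    # rho_reg(-g) R_n(f)
assert torch.allclose(lhs, rhs)                 # Lemma 2 holds exactly
```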
Proof of Proposition 1.: We prove the \(C_{n}\)-place equivariance of Transporter Net under rotations of the picked object,
\[\psi(\mathcal{R}_{n}(T_{g}^{0}c))\star\phi(o_{t})=\rho_{\rm reg}(-g)(\psi(\mathcal{R}_{n}(c))\star\phi(o_{t})) \tag{7}\]
Since \(\psi\) is applied independently to each of the rotated channels in \(\mathcal{R}_{n}(c)\), we denote \(\psi_{n}((f_{1},\dots,f_{n}))=(\psi(f_{1}),\dots,(\psi(f_{n}))\). By Lemma 2, the left-hand side of Equation 7 is
\[\psi(\mathcal{R}_{n}(T_{g}^{0}c))\star\phi(o_{t})=\psi_{n}(\rho_{\rm reg}(-g) \mathcal{R}_{n}(c))\star\phi(o_{t}).\]
Since \(\psi_{n}\) applies \(\psi\) on each component, it is equivariant to the permutation of components and thus the above equation becomes
\[=(\rho_{\rm reg}(-g)\psi_{n}(\mathcal{R}_{n}(c)))\star\phi(o_{t}).\]
Finally applying Lemma 1 gives
\[=\rho_{\rm reg}(-g)(\psi_{n}(\mathcal{R}_{n}(c))\star\phi(o_{t}))\]
as desired.
The main idea of the proof is shown in Figure 4. Namely, \(\psi(\mathcal{R}_{n}(\cdot))\) is equivariant in the sense that rotating the crop \(c\) induces a cyclic shift in the channels of the output.
Figure 4: Illustration of the main part of the proof of Proposition 1. Rotating the crop \(c\) induces a cyclic shift in the channels of the output \(\psi(\mathcal{R}_{n}(T_{g}^{0}c))=\rho_{\rm reg}(-g)\psi(\mathcal{R}_{n}(c))\).
Formally, \(\psi(\mathcal{R}_{n}(T_{g}^{0}c))=\rho_{\mathrm{reg}}(-g)\psi(\mathcal{R}_{n}(c))\). Noting that a permutation of the filters \(K\) in the convolution \(K\star\phi(o_{t})\) induces the same permutation in the output feature maps completes the proof. Here \(\psi\) is a simple CNN with no rotational equivariance. The equivariance results from the lifting operator \(\mathcal{R}_{n}\).
However, only the place network of Transporter Net has the \(C_{n}\)-equivariance. Instead, our proposed method incorporates not only the rotational equivariance in the pick network but also \(C_{n}\times C_{n}\)-equivariance in the place network.
## Equivariant Transporter
### Equivariant Pick
Our approach to the pick network is similar to that in Transporter Net Zeng et al. (2021) except that: 1) we explicitly encode _the pick symmetry_ into the pick networks, thereby making pick learning more sample efficient; 2) we consider the pick orientation so that we can use parallel jaw grippers rather than just suction grippers.
_Model_ We propose an equivariant model for detecting the planar pick pose. First, we decompose the learning process of \(a_{\mathrm{pick}}\in\mathrm{SE}(2)\) into two parts,
\[p(a_{\mathrm{pick}})=p(u,v)p(\theta|(u,v)), \tag{8}\]
where \(p(u,v)\) denotes the probability of success when a pick exists at pixel coordinates \(u,v\) and \(p(\theta|(u,v))\) is the probability that the pick at \(u,v\) should be executed with a gripper orientation of \(\theta\). The distributions \(p(u,v)\) and \(p(\theta|(u,v))\) are modeled as two neural networks:
\[f_{p}(o_{t}) \mapsto p(u,v), \tag{9}\] \[f_{\theta}(o_{t},(u,v)) \mapsto p(\theta|(u,v)). \tag{10}\]
Given this factorization, we can query the maximum of \(p(a_{\mathrm{pick}})\) by evaluating \((\hat{u},\hat{v})=\operatorname*{arg\,max}_{(u,v)}[p(u,v)]\) and then \(\hat{\theta}=\operatorname*{arg\,max}_{\theta}[p(\theta|\hat{u},\hat{v})]\). This is illustrated in Figure 5. The bottom of Figure 5 shows the maximization of \(f_{p}\) at \(a_{\mathrm{pick}}^{*}\). The right side shows the evaluation of \(f_{\theta}\) for the image patch centered at \(a_{\mathrm{pick}}^{*}\).
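This two-stage argmax can be sketched as follows (an illustrative snippet; f_p_map, f_theta, and crop_around are toy stand-ins for the trained heatmap, the orientation head, and patch extraction, and the 18 orientation bins over \([0,\pi)\) anticipate the quotient-group discretization described below):

```python
import math
import torch

def crop_around(o_t, uv, size=16):
    """Extract a size x size patch centered on (u, v); a zero-padded scene
    is assumed so the slice never runs off the image (illustrative only)."""
    u, v = uv
    return o_t[..., u:u + size, v:v + size]

def pick_inference(f_p_map, f_theta, o_t):
    """Factored argmax of Equation 8: first the pick pixel, then the angle."""
    H, W = f_p_map.shape
    idx = int(f_p_map.argmax())
    u, v = idx // W, idx % W                          # argmax of p(u, v)
    theta_logits = f_theta(crop_around(o_t, (u, v)))  # p(theta | u, v)
    k = int(theta_logits.argmax())
    return u, v, k * math.pi / theta_logits.numel()   # bins cover [0, pi)

# Toy usage with random stand-ins for the trained networks:
o_t = torch.randn(4, 64, 64)
f_p_map = torch.randn(64, 64)
f_theta = lambda patch: torch.randn(18)
print(pick_inference(f_p_map, f_theta, o_t))
```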
_Pick Symmetry_ There are two equivariance relationships that we would expect to be satisfied for planar picking:
\[f_{p}(T_{g}^{0}(o_{t})) =T_{g}^{0}(f_{p}(o_{t})) \tag{11}\] \[f_{\theta}(T_{g}^{0}(o_{t}),T_{g}^{0}(u,v)) =s(g)(f_{\theta}(o_{t},(u,v))). \tag{12}\]
where \(s\) is the shift operator and satisfies \(s(g)f(x)=f(x+g)\).
Equation 11 states that the pick location distribution found in an image rotated by \(g\in\mathrm{SO}(2)\), (LHS of Equation 11), should correspond to the distribution found in the original image subsequently rotated by \(g\), (RHS of Equation 11).
Equation 12 says that the pick orientation distribution at the rotated grasp point \(T_{g}^{0}(u,v)\) in the rotated image \(T_{g}^{0}(o_{t})\) (LHS of Equation 12) should be shifted by \(g\) relative to the grasp orientation at the original grasp points in the original image (RHS of Equation 12).
We encode both \(f_{p}\) and \(f_{\theta}\) using equivariant convolutional layers Weiler and Cesa (2019) which constrain the models to represent only those functions that satisfy Equations 11 and 12. Specifically, we select the trivial representation as the output type for \(f_{p}\) and the regular representation as the output type for \(f_{\theta}\), which is a _special case_ of Equation 12:

\[f_{\theta}(T_{g}^{0}(o_{t}),T_{g}^{0}(u,v))=\rho_{\mathrm{reg}}(g)(f_{\theta}(o_{t},(u,v)))\ \ \forall g\in C_{n} \tag{13}\]
_Gripper Orientation Using the Quotient Group_ A key observation in planar picking is that, for many robots, the gripper is bilaterally symmetric, i.e. the grasp outcome is invariant when the gripper is rotated by \(\pi\). We can encode this additional symmetry to reduce redundancy and save computational cost using the quotient group \(\mathrm{SO}(2)/C_{2}\), which identifies orientations that are \(\pi\) apart. When using this quotient group for gripper orientation, \(s(g)\) in Equation 12 is replaced with \(s(g\bmod\pi)\)2 and \(\rho_{\mathrm{reg}}\) in Equation 13 is replaced with \(\rho_{\mathrm{quot}}^{C_{n}/C_{2}}\).

Footnote 2: The \(\mathrm{SO}(2)/C_{2}\) quotient group can be realized by using a basis function with a period of \(\pi\).
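As a sketch of how such constraints can be imposed in practice, the snippet below builds a small \(C_{4}\)-equivariant pick-location network with the e2cnn library (our illustrative stand-in, not the paper's exact architecture; \(C_{4}\) is chosen here so the rot90 equivariance check of Equation 11 is exact on the pixel grid):

```python
import torch
from e2cnn import gspaces, nn as enn

r2 = gspaces.Rot2dOnR2(N=4)                            # C_4 acting on R^2
feat_in = enn.FieldType(r2, 4 * [r2.trivial_repr])     # RGBD input, rho_0 type
feat_hid = enn.FieldType(r2, 8 * [r2.regular_repr])    # regular-type features
feat_out = enn.FieldType(r2, 1 * [r2.trivial_repr])    # scalar pick heatmap

f_p = enn.SequentialModule(
    enn.R2Conv(feat_in, feat_hid, kernel_size=3, padding=1),
    enn.ReLU(feat_hid),
    enn.R2Conv(feat_hid, feat_out, kernel_size=3, padding=1),
)

x = enn.GeometricTensor(torch.randn(1, 4, 64, 64), feat_in)
y = f_p(x).tensor
# Equation 11: rotating the input rotates the output heatmap identically.
x_rot = enn.GeometricTensor(torch.rot90(x.tensor, 1, dims=(2, 3)), feat_in)
assert torch.allclose(f_p(x_rot).tensor,
                      torch.rot90(y, 1, dims=(2, 3)), atol=1e-4)
```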
### Equivariant Place
Assuming that the object does not move during picking, and given the picked object represented by the image patch \(c\) centered on \(a_{\mathrm{pick}}\), the place network models the distribution of \(a_{\mathrm{place}}=(u_{\mathrm{place}},v_{\mathrm{place}},\theta_{\mathrm{place}})\) by:
\[f_{\mathrm{place}}(o_{t},c)\mapsto p(a_{\mathrm{place}}|o_{t},a_{\mathrm{pick}}), \tag{14}\]
where \(p(a_{\mathrm{place}}|o_{t},a_{\mathrm{pick}})\) denotes the probability that the object at \(a_{\mathrm{pick}}\) in scene \(o_{t}\) should be placed at \(a_{\mathrm{place}}\). Our place model architecture closely follows that of Transporter Net Zeng et al. (2021). The main difference is that we explicitly encode equivariance constraints on both the \(\phi\) and \(\psi\) networks. As a result of this change: 1) we are able to simplify the model by transposing the lifting operation \(\mathcal{R}_{n}\) and the processing by \(\psi\); 2) our new model is equivariant with respect to a larger symmetry group, \(C_{n}\times C_{n}\), compared to Transporter Net which is only equivariant over \(C_{n}\).
Figure 5: Equivariant Transporter Pick model. First, we find the pick position \(a_{\mathrm{pick}}^{*}\) by evaluating the argmax over \(f_{p}(o_{t})\). Then, we evaluate \(f_{\theta}\) for the image patch centered on \(a_{\mathrm{pick}}^{*}\).
_Equivariant \(\phi\) and \(\psi\)_ We explicitly encode both \(\phi\) and \(\psi\) as equivariant models that satisfy the following constraints:
\[\phi(T^{0}_{g}(o_{t}))=T^{0}_{g}(\phi(o_{t})) \tag{15}\] \[\psi(T^{0}_{g}(c))=T^{0}_{g}(\psi(c)) \tag{16}\]
for \(g\in\mathrm{SO}(2)\). The equivariance constraint of Equation 15 says that when the input image rotates, we would expect the place location to rotate correspondingly. This constraint helps the model generalize across place orientations. The constraint of Equation 16 says that when the picked object rotates (represented by the image patch \(c\)), then the place orientation should correspondingly rotate.
_Place Model_ When the equivariance constraint of Equation 16 is satisfied, we can exchange \(\mathcal{R}_{n}\) (the lifting operation) with \(\psi\):
\[\psi(\mathcal{R}_{n}(c))=\mathcal{R}_{n}(\psi(c))\]
This equality is useful because it means that we only need to evaluate \(\psi\) for one image patch and rotate the feature map rather than processing the stack of image patches \(\mathcal{R}_{n}(c)\) - something that is computationally cheaper. The resulting place model is then:
\[f^{\prime}_{\mathrm{place}}(o_{t},c) = \mathcal{R}_{n}(\psi(c))\star\phi(o_{t}) \tag{17}\] \[= \Psi(c)\star\phi(o_{t}), \tag{18}\]
where Equation 18 substitutes \(\Psi(c)=\mathcal{R}_{n}[\psi(c)]\) to simplify the expression. Here, we use \(f^{\prime}_{\mathrm{place}}\) to denote Equivariant Transporter Net defined using equivariant \(\phi\) and \(\psi\), in contrast to the baseline Transporter Net \(f_{\mathrm{place}}\) of Equation 5. Note that both \(f_{\mathrm{place}}\) and \(f^{\prime}_{\mathrm{place}}\) satisfy Proposition 1. However, \(f_{\mathrm{place}}\) accomplishes this by symmetrizing a non-equivariant network (i.e. evaluating \(\psi(\mathcal{R}_{n}(c))\)) whereas our model \(f^{\prime}_{\mathrm{place}}\) encodes the symmetry directly into \(\psi\).
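The computational benefit of Equation 18 can be sketched as follows (illustrative tensors, \(n=4\)): the crop is encoded once by \(\psi\), the resulting feature map is rotated \(n\) times to form \(\Psi(c)\), and a single cross-correlation produces the full place distribution:

```python
import torch
import torch.nn.functional as F

def Psi(psi_c, n=4):
    """Psi(c) = R_n[psi(c)]: because psi is equivariant (Equation 16),
    rotating its output is equivalent to encoding the rotated crops."""
    return torch.stack([torch.rot90(psi_c, k=i, dims=(1, 2)) for i in range(n)])

psi_c = torch.randn(1, 16, 16)      # psi(c): the crop is encoded only once
phi_o = torch.randn(1, 1, 64, 64)   # phi(pad(o_t)): dense scene features
kernels = Psi(psi_c)                # (n, 1, 16, 16) rotated templates
place_logits = F.conv2d(phi_o, kernels, padding=8)  # Equation 18: (1, n, 65, 65)
```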
### Place Symmetry of the Equivariant Transporter Network
_\(C_{n}\times C_{n}\)-place symmetry_ As Proposition 1 demonstrates, the baseline Transporter Net model Zeng et al. (2021) encodes the symmetry that rotations of the object to be picked (represented by \(c\)) should result in corresponding rotations of the place orientation for that object. However, the pick-conditioned place has a second symmetry that is not encoded in Transporter Net: rotations of the placement (represented by \(o_{t}\)) should also result in corresponding rotations of the place orientation. In fact, as we demonstrate in Proposition 2 below, we encode this _second type_ of symmetry by enforcing the constraints of Equations 15 and 16. Essentially, we go from the \(C_{n}\)-place symmetric model to a \(C_{n}\times C_{n}\)-place symmetric model.
**Proposition 2**.: _Equivariant Transporter Net \(f^{\prime}_{\mathrm{place}}\) is \(C_{n}\times C_{n}\)-equivariant. That is, given rotations \(g_{1}\in C_{n}\) of the picked object and \(g_{2}\in C_{n}\) of the scene, we have that:_
\[f^{\prime}_{\mathrm{place}}(T^{0}_{g_{1}}(c),T^{0}_{g_{2}}(o_{t}))=\rho_{\mathrm{ reg}}(g_{2}-g_{1})T^{0}_{g_{2}}f^{\prime}_{\mathrm{place}}(c,o_{t}). \tag{19}\]
Proposition 2 is illustrated in Figure 6. The top of Figure 6 going left to right shows the rotation of both the object by \(g_{1}\) (in orange) and the place pose by \(g_{2}\) (in green). The LHS of Equation 19 evaluates \(f^{\prime}_{\mathrm{place}}\) for these two rotated images. The lower left of Figure 6 shows \(f^{\prime}_{\mathrm{place}}(c,o_{t})\). Going left to right at the bottom of Figure 6 shows the pixel-rotation by \(T^{0}_{g_{2}}\) and the channel permutation by \(g_{2}-g_{1}\) (RHS of Equation 19).
To prove Proposition 2, we introduce one more lemma.
**Lemma 3**.: \[(T^{0}_{g}(K\star f))(\vec{v})=((T^{0}_{g}K)\star(T^{0}_{g}f))(\vec{v})\] (20)
Proof.: We evaluate the left-hand side of Equation 20:
\[T^{0}_{g}(K\star f)(\vec{v})=\sum_{\vec{w}\in\mathbb{Z}^{2}}f(g^{-1}\vec{v}+ \vec{w})K(\vec{w}).\]
Re-indexing the sum with \(\vec{y}=g\vec{w}\),
\[=\sum_{\vec{y}\in\mathbb{Z}^{2}}f(g^{-1}\vec{v}+g^{-1}\vec{y})K(g^{-1}\vec{y})\]
is by definition
\[=\sum_{\vec{y}\in\mathbb{Z}^{2}}(T^{0}_{g}f)(\vec{v}+\vec{y})(T^{ 0}_{g}K)(\vec{y})\] \[=((T^{0}_{g}K)\star(T^{0}_{g}f))(\vec{v})\]
as desired.
_Proof of Proposition 2_ Recall \(\Psi(c)=\mathcal{R}_{n}[\psi(c)]\). We now prove Proposition 2,

\[\Psi(T^{0}_{g_{1}}(c))\star\phi(T^{0}_{g_{2}}(o_{t}))=\rho_{\mathrm{reg}}(g_{2}-g_{1})(T^{0}_{g_{2}}[\Psi(c)\star\phi(o_{t})])\]
Figure 6: Equivariance of our placing network under the rotation of the object and the placement. A \(\frac{\pi}{2}\) rotation on \(c\) and a \(-\frac{\pi}{2}\) rotation on \(o_{t}\backslash c\) are equivalent to: i) a \(-\frac{\pi}{2}\) rotation on the placing location, and ii) a shift of the placing rotation angle from \(-\frac{\pi}{2}\) (the last channel) to \(\frac{\pi}{2}\) (the second channel).
Proof.: We first prove the equivariance under rotations of the placement \(o_{t}\). We claim
\[\Psi(c)\star\phi(T^{0}_{g}(o_{t}))=T^{\rm reg}_{g}(\Psi(c)\star\phi(o_{t})). \tag{21}\]
Evaluating the left hand side of Equation 21,
\[\Psi(c) \star\phi(T^{0}_{g}(o_{t}))\] \[=\Psi(c)\star T^{0}_{g}\phi(o_{t})\] (equivariance of \[\phi\] ) \[=(T^{0}_{g}T^{0}_{g^{-1}}\Psi(c))\star(T^{0}_{g}\phi(o_{t}))\] \[=T^{0}_{g}(T^{0}_{g^{-1}}\Psi(c)\star\phi(o_{t}))\] (Lemma 3) \[=T^{0}_{g}(T^{0}_{g^{-1}}\mathcal{R}_{n}(\psi(c))\star\phi(o_{t}))\] \[=T^{0}_{g}(\mathcal{R}_{n}(T^{0}_{g^{-1}}\psi(c))\star\phi(o_{t}))\] (equivariance of \[\mathcal{R}_{n}\] ) \[=T^{0}_{g}(\mathcal{R}_{n}(\psi(T^{0}_{g^{-1}}c))\star\phi(o_{t}))\] (equivariance of \[\psi\] ) \[=T^{0}_{g}((\rho_{\rm reg}(g)\Psi(c)\star\phi(o_{t}))\] (Lemma 2) \[=T^{0}_{g}\rho_{\rm reg}(g)(\Psi(c)\star\phi(o_{t}))\] (Lemma 1) \[=T^{\rm reg}_{g}(\Psi(c)\star\phi(o_{t})).\]
In the last step, \(T^{\rm reg}_{g}=\rho_{\rm reg}(g)T^{0}_{g}=T^{0}_{g}\rho_{\rm reg}(g)\) since \(T^{0}_{g}\) and \(\rho_{\rm reg}(g)\) commute as \(\rho_{\rm reg}(g)\) acts on the channel space and \(T^{0}_{g}\) acts on the base space. This proves the claim of Equation 21.
Recall \(\Psi(c)=\mathcal{R}_{n}[\psi(c)]\). Using the equivariance of \(\psi\), Proposition 1 could be reformulated as
\[\Psi(T^{0}_{g}c)\star\phi(o_{t})=\rho_{\rm reg}(-g)(\Psi(c)\star\phi(o_{t})) \tag{22}\]
Evaluating the left hand side of Equation 22,
\[\Psi(T^{0}_{g}c) \star\phi(o_{t})\] \[=\mathcal{R}_{n}(\psi(T^{0}_{g}c))\star\phi(o_{t})\quad\text{(definition of }\Psi)\] \[=\psi(\mathcal{R}_{n}(T^{0}_{g}c))\star\phi(o_{t})\quad\text{(equivariance of }\psi)\] \[=\rho_{\rm reg}(-g)(\psi(\mathcal{R}_{n}(c))\star\phi(o_{t}))\quad\text{(Proposition 1)}\] \[=\rho_{\rm reg}(-g)(\mathcal{R}_{n}(\psi(c))\star\phi(o_{t}))\quad\text{(equivariance of }\psi)\] \[=\rho_{\rm reg}(-g)(\Psi(c)\star\phi(o_{t}))\]
Combining Equation (21) with Equation (22) yields Proposition 2.
#### Translational Symmetry
Note that in addition to the two rotational symmetries enforced by our model, it also has translational symmetry. Since the rotational symmetry is realized by additional restrictions on the weights of the convolution kernels, it comes on top of the underlying shift equivariance of the convolutional network. Thus, the full symmetry group enforced is the group generated by \(C_{n}\times C_{n}\times(\mathbb{R}^{2},+)\). Equivariant neural networks effectively learn on a lower-dimensional space: the equivalence classes of samples under the group action.
#### From \(C_{n}\times C_{n}\)-Place Symmetry to \(\mathrm{SO}(2)\times\mathrm{SO}(2)\)
The above place symmetry is limited to the cyclic group due to the role of \(\mathcal{R}_{n}\), though as \(n\to\infty\), \(C_{n}\) approaches \(\mathrm{SO}(2)\). We show below how the \(C_{n}\times C_{n}\)-place symmetry generalizes to an \(\mathrm{SO}(2)\times\mathrm{SO}(2)\)-place symmetry.
**Proposition 3**.: _Given \(g\in G\) for \(G\subseteq\mathrm{SO}(2)\), an equivariant model \(\phi\) satisfying \(\phi(T^{0}_{g}(o_{t}))=T^{0}_{g}(\phi(o_{t}))\), and a function \(\bar{\Psi}\colon c\mapsto K\) satisfying the equivariance constraint \(\bar{\Psi}(T^{0}_{g}c)=T^{0}_{g}\bar{\Psi}(c)\), where \(c\) is the crop and \(K\colon\mathbb{R}^{2}\to\mathbb{R}^{d_{\mathrm{out}}\times d_{\mathrm{trivial}}}\) is a 2D steerable kernel with the trivial representation as the input type, the cross-correlation between \(\bar{\Psi}(c)\) and \(\phi(T^{0}_{g}(o_{t}))\) satisfies_
\[\bar{\Psi}(T^{0}_{g_{1}}(c))\star\phi(T^{0}_{g_{2}}(o_{t}))=\rho_{ \rm out}(g_{2}-g_{1})(T^{0}_{g_{2}}[\bar{\Psi}(c)\star\phi(o_{t})]) \tag{23}\]
Proposition 3 states that to satisfy this cross-type place symmetry, one necessary condition is that the output of \(\bar{\Psi}\) is a steerable kernel. It generalizes Proposition 2 to either \(C_{n}\) or \(\mathrm{SO}(2)\). In fact, \(\Psi(c)=\mathcal{R}_{n}[\psi(c)]\), which combines the lifting operator \(\mathcal{R}_{n}\) with the equivariance constraint on \(\psi\) shown in Equation 16, is a special case of \(\bar{\Psi}(c)\): it outputs a steerable kernel that takes the _regular representation_ of \(C_{n}\) as the output type and satisfies the steerability constraint of Equation 3. When using _irreducible representations_ as the output type, we can instantiate \(\rho_{\rm out}(g_{2}-g_{1})\) in the RHS of Equation 23 as \(\rho_{\rm irrep}(g_{2}-g_{1})\), which is equivalent to \(s(g_{2}-g_{1})\) after an inverse Fourier transform.
To prove Proposition 3, we first introduce another lemma.
**Lemma 4**.: _A 2D steerable kernel \(K\colon\mathbb{R}^{2}\to\mathbb{R}^{d_{\rm out}\times d_{\rm trivial}}\) satisfies_
\[T^{0}_{g}K(x)=\rho_{\rm out}(g^{-1})K(x) \tag{24}\]
Proof.: Recall that \(\rho_{0}(g)\) is an identity mapping. Substituting \(\rho_{\rm in}\) with \(\rho_{0}\) and \(g\) with \(g^{-1}\) in the steerability constraint \(K(g\cdot x)=\rho_{\rm out}(g)K(x)\rho_{\rm in}(g)^{-1}\) of Equation 3 completes the proof.
\[T^{0}_{g}K(x) =K(g^{-1}x)\] \[=\rho_{\rm out}(g^{-1})K(x)\rho_{\rm in}(g)\] \[=\rho_{\rm out}(g^{-1})K(x)\]
Lemma 4 states that when the input type is the trivial representation, a spatial rotation of the steerable kernel is the same as the inverse channel-space transformation. With Lemma 4 in hand, we start the proof of Proposition 3.
Proof.: Similar to the proof of Proposition 2, we first show the equivariance under rotations of the placement \(o_{t}\). We claim
\[\bar{\Psi}(c)\star\phi(T^{0}_{g}(o_{t}))=T^{\rm out}_{g}(\bar{\Psi}(c)\star \phi(o_{t})) \tag{25}\]
Starting from the left-hand side of Equation 25,
\[\bar{\Psi}(c) \star\phi(T^{0}_{g}(o_{t}))\] \[=\bar{\Psi}(c)\star T^{0}_{g}\phi(o_{t})\] (equivariance of \[\phi\] ) \[=(T^{0}_{g}T^{0}_{g^{-1}}\bar{\Psi}(c))\star(T^{0}_{g}\phi(o_{t}))\] \[=T^{0}_{g}(T^{0}_{g^{-1}}\bar{\Psi}(c)\star\phi(o_{t}))\] (Lemma 3) \[=T^{0}_{g}(\rho_{\rm out}(g)\bar{\Psi}(c)\star\phi(o_{t}))\] (Lemma 4) \[=T^{\rm out}_{g}(\bar{\Psi}(c)\star\phi(o_{t})).\]
Next, we show the equivariance under rotations of the picked object:
\[\bar{\Psi}(T^{0}_{g}(c))\star\phi(o_{t})=\rho_{\rm out}(-g)(\bar{\Psi}(c)\star \phi(o_{t})) \tag{26}\]
Evaluating the left-hand side of Equation 26,

\[\bar{\Psi}(T_{g}^{0}(c))\star\phi(o_{t})\] \[\quad=T_{g}^{0}\bar{\Psi}(c)\star\phi(o_{t})\quad\text{(equivariance of }\bar{\Psi})\] \[\quad=\rho_{\text{out}}(-g)\bar{\Psi}(c)\star\phi(o_{t})\quad\text{(Lemma 4)}\]

Combining Equation 25 with Equation 26 completes the proof of Proposition 3.
### Model Architecture

_Pick models \(f_{p}\) and \(f_{\theta}\)_ Both pick networks are implemented with equivariant convolutional layers defined over the cyclic group \(C_{n}\) (i.e. \(n=|C_{n}|\)). The orientation network \(f_{\theta}\) operates on the crop \(c\): the first layer maps the trivial representation of \(c\) to a quotient regular representation, followed by 3 residual blocks containing max-pooling operators. This goes to two equivariant convolution layers and then to an average pooling layer.
_Place models \(\phi\) and \(\psi\)_ Our place model has two equivariant convolution networks, \(\phi\) and \(\psi\), and both have similar architectures to \(f_{p}\). The network \(\phi\) takes as input a zero-padded version of the 4-channel RGBD observation \(o_{t}\), \(\mathrm{pad}(o_{t})\in\mathbb{R}^{4\times(H+d)\times(W+d)}\), and generates a dense feature map, \(\phi(\mathrm{pad}(o_{t}))\in\mathbb{R}^{(H+d)\times(W+d)}\), where \(d\) is the padding size. The network \(\psi\) takes as input the image patch \(c\in\mathbb{R}^{4\times H_{2}\times W_{2}}\) and outputs \(\psi(c)\in\mathbb{R}^{H_{2}\times W_{2}}\). After applying the rotations of \(C_{n}\) to \(\psi(c)\), the transformed dense embeddings \(\Psi(c)\in\mathbb{R}^{n\times H_{2}\times W_{2}}\) are cross-correlated with \(\phi(\mathrm{pad}(o_{t}))\) to generate the placing action distribution \(p(a_{\mathrm{place}}|o_{t},a_{\mathrm{pick}})\in\mathbb{R}^{n\times H\times W}\), where the channel axis \(n\) corresponds to placing angles \(\frac{2\pi i}{n}\) for \(0\leq i<n\).
_Group Types and Sizes_ The networks \(f_{p}\), \(\psi\), and \(\phi\) map \(\rho_{0}\mapsto\rho_{0}\) and are all defined using \(C_{6}\) regular representations in the intermediate layers. An ablation study on the group size of the latent features is presented in the Experiments section. The network \(f_{\theta}\colon\rho_{0}\mapsto\rho_{\mathrm{quot}}\) is defined using the quotient representation of \(C_{36}/C_{2}\), which corresponds to the number of allowed pick orientations. The lifting operator \(\mathcal{R}_{n}\) is implemented with the \(C_{36}\) cyclic group, which allows 36 different place orientations. Note that the number of allowed pick and place orientations is a hyperparameter and depends on the task. Our choice of \(\frac{\pi}{18}\) discretization, i.e., 18 bilaterally symmetric pick orientations and 36 place orientations, follows the settings of the Ravens-10 benchmark Zeng et al. (2021).
_Training Details_ We train Equivariant Transporter Net with the Adam optimizer Kingma and Ba (2014) with a fixed learning rate of \(10^{-4}\). It takes about 0.8 seconds4 to complete one SGD step with a batch size of one on an NVIDIA Tesla V100 SXM2 GPU. Compared with the baseline Transporter Net, which takes around 0.6 seconds per SGD step in the same setting, the equivariance constraint on the weight updates increases the computational load by about \(33\%\). In fact, Equivariant Transporter Net converges faster than the baseline Transporter Net, as shown in Figure 9. This is because the larger symmetry group results in a lower-dimensional sample space and thus better coverage by the training data. For each task, we train a single policy network and evaluate the performance every 1k steps on 100 unseen tests. On most tasks, the best performance is achieved in less than 10k SGD steps.
Footnote 4: The resolution of the input image is \(320\times 160\) for this training time.
## Experiments
We evaluate Equivariant Transporter using the Ravens-10 Benchmark Zeng et al. (2021) and our variations thereof.
### Tasks
_Ravens-10 Tasks_ Ravens-10 is a behavior cloning simulation benchmark for manipulation, where each task has an oracle that can sample expert demonstrations from the distribution of successful picking and placing actions, with access to the ground-truth pose of each object. The 10 tasks of Ravens can be classified into 3 categories: _single-object manipulation tasks_ (block-insertion, align-box-corner); _multiple-object manipulation tasks_ (place-red-in-green, towers-of-hanoi, stack-block-pyramid, palletizing-boxes, assembling-kits, packing-boxes); _deformable-object manipulation tasks_ (manipulating-rope, sweeping-piles).
Here we provide a short description of the Ravens-10 environment; we refer readers to Zeng et al. (2021) for details. The poses of objects and placements in each task are randomly sampled in the workspace without collision. Performance on each task is evaluated in one of two ways: 1) Pose: translation and rotation error relative to the target pose is less than a threshold of \(\tau=1\,\mathrm{cm}\) and \(\omega=\frac{\pi}{12}\), respectively. Tasks: block-insertion, towers-of-hanoi, place-red-in-green, align-box-corner, stack-block-pyramid, assembling-kits. Partial scores are assigned to multiple-action tasks. 2) Zone: Ravens-10 discretizes the 3D bounding box of each object into \(2\,\mathrm{cm}^{3}\) voxels. The total reward is calculated as \(\frac{\#\text{ of voxels in the target zone}}{\text{total }\#\text{ of voxels}}\). Tasks: palletizing-boxes, packing-boxes, manipulating-rope, sweeping-piles. Note that pushing objects can also be parameterized with \(a_{\mathrm{pick}}\) and \(a_{\mathrm{place}}\), which correspond to the starting pose and the ending pose of the end effector.
1. **block-insertion:** picking up an L-shaped block and placing it into an L-shaped fixture.
Figure 7: Simulated environment for parallel-jaw gripper tasks. From left to right: (a) inserting blocks into fixtures, (b) placing red boxes into green bowls, (c) align box corners to green lines, (d) stacking a pyramid of blocks, (e) palletizing boxes.
2. **place-red-in-green:** picking up red cubes and placing them into green bowls. There could be multiple bowls and cubes with different colors.
3. **towers-of-hanoi:** sequentially picking up disks and placing them into pegs such that all 3 disks initialized on one peg are moved to another, and that only smaller disks can be on top of larger ones.
4. **align-box-corner:** picking up a randomly sized box and placing it to align one of its corners to a green L-shaped marker labeled on the tabletop.
5. **stack-block-pyramid:** sequentially picking up 6 blocks and stacking them into a pyramid of 3-2-1.
6. **palletizing-boxes:** picking up 18 boxes and stacking them on top of a pallet.
7. **assembling-kits:** picking 5 shaped objects (randomly sampled with replacement from a set of 20) and fitting them to corresponding silhouettes of the objects on a board.
8. **packing-boxes:** picking and placing randomly sized boxes tightly into a randomly sized container.
9. **manipulating-rope:** manipulating a deformable rope such that it connects the two endpoints of an incomplete 3-sided square (colored in green).
10. **sweeping-piles:** pushing piles of small objects (randomly initialized) into a desired target goal zone on the tabletop marked with green boundaries. The task is implemented with a pad-shaped end effector.
#### Ravens-10 Tasks Modified for the Parallel Jaw Gripper
We select 5 tasks (block-insertion, align-box-corner, place-red-in-green, stack-block-pyramid, palletizing-boxes) from Ravens-10 and replace the suction cup with the Franka Emika gripper, which requires additional pick-angle inference. Figure 7 illustrates the initial state and completion state for each of these five tasks. For each of these five tasks, we defined an oracle agent. Since the Transporter Net framework assumes that the object does not move during picking, we defined these expert generators such that this was the case.
#### Goal-Conditioned Tasks
We design four image-based goal-conditioned tasks (goal-conditioned block insertion, goal-conditioned block pyramid, goal-conditioned four blocks, goal-conditioned cable align) based on Ravens-10, as shown in Figure 8. For each of the four tasks, objects are generated with random poses in the workspace and there is no placement in the observation \(o_{t}\). The robot must use pick-place actions to manipulate the objects to the target pose specified in the goal images. For the goal-conditioned cable-align task, the robot needs to align the rope with the straight line shown in the goal image.
### Training and Evaluation
#### Training
For each task, we produce a dataset of \(n\) expert demonstrations, where each demonstration contains a sequence of one or more observation-action pairs \((o_{t},\bar{a}_{t})\) (or \((o_{t},o_{g},\bar{a}_{t})\) for goal-conditioned tasks). Each action \(\bar{a}_{t}\) contains an expert picking action \(\bar{a}_{\rm pick}\) and an expert placing action \(\bar{a}_{\rm place}\). We use \(\bar{a}_{t}\) to generate one-hot pixel maps as the ground-truth labels for our picking model and placing model. The models are trained using a cross-entropy loss.
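A minimal sketch of this supervision (our illustration; shapes and names are hypothetical) treats the dense pick heatmap as logits over pixels and the expert pixel as the index of the one-hot target:

```python
import torch
import torch.nn.functional as F

def pick_loss(pick_logits, expert_uv):
    """Cross-entropy between a dense (B, H, W) pick heatmap and one-hot
    pixel labels derived from the expert pick location (u, v)."""
    B, H, W = pick_logits.shape
    target = expert_uv[:, 0] * W + expert_uv[:, 1]     # flatten (u, v) index
    return F.cross_entropy(pick_logits.view(B, H * W), target)

logits = torch.randn(2, 64, 64, requires_grad=True)
expert = torch.tensor([[10, 20], [33, 5]])             # expert (u, v) per sample
pick_loss(logits, expert).backward()
```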
#### Metrics
We measure performance the same way as it was measured in Transporter Net Zeng et al. (2021) - using a metric in the range of 0 (failure) to 100 (success). Partial scores are assigned to multiple-action tasks. For example, in the block-stacking task where the agent needs to construct a 6-block pyramid, each successful rearrangement is credited with a score of 16.667. We report the highest validation performance during training, averaging over 100 unseen tests for each task.
#### Baselines
We compare our method against Transporter Net Zeng et al. (2021) as well as the following baselines previously used in the Transporter Net paper Zeng et al. (2021). _Form2Fit_ Zakka et al. (2020) introduces a matching module that measures the \(L_{2}\) distance between high-dimensional descriptors of picking and placing locations. _Conv-MLP_ is a common end-to-end model Levine et al. (2016) which outputs \(a_{\rm pick}\) and \(a_{\rm place}\) using convolution layers and MLPs (multi-layer perceptrons). _GT-State MLP_ is a regression model composed of an MLP that accepts the ground-truth poses and 3D bounding boxes of objects in the environment. _GT-State MLP 2-step_ outputs the actions sequentially with two MLP networks and feeds \(a_{\rm pick}\) to the second step. All regression baselines learn mixture densities Bishop (1994) with a log-likelihood loss.
Figure 8: Simulated environment for goal-conditioned tasks. The first row shows the initial observations of four tasks where objects are generated with random poses on the workspace. The robot must use pick-and-place actions to manipulate the objects to the target pose specified in the goal images \(o_{g}\), as shown in the second row.
For goal-conditioned tasks, we compare _Equivariant-Transporter-Goal-Stack_ against two baselines, _Transporter-Goal-Stack_ and _Transporter-Goal-Split_.
#### Adaptation of Transporter Net for Picking Using a Parallel Jaw Gripper
In order to compare our method against Transporter Net on the five parallel-jaw gripper tasks, we must modify Transporter to handle the gripper. Following Zeng, Song, Welker, Lee, Rodriguez and Funkhouser (2018), we accomplish this by lifting the input scene image \(o_{t}\) over \(C_{n}\), producing a stack of differently oriented input images that is provided as input to the pick network \(f_{\mathrm{pick}}\). The results are then counter-rotated at the output of \(f_{\mathrm{pick}}\), so that each channel corresponds to one pick orientation.
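A sketch of this lifting step is shown below; the rotation utility, the number of orientations \(n\), and the array shapes are assumptions, and `f_pick` stands in for the trained pick network.

```python
import numpy as np
from scipy.ndimage import rotate

def lift_and_score(o_t: np.ndarray, f_pick, n: int = 36) -> np.ndarray:
    """o_t: (H, W) image; f_pick: callable mapping (H, W) -> (H, W) score map."""
    angles = [360.0 * i / n for i in range(n)]
    maps = []
    for a in angles:
        rotated = rotate(o_t, angle=a, reshape=False, order=1)
        scores = f_pick(rotated)
        # Counter-rotate so each score map is registered to the original image frame.
        maps.append(rotate(scores, angle=-a, reshape=False, order=1))
    return np.stack(maps, axis=0)   # (n, H, W): argmax gives pick pixel and angle
```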
### Results for the Ravens-10 Benchmark Tasks
#### Task Success Rates
Table 1 shows the performance of our model on the Ravens-10 tasks for different numbers of demonstrations used during training. All tests are evaluated on unseen configurations, i.e., random poses of objects, and three tasks (align-box-corner, assembling-kits, packing-boxes) use unseen objects. Our proposed Equivariant Transporter Net outperforms all the other baselines in nearly all cases. The amount by which our method outperforms others is largest when the number of demonstrations is smallest, i.e. with only 1 or 10 demonstrations. With just 10 demonstrations per task, our method can achieve \(\geq 95\%\) success rate on 7/10 tasks. With either 1 or 10 demonstrations, the performance of our model is better than baselines trained with 1000 demonstrations on 5/10 tasks.
This indicates that the large symmetry group enables the model to learn in a lower-dimensional space and achieve better convergence speed.
### Results for Parallel Jaw Gripper Tasks
Table 2 compares the success rate of Equivariant Transporter with the baseline Transporter Net on parallel-jaw gripper tasks. Again, our method outperforms the baseline in nearly all cases. One interesting observation that can be made by comparing Tables 1 and 2 is that both Equivariant Transporter and baseline Transporter do better on the gripper versions of the tasks than on the original Ravens-10 versions. This is likely because the expert demonstrations we developed for the gripper version of each task have less variation in gripper pose during picking than those in the original Ravens-10 benchmark.
### Results for Goal-Conditioned Equivariant Transporter Net
Table 3 compares the performance of _Equivariant-Transporter-Goal-Stack_ with the two baselines (_Transporter-Goal-Stack_, _Transporter-Goal-Split_) for goal-conditioned tasks. Our model achieves better performance than the baselines on all the tasks. In most cases, the performance gap between our model and the baselines grows as the number of demonstrations decreases. This shows that the sample efficiency of our proposed method carries over to goal-conditioned tasks.
### Ablation Study
#### Ablations
We performed an ablation study to evaluate the relative importance of the equivariant models in pick (\(f_{p}\) and \(f_{\theta}\)) and place (\(\psi\) and \(\phi\)). We compare four versions of the model with various levels of equivariance: non-equivariant pick and non-equivariant place (baseline Transporter), equivariant pick and non-equivariant place, non-equivariant pick and equivariant place, and equivariant pick and equivariant place (Equivariant Transporter).
#### Results
Figure 10 shows the performance of all four ablations as a function of the number of SGD steps for the scenario where the agent is given 10 or 100 expert demonstrations. The results indicate that place equivariance (i.e. equivariance of \(\psi\) and \(\phi\)) is mainly responsible for the gains in performance of Equivariant Transporter versus baseline Transporter. This finding is consistent with the argument that the larger \(C_{n}\times C_{n}\) symmetry group (only present with the equivariant place) is responsible for our performance gains. Though the non-equivariant and equivariant pick networks result in comparable performance, the equivariant network is far more computationally efficient: the equivariant model takes a single image of the observation as input, while the non-equivariant method Zeng, Song, Welker, Lee, Rodriguez and Funkhouser (2018) needs a stack of \(n\) differently rotated input images in order to infer the pick orientation channel-wise.
#### Ablation Study on Group Size
We compare different group sizes to encode the latent features of our network. Note that the number of pick orientations and the number of place orientations are task-relevant parameters; this ablation study investigates the group size of the intermediate layers. We select (\(C_{4},C_{4},C_{4}\)), (\(C_{6},C_{6},C_{6}\)), and (\(C_{8},C_{8},C_{8}\)) to construct three different settings of \(f_{p},\psi\) and \(\phi\). We build a light version of our model with each setting and train it on block insertion, packing boxes, and manipulating rope with 1 and 10 demos. Specifically, the \(C_{4}\) model has 9 million trainable parameters, and the \(C_{6}\) model and the \(C_{8}\) model have 13.5 million and 18 million parameters, respectively. Note that a large group has a large fiber space dimension, which results in more parameters, but it also adds more constraints to the free parameters of the kernel. We train each model with data augmentation and evaluate the performance on 100 unseen test cases every 1k training steps. We report the highest success rate, its corresponding training steps, and the success rate after 1k training steps in Table 4. Several findings can be drawn from Table 4. First, the best mean success rates are consistent across the different groups on most tasks. As the number of available demonstrations increases, the differences decrease. Second, a large group size may boost the convergence speed when looking at the results of Block
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c c} \hline
 & \multicolumn{3}{c}{Block Insertion-1} & \multicolumn{3}{c}{Block Insertion-10} & \multicolumn{3}{c}{Packing Boxes-1} & \multicolumn{3}{c}{Packing Boxes-10} & \multicolumn{3}{c}{Manipulating Rope-1} & \multicolumn{3}{c}{Manipulating Rope-10} \\
Group Size & \(C_{4}\) & \(C_{6}\) & \(C_{8}\) & \(C_{4}\) & \(C_{6}\) & \(C_{8}\) & \(C_{4}\) & \(C_{6}\) & \(C_{8}\) & \(C_{4}\) & \(C_{6}\) & \(C_{8}\) & \(C_{4}\) & \(C_{6}\) & \(C_{8}\) & \(C_{4}\) & \(C_{6}\) & \(C_{8}\) \\ \hline
Best Mean Success Rate & 100 & 100 & 100 & 100 & 100 & 100 & 98.3 & 99.6 & 98.2 & 99.4 & 99.8 & 99.4 & 47.0 & 50.7 & 40.2 & 81.0 & 83.9 & 79.9 \\
SGD Step of the Best Mean Success Rate & 4k & 1k & 1k & 1k & 1k & 1k & 6k & 5k & 6k & 8k & 8k & 6k & 6k & 7k & 7k & 1k & 4k & 5k \\
Mean Success Rate Trained with 1k Steps & 92.0 & 100 & 100 & 100 & 100 & 100 & 87.7 & 94.1 & 97.2 & 99.2 & 97.2 & 98.8 & 28.0 & 39.0 & 12.3 & 81.0 & 70.9 & 43.2 \\ \hline
\end{tabular}
\end{table}
Table 4: **Ablation Study on Group Size.** We report the success rate (% mean), and its corresponding training steps on three tasks from Ravens-10 Benchmark with 1 and 10 demos respectively. We test three different cyclic groups for the network \(f_{p}\), \(\psi\), and \(\phi\).
Figure 10: Ablation study. Performance is evaluated on 100 unseen tests of each task.
Insertion-1. However, it could also result in overfitting when comparing the results of Manipulating-Rope-1, since the large-group model has more constraints and parameters. Finally, we think the group size of the intermediate layer might be regarded as a hyper-parameter.
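For readers who want to reproduce this kind of group-size ablation, the sketch below shows how cyclic-group convolutions over \(C_4\), \(C_6\), and \(C_8\) regular representations can be instantiated with the e2cnn library; the layer widths are illustrative assumptions and this is not the paper's exact architecture.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

def make_equivariant_block(n: int, channels: int = 8):
    gspace = gspaces.Rot2dOnR2(N=n)                          # cyclic group C_n acting on the plane
    in_type = enn.FieldType(gspace, [gspace.trivial_repr])   # single-channel (depth) input
    hid_type = enn.FieldType(gspace, channels * [gspace.regular_repr])
    block = enn.SequentialModule(
        enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1),
        enn.ReLU(hid_type),
    )
    return in_type, block

for n in (4, 6, 8):                                          # the three ablated group sizes
    in_type, block = make_equivariant_block(n)
    x = enn.GeometricTensor(torch.randn(1, 1, 64, 64), in_type)
    print(n, block(x).tensor.shape)                          # fiber dimension grows with |C_n|
```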
### Experiments on a Physical Robot
We evaluated Equivariant Transporter on a physical robot in our lab. The simulator was not used in this experiment - all demonstrations were performed on the real robot.
#### Setup
We used a UR5 robot with a Robotiq-85 end effector. The workspace was a \(40cm\times 40cm\) region on a table beneath the robot (see Figure 12). The observations \(o\) were \(200\times 200\) depth images obtained using an Occipital Structure Sensor that was mounted pointing directly down at the table (see Figure 11).
#### Tasks
We evaluated Equivariant Transporter on three of the Ravens-10 gripper-modified tasks: block insertion, placing boxes in bowls, and stacking a pyramid of blocks. Since our sensor only measures depth (and not RGB), we modified the box-in-bowls task such that box color was irrelevant to success, i.e. the task is simply to put any box into a bowl.
#### Demonstrations
We obtained 10 human demonstrations of each task. These demonstrations were obtained by releasing the UR5 brakes and physically pushing the arm so that the harmonic actuators were back-driven.
#### Training and Evaluation
For each task, a single-policy agent was trained for 10k SGD steps. During testing, objects were randomly placed on the table. A task was considered to have failed when a single incorrect pick or place occurred. We evaluated the model on 20 unseen configurations of each task.
#### Results
Table 5 shows results from 20 runs of each of the three tasks. Notice that the success rates here are higher than they were for the corresponding tasks performed in simulation (Table 2). This is likely because the criteria for task success in simulation (less than 1 cm translational error and less than \(\frac{\pi}{12}\) rotation error) were more conservative than what is actually required in the real world.
#### Discussion
Equivariant networks are built on top of conventional convolution kernels with a steerability constraint. They do not break the weight-sharing and weight-update mechanism of CNNs and thus retain the robustness of CNN learning and inference. As shown in Figure 11, Equivariant Transporter Net can handle real sensor noise. Frequently, the crop \(c\) contains multiple objects. For instance, on the stack-block-pyramid task shown in Figure 12, the crop includes not only the block to be picked but also neighboring blocks or parts of them. During training, data augmentation also generates images with partially observed objects; for example, on the block insertion task, it shifts some part of the L-shaped block or the slot out of the scene. Some special shapes, like elongated objects, might be difficult to represent with an image crop and may be better suited to the goal-conditioned version of our model. Some high-precision tasks, like gear assembly, are more sensitive to discretization and may be tackled more easily with the \(\mathrm{SO}(2)\) version of our model.
Compared to learning pick-and-place skills efficiently, the one-shot generalization and sequential decision-making abilities of both Transporter Net and Equivariant Transporter Net seem less compelling. As shown in Table 1, they achieved less than \(50\%\) success rate when trained with 1 demo on the align-box-corner task, which requires the agent to generalize the skill to boxes with random sizes and colors during the test. Similarly, the performance on the stack-block-pyramid task trained with 1 demo is below \(40\%\): if a stack collapses, some blocks may be tilted, which yields out-of-distribution data.
## Conclusion and Limitations
This paper explores the symmetries present in the pick-and-place problem and finds that they can be described by a pick symmetry and a place symmetry, corresponding to the groups of different pick and place poses. We evaluate the Transporter Network model proposed in Zeng, Florence, Tompson, Welker, Chien, Attarian, Armstrong, Krasin, Duong, Sindhwani et al. (2021) and find that it encodes half of the place symmetry (the \(C_{n}\)-place symmetry). We propose a novel version of Transporter Net, Equivariant Transporter Net, which we show encodes both types of symmetries. The large symmetry group can also be exploited to solve goal-conditioned tasks. We evaluate our model on the Ravens-10 Benchmark and its variations against multiple strong baselines. Finally, we demonstrate that the method
Figure 11: Real robot experiment: initial observation \(o_{t}\in\mathbb{R}^{200\times 200}\) from the depth sensor. The left figure shows the block insertion task; the right figure shows the task of placing boxes in bowls. The depth value (\(\mathrm{meter}\)) is illustrated in the color bar.
Figure 12: Stack-block-pyramid task on the real robot. The left figure shows the initial state; the right figure shows the completion state.
\begin{table}
\begin{tabular}{c|c|c|c} \hline
Task & \# demos & \# completions / \# trials & success rate \\ \hline
stack-block-pyramid & 10 & 17/20 & 95.5\% \\ \hline
place-box-in-bowl & 10 & 20/20 & 100\% \\ \hline
block-insertion & 10 & 20/20 & 100\% \\ \hline
\end{tabular}
\end{table}
Table 5: Task success rates for physical robot evaluation tasks.
can effectively be used to learn manipulation policies on a physical robot.
One limitation of our framework as presented in this paper is that it relies entirely on behavior cloning. A clear direction is to integrate more on-policy learning, which we believe would enable us to handle more complex tasks. Other directions, such as a multi-task equivariant agent, a closed-loop policy, or a 3D Equivariant Transporter Net, are also interesting.
|
2304.08776 | Hot and highly magnetized neutron star matter properties with Skyrme
interactions | We study the properties of hot and dense neutron star matter under the
presence of strong magnetic fields using two Skyrme interactions, namely the
LNS and the BSk21 ones. Asking for $\beta$--stability and charge neutrality, we
construct the equation of state of the system and analyze its composition for a
range of densities, temperatures and magnetic field intensities of interest for
the study of supernova and proto-neutron star matter, with a particular
interest on the degree of spin-polarization of the different components. The
results show that system configurations with larger fractions of spin up
protons and spin down neutrons and electrons are energetically favored over
those with larger fractions of spin down protons and spin up neutrons and
electrons. The effective mass of neutrons and protons is found to be in general
larger for the more abundant of their spin projection component, respectively,
spin down neutrons and spin up protons. The effect of the magnetic field on the
Helmholtz total free energy density, pressure and isothermal compressibility of
the system is almost negligible for all the values of the magnetic field
considered. | Omar G. Benvenuto, Eduardo Bauer, Isaac Vidaña | 2023-04-18T07:15:34Z | http://arxiv.org/abs/2304.08776v1 | # Hot and highly magnetized neutron star matter properties with Skyrme interactions
###### Abstract
We study the properties of hot and dense neutron star matter under the presence of strong magnetic fields using two Skyrme interactions, namely the LNS and the BSk21 ones. Asking for \(\beta\)-stability and charge neutrality, we construct the equation of state of the system and analyze its composition for a range of densities, temperatures and magnetic field intensities of interest for the study of supernova and proto-neutron star matter, with a particular interest on the degree of spin-polarization of the different components. The results show that system configurations with larger fractions of spin up protons and spin down neutrons and electrons are energetically favored over those with larger fractions of spin down protons and spin up neutrons and electrons. The effective mass of neutrons and protons is found to be in general larger for the more abundant of their spin projection component, respectively, spin down neutrons and spin up protons. The effect of the magnetic field on the Helmholtz total free energy density, pressure and isothermal compressibility of the system is almost negligible for all the values of the magnetic field considered.
## I Introduction
The properties of neutron stars [1] can be drastically modified due to the presence of strong magnetic fields. Most radio pulsars and accreting neutron stars in X-ray binaries present surface magnetic fields with intensities in the range \(10^{12}-10^{13}\) G [2]. Also recycled millisecond pulsars and old neutron stars in low-mass X-ray binaries have high surface fields of about \(10^{8}-10^{9}\) G [3; 4]. In the surface of soft-gamma-ray repeaters and slowly spinning anomalous X-ray pulsars, the so-called "magnetars", the field can reach values of the order of \(10^{14}-10^{15}\) G [5; 6; 7; 8]. The intensity of these fields may grow by several orders of magnitude in the dense interior of all these compact objects up to an upper limit, \(B\leq 10^{18}(M/1.4M_{\odot})(R/10\,{\rm km})^{-2}\) G, which follows from the virial theorem of magnetohydrostatic equilibrium [2; 9]. However, the origin of such large intensities remains still uncertain. These strong fields could be the fossil remnants from those of the progenitor star [10], or alternatively, they could be generated after the formation of the neutron star by some kind of dynamo process due to some long-lived electric currents flowing in the highly conductive neutron star material [11]. Another possibility that has long been considered by many authors, with however contradictory results, is that these fields result from a spontaneous phase transition to a ferromagnetic state at densities corresponding to theoretically stable neutron stars (see, _e.g.,_ Refs. [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]). Whatever is the origin of these fields, however, it is clear that the study of nuclear matter under the influence of strong magnetic fields is fundamental for a complete understanding of the magnetic properties of neutron stars.
Several authors have studied the magnetization of symmetric nuclear matter and pure neutron matter [39; 40; 41; 42; 43; 44; 45]. The magnetization of \(\beta\)-stable neutron star matter, however, has received less attention in the literature. Blandford and Hernquist in Ref. [46], for instance, studied extensively the magnetization of \(\beta\)-stable matter for a single-component electron gas and for the crust matter of neutron stars. This study was generalized years later by Broderick _et al._ in Ref. [47] by including also the contribution of neutrons and protons. The effect of the density dependence of the nuclear symmetry energy on the magnetization of \(\beta\)-stable matter was studied a few years ago by Dong _et al._[48], who concluded that the magnetic susceptibility of protons, electrons and muons can be larger than that of neutrons. These authors also found that the anomalous magnetic moment of protons enhances their magnetic susceptibility to the point that it can be one of the main contributions and, therefore, should not be neglected. In Ref. [49] Rabhi _et al._ employed a relativistic mean field (RMF) approach to study the effects of strong magnetic fields on the proton and neutron spin polarization, and the magnetic susceptibility in asymmetric nuclear matter. The authors of this work showed that magnetic fields of about \(10^{16}-10^{17}\) G have noticeable effects in the range of densities of interest for the study of neutron star crusts. They also found that, although the magnetic susceptibility of protons is larger for weaker fields, that of neutrons becomes of the same order or even larger at subsaturation densities for small values of the proton fraction when the fields are larger than \(\sim 10^{16}\) G. In recent years, the Coimbra group has published
several works devoted to the study of the effects of strong magnetic fields on the crust-core transition and the inner crust of neutron stars [50; 51; 52; 53; 54; 55; 56]. In these works the Vlasov equation is used to determine, using a RMF approach, the dynamical spinodal instability region which gives a good estimation of the crust-core transition in neutron stars. The results of these works show that strong magnetic fields of the order of \(10^{15}-10^{17}\) G have a large effect on the spinodal region, defining the crust-core transition as a succession of stable and unstable regions due to the opening of new Landau levels. The results of these studies show also that sufficiently strong magnetic fields can significantly modify the extension of the unstable region and, therefore, the crust of magnetized neutron stars. The effect of temperature on the crust-core transition of magnetars was studied by this group in Ref. [52] and very recently also in Ref. [56]. In these works, the authors showed that the effect on the extension of the crust-core transition is washed away for temperatures above \(10^{9}\) K for magnetic field intensities \(\sim 5\times 10^{16}\) G but may still persist if a magnetic field as high as \(\sim 5\times 10^{17}\) G is considered. They found that for lower temperatures, the effect of the magnetic field on the crust-core transition is noticeable and grows as the temperature decreases.
In this work we study the properties of an electrically neutral system of neutrons, protons and electrons in equilibrium with respect to the weak interaction (\(\beta\)-equilibrium), at finite temperature and in the presence of strong magnetic fields using Skyrme interactions. We consider a range of densities, temperatures and magnetic field intensities of interest for the study of supernova and proto-neutron star matter. We construct the equation of state (EoS) of the system and analyze its composition, with a particular interest on the degree of spin-polarization of the different components. The isothermal compressibility is also calculated and analyzed. The final scope of this work is to establish a framework for a future study of neutrino propagation in hot dense neutron star matter under the presence of strong magnetic fields. Neutrino cross sections and, consequently, the neutrino mean free path can be substantially affected by the presence of strong magnetic fields in neutron stars. For instance, the emission of neutrinos becomes asymmetric. It depends on the direction of the neutrino, as it was recently shown in Ref. [57] where the neutrino mean free path in hot pure neutron matter under the presence of strong magnetic fields was analyzed by two of the authors of the present work. The extension of this analysis for the propagation of neutrinos in (more realistic) \(\beta\)-stable matter is left for the near future, and it will be based on the results of the present work.
The paper is organized in the following way. The formalism employed to determine the properties of hot neutron star matter under the presence of an external magnetic field is described in Sec. II. Results are presented and discussed in Sec. III. Finally, a short summary and the main conclusions of this work are given in Sec. IV.
## II Formalism
As said in the introduction, in this work we consider an electrically neutral system of neutrons, protons and electrons in \(\beta\)-equilibrium, at finite temperature and in the presence of strong magnetic fields. The physical state of the system can be obtained by minimizing a function \(F\) which is constructed from the Helmholtz total free energy density \(\mathcal{F}\) of the system and two constraints that express, respectively, the conditions of baryon number conservation and electrical charge neutrality
\[F=\mathcal{F}+\alpha\left(\rho-\sum_{\tau=n,p}\sum_{\sigma=\uparrow,\downarrow }\rho_{\tau\sigma}\right)+\beta\sum_{\sigma=\uparrow,\downarrow}\left(\rho_{ p\sigma}-\rho_{e\sigma}\right). \tag{1}\]
Here \(\rho\) is the total baryon number density, and \(\rho_{\tau\sigma}\) and \(\rho_{e\sigma}\) are, respectively, the densities of neutrons (\(\tau=n\)), protons (\(\tau=p\)) and electrons (\(e\)), with spin up (\(\sigma=\uparrow\)) or spin down (\(\sigma=\downarrow\)) projections. The quantities \(\alpha\) and \(\beta\) are the two Lagrange multipliers associated to each one of the constraints. The minimization of \(F\) requires its partial derivatives with respect to the particle densities and the two multipliers to be zero, _i.e.,_
\[\frac{\partial F}{\partial\rho_{\tau\sigma}} = 0\,\ \frac{\partial F}{\partial\rho_{e\sigma}}=0\,\] \[\frac{\partial F}{\partial\alpha} = 0\,\ \ \frac{\partial F}{\partial\beta}=0. \tag{2}\]
Remembering that the chemical potential of a particle \(i\) is \(\mu_{i}=\partial\mathcal{F}/\partial\rho_{i}\), the above conditions yield the following set of eight equations
\[\mu_{i}-b_{i}\alpha+q_{i}\beta=0,\ \ i=n_{\uparrow},n_{\downarrow},p_{ \uparrow},p_{\downarrow},e_{\uparrow},e_{\downarrow}\, \tag{3}\]
\[\rho=\sum_{\tau=n,p}\sum_{\sigma=\uparrow,\downarrow}\rho_{\tau\sigma}\, \tag{4}\]
\[\sum_{\sigma=\uparrow,\downarrow}\rho_{p\sigma}=\sum_{\sigma=\uparrow, \downarrow}\rho_{e\sigma}\, \tag{5}\]
where \(b_{i}\) is the baryon number of particle \(i\) and \(q_{i}\) its electric charge. Eliminating the Lagrange multipliers \(\alpha\) and \(\beta\), one can obtain a set of relations among the chemical potentials of the different particles. In general there are as many independent chemical potentials as there are conserved charges, and all the others can be written in terms of them. In the case of neutron stars there are only two conserved charges (baryon number and electric charge), and we chose \(\mu_{n_{\uparrow}}\) and \(\mu_{e_{\downarrow}}\) as the two independent chemical potentials associated with them. Applying now Eq. (3) to \(n_{\uparrow}\) and \(e_{\uparrow}\) one finds
\[\alpha=\mu_{n_{\uparrow}}\,\ \beta=\mu_{e_{\uparrow}}. \tag{6}\]
Therefore, we can write
\[\mu_{i}=b_{i}\mu_{n_{\uparrow}}-q_{i}\mu_{e_{\uparrow}}\,\ \ i=n_{\uparrow},n_{ \downarrow},p_{\uparrow},p_{\downarrow},e_{\uparrow},e_{\downarrow}\, \tag{7}\]
which, together with Eqs. (4) and (5), allows us to determine the composition of the system. Note that
\[\mu_{n_{\uparrow}}=\mu_{n_{\downarrow}}\,\ \ \mu_{p_{\uparrow}}=\mu_{p_{\downarrow}}\,\ \ \mu_{e_{\uparrow}}=\mu_{e_{\downarrow}}\, \tag{8}\]
that is, in the physical state the chemical potential of each species is independent of their spin projection.
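As a minimal numerical sketch, the equilibrium conditions above can be cast as a root-finding problem for the six partial densities at fixed \(\rho\); the function `mu` below is only a placeholder for the model chemical potentials (Skyrme for nucleons, magnetized free gas for electrons), not an implementation of them.

```python
import numpy as np
from scipy.optimize import fsolve

def residuals(x, rho, mu):
    """x = (n_up, n_dn, p_up, p_dn, e_up, e_dn); mu(species, spin, x) is a placeholder."""
    n_up, n_dn, p_up, p_dn, e_up, e_dn = x
    return [
        n_up + n_dn + p_up + p_dn - rho,                           # Eq. (4): baryon number
        (p_up + p_dn) - (e_up + e_dn),                             # Eq. (5): charge neutrality
        mu('n', 'up', x) - mu('n', 'dn', x),                       # Eq. (8)
        mu('p', 'up', x) - mu('p', 'dn', x),                       # Eq. (8)
        mu('e', 'up', x) - mu('e', 'dn', x),                       # Eq. (8)
        mu('p', 'up', x) - (mu('n', 'up', x) - mu('e', 'up', x)),  # Eq. (7)
    ]

# composition = fsolve(residuals, x0=np.full(6, rho / 6.0), args=(rho, mu))
```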
We consider the interaction of electrons only with the magnetic field, neglecting the interactions of electrons among themselves and with protons, whereas to describe the in-medium interactions among the nucleons we employ Skyrme forces. In particular, we use the LNS interaction developed by Cao _et al._[58] and the interaction BSk21 [59] of the Brussels-Montreal group. We note that the BSk21 interaction contains two new terms, in addition to the usual ones of the Skyrme force, that are introduced in order to avoid the appearance of a ferromagnetic instability at high densities, a general feature of all the conventional Skyrme forces developed in the past, as is the case for the LNS one.
The total energy density of the system is given by
\[{\cal E}={\cal E}_{nucl}+{\cal E}_{elec}+\mu_{N}B\left(2L_{p}+\rho_{p}-\frac{g _{p}}{2}W_{p}-\frac{g_{n}}{2}W_{n}\right)\, \tag{9}\]
where \({\cal E}_{nucl}\) is the nuclear contribution, obtained in our case using the Hartree-Fock approximation with the Skyrme interaction, \({\cal E}_{elec}\) is the electron one, and the last term shows the explicit dependence of the energy density on the magnetic field. In what follows we present these three contributions separately. We note that throughout this work we use natural units in which \(\hbar=c=1\).
The nuclear contribution can be written in a compact form as
\[{\cal E}_{nucl}=\sum_{\tau=n,p}\sum_{\sigma=\uparrow,\downarrow}\frac{K_{ \tau\sigma}}{2m^{*}_{\tau\sigma}}+\frac{1}{16}\left[(a_{0}+a_{2}w^{2})\rho^{2 }+a_{1}\left(\sum_{\tau=n,p}W_{\tau}\right)^{2}+a_{3}\left(\sum_{\tau=n,p}I_{ \tau}W_{\tau}\right)^{2}\right]. \tag{10}\]
Here
\[w=\frac{1}{\rho}\left[\sum_{\sigma=\pm 1}\rho_{n\sigma}-\sum_{\sigma=\pm 1} \rho_{p\sigma}\right] \tag{11}\]
is the isospin asymmetry, with
\[\rho_{n\sigma}=\frac{1}{(2\pi)^{3}}\int d^{3}\vec{p}f_{n\sigma}(\varepsilon_{ n\sigma}(p),T) \tag{12}\]
and
\[\rho_{p\sigma}=\frac{eB}{(2\pi)^{2}}\sum_{N_{p}}\int_{-\infty}^{\infty}dp_{z} f_{p\sigma}(\varepsilon_{p\sigma}(p_{z},N_{p}),T) \tag{13}\]
being the partial densities of neutrons and protons with spin projection \(\sigma\). The effective mass of a spin up or down nucleon is given by
\[m^{*}_{\tau\sigma} = \left[\frac{1}{m_{\tau}}+\frac{1}{4}(b_{0}+c_{0}-(b_{2}+c_{2})wI _{\tau})\rho\right. \tag{14}\] \[+ \left.\frac{s_{\sigma}}{4}\sum_{\tau^{\prime}=n,p}(b_{1}+c_{1}+( b_{3}+c_{3})I_{\tau}I_{\tau^{\prime}})W_{\tau^{\prime}}\right]^{-1}\,\]
where \(m_{\tau}\) is the bare mass of the nucleon, \(I_{\tau}=1(-1)\) for protons (neutrons), and \(s_{\sigma}=1(-1)\) if the spin projection is up (down). The coefficients \(a_{0},\cdots,a_{3}\), \(b_{0},\cdots,b_{3}\) and \(c_{0},\cdots,c_{3}\) are given in terms of the parameters \(t_{i=0,\ldots,5},x_{i=0,\ldots,5}\), \(\gamma\) and \(\beta\) of the LNS and BSk21 interactions through the relations
\[a_{0} = 6t_{0}+t_{3}\rho^{\gamma}\] \[a_{1} = -2t_{0}(1-2x_{0})-\frac{t_{3}}{3}(1-2x_{3})\rho^{\gamma}\] \[a_{2} = -2t_{0}(1+2x_{0})-\frac{t_{3}}{3}(1+2x_{3})\rho^{\gamma}\] \[a_{3} = -2t_{0}-\frac{t_{3}}{3}\rho^{\gamma}\] \[b_{0} = \frac{1}{2}[3t_{1}+t_{2}(5+4x_{2})]\] \[b_{1} = \frac{1}{2}[t_{2}(1+2x_{2})-t_{1}(1-2x_{1})]\] \[b_{2} = \frac{1}{2}[t_{2}(1+2x_{2})-t_{1}(1+2x_{1})]\] \[b_{3} = \frac{1}{2}(t_{2}-t_{1})\] \[c_{0} = \frac{1}{2}[3t_{4}\rho^{\beta}+t_{5}\rho^{\gamma}(5+4x_{5})]\] \[c_{1} = \frac{1}{2}[t_{5}\rho^{\gamma}(1+2x_{5})-t_{4}\rho^{\beta}(1-2x_{ 4})]\] \[c_{2} = \frac{1}{2}[t_{5}\rho^{\gamma}(1+2x_{5})-t_{4}\rho^{\beta}(1+2x_{ 4})]\] \[c_{3} = \frac{1}{2}(t_{5}\rho^{\gamma}-t_{4}\rho^{\beta}). \tag{15}\]
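The coefficients of Eq. (15) are straightforward to evaluate; the sketch below is a direct transcription (with illustrative argument names), where passing \(t_4=t_5=x_4=x_5=0\) reproduces the LNS case noted next.

```python
# Direct transcription of Eq. (15): the density-dependent coefficients of the
# Skyrme energy density, given the force parameters t_i, x_i, gamma and beta.
def skyrme_coefficients(rho, t, x, gamma, beta):
    t0, t1, t2, t3, t4, t5 = t
    x0, x1, x2, x3, x4, x5 = x
    a0 = 6*t0 + t3*rho**gamma
    a1 = -2*t0*(1 - 2*x0) - (t3/3)*(1 - 2*x3)*rho**gamma
    a2 = -2*t0*(1 + 2*x0) - (t3/3)*(1 + 2*x3)*rho**gamma
    a3 = -2*t0 - (t3/3)*rho**gamma
    b0 = 0.5*(3*t1 + t2*(5 + 4*x2))
    b1 = 0.5*(t2*(1 + 2*x2) - t1*(1 - 2*x1))
    b2 = 0.5*(t2*(1 + 2*x2) - t1*(1 + 2*x1))
    b3 = 0.5*(t2 - t1)
    c0 = 0.5*(3*t4*rho**beta + t5*rho**gamma*(5 + 4*x5))
    c1 = 0.5*(t5*rho**gamma*(1 + 2*x5) - t4*rho**beta*(1 - 2*x4))
    c2 = 0.5*(t5*rho**gamma*(1 + 2*x5) - t4*rho**beta*(1 + 2*x4))
    c3 = 0.5*(t5*rho**gamma - t4*rho**beta)
    return (a0, a1, a2, a3), (b0, b1, b2, b3), (c0, c1, c2, c3)
```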
We note that the terms with the parameters \(t_{4},t_{5},x_{4},x_{5}\) and \(\beta\) are absent in the case of the LNS force and, therefore, the coefficients \(c_{0},\cdots,c_{3}\) are set equal to zero in this case. The quantities \(K_{\tau\sigma}\) and \(W_{\tau}\) appearing in Eqs. (9), (10) and (14) are related, respectively, with the kinetic
energy and the spin asymmetry density, and are defined as
\[K_{n\sigma}=\frac{1}{(2\pi)^{3}}\int d^{3}\vec{p}p^{2}f_{n\sigma}(\varepsilon_{n \sigma}(p),T) \tag{16}\]
\[W_{n}=\rho_{n\uparrow}-\rho_{n\downarrow} \tag{17}\]
in the case of neutrons, and
\[K_{p\sigma}=\frac{eB}{(2\pi)^{2}}\sum_{N_{p}}\int_{-\infty}^{\infty}dp_{z}\,p_{z}^{2}f_{p\sigma}(\varepsilon_{p\sigma}(p_{z},N_{p}),T) \tag{18}\]
\[W_{p}=\rho_{p\uparrow}-\rho_{p\downarrow} \tag{19}\]
for protons. Note that when the magnetic field is assumed to be along the direction of the z-axis, due to the Landau quantization, the \(x\) and \(y\) components of the proton momentum spread over a bounded region of area \(2\pi eB\) in the \(p_{x}-p_{y}\) plane, whereas the \(z\) component is not bounded and varies continuously. Therefore, the contribution of the protons to any macroscopic quantity per unit volume is evaluated by means of the replacement \(\int d^{3}\vec{p}/(2\pi)^{3}\to eB\int dp_{z}/(2\pi)^{2}\sum_{N_{p}}\). The sums over the proton Landau levels \(N_{p}\), in Eqs. (18) and (19), run from 0 up to a maximum level which is determined numerically, as explained in detail in appendix B of Ref. [60].
The functions \(f_{n\sigma}(\varepsilon_{n\sigma}(p),T)\) and \(f_{p\sigma}(\varepsilon_{p\sigma}(p_{z},N_{p}),T)\) in Eqs. (12)-(13) and (16)-(19) are the corresponding neutron and proton Fermi-Dirac momentum distributions
\[f_{\tau\sigma}(\varepsilon_{\tau\sigma},T)=\left[1+\exp\left(\frac{ \varepsilon_{\tau\sigma}-\mu_{\tau\sigma}}{T}\right)\right]^{-1}\, \tag{20}\]
where the neutron and proton single-particle energies are, respectively
\[\varepsilon_{n\sigma}(p)=m_{n}+\frac{p^{2}}{2m_{n\sigma}^{*}}+\frac{1}{8}v_{n \sigma}-\mu_{N}g_{n}\frac{s_{\sigma}}{2}B\, \tag{21}\]
and
\[\varepsilon_{p\sigma}(p_{z},N_{p}) = m_{p}+\frac{p_{z}^{2}}{2m_{p\sigma}^{*}}+\frac{1}{8}v_{p\sigma} \tag{22}\] \[+ \mu_{N}B\left(2N_{p}+1-g_{p}\frac{s_{\sigma}}{2}\right)\.\]
Here \(\mu_{N}=3.15245\times 10^{-18}\) MeV/G is the nuclear magneton, \(g_{n}=-3.826\) and \(g_{p}=5.586\) are, respectively, the neutron and proton g-factors which take into account their anomalous magnetic moments, and
\[v_{\tau\sigma} = (a_{0}-a_{2}wI_{\tau})\,\rho+s_{\sigma}\sum_{\tau^{\prime}=n,p}( a_{1}+a_{3}I_{\tau}I_{\tau^{\prime}})W_{\tau^{\prime}} \tag{23}\] \[+ \sum_{\tau^{\prime}=n,p}\sum_{\sigma^{\prime}=\uparrow,\downarrow }(b_{0}+c_{0}+(b_{2}+c_{2})I_{\tau}I_{\tau^{\prime}})K_{\tau^{\prime}\sigma^{ \prime}}\] \[+ s_{\sigma}\sum_{\tau^{\prime}=n,p}\sum_{\sigma^{\prime}= \uparrow,\downarrow}s_{\sigma^{\prime}}(b_{1}+c_{1}+(b_{3}+c_{3})I_{\tau}I_{ \tau^{\prime}})K_{\tau^{\prime}\sigma^{\prime}}\]
is the Skyrme single-particle potential energy.
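Numerically, Eqs. (13) and (20) combine into a simple Landau-level sum; the sketch below shows the structure of the computation, with `eps` standing in for the dispersion of Eq. (22) and a user-supplied level cutoff (both illustrative assumptions).

```python
import numpy as np
from scipy.integrate import quad

def fermi_dirac(eps, mu, T):
    return 1.0 / (1.0 + np.exp((eps - mu) / T))              # Eq. (20)

def proton_density(eB, mu, T, eps, n_max):
    """Eq. (13): eps(p_z, N) is the proton single-particle energy of Eq. (22);
    n_max is the numerically determined maximum Landau level."""
    total = 0.0
    for N in range(n_max + 1):
        integral, _ = quad(lambda pz: fermi_dirac(eps(pz, N), mu, T),
                           -np.inf, np.inf)
        total += integral
    return eB / (2.0 * np.pi) ** 2 * total
```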
The electron contribution to the total energy density is
\[{\cal E}_{elec} = \frac{eB}{(2\pi)^{2}}\sum_{N_{e}}\sum_{\sigma=\uparrow,\downarrow} \tag{24}\] \[\times \int_{-\infty}^{\infty}dp_{z}\varepsilon_{e\sigma}(p_{z},N_{e})f_ {e\sigma}(\varepsilon_{e\sigma}(p_{z},N_{e}),T)\,\]
where, \(f_{e\sigma}(\varepsilon_{e\sigma}(p_{z},N_{e}),T)\) is the Fermi-Dirac distribution of electrons and \(\varepsilon_{e\sigma}(p_{z},N_{e})\) their single-particle energy which reads
\[\varepsilon_{e\sigma}(p_{z},N_{e})=\sqrt{m_{e}^{2}+2m_{e}\mu_{B}B(2N_{e}+1-g_{ e}\frac{s_{\sigma}}{2})+p_{z}^{2}}\, \tag{25}\]
with \(m_{e},N_{e},\mu_{B}=5.78838\times 10^{-15}\) MeV/G and \(g_{e}=-2\) being, respectively, the mass, the Landau level, the Bohr magneton and the g-factor of the electron. The partial densities of spin up or spin down electrons and the corresponding electron spin asymmetry density are
\[\rho_{e\sigma}=\frac{eB}{(2\pi)^{2}}\sum_{N_{e}}\int_{-\infty}^{\infty}dp_{z}f _{e\sigma}(\varepsilon_{e\sigma}(p_{z},N_{e}),T) \tag{26}\]
and
\[W_{e}=\rho_{e\uparrow}-\rho_{e\downarrow}\, \tag{27}\]
respectively. Note that, as in the case of protons, in Eqs. (24), (26) and (27) the sum over the electron Landau levels \(N_{e}\) runs from 0 up to a maximum level obtained numerically as in the case of the maximum proton Landau level. For details, the reader is again referred to the appendix B of Ref. [60].
The last remaining element to be defined is the quantity \(L_{p}\) (see Eq. (9)) entering the explicit magnetic field contribution to the total energy density. This quantity is simply
\[L_{p}=\frac{eB}{(2\pi)^{2}}\sum_{N_{p}}N_{p}\sum_{\sigma=\uparrow,\downarrow} \int_{-\infty}^{\infty}dp_{z}f_{p\sigma}(\varepsilon_{p\sigma}(p_{z},N_{p}),T ). \tag{28}\]
Once we have the total energy density, the Helmholtz total free energy density, from which the chemical potentials of all the particle species can be evaluated, is easily obtained from the usual thermodynamical relation
\[{\cal F}={\cal E}-T{\cal S}\, \tag{29}\]
where \({\cal S}\) is the total entropy density
\[{\cal S}={\cal S}_{n}+{\cal S}_{p}+{\cal S}_{e}\, \tag{30}\]
with \({\cal S}_{n}\), \({\cal S}_{p}\) and \({\cal S}_{e}\) the corresponding neutron, proton and electron contributions:
\[{\cal S}_{n} = -\sum_{\sigma=\uparrow,\downarrow}\frac{1}{(2\pi)^{3}}\int d^{3}\vec {p}\left[f_{n\sigma}{\rm ln}(f_{n\sigma})+(1-f_{n\sigma}){\rm ln}(1-f_{n\sigma} )\right]\,\] \[{\cal S}_{p} = -\sum_{\sigma=\uparrow,\downarrow}\frac{eB}{(2\pi)^{2}}\sum_{N_{p} }\int_{-\infty}^{\infty}dp_{z}\left[f_{p\sigma}{\rm ln}(f_{p\sigma})+(1-f_{p \sigma}){\rm ln}(1-f_{p\sigma})\right]\,\] \[{\cal S}_{e} = -\sum_{\sigma=\uparrow,\downarrow}\frac{eB}{(2\pi)^{2}}\sum_{N_{e }}\int_{-\infty}^{\infty}dp_{z}\left[f_{e\sigma}{\rm ln}(f_{e\sigma})+(1-f_{ e\sigma}){\rm ln}(1-f_{e\sigma})\right]\, \tag{31}\]
where we have omitted the explicit dependencies of the Fermi-Dirac distributions to simplify the notation.
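For one spin projection, the neutron entropy integral of Eq. (31) reduces to a radial integral; the following is a small numerical sketch, with `f_of_p` a placeholder for the occupation built from Eqs. (20)-(21).

```python
import numpy as np
from scipy.integrate import quad

def neutron_entropy_density(f_of_p):
    """Eq. (31) for one spin projection; f_of_p(p) -> occupation f(eps(p), T)."""
    def integrand(p):
        f = f_of_p(p)
        if f <= 0.0 or f >= 1.0:
            return 0.0                        # f ln f -> 0 in both limits
        return p**2 * (f*np.log(f) + (1 - f)*np.log(1 - f))
    val, _ = quad(integrand, 0.0, np.inf)
    return -4.0*np.pi / (2.0*np.pi)**3 * val  # d^3p = 4*pi*p^2 dp
```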
Once \({\cal F}\) is known one can obtain the pressure of the system simply as
\[P=\rho\left(\frac{\partial{\cal F}}{\partial\rho}\right)_{T,B}-{\cal F}\, \tag{32}\]
from which it is possible to determine the isothermal compressibility
\[{\cal K}=\left[\rho\left(\frac{\partial P}{\partial\rho}\right)_{T,B}\right] ^{-1}. \tag{33}\]
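In practice, Eqs. (32) and (33) can be evaluated by finite differences on a grid of densities at fixed \(T\) and \(B\); a minimal sketch, assuming \(\mathcal{F}\) has been tabulated on the grid:

```python
import numpy as np

def pressure_and_compressibility(rho, F):
    """rho, F: 1D arrays of densities and free energy densities at fixed T and B."""
    dF = np.gradient(F, rho)
    P = rho * dF - F                # Eq. (32)
    dP = np.gradient(P, rho)
    K = 1.0 / (rho * dP)            # Eq. (33)
    return P, K
```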
## III Results and discussion
In the following we discuss the properties of hot and dense neutron star matter under the presence of strong magnetic fields. Results are presented for densities up to \(0.4\) fm\({}^{-3}\), temperatures \(T=5\), \(15\) and \(30\) MeV, and magnetic field strengths \(B=10^{16}\), \(10^{17}\) and \(10^{18}\) G.
We start by showing in Fig. 1 the fractions of neutrons, protons and electrons with spin up and down (\(x_{i}\equiv\rho_{i}/\rho\), \(i=n_{\uparrow},n_{\downarrow},p_{\uparrow},p_{\downarrow},e_{\uparrow},e_{\downarrow}\)) in \(\beta\)-stable matter, obtained by solving Eqs. (4), (5) and (7), at \(T=5\) MeV for the three magnetic field strengths just mentioned and for the two interactions considered, LNS (panels (a), (c) and (e)) and BSk21 (panels (b), (d) and (f)). As seen in the figure, a magnetic field of strength \(10^{16}\) G induces only an extremely low polarization of the spins of the different components of neutron star matter at very low densities, and fields of the order of at least \(10^{17}\) G are needed to see appreciable differences in the fractions of neutrons, protons and electrons with opposite spin projections. We note also that whereas the fraction of protons with spin up (_i.e.,_ oriented parallel to the magnetic field) is larger than the fraction of protons with spin down, the opposite is observed in the case of neutrons and electrons. This is simply a consequence of the fact that the proton g-factor is positive while the neutron and electron ones are negative. Due to this, spin up (down) protons (neutrons and electrons) have lower energy than spin down (up) protons (neutrons and electrons) (see Eqs. (21), (22) and (25)). Consequently, the configurations of the system with fractions of spin up protons larger than spin down protons, and fractions of spin down neutrons and electrons larger than spin up neutrons and electrons, have less energy and, therefore, are energetically favored. We observe that although in the case of the BSk21 interaction, for each particle species \(i\), the difference in the fractions between the spin up and spin down components decreases with increasing density, in the case of the LNS an increase of this difference is observed for neutrons and protons for densities \(\rho\gtrsim 0.3\) fm\({}^{-3}\). This is a consequence of the appearance of a ferromagnetic instability predicted by the LNS model at high densities, an instability that is corrected in the case of the BSk21 force, as we mentioned before.
To understand better the state of spin polarization of the system, we show now the spin asymmetry, defined as the ratio \(W_{j}/\rho_{j}=(\rho_{j\uparrow}-\rho_{j\downarrow})/\rho_{j}\) with \(\rho_{j}=\rho_{j\uparrow}+\rho_{j\downarrow}\) (\(j=n,p,e\)), of each particle species for the three temperatures and the three magnetic fields considered in this work. Results for the LNS and the BSk21 models are presented in Fig. 2 and Fig. 3, respectively. We note first that the value \(W_{j}/\rho_{j}=0\) corresponds to the case in which the species \(j\) is unpolarized, whereas \(W_{j}/\rho_{j}=\pm 1\) means that this species is totally polarized, _i.e.,_ all its spins are aligned along the same direction, parallel (\(W_{j}/\rho_{j}=1\)) or antiparallel (\(W_{j}/\rho_{j}=-1\)) to the one defined by the magnetic field. Partially polarized configurations of a species \(j\) correspond to values of \(W_{j}/\rho_{j}\) between \(-1\) and \(1\). To begin with, we observe in both figures that a magnetic field of \(10^{16}\) G has almost no effect on the spin asymmetry of the different particles, being the system essentially in a global unpolarized state. Just for the lowest temperature \(T=5\) MeV and at very low densities this field induces a very tiny polarization of the particle spins, especially on the electron ones. Only magnetic fields with a strength \(B\geq 10^{17}\) G are able to change the spin polarization of the system from the unpolarized state to a partially polarized one. As the sign of the spin asymmetry of each particle indicates, while protons have their majority of their spins oriented parallel to the magnetic field (\(W_{p}/\rho_{p}>0\)), neutron and electron spins are mostly aligned in the opposite direction (\(W_{n}/\rho_{n}<0\), \(W_{e}/\rho_{e}<0\)). This, as it was discussed above, is because system configurations with larger fractions of spin up protons and spin down neutrons and electrons are energetically favored over those with larger fractions of spin down protons and spin up neutrons and
electrons. We note that although the spin asymmetry of the protons (neutrons) decreases (increases) always with density for the BSk21 interaction, indicating the opposition of this nuclear interaction to the spin polarization induced by the magnetic field, this is not the case when the LNS one is used. In this case, we observe that \(W_{p}/\rho_{p}\) (\(W_{n}/\rho_{n}\)) decreases (increases) of up to a den-\(B=10^{18}\) G) and then it increases (decreases). This behavior is again just a consequence of the ferromagnetic instability predicted by the LNS force. Regarding the electrons, we see that their spin asymmetry always increases monotonously with density, reaching asymptotically their unpolarized state (\(W_{e}/\rho_{e}=0\)) at high densities. We observe also that protons and electrons are more polarized than the spin asymmetry.
Figure 3: (Color online) Spin asymmetry \(W_{j}/\rho_{j}\) of each particle species for neutron star matter at several temperatures and magnetic field strengths for the BSk21 interaction.
Figure 2: (Color online) Spin asymmetry \(W_{j}/\rho_{j}\) of each particle species for neutron star matter at several temperatures and magnetic field strengths for the LNS interaction.
the lower degree of polarization of the neutrons being due to their weak anomalous magnetic moment. Note, finally, that the spin asymmetry of the three species decreases (in absolute value) when increasing the temperature. This is expected, since increasing the temperature increases the entropy of the system and, consequently, its disorder. The numbers of spin up and spin down particles become more and more similar and, therefore, the system becomes less polarized.
In Fig. 4 we show now the effect of the magnetic field on the neutron and proton effective masses predicted by the LNS (thick lines) and the BSk21 (thin lines) interactions for the three temperatures considered in this work. First, we observe that a magnetic field of \(10^{16}\) G is too low to have any effect on the effective mass of both neutrons and protons with different spin projections because, as shown in panels (a), (d) and (g) of Figs. 2 and 3, it has almost no effect on their spin asymmetry and, consequently, their effective masses are essentially spin independent for this field. Second, we note that the increase of \(m^{*}_{\tau\sigma}/m_{\tau}\) above one seen in the case of the BSk21 interaction is a consequence of the density dependence of the effective mass predicted by this force, which goes as \((a+b\rho+c\rho^{1+\beta}+d\rho^{1+\gamma})^{-1}\) (see Eq. (14)), whereas the LNS interaction, for which the coefficients \(c_{0},\cdots,c_{3}\) are zero, predicts \(m^{*}_{\tau\sigma}\sim(\tilde{a}+\tilde{b}\rho)^{-1}\). Finally, we notice for both interactions that the effective mass of neutrons and protons is in general larger for the more abundant of their spin projection components, respectively, spin down neutrons and spin up protons. Note, however, that the BSk21 interaction in the case of protons predicts \(m^{*}_{p_{\downarrow}}>m^{*}_{p_{\uparrow}}\) for densities below \(\sim 0.1\) fm\({}^{-3}\). The splitting of the spin up and spin down nucleon effective masses can be understood by looking at the term \(\frac{s_{\sigma}}{4}\sum_{\tau^{\prime}=n,p}(b_{1}+c_{1}+(b_{3}+c_{3})I_{\tau}I_{\tau^{\prime}})W_{\tau^{\prime}}\) of Eq. (14). To facilitate the analysis of the role of this term on the effective masses, in Fig. 5 we show its density dependence for the two interactions considered for neutron star matter at \(T=5\) MeV in the presence of a magnetic field of \(10^{18}\) G. Let us consider first the case of the LNS interaction. As seen in panel (a), for this model this term is always positive for spin up neutrons and spin down protons while it is negative for spin down neutrons and spin up protons. Therefore, it is clear from the definition of the effective mass given in Eq. (14) that, for the LNS interaction, the contribution of this term leads to values of \(m^{*}_{n_{\downarrow}}\) and \(m^{*}_{p_{\uparrow}}\) always larger than those of \(m^{*}_{n_{\uparrow}}\) and \(m^{*}_{p_{\downarrow}}\), respectively. For the BSk21 interaction (panel (b)) we observe that also in this case this term is always positive for spin up neutrons and negative for spin down neutrons and, therefore, \(m^{*}_{n_{\downarrow}}>m^{*}_{n_{\uparrow}}\) in the whole range of densities explored. On the other hand, for densities below \(\sim 0.1\) fm\({}^{-3}\), this term is positive (negative) for spin up (down) protons and vice versa for densities above this value. Therefore, as seen in Fig. 4, this interaction predicts \(m^{*}_{p_{\downarrow}}>m^{*}_{p_{\uparrow}}\) for \(\rho<0.1\) fm\({}^{-3}\) and \(m^{*}_{p_{\uparrow}}>m^{*}_{p_{\downarrow}}\) for \(\rho>0.1\) fm\({}^{-3}\). Similar conclusions can be drawn from the analysis of results obtained for other temperatures and magnetic fields.
Let us finish this section by analyzing the bulk thermodynamical properties of the system. We show first in Fig. 6
Figure 4: (Color online) Neutron and proton effective masses predicted by the LNS (thick lines) and the BSk21 (thin lines) interactions as function of the density for the three temperatures considered in this work.
the Helmholtz total free energy density as a function of the density for the two interactions and the different temperatures and magnetic fields considered in this work. As can be seen, the effect of the magnetic field is almost negligible. The reason is mainly the low value of the nuclear magneton, which makes the interaction of neutrons and protons with the magnetic field too mild for the values of \(B\) considered. Consequently, the contribution to the energy density from the interaction of nucleons with the field (last term of Eq. (9)) is too small, and that from the nuclear interaction \(\mathcal{E}_{nucl}\) (Eq. (10)) also depends very little on it. In addition, the electron contribution \(\mathcal{E}_{elec}\) (Eq. (24)) to the total energy density also depends very weakly on the field, increasing slightly when increasing \(B\). The reason, in this case, should not be attributed to the value of the Bohr magneton, three orders of magnitude larger than the nuclear one, but rather to the fact that the total electron fraction is very low (see Fig. 1). For illustration we show, for the two model interactions considered, in Tabs. 1 (LNS) and 2 (BSk21) all the contributions to the Helmholtz total free energy density for three representative densities \(\rho=0.08\) fm\({}^{-3}\), \(\rho=0.16\) fm\({}^{-3}\) and \(\rho=0.32\) fm\({}^{-3}\), a temperature \(T\) of 5 MeV, and the magnetic fields \(B=10^{16}\) G and \(B=10^{18}\) G. Note that also the neutron, proton and electron contributions to the total entropy density depend very little on the magnetic field.
Finally, in Figs. 7 and 8 we show, respectively, the pressure and the isothermal compressibility of the system as a function of the density for the two interactions and the different temperatures and magnetic fields considered in this work. The pressure, as required by the stability conditions, increases monotonically with the density and is larger for larger values of the temperature. Note that, as expected from our previous analysis of the Helmholtz total free energy density, both the pressure and the isothermal compressibility also present a very mild dependence on the magnetic field, which is almost imperceptible in the figures. As can be seen, the isothermal compressibility decreases monotonically with density from relatively high values at low densities, and it becomes very small for \(\rho\gtrsim 0.3\) fm\({}^{-3}\), showing that from this density on (for these two particular interactions) highly magnetized neutron star matter can be considered an almost incompressible system.
Figure 5: (Color online) Density dependence of the term \(\frac{s_{\sigma}}{4}\sum_{\tau^{\prime}=m,p}(b_{1}+c_{1}+(b_{3}+c_{3})I_{\tau} I_{\tau^{\prime}})W_{\tau^{\prime}}\) of Eq. (14) for the LNS (panel (a)) and BSk21 (panel (b)) interactions for neutron star matter at \(T=5\) MeV in the presence of a magnetic field of \(10^{18}\) G.
Figure 6: (Color online) Helmhotz total free energy density as a function of the density for the two interactions and the different temperatures and magnetic fields considered in this work.
Figure 7: (Color online) Pressure of the system as a function of the density for the two interactions and the different temperatures and magnetic fields considered in this work.
## IV Summary and conclusions
In the present work we have studied the properties of hot and dense neutron star matter under the presence of strong magnetic fields using two Skyrme interactions, namely the LNS interaction developed by Cao _et al._[58] and the interaction BSk21 [59] of the Brussels-Montreal group. In particular, we have constructed the equation of state of the system and analyzed its composition for a range of densities, temperatures and magnetic field intensities of interest for the study of supernova and proto-neutron star matter, with a particular interest on the degree of spin-polarization of the different components. Our results show that, in order to see appreciable differences in the fractions of neutrons, protons and electrons with opposite spin projections, the intensity of the magnetic field should be at least of the order of \(10^{17}\) G. They also show that system configurations with larger fractions of spin up protons and spin down neutrons and electrons are energetically favored over those with larger fractions of spin down protons and spin up neutrons and electrons. We have also studied the effect of the magnetic field on the neutron and proton effective masses, finding that, for the two interactions considered, the effective mass of neutrons and protons is in general larger for the more abundant of their spin projection components, respectively, spin down neutrons and spin up protons. Finally, we have determined the bulk thermodynamical properties of the system, finding that the effect of the magnetic field on the Helmholtz total free energy density, pressure and isothermal compressibility of the system is almost negligible due to the low value of the nuclear magneton, which makes the interaction of neutrons and protons with the magnetic field too mild for the values of \(B\) considered.
## Acknowledgements
I.V. thanks the support of the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 824093.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline \(B\) & \(\rho\) & \(\mathcal{E}_{nucl}\) & \(\mathcal{E}_{elec}\) & \(\mu_{N}B\left(2L_{p}+\rho_{p}-\frac{g_{p}}{2}W_{p}-\frac{g_{n}}{2}W_{n}\right)\) & \(\mathcal{S}_{p}T\) & \(\mathcal{S}_{n}T\) & \(\mathcal{S}_{e}T\) & \(\mathcal{F}\) \\ \hline & 0.08 & 0.612 & 0.104 & 0.017 & 0.046 & 0.272 & 0.008 & 0.407 \\ \(10^{16}\) & 0.16 & 1.590 & 0.362 & 0.050 & 0.082 & 0.296 & 0.016 & 1.608 \\ & 0.32 & 9.700 & 1.720 & 0.257 & 0.139 & 0.245 & 0.035 & 11.258 \\ \hline & 0.08 & 0.635 & 0.151 & \(-\)0.100 & 0.046 & 0.271 & 0.004 & 0.365 \\ \(10^{18}\) & 0.16 & 1.623 & 0.489 & \(-\)0.140 & 0.079 & 0.297 & 0.019 & 1.577 \\ & 0.32 & 9.880 & 1.964 & \(-\)0.087 & 0.134 & 0.247 & 0.032 & 11.344 \\ \hline \hline \end{tabular}
\end{table}
Table 2: As Tab. 1 for the BSk21 interaction.
Figure 8: (Color online) Isothermal compressibility as a function of the density for the two interactions and the different temperatures and magnetic fields considered in this work.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline \(B\) & \(\rho\) & \(\mathcal{E}_{nucl}\) & \(\mathcal{E}_{elec}\) & \(\mu_{N}B\left(2L_{p}+\rho_{p}-\frac{g_{p}}{2}W_{p}-\frac{g_{n}}{2}W_{n}\right)\) & \(\mathcal{S}_{p}T\) & \(\mathcal{S}_{n}T\) & \(\mathcal{S}_{e}T\) & \(\mathcal{F}\) \\ \hline & 0.08 & 0.625 & 0.088 & 0.014 & 0.042 & 0.253 & 0.008 & 0.424 \\ \(10^{16}\) & 0.16 & 1.836 & 0.402 & 0.056 & 0.086 & 0.310 & 0.016 & 1.882 \\ & 0.32 & 8.264 & 1.630 & 0.237 & 0.141 & 0.365 & 0.033 & 9.592 \\ \hline & 0.08 & 0.606 & 0.144 & \(-\)0.064 & 0.043 & 0.251 & 0.004 & 0.388 \\ \(10^{18}\) & 0.16 & 1.783 & 0.563 & \(-\)0.084 & 0.079 & 0.310 & 0.018 & 1.855 \\ & 0.32 & 8.259 & 1.939 & \(-\)0.108 & 0.123 & 0.367 & 0.031 & 9.569 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Separate contributions to the Helmholtz total free energy density for densities \(\rho=0.08\) fm\({}^{-3}\), \(\rho=0.16\) fm\({}^{-3}\) and \(\rho=0.32\) fm\({}^{-3}\), a temperature \(T=5\) MeV, and the magnetic fields \(B=10^{16}\) G and \(B=10^{18}\) G for the LNS interaction. The Helmholtz total free energy density is shown in the last column. Units are given in MeV fm\({}^{-3}\).
2310.12869 | Uncertainty Quantification of Bandgaps in Acoustic Metamaterials with
Stochastic Geometric Defects and Material Properties | This paper studies the utility of techniques within uncertainty
quantification, namely spectral projection and polynomial chaos expansion, in
reducing sampling needs for characterizing acoustic metamaterial dispersion
band responses given stochastic material properties and geometric defects. A
novel method of encoding geometric defects in an interpretable, resolution-independent
manner is showcased in the formation of input space probability
distributions. Orders of magnitude sampling reductions down to $\sim10^0$ and
$\sim10^1$ are achieved in the 1D and 7D input space scenarios respectively
while maintaining accurate output space probability distributions through
combining Monte Carlo, quadrature rule, and sparse grid sampling with surrogate
model fitting. | Han Zhang, Rayehe Karimi Mahabadi, Cynthia Rudin, Johann Guilleminot, L. Catherine Brinson | 2023-10-19T16:18:10Z | http://arxiv.org/abs/2310.12869v1 | Uncertainty Quantification of Bandgaps in Acoustic Metamaterials with Stochastic Geometric Defects and Material Properties
###### Abstract
This paper studies the utility of techniques within uncertainty quantification, namely spectral projection and polynomial chaos expansion, in reducing sampling needs for characterizing acoustic metamaterial dispersion band responses given stochastic material properties and geometric defects. A novel method of encoding geometric defects in an interpretable, resolution-independent manner is showcased in the formation of input space probability distributions. Orders of magnitude sampling reductions down to \(\sim 10^{0}\) and \(\sim 10^{1}\) are achieved in the 1D and 7D input space scenarios respectively while maintaining accurate output space probability distributions through combining Monte Carlo, quadrature rule, and sparse grid sampling with surrogate model fitting.
keywords: Uncertainty Quantification, Metamaterials, Acoustics, Spectral Projection, Polynomial Chaos Expansion

journal: Physics Open
## 1 Introduction
Acoustic metamaterials have gained enormous attention in recent decades for their capabilities in manipulating vibration waves in myriad ways as a consequence of their design geometry and material properties (Liu et al., 2020). These theo |
2307.11777 | Prediction of Handball Matches with Statistically Enhanced Learning via
Estimated Team Strengths | We propose a Statistically Enhanced Learning (aka. SEL) model to predict
handball games. Our Machine Learning model augmented with SEL features
outperforms state-of-the-art models with an accuracy beyond 80%. In this work,
we show how we construct the data set to train Machine Learning models on past
female club matches. We then compare different models and evaluate them to
assess their performance capabilities. Finally, explainability methods allow us
to change the scope of our tool from a purely predictive solution to a highly
insightful analytical tool. This can become a valuable asset for handball
teams' coaches providing valuable statistical and predictive insights to
prepare future competitions. | Florian Felice, Christophe Ley | 2023-07-20T00:50:26Z | http://arxiv.org/abs/2307.11777v1 | # Prediction of Handball Matches with Statistically Enhanced Learning via Estimated Team Strengths
###### Abstract
We propose a Statistically Enhanced Learning (aka. SEL) model to predict handball games. Our Machine Learning model augmented with SEL features outperforms state-of-the-art models with an accuracy beyond 80%. In this work, we show how we construct the data set to train Machine Learning models on past female club matches. We then compare different models and evaluate them to assess their performance capabilities. Finally, explainability methods allow us to change the scope of our tool from a purely predictive solution to a highly insightful analytical tool. This can become a valuable asset for handball teams' coaches providing valuable statistical and predictive insights to prepare future competitions.
## 1 Introduction
Handball is a popular sport in Europe with growing interest in Northern Africa and South America. As a fast-paced sport, it is gaining interest among the public and in the scientific literature, though predictive models are rarely discussed.
In this paper, we propose a Statistically Enhanced Learning (aka. SEL) model to predict handball matches. SEL is a general approach (Felice et al., 2023) that aims to formalize the feature extraction step from feature engineering when preparing data for a Machine Learning (ML) model. It is shown that generating SEL features can lead to a considerable boost of predictive performance while providing meaningful statistical covariates that can be interpretable. Unlike other sports, handball does not benefit from a large literature, and in particular predictive algorithms are scarce. In Section 2, we will describe the construction of our training data set based on publicly available data. We will explore different ML models and show how the SEL methodology helps improve their predictive performance. Next, we will present the results of the trained models in Section 3 and show how we can extract informative sports insights from a ML model via an explainable ML framework. Section 4 shows that this paper not only yields a good performing predictive model, but that it can also serve as user-friendly tool to team coaches in view of preparing upcoming games and competitions. Indeed, our proposal includes a highly performing ML model with, on top, explainability modules that can allow teams to identify important factors impacting the games' results. Finally, we conclude in Section 5.
### The history of handball
With its primitive form going back to ancient Greece, modern handball is considered to have been created by German sports teachers (outdoors with 11-a-side players) around 1890, while Scandinavian countries (Denmark and Sweden) introduced a version with 7-a-side players around the same period (Hahn et al., 2013). The original Danish name "Haandbold" was first used in 1898, and the first official competition was organized in 1917, when the term "handball" was also officially used for the first time. It became an Olympic discipline at the 1972 Olympics in Munich for men and at the 1976 Olympics in Montreal for women (Olympics, 2023).
### Literature review and related work
Sports predictions is an active field of research mostly focusing on sports such as football and basketball due to a larger amount of data publicly available. To predict the outcome of a match, several algorithms are considered to model sports matches such as linear regressions (Miljkovic et al., 2010; Rodriguez-Ruiz et al., 2011), support vector machines (Cai et al., 2019), random forests (Groll et al., 2019), XGBoost (Lampis et al., 2023) or neural networks (McCabe and Trevathan, 2008; Huang and Chang, 2010). In the field of football predictions, Groll et al. (2019) use a Random Forest to predict the outcome of football matches. They augment their data set by adding a feature corresponding to the strength of a team, in the principle of Statistically Enhanced Learning (Felice et al., 2023). This value is obtained by modelling scored goals with a bivariate Poisson distribution and assuming that the form of the estimated parameter is \(\log(\lambda_{i})=\beta_{0}+(r_{i}-r_{j})+h\cdot\mathds{1}(\text{team }i\text{ playing at home})\). The parameter \(\beta_{0}\) corresponds to a common intercept and \(h\) is the effect of playing at home. The values of interest are then \(r_{i}\) and \(r_{j}\) as the strength parameters of the home team \(i\) and away team \(j\), estimated via Maximum Likelihood Estimation. These new values will become a new feature in the data set to enhance the future learning algorithm.
Only scarce literature covers the field of handball analytics (Saavedra, 2018). Most of the existing research works are medical analyses looking at body fatigue and injury (Akyuz et al., 2019; Seil et al., 1998; Camacho-Cardenosa et al., 2018) with a particular interest in young players (Madsen et al., 2019; Grabara, 2018; Fonseca et al., 2019). Wagner et al. (2014) propose a review of performance for handball players and teams, highlighting the importance of factors such as experience level, age or playing positions. Pic (2018) showed that the home effect can play a role in critical moments of a game. In particular, when the score indicates a draw, the home team may finally win the game. As highlighted by the author, this advantage should be taken with care as it can either come from the effect of playing at home (with more supporters cheering) or from the fatigue of the away team from the travel. Indeed, unlike sports such as football mostly travelling by plane or train, most handball clubs travel by bus, which can have an important impact on the players' fatigue during the competition. Therefore, the difference in distance traveled between both teams can explain the level of players' freshness.
With a focus on prediction of handball outcomes, Groll et al. (2020) compared different regression approaches to model international handball games. Given the level of under-dispersion (when the variance is lower than the mean, i.e. \(\mathds{V}(X)<\mathds{E}(X)\)) they discarded the regular Poisson distribution and opted for a Gaussian distribution with low variance. In a similar spirit, Felice (2023) proposed to model the number of goals scored by a team with a Conway-Maxwell-Poisson distribution (Sellers, 2022) and derive a strength parameter from the parameters of the fitted distribution.
In this work, we propose a machine learning approach to model handball matches. Our model also follows the framework of Statistically Enhanced Learning given that we add statistically estimated features to represent the strength of the teams. In the following, we describe the data collected and features created in Section 2. In Section 3, we present the results of trained machine learning models and compare their performance with the additional SEL generated features. We discuss potential extensions and the use of our proposed approach in Section 4 and we conclude in Section 5.
## 2 Materials and methods
In this section we present the data used to train our machine learning models with the associated features.
### Data set
Our data set consists of two data sources that include historical games and team squads information. We extract information of past games using the SportScore API from the service RapidAPI. The extracted data include information such as game location, time, competition and score. The second source used to complete the data set contains information about teams and players. The data are extracted from the website www.handball-base.com and will help us generate teams and players specific variables.
#### 2.1.1 Target response
Our objective being to predict the number of goals scored by each team by the end of a match, our modelling exercise is a two-target regression problem. We then predict the final score for home and away teams. We can also use the score to determine for a team (e.g., the home team) whether the game was won, lost or ended in a draw, and train a classification model on this basis.
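As a minimal sketch of this setup, the snippet below builds the two regression targets and the derived win/draw/loss label; the tiny data frame and its column names are hypothetical stand-ins for the actual schema described above.

```python
import pandas as pd

# Hypothetical mini data set; the real schema comes from the sources above
games = pd.DataFrame({"home_goals": [30, 24, 27],
                      "away_goals": [26, 24, 31]})

# Two-target regression: predict both final scores
y_reg = games[["home_goals", "away_goals"]]

# Derived classification label, seen from the home team's perspective
def outcome(row):
    if row["home_goals"] > row["away_goals"]:
        return "win"
    if row["home_goals"] < row["away_goals"]:
        return "loss"
    return "draw"

y_clf = games.apply(outcome, axis=1)
print(list(y_clf))   # ['win', 'draw', 'loss']
```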
#### 2.1.2 Features
Our data set is composed of features which bring different levels of information about the two competing teams. The exhaustive list of features with abbreviations is available in Appendix A.1.
Game informationThese features aim to carry information about the game and its importance. It can help encapsulate information such as potential stress for players and their state of mind (for instance, how seriously they may take a game).
* Day of week: encoded day of the week for the start time of the game.
* Hour: hour of the start time of the game. We can expect that games starting early in the day (e.g., morning) can be less important or players may lack time for preparation.
* Importance: carries the importance of the competition from the lowest (friendly games with value 3) to the highest importance (Champions League with value 1).
* Days until final: number of days remaining until the final stage of the competition (a game close to the end of the competition - i.e. final - may potentially be more important than the first day of the Champions League).
Teams' structureWe also consider features allowing us to capture information from the team's physical abilities and experience, and we incorporate them as differences between the home and away teams. There is one feature for each attribute (height, weight, age) and position on the field (wings, back players, line players/pivots and goalkeepers). This set of features can also be seen as SEL variables (as defined in Felice et al. (2023)) but we will refer to them as classical variables.
* Height: difference of the average height of players per position between home and away teams. This aims to measure the difference in physical characteristics between players (e.g., taller back players for one team may result in an advantage both in attack and in defense).
* Weight: difference of the average weight of players per position between home and away teams. Similar to Height, this aims to capture the differences in physical abilities between players.
* Age: difference of the average age of players per position between home and away teams. This aims to capture the difference in experience/maturity between teams. The effect of such a feature is expected to have a concave shape, with players improving as they gain experience (while still being young) up until their career peak, after which their performance starts to decline.
Other features give us information about the team's structure such as the distance to travel or the team's composition.
* Travel distance: distance in kilometers (as the crow flies) to travel for the away team between the club's location and the address of the home team. This aims to capture the potential fatigue caused by the travelling distance.
* Nationalities: ratio between the total count of players' nationalities in a team and the total number of players. This aims to capture the affinity between players as well as potential language barriers.
* International: share of players selected in their national team.
Team's strengthWe finally add features that correspond to the teams' strength as described in more details in Section 2.2. These variables are the aforementioned SEL features.
* Attack strength: estimated strength in attack via SEL for home and away teams.
* Defense strength: estimated strength in defense via SEL for home and away teams.
### Estimating team strengths
The strength of a team is an undeniably important factor of a handball match but it is not directly measurable and only remains an abstract concept. We can palliate this shortcoming by devising a statistical model that incorporates parameters which are meant to represent the attacking and defensive strengths of each team, and then estimate these parameters. To this end, we consider the recent history (from the ongoing season) of each team's matches and fit the
distribution of scored goals with an appropriate probability law. Felice (2023) explained that the Conway-Maxwell-Poisson distribution (Sellers, 2022) is a very good choice for this purpose as it not only satisfies the discrete nature of goal counts but also handles the problem of over- and under-dispersion one may have to deal with. Hence it is a better choice than the often used Normal (not discrete), Poisson (assumes equi-dispersion) and Negative Binomial (cannot handle under-dispersion) distributions, for instance. An illustration of the fitted distribution on historical data is provided in Figure 1 using the history of Metz Handball games over the course of the season 2021/2022.
The Conway-Maxwell-Poisson distribution possesses two parameters, \(\lambda>0\) and \(\nu\geq 0\) (note that \(\nu=0\) implies that \(\lambda\in(0,1)\)). The parameter \(\lambda\) can be related to the empirical mean (depending on the value of \(\nu\)). For instance, when \(\nu=1\) we retrieve the Poisson distribution, for which \(\lambda\) corresponds to both the mean and the variance. The parameter \(\nu\) corresponds to the level of dispersion.
Based on the nature of the two parameters, Felice (2023) proposes to penalize irregularities of the team, and consequently defines the strengths of a team as
\[s_{a}=\frac{\log(\lambda_{a})}{\nu_{a}}\quad\text{and}\quad s_{d}=\frac{\nu_{ d}}{\log(\lambda_{d})}, \tag{1}\]
where \(\lambda_{a},\nu_{a}\) stand for the attacking parameters of a team, and \(\lambda_{d},\nu_{d}\) for the defensive parameters of the same team. A strong attack demonstrates a high average number of scored goals (\(\lambda_{a}\)) and a balanced dispersion (\(\nu_{a}\), stable with occasional spikes). On the other hand, a strong defense corresponds to a low average of conceded goals (\(\lambda_{d}\)) and a stable defense over matches (low variance translating into under-dispersion with high value for \(\nu_{d}\)).
We thus model historical matches with the Conway-Maxwell-Poisson distribution. We estimate, for each team, the defense parameters (\(\lambda_{d}\) and \(\nu_{d}\)) as well as the attack parameters (\(\lambda_{a}\) and \(\nu_{a}\)). This consequently gives us the estimated strength parameters \(s_{a}\) and \(s_{d}\) that will constitute the SEL variables as presented in Section 2.1.2.
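The paper does not provide an implementation, but the strength estimation can be sketched as a maximum-likelihood fit of the Conway-Maxwell-Poisson distribution followed by Eq. (1). In the sketch below, the goal counts are purely illustrative, and the truncated log-sum is one standard way to evaluate the CMP normalizing constant; none of the numerical choices (truncation length, optimizer) are taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def cmp_neg_loglik(params, counts, jmax=200):
    # Negative log-likelihood of the CMP distribution; the normalizing
    # constant Z(lambda, nu) is evaluated by a truncated, stable log-sum
    log_lam, log_nu = params                 # optimize in log space (> 0)
    lam, nu = np.exp(log_lam), np.exp(log_nu)
    j = np.arange(jmax)
    log_z = np.logaddexp.reduce(j * np.log(lam) - nu * gammaln(j + 1))
    return -np.sum(counts * np.log(lam) - nu * gammaln(counts + 1) - log_z)

def fit_cmp(counts):
    res = minimize(cmp_neg_loglik, x0=[np.log(counts.mean()), 0.0],
                   args=(counts,))
    return np.exp(res.x)                     # (lambda_hat, nu_hat)

# Illustrative goal counts for one team over the recent season history
scored = np.array([28, 31, 27, 33, 29, 30, 26, 32])      # attack side
conceded = np.array([22, 25, 21, 24, 23, 20, 26, 22])    # defense side

lam_a, nu_a = fit_cmp(scored)
lam_d, nu_d = fit_cmp(conceded)
s_a = np.log(lam_a) / nu_a     # attack strength, Eq. (1)
s_d = nu_d / np.log(lam_d)     # defense strength, Eq. (1)
print(s_a, s_d)
```

Fitting attack and defense separately, team by team, directly yields the four SEL covariates (home/away attack and defense strengths) listed in Section 2.1.2.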
### Prediction models
To model the outcome of a handball match, we consider both classification and regression models to either predict the winner of the game or the scores of the competing teams. The results of the experimented models for classification and regression are discussed in Section 3.
#### 2.3.1 Classification model
To predict the outcome of a match (as win, draw or loss), we train different ML classification algorithms. A first approach is based on Random Forests (Breiman, 2001). Another model is based on the popular XGBoost algorithm (Chen and Guestrin, 2016). We also use an improved version of the boosting model, CatBoost (Prokhorenkova et al., 2018), which is specialized in handling categorical data. Finally, we train a Multi-Layered Perceptron (Rosenblatt, 1958).
Figure 1: Histogram of goals scored by Metz Handball during the season 2021/2022 versus fitted Conway-Maxwell-Poisson (CMP) distribution.
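A minimal training sketch for this four-model comparison might look as follows, assuming the scikit-learn, xgboost and catboost packages; the hyperparameters and the synthetic data are placeholders, since the paper does not report its exact training configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

# Synthetic stand-in for the match features and the win/draw/loss labels
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "XGBoost": XGBClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0, random_state=0),
    "Neural Net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", model.score(X_test, y_test))
```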
#### 2.3.2 Regression model
As an insightful alternative, we also consider regression models to predict the score of each team during a match. To that end, we use multi-target variants of the aforementioned models which will predict the final score of the home and away teams. We note that, by nature, the Random Forest model does not support multi-target regressions and can only predict one outcome. We thus use Python's modules to implement these models1. The module is a wrapper class of the model that, under the hood, simply trains two models in parallel: one for the home and one for the away team. The implementation for XGBoost does not support multi-target either. Therefore, we also use the same Python module alternative to achieve our goal. The other models, CatBoost and Multi-Layered Perceptron, can handle multiple outcome predictions by nature.
Footnote 1: Using the module MultiOutputRegressor from the scikit-learn library
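A minimal sketch of this wrapping could look as follows; the synthetic features and Poisson-distributed scores are placeholders for the real data set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

# Synthetic stand-in for the features and the (home, away) score targets
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = np.column_stack([rng.poisson(28, 300), rng.poisson(26, 300)])

# The wrapper fits one clone of the base estimator per target column
rf = MultiOutputRegressor(RandomForestRegressor(random_state=0)).fit(X, y)
xgb = MultiOutputRegressor(XGBRegressor()).fit(X, y)
print(rf.predict(X[:1]))   # -> [[predicted_home_goals, predicted_away_goals]]
```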
### Performance metrics
To evaluate the performance of our models, we first define the metrics we use for classification and regression exercises.
#### 2.4.1 Metrics for classification models
To measure performance of our classification models, we use three common metrics in the field of sports predictions: accuracy, weighted \(F_{1}\)-score and Brier score.
The \(F_{1}\)-score is the harmonic mean of precision and recall, and can also be written as a ratio involving the true positives (\(TP\)) from the confusion matrix, the false positives (\(FP\)) and the false negatives (\(FN\)), such that

\[F_{1}=\frac{2TP}{2TP+FP+FN}. \tag{2}\]
It is a popular classification metric which gives equal weight to precision (share of predictions being correct) and recall (share of positive items being retrieved).
The Brier score (Brier, 1950) is a metric that is used to calculate the distance, in terms of probability, to the actual outcome. It is defined as
\[BS=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2}. \tag{3}\]
It computes the squared difference between the predicted probability and the actual outcome (e.g. 0 if the outcome is negative and 1 otherwise). This metric is particularly popular for sports predictions in the context of classification.
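A sketch of these classification metrics is given below; the one-hot generalization of Eq. (3) to the three-class win/draw/loss setting is a common convention, though the paper does not spell out its exact multi-class form, and the labels and probabilities shown are illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([0, 2, 1, 0, 0, 2])   # 0 = win, 1 = draw, 2 = loss (home view)
y_pred = np.array([0, 2, 0, 0, 1, 2])
proba = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7], [0.5, 0.3, 0.2],
                  [0.6, 0.3, 0.1], [0.3, 0.4, 0.3], [0.2, 0.2, 0.6]])

print("accuracy:", accuracy_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))

# Multi-class Brier score: Eq. (3) applied to the probability vector against
# the one-hot encoding of the observed class (one common convention)
onehot = np.eye(3)[y_true]
print("Brier:", np.mean(np.sum((proba - onehot) ** 2, axis=1)))
```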
#### 2.4.2 Metrics for regression models
To assess the quality of our regression models, we use the Root Mean Squared Error (RMSE) and the Mean Absolute Percentage Error (MAPE).
The Root Mean Squared Error is defined as
\[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2}}. \tag{4}\]
It computes the average deviation of our model's predictions from the actual scores. Squaring the differences heavily penalizes the cases where the model gives extremely incoherent predictions, while predictions close to the actual outcome incur only a small penalty.
The Mean Absolute Percentage Error is defined as
\[MAPE=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_{i}-f(x_{i})}{y_{i}}\right|. \tag{5}\]
It computes the relative difference between the actual and predicted values; taking the absolute value treats over- and under-predictions symmetrically.
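Both regression metrics can be computed directly from Eqs. (4) and (5), as in the following sketch with illustrative numbers.

```python
import numpy as np

y_true = np.array([30.0, 24.0, 27.0, 31.0])   # actual goals (illustrative)
y_pred = np.array([28.5, 25.0, 30.1, 29.4])   # model predictions

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))       # Eq. (4)
mape = np.mean(np.abs((y_true - y_pred) / y_true))    # Eq. (5)
print(f"RMSE = {rmse:.2f} goals, MAPE = {mape:.2%}")
```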
## 3 Results
To evaluate the performance of our distinct approaches, we train the different models on several years of female club matches. Our training set spans from September 2019 until April 2023 (representing 3,260 games) and leaves matches from April to June 2023 (250 games) as the test set.
In both the classification and regression settings, we train the four different models presented in Section 2.3 and compare the scenarios with and without the SEL features introduced in Section 2.2. The performance metrics are summarized in Section 3.1. After that comparison, we further investigate the best performing model with explainability frameworks for global and local explanations (Section 3.2).
### Model performances
In the first case of match classification, we train our four models and report the classification metrics evaluated on the test set in Table 1.
We can observe in Table 1 that the Random Forest with SEL features performs the best. Furthermore, adding SEL features to our models is always beneficial and, without exception, strongly improves our metrics. The performance improvement is particularly remarkable for the Random Forest model, which shows the largest gap between the two scenarios. Although its performance with classical covariates already achieves 60.11%, the SEL features boost the performance to 81.32% by adding the estimated strengths of the opposing teams.
Turning our attention to regression settings, we train multi-target regression models with and without SEL features and report the resulting performance metrics in Table 2. Our metrics of interest here are the Root Mean Squared Error and Mean Absolute Percentage Error for both home and away teams.
| **Model** | **Features** | **Accuracy** | \(F_{1}\)**-score** | **Brier score** |
| --- | --- | --- | --- | --- |
| Random Forest | Classical | 60.11% | 57.64% | 0.4837 |
| Random Forest | Classical + SEL | **81.32%** | **79.15%** | **0.3145** |
| XGBoost | Classical | 57.51% | 55.87% | 0.6189 |
| XGBoost | Classical + SEL | 73.57% | 71.06% | 0.5784 |
| CatBoost | Classical | 58.29% | 54.82% | 0.5181 |
| CatBoost | Classical + SEL | 79.57% | 77.04% | 0.3517 |
| Neural Net | Classical | 54.18% | 52.75% | 0.5371 |
| Neural Net | Classical + SEL | 68.08% | 66.67% | 0.4479 |

Table 1: Classification models performance comparison based on accuracy, weighted \(F_{1}\)-score and Brier score. Each model is considered once based only on classical covariates and once with the additional SEL variables.
| **Model** | **Features** | **Home RMSE** | **Home MAPE** | **Away RMSE** | **Away MAPE** |
| --- | --- | --- | --- | --- | --- |
| Random Forest | Classical | 5.05 | 14.91% | 4.85 | 15.34% |
| Random Forest | Classical + SEL | 3.96 | 11.50% | 3.79 | 12.28% |
| XGBoost | Classical | 5.87 | 15.40% | 5.16 | 14.08% |
| XGBoost | Classical + SEL | 4.24 | 11.63% | 4.09 | **11.45%** |
| CatBoost | Classical | 5.13 | 14.86% | 4.78 | 15.17% |
| CatBoost | Classical + SEL | **3.79** | **10.94%** | **3.73** | 12.06% |
| Neural Net | Classical | 5.29 | 15.76% | 5.07 | 16.14% |
| Neural Net | Classical + SEL | 5.30 | 15.31% | 4.96 | 15.30% |

Table 2: Regression models performance comparison based on Root Mean Squared Error and Mean Absolute Percentage Error. Each model is considered once based only on classical covariates and once with the additional SEL variables. Note that we separate the predictions for home and away teams.
We can see from Table 2 that, although the Random Forest with SEL achieves good performance levels, the CatBoost model predicts match scores with the least error. Similar to our first classification use case, adding SEL features greatly benefits all trained models. This outcome suggests that our best model can accurately predict the outcome of a female handball game with an error of 3.8 goals for the home and away teams. As a comparison, state-of-the-art predictive models for football can achieve a prediction error of 1.194 goals (Groll et al., 2019). Considering that the average number of goals during a game is 1.5 goals (Zebari et al., 2021), this corresponds to an 80% error. In our case for handball, the RMSE of our proposed model on our test set is 3.8 for 27.9 goals scored on average during a match, which corresponds to an error of 11%. This highlights the reliability of our predictive models, strengthened by the supplementary information carried by the SEL covariates.
### Model explainability
Explaining a model's outcome is crucial to trust its predictions and take actions from the generated explanations. In this section, we explore the important features of the selected CatBoost model from Section 3.1 and show how the extracted SEL features are used by our model. Furthermore, we show how explaining predictions can be used to drive actions that can help team coaches prepare an upcoming match.
Many ML models such as Random Forests or Neural Networks are considered to be black boxes: they are excellent in terms of prediction accuracy, but one cannot understand the factors that lead to a given prediction. Therefore, explainability (aka model transparency) is an important capability any ML model should have. Its importance will become even stronger with forthcoming regulations on Artificial Intelligence (Hamon et al., 2020; Sovrano et al., 2022). We distinguish global explanations, which focus on analyzing the overall behavior of the model (importance of features), from local explanations, which are used to explain predictions of specific observations. The model explainability literature covers a wide range of techniques, from model-specific approaches benefiting from the properties of a certain model (Du et al., 2019) to model-agnostic approaches. Model-agnostic solutions span from surrogate models such as LIME (Ribeiro et al., 2016), which aim to locally approximate the black box ML model with a simpler and interpretable model, to game-theoretic approaches.
For our setting, we use game-theoretic based approaches with the SHAP framework (Lundberg and Lee, 2017) to generate explanations. In particular, given the structure of our model, we use the TreeSHAP implementation (Lundberg et al., 2020) which uses the tree structure of the model to perform more efficient and exact calculations of Shapley values.
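A minimal sketch with the shap library follows, written for a single-output model; `model` and `X_test` are assumptions standing in for the fitted CatBoost regressor and the test feature matrix from the previous sections.

```python
import shap

# `model` and `X_test` are assumed from the previous sections: the fitted
# CatBoost regressor (single output) and the test feature matrix
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean |SHAP| feature ranking, cf. Figure 2
shap.summary_plot(shap_values, X_test)

# Local view: force plot for one specific match, cf. Figure 3
shap.force_plot(explainer.expected_value, shap_values[0], X_test[0],
                matplotlib=True)
```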
We note that, for the rest of this section, we focus on explainability of the regression model but the implementations and conclusions also perfectly apply to the classification model.
#### 3.2.1 Feature importance from global explanation
Analyzing feature importance can help understand the behavior of our model and identify influential variables. Computing the Shapley values for each feature of the model, we can observe which are the ones being impactful to the predictions. In other words, Figure 2 tells us which features of our test set contribute the most to the predicted outcome.
We can observe that the SEL features for teams' strengths are considered as very important for the model. We notice that the attack strength of the home team is the most important to predict home goals, followed by the defense strength from the away team. This is perfectly logical, and again underlines the impact of SEL features. To predict the score of the away team, we also observe that the most important feature is the attack strength of the away team. It is then followed by the defense strength of the home team. This is in line with conclusions from Section 3.1. We observed that, by adding these features to our model, the performance considerably improves.
#### 3.2.2 Understand match predictions from local explanations
Analyzing predictions by means of local explainability frameworks can help anticipate events during an upcoming match. In a similar fashion as in the previous section, we use TreeSHAP (Lundberg et al., 2020) to locally explain predictions.
To that end, we analyze the last game of the season 2022/2023 played at home by Metz Handball. This game was played on May 17th, 2023 against Chambray Touraine Handball as part of the "Ligue Butagaz Energie" (LBE), the French female 1st division championship. We choose this game as it is not contained in the training set; the predictions were even made and the explanations generated before the game2. Furthermore, the authors attended the game, which can help relate the generated explanations to concrete elements that happened during the actual game. The
model predicted a final score of 32-24 in favor of Metz Handball and the actual game saw Metz winning 30-26 over Chambray Touraine Handball. We present in Figure 3 the explanations generated from the CatBoost model with SEL features for the selected game.
Figure 3 can be read as follows: features close to the top of the plot contribute positively to increasing the number of goals Metz could score during the game, while features at the bottom of the plot contribute negatively to goals scored by Metz, i.e., they stand for the defense of Chambray. Therefore, in line with our conclusions from the feature importance plot in Figure 2, the attack strength of the home team (Metz) is the main contributor to the final score. The defense from Chambray, however, contributes to lowering the total score, without which the outcome would have been much worse for Chambray. Other features such as the experience of international or wing players positively contributed to the victory of Metz. An additional factor is the number of days until the final (end of season). Although the model could not be aware that Metz, playing their last game of the season at home, was about to receive the trophy and celebrate the title of champion, the few days left until the end of the season contributed to the motivation of the players.
While the model remains based on statistical facts from past events, the explanations and predictions derived from the ML tool echoed concrete sports events during the actual match.
## 4 Discussion
We showed that the proposed ML solution achieves a high predictive performance, and that explanations generated with relevant explainability frameworks allow translating our analytical findings into concrete sports events. In this section, we argue that this tool can be used by sports professionals such as team managers to prepare for upcoming games. We also open the discussion for future work on extending team strengths to player abilities as additional SEL covariates.
Figure 2: Feature importance plot using TreeSHAP for predicting home team’s goals
### An analytical tool for coaches
While state-of-the-art Machine Learning models for sports predictions (e.g. football, basketball) usually plateau around 75% accuracy to predict the outcome of a match (Huang and Chang, 2010; Lampis et al., 2023), our proposed solution for handball matches achieves above 80% accuracy. Coupled with explainability capabilities, our approach can translate statistical predictions into real facts happening during a match. Although no model can guarantee that the result of a game will be as predicted, the model can identify statistical facts that can explain parts of the outcome. Such patterns can therefore be used by team coaches in view of a competition.
Knowing the prediction of a game together with the potential main contributors to this outcome can help prepare a game and improve the team's strategy. As we illustrated in Section 3.2.2, local explainability can reveal where a team is expected to excel or struggle during the match. We observed in Table 3 that a team can have an advantage thanks to the experience of their wing players or goalkeepers, and struggle due to the defense strength of the opposing team. Therefore, a coach can use these pieces of information to ensure the team builds on its predicted strengths and works on removing its weak points.
### From team strengths estimation to player abilities
As presented in Section 2.2, the structure of handball games suggests the use of the Conway-Maxwell-Poisson distribution from which we can derive a formula to estimate the attack and defense strengths of a team. We showed the importance of this feature to the predictive performance of a model, and Felice (2023) illustrated the relevance of such metrics to derive the ranking of clubs. This methodology can be adapted to other settings such as the estimation of individual player abilities. Although the publicly available data does not allow extracting a long history of player statistics over multiple seasons, having access to such data could lead to similar research.
## 5 Conclusions
In this paper, we showed how we can construct a highly accurate predictive model for handball games. While data preparation and feature engineering are often under-explored in the literature (Zheng and Casari, 2018; Felice et al., 2023), our results highlight their importance on the model's performance. This encourages us to focus, in future
Figure 3: Force plot of predicted goals (from CatBoost with SEL) for Metz Handball for the game played on May 17\({}^{\text{th}}\) against Chambray Handball
works, on the preparation of even more meaningful features to capture more signals and further improve the model's performance. The models presented in this paper are trained and evaluated on female championships but this work can easily be extended to men's championships as well as international competitions. In view of the upcoming Olympic Games in Paris in 2024, the presented solution can also target national teams' coaches to prepare for this worldwide event by means of analytical tools powered by accurate Machine Learning models.
|
2305.18894 | Second harmonic generation in anisotropic stratified media: A
generalization of the Berreman method and its application to photonic
materials | We have developed a numerical method for calculating the second harmonic
generation (SHG) generated by an anisotropic material whose optical properties
present an arbitrary modulation in one dimension. The method is based on the
Berreman 4x4 matrix formalism, which is generalized to include nonlinear
optical phenomena. It can be used under oblique incidences of the input beam,
and is valid even when the SHG frequency is close to photonic bands, where the
usual slowly-varying-amplitude approximation breaks down. As an example of
application, we have studied the SHG performance of ferroelectric and
helielectric nematic liquid crystals. The latter present a helicoidal structure
that can be distorted under electric field. In the different tests of the
method we have analyzed the conditions for the most efficient SHG, and compared
with previous results in the case there were any. The obtained results indicate
that the present procedure may contribute to improve the structural design and
enlarge the variety of nonlinear optical materials for their application in
optical devices. | J. Ortega, C. L. Folcia, J. Etxebarria | 2023-05-30T09:50:47Z | http://arxiv.org/abs/2305.18894v1 | Second harmonic generation in anisotropic stratified media: A generalization of the Berreman method and its application to photonic materials.
###### Abstract
We have developed a numerical method for calculating the second harmonic generation (SHG) generated by an anisotropic material whose optical properties present an arbitrary modulation in one dimension. The method is based on the Berreman 4x4 matrix formalism, which is generalized to include nonlinear optical phenomena. It can be used under oblique incidences of the input beam, and is valid even when the SHG frequency is close to photonic bands, where the usual slowly-varying-amplitude approximation breaks down. As an example of application, we have studied the SHG performance of ferroelectric and helielectric nematic liquid crystals. The latter present a helicoidal structure that can be distorted under electric field. In the different tests of the method we have analyzed the conditions for the most efficient SHG, and compared with previous results where available. The obtained results indicate that the present procedure may contribute to improve the structural design and enlarge the variety of nonlinear optical materials for their application in optical devices.
Department of Physics, Faculty of Science and Technology, UPV/EHU, Bilbao, Spain
\({}^{*}\) [email protected]
## 1 Introduction
The process of second-harmonic generation (SHG) in optics is a particular case of three wave mixing that allows two photons of frequency \(\omega\) to be converted into a photon of frequency \(2\omega\)[1, 2]. The effect is very interesting from a technological point of view and can be used, for example, in laser technologies to obtain multi-line emission systems. However, usually the SHG conversion is very poor in homogeneous crystals due to the short coherence length that the SHG process typically presents. The SHG coherence length increases significantly when the so-called phase matching (PM) condition occurs. In this case, the phase mismatch between the fundamental and second-harmonic (SH) waves is zero. In homogeneous materials, this effect can occur along particular directions and light polarizations and, in practice, the effect gives rise to large coherence lengths, which has been traditionally used for the construction of optical devices [3-5].
The SHG performance can be significantly improved by using optical systems with modulated dielectric properties. The most common structures of this kind are stratified optical media with a modulation in one dimension (1D). In these systems, new possibilities of PM occur since the wave vector mismatch between the fundamental and SH fields can be compensated with the wave vector of the dielectric modulation [6]. In general, the periodic systems present photonic properties and exhibit a spectral pattern of photonic band gaps (PBGs) that depends on the particular profile of the modulation of the dielectric tensor. Especially interesting are the PM situations that appear when the wavelength of the
lights involved in the SHG process coincide with the edge of the PBGs. In these cases, the electromagnetic wave is enhanced by resonance, giving rise to a significant increase of the SHG efficiency [7-13]. The process is especially favorable when the wavelengths of both the fundamental and SH lights coincide simultaneously with edges of the photonic system [11]. Under these circumstances the SHG signal of the optical system is dramatically enhanced. However, this process is only possible in some modulated systems that, in most cases, are artificially manufactured. Typically, they consist of multilayer periodic structures made of different dielectric materials that are SHG active. In this kind of system, the reflectance spectrum presents a set of PBGs whose frequencies occur at multiples of a fundamental one. In order to obtain the desired coincidence of the fundamental and SH frequencies at edges of the PBGs, the periodicity of the layered structure must compensate the normal dispersion of the refractive indices of the materials [12]. In addition, the SHG performance also depends on the coupling of the polarization of the fundamental wave with the local second-order susceptibility tensor. In general, the calculation of the SHG intensity generated by a photonic material is cumbersome and only a few examples of modulated structures have been solved so far. In some cases, the studies are restricted to the so-called slowly-varying-amplitude approximation (SVAA) for the SHG field, which excludes the SHG description at the band edges [14]. In other cases, the problem is solved only for optical systems for which there exist analytical expressions for the polarization eigenstates of the fundamental and SHG fields [7,8]. The problem has also been addressed for isotropic systems modulated in an arbitrary way [9-13]. However, the general case of anisotropic systems with arbitrary modulation and at oblique incidences has not been solved to the best of our knowledge. The development of a tool for studying this problem is therefore of great interest for exploring other possibilities of high-performance SHG materials.
In this work, we have developed a numerical method for calculating the SHG field generated by an anisotropic material whose dielectric tensor presents an arbitrary modulation in 1D, even for input lights at oblique incidence. The method is a generalization of the Berreman formalism [15,16] that is commonly used to study linear optical processes in inhomogeneous materials. As an output, we obtain the intensity of the SHG wave transmitted and reflected by the material in absolute units. The only assumption of the method is that the power depletion of the incident beam can be neglected. The procedure can be applied even in the cases where the amplitude of the SHG electric field varies rapidly along the propagation direction. This point is important because the SVAA is violated in situations where the SHG is especially efficient, which present an evident practical interest.
As an example of validation and application of our calculation method, we have studied several optical systems based on ferroelectric nematic liquid crystals (N\({}_{\rm F}\)) [17-27]. The main features of this type of materials will be discussed later. The reason for this election is twofold. On the one hand, these organic materials can exhibit high SHG responses since both the structure of the molecular constituents and the way the molecules are arranged in the phase are very favorable for SHG purposes. On the other hand, in the chiral variants of this type of phases the molecules self-assemble naturally in helicoidal organizations. This is an important advantage with respect to artificial multilayer systems whose fabrication is difficult and expensive. In addition, the structure of the chiral ferroelectric nematic (N\({}_{\rm F}\)*) phases can be modified by low electric fields, giving rise to interesting spectral patterns of the PBGs [28-31].
This work is structured as follows: In the next section we will describe the basics of the calculation procedure, next we will demonstrate the validity of the method in several examples for which there are available theories. Finally, we will study the SHG performance in the N\({}_{\rm F}\)* phase distorted by electric fields. At the end of the paper some conclusions will be drawn.
## 2 Basics of the calculation of the SHG efficiency.
First of all, we will make some comments about the well-known (linear) Berreman method. This 4x4 matrix technique allows us to calculate numerically the light transmitted and reflected by an anisotropic stratified medium with an arbitrary modulation of the dielectric tensor. The method can be also applied to continuously modulated systems by approximating them to discrete stratified media. The method uses the so-called Berreman vectors defined as:
\[\mathbf{\Psi}=\left(\begin{array}{c}E_{x}\\ H_{y}\\ E_{y}\\ -H_{x}\end{array}\right) \tag{1}\]
where \(E_{x,y}\) and \(H_{x,y}\) are the electric and magnetic fields in a reference frame where \(z\) is perpendicular to the layers and (\(x\),\(z\)) define the plane of incidence. The Berreman vectors are continuous through the planes separating two consecutive layers of the medium. Every layer \(l\) (\(l\)=1,\(\ldots\),\(n\)) of the medium is represented by a transfer matrix that depends on the local dielectric tensor, the frequency of the light and the angle of incidence. Therefore, given an input Berreman vector, the output vector through the whole system is obtained by applying subsequently the whole set of transfer matrices corresponding to the different layers. Moreover, under the Berreman formalism, it is possible to calculate the electric and magnetic fields at every inner layer provided we know the corresponding input vector.
Let us now consider an inhomogeneous anisotropic nonlinear medium illuminated by a fundamental beam of frequency \(\omega\). To calculate the SHG electric field generated at a particular layer of the stratified medium we will proceed as follows:
1. First we calculate the fundamental electric field at that layer, corresponding to a given input light. For an input Berreman vector \(\Psi^{\omega}_{input}\), the Berreman vector at the \(l\)-th layer \(\Psi^{\omega}(l)\) is given by:
\[\Psi^{\omega}(l)=\mathrm{U}^{\omega}(l)...\mathrm{U}^{\omega}(2)\mathrm{U}^{ \omega}(1)\Psi^{\omega}_{input} \tag{2}\]
where \(\mathrm{U}^{\omega}(l)\) is the transfer matrix of the \(l\)th layer for frequency \(\omega\), which can be computed following standard methods in the Berreman procedure [15]. As a result of this calculation we obtain the electric field of frequency \(\omega\) at any point of the medium \(E^{\omega}_{j}(l)\) (\(j\)=\(x\),\(y\),\(z\); \(l\)=1,\(\ldots\),\(n\))
2. Now if the medium has a second-order susceptibility tensor \(d_{ijk}(l)\) at layer \(l\), there is a polarization \(P_{i}(l)\) induced by the fundamental electric field which, up to second order, is given by the expression:
\[P_{i}(l)=\varepsilon_{0}\chi_{ij}(l)E^{\omega}_{j}(l)+2\varepsilon_{0}d_{ijk} (l)E^{\omega}_{j}(l)E^{\omega}_{k}(l) \tag{3}\]
where \(\chi_{ij}(l)\) is the linear susceptibility at layer \(l\), and \(\varepsilon_{0}\) the vacuum permittivity. Eq. (3) can be understood as if the medium now presented a dielectric tensor \(\varepsilon^{\prime}{}_{ij}(l)\) distorted by the fundamental electric field and given by:
\[\varepsilon^{\prime}{}_{ij}(l)=\varepsilon_{ij}(l)+2d_{ijk}(l)E^{\omega}_{k} (l) \tag{4}\]
where \(\varepsilon_{ij}(l)=\delta_{ij}+\chi_{ij}(l)\) is the dielectric tensor of the non-distorted material and \(\delta_{ij}\) the Kronecker delta. By using \(\varepsilon^{\prime}{}_{ij}(l)\), we can re-calculate a new transfer matrix \(\mathrm{U}^{\prime\omega}(l)\) for the \(l\)th layer of the distorted structure. The Berreman vector corresponding to the SH field generated at this layer (\(\Psi^{2\omega}(l)\)) can be obtained by means of the expression:
\[\Psi^{2\omega}(l)=\left[\mathrm{U}^{\prime\omega}(l)-\mathrm{U}^{\omega}(l)\right]\Psi^{\omega}(l) \tag{5}\]
where \(\Psi^{\omega}(l)\) is the Berreman vector of the fundamental input light at the \(l\)th layer. On writing Eq. (5) we state that the excess of electric and magnetic fields generated in the distorted material (with respect to the undistorted original medium) corresponds to the SHG wave, which is generated as a consequence of the second-order nonlinear response of the material.
Note, however, that the fields directly calculated from Eq. (5) still have to be corrected in order to properly represent the SHG fields, because the optical properties of the surrounding medium can remarkably alter the SH fields that finally give rise to the SHG signal outside the material. An example will clarify this point. Suppose a case in which the frequency \(2\omega\) is inside a PBG of the structure. In this situation, the amplitude of some normal modes will be heavily penalized in their contribution to the SHG field. The modes with small contributions will be precisely those that cannot propagate in the gap and are strongly attenuated inside the sample. A similar circumstance occurs when studying the fluorescent
emission of a dye molecule immersed in a photonic medium: depending on the available density of photonic states the resulting fluorescence is greatly altered with respect to the same emission in vacuum [32-36]. The algorithm to calculate the true Berreman vector of the SH field generated in layer \(l\) starting from \(\Psi^{2\omega}(l)\) is explained in the next point.
3. To calculate the correction to the \(\Psi^{2\omega}(l)\) vector we will begin by expressing it as a linear combination of 4 linearly independent Berreman vectors. For each layer \(l\), these 4 vectors \(\boldsymbol{\psi}_{k}(l)\) (\(k\)=1,...,4) will be constructed from the Berreman vectors \(\boldsymbol{\psi}_{k}\) associated with the polarizations of the four transmission eigenstates of the sample with unit intensity that propagate forward and backward. Vectors \(\boldsymbol{\psi}_{k}\) can be easily determined following well-established procedures in the Berreman formalism [33,36], whereas \(\boldsymbol{\psi}_{k}(l)\) are deduced from them using the expressions
\[\boldsymbol{\psi}_{k}(l)=\mathrm{U}^{2\omega}(l)...\mathrm{U}^{2\omega}(2)\mathrm{U}^{2\omega}(1)\boldsymbol{\psi}_{k} \tag{6}\]
where \(\mathrm{U}^{2\omega}(I)\) is the transfer matrix for layer \(l\) and frequency \(2\omega\).
It should be noted that although, by construction, the vectors \(\boldsymbol{\psi}_{k}\) are normalized in the sense that the intensities associated with them are equal to unity, the vectors \(\boldsymbol{\psi}_{k}(l)\) are not, and their intensities can be very different from each other.
In general, it can be shown that the time-averaged \(z\) component of the Poynting vector \(S_{z}\) for the light wave associated to a Berreman vector \(\boldsymbol{\psi}\) is given by [16]
\[S_{z}=\frac{c\varepsilon_{0}}{4}\boldsymbol{\psi}^{\dagger}.\mathrm{M}. \boldsymbol{\psi} \tag{7}\]
where \(c\) is the speed of light in vacuum, and
\[M=\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix} \tag{8}\]
The matrix \(M\) can then be understood as a metric tensor which allows us to define something analogous to a scalar product of a pair of Berreman vectors \(\boldsymbol{\psi}_{1}\) and \(\boldsymbol{\psi}_{2}\) (\(\boldsymbol{\psi_{1}}^{\dagger}.\mathrm{M}.\boldsymbol{\psi_{2}}\)), and a norm \(\|\boldsymbol{\psi}\|=\sqrt{|\boldsymbol{\psi}^{\dagger}.\mathrm{M}.\boldsymbol{\psi}|}\), whose square is proportional to the associated light intensity.
Using the 4 normalized vectors \(\boldsymbol{\psi}_{k}(l)/\|\boldsymbol{\psi}_{k}(l)\|\) as a basis, the Berreman vector obtained from Eq. (5) can be expressed as:

\[\Psi^{2\omega}(l)=\sum_{k=1}^{4}a_{k}\boldsymbol{\psi}_{k}(l)/\|\boldsymbol{\psi}_{k}(l)\| \tag{9}\]
where \(a_{k}\) are complex coefficients. Those coefficients can be computed through the scalar products of \(\Psi^{2\omega}(l)\) and \(\boldsymbol{\psi}_{k}(l)\), taking into account that, in general, the 4 vectors \(\boldsymbol{\psi}_{k}(l)\) are not orthogonal to each other.
Once written in the form (9), the correction sought for \(\Psi^{2\omega}(l)\) due to the optic environment can be easily obtained. The corrected vector is simply

\[\Psi^{\prime 2\omega}(l)=\sum_{k=1}^{4}a^{\prime}_{k}\boldsymbol{\psi}_{k}(l)/\|\boldsymbol{\psi}_{k}(l)\| \tag{10}\]
where the new coefficients are
\[a^{\prime}_{k}=a_{k}\|\boldsymbol{\psi}_{k}(l)\| \tag{11}\]
This rescaling of the coefficients using the square root of the intensities ensures an adequate contribution to the electric and magnetic fields of each normal mode of the \(2\omega\) wave generated according to the characteristics of the surrounding medium. As has been said before, if the medium has a photonic nature the norms \(\|\boldsymbol{\psi}_{k}(l)\|\) can be very different near the bandgaps. Far from the gaps, all of them will be close to unity, but inside the gaps, some of them can be much smaller than unity and, on the contrary, much larger at the resonances at the edges of the forbidden bands. Correction (11) thus accounts for the different degrees of penalty (or enhancement) of each normal mode in the generation of the \(2\omega\) field within the material.
Combining (10) and (11), we finally obtain

\[\Psi^{\prime 2\omega}(l)=\sum_{k=1}^{4}a_{k}\boldsymbol{\psi}_{k}(l) \tag{12}\]
4. Once we have the Berreman vector of the \(2\omega\) wave generated at each layer, we calculate the total SHG intensity outside the material.
The output Berreman vector \(\Psi^{2\omega}_{output}(l)\) of the second-harmonic field due to the contribution of the \(l\)th layer is obtained by propagating \(\Psi^{\prime 2\omega}(l)\) outside the sample, in the form

\[\Psi^{2\omega}_{output}(l)=\mathrm{U}^{2\omega}(n)...\mathrm{U}^{2\omega}(l+1)\Psi^{\prime 2\omega}(l) \tag{13}\]
By adding the contributions of all the layers, we have the total SHG field, which will have a Berreman vector

\[\Psi^{2\omega}_{tot,output}=\sum_{l=1}^{n}\Psi^{2\omega}_{output}(l) \tag{14}\]
The intensity and polarization of the forward SHG light can now be deduced from \(\Psi^{2\omega}_{tot,output}\) following standard procedures [15]. The SHG wave in the backward direction can also be obtained by propagating \(\Psi^{2\omega}_{tot,output}\) towards the left. The procedure can also be extended to allow incident beams propagating from left to right and from right to left. We have implemented the above calculations in a computer program written in Mathematica. Taking typically \(n\)=1000 layers, the execution time on a personal computer is only a few seconds.
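A compact numerical sketch of steps 1-4 may help fix ideas. The authors' own implementation is in Mathematica; the Python/NumPy version below is a hedged re-sketch in which the per-layer 4x4 transfer matrices (whose construction from the local dielectric tensor follows standard Berreman procedures [15]) and the four unit-intensity eigenstates at \(2\omega\) are assumed to be precomputed inputs.

```python
import numpy as np

M = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)        # metric tensor of Eq. (8)

def bnorm(psi):
    # Norm of a Berreman vector, sqrt(|psi^dagger . M . psi|), cf. Eq. (7)
    return np.sqrt(abs(np.conj(psi) @ M @ psi))

def total_shg_output(U_w, U_w_dist, U_2w, psi_in, eig_2w):
    """U_w[l], U_w_dist[l], U_2w[l]: 4x4 transfer matrices of layer l for the
    fundamental, the field-distorted medium (Eq. 4) and the SH frequency;
    psi_in: input Berreman vector; eig_2w: four eigenstate vectors at 2w."""
    psi_w = np.asarray(psi_in, dtype=complex)
    v = [np.asarray(e, dtype=complex) for e in eig_2w]   # basis of Eq. (6)
    total = np.zeros(4, dtype=complex)
    for l in range(len(U_w)):
        total = U_2w[l] @ total                  # carry earlier layers forward
        psi_w = U_w[l] @ psi_w                   # Eq. (2)
        psi_2w = (U_w_dist[l] - U_w[l]) @ psi_w  # Eq. (5): SH source term
        v = [U_2w[l] @ vk for vk in v]           # Eq. (6)
        B = np.column_stack([vk / bnorm(vk) for vk in v])
        a = np.linalg.solve(B, psi_2w)           # coefficients a_k of Eq. (9)
        total += np.column_stack(v) @ a          # Eqs. (10)-(14) combined
    return total                                 # Berreman vector of Eq. (14)
```

The forward SHG intensity then follows from Eq. (7), \(S_{z}=(c\varepsilon_{0}/4)\,\Psi^{\dagger}.\mathrm{M}.\Psi\).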
We will now turn to apply the above procedure to some examples. First, we will check the method in two cases where the solution is known in order to obtain a basic validation of our results. As we previously mentioned, we will take our examples from the world of liquid crystals. More specifically we will use the recently discovered so-called ferroelectric nematic phase as the non-linear medium able to generate the SH field. As will be shown, that phase is very flexible and can adopt a great variety of molecular arrangements of great interest for our purposes. We will treat several structural configurations with different molecular arrangements that will be specified below.
## 3 Study of the SHG performance in the N\({}_{F}\) and N\({}_{F}\)\({}^{*}\) phases.
- SHG in N\({}_{F}\) phase (homogeneous material)
In ordinary nematic (N) liquid crystalline phases, the constituent molecules are rod shaped, present long-range orientational order and have no positional order. The long axes of the molecules are aligned along a preferential direction called molecular director. Typically, N phases are uniaxial and no spontaneous polarization can appear due to the so-called head-to-tail invariance of the molecular arrangement along the director. However, recently the N\({}_{F}\) phase has been discovered [17-27]. In these
phases the head-to-tail invariance is broken and spontaneous polarization appears along the director (see Fig. 1). These phases are very promising for nonlinear optical applications since bulky donor-acceptor groups can be incorporated along the long molecular axis, which results in high nonlinear optical responses. Some preliminary studies of the SHG features of N\({}_{\text{F}}\) liquid crystals have already been done [37-41]. In particular, in ref. [41] a complete characterization of the second-order susceptibility tensor has been carried out in the N\({}_{\text{F}}\) compound RM734 [20]. Two independent components of this tensor are compatible with the symmetry of the phase, assuming Kleinman conditions. The corresponding second-order susceptibility tensor in contracted notation is:
\[\begin{pmatrix}d_{11}&d_{13}&d_{13}&0&0&0\\ 0&0&0&0&0&d_{13}\\ 0&0&0&0&d_{13}&0\end{pmatrix} \tag{15}\]
where the \(x\)-axis is along the molecular director and \(y\) and \(z\) are two orthogonal directions in the plane perpendicular to \(x\). In ref. [41], it has been stated that the \(d_{11}\) coefficient is one order of magnitude higher than \(d_{13}\) (\(d_{11}=5.6\text{ pm/V}\), \(d_{13}<0.6\text{ pm/V}\)). In the present calculations we have neglected \(d_{13}\), and have assumed a dispersion profile for the ordinary (\(n_{o}\)) and extraordinary (\(n_{e}\)) refractive indices described by the Cauchy formulas \(n_{e}(\lambda)=1.8447+24151/\lambda^{2}\), \(n_{o}(\lambda)=1.6347+24151/\lambda^{2}\) with the wavelength \(\lambda\) expressed in nm.
The SHG signal has been calculated for an input fundamental light linearly polarized along \(x\) and of intensity \(10^{6}\text{ W/m}^{2}\). Under these conditions, the SHG light is also \(x\)-polarized (ee\(\rightarrow\)e conversion). The calculations have been carried out in a sample of 40 \(\upmu\)m thickness. For this simple case, there exists an analytical expression for the SHG signal that is given by [2]:
Fig 1: Schematic representation of the structure of the N\({}_{\text{F}}\) and N\({}_{\text{F}}\)* phases. Arrows indicate the direction of the local polarization.
Fig 2: SHG intensity vs wavelength of the SH light calculated by our method (blue dots) and using Eq. (16) (red line). The intensity of the input fundamental light is 10\({}^{6}\) W/m\({}^{2}\).
\(I^{2\omega}=\left(I^{\omega}\right)^{2}\frac{32\pi^{2}d_{11}^{2}}{\lambda^{2}\varepsilon_{0}c\left(n_{\mathrm{e}}^{\omega}\right)^{2}n_{\mathrm{e}}^{2\omega}}\frac{\sin^{2}\left(\frac{\Delta kL}{2}\right)}{(\Delta k)^{2}}\) (16)
where \(I^{\omega}\) and \(I^{2\omega}\) stand for the intensities of the fundamental and SHG lights, respectively, and \(\Delta k=4\pi\big{(}n_{\mathrm{e}}^{2\omega}-n_{\mathrm{e}}^{\omega}\big{)}/\lambda\), where \(\lambda\) is the wavelength of the fundamental light in vacuum and \(L\) the sample thickness. Dots in Fig. 2 represent the SHG intensity vs. wavelength of the SH wave obtained by using our calculation procedure, whereas the red line represents the curve obtained with Eq. (16). As can be seen, an excellent agreement is obtained between both curves. It should be noted that the procedure we have developed gives the SHG intensity in absolute units.
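For reference, Eq. (16) can be evaluated directly, as in the short sketch below; it uses the Cauchy formula and the \(d_{11}\) value quoted above, and should reproduce the qualitative shape of Fig. 2 (the numerical prefactor follows Eq. (16) as written, so any convention mismatch would only rescale the result).

```python
import numpy as np

eps0, c = 8.854e-12, 2.998e8       # SI constants
d11 = 5.6e-12                      # d11 = 5.6 pm/V, value reported in ref. [41]
L = 40e-6                          # sample thickness (m)
I_w = 1e6                          # fundamental intensity (W/m^2)

def n_e(lam_nm):
    # Extraordinary-index Cauchy formula from the text (wavelength in nm)
    return 1.8447 + 24151.0 / lam_nm**2

def I_2w(lam_nm):
    # SHG intensity of Eq. (16) for a fundamental wavelength lam_nm (nm)
    lam = lam_nm * 1e-9
    ne_w, ne_2w = n_e(lam_nm), n_e(lam_nm / 2)
    dk = 4 * np.pi * (ne_2w - ne_w) / lam
    pref = 32 * np.pi**2 * d11**2 / (lam**2 * eps0 * c * ne_w**2 * ne_2w)
    return I_w**2 * pref * np.sin(dk * L / 2)**2 / dk**2

print(I_2w(1000.0))                # e.g. SH generated at 500 nm
```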
- SHG in the chiral N\({}_{F}\) phase (inhomogeneous material)
Recently, the chiral version of the N\({}_{\mathrm{F}}\) phase has also been discovered (N\({}_{\mathrm{F}}\)*) [14,28-31]. The structure is similar to that of the traditional chiral nematic (or cholesteric) liquid crystalline phase N*, i.e., it presents a helical arrangement of the molecules. The main difference is that the N\({}_{\mathrm{F}}\)* phase exhibits spontaneous polarization along the local director (see Fig. 1). Due to this fact, the material is SHG active. Both N* and N\({}_{\mathrm{F}}\)* phases present photonic properties, showing a forbidden band that prevents the propagation of the light in the frequency range inside the band for circularly polarized light with the same handedness as the helix.
The SHG response in liquid crystals with helical molecular arrangements has been addressed in the case of the Smectic C* (SmC*) phases [7,8]. The resolution of this SHG problem is interesting from a basic point of view since the existence of a periodic modulation of the dielectric tensor gives rise to new PM possibilities. In the SmC* case, it is possible to obtain analytic solutions for the eigenstates of both the fundamental and SH waves that propagate through the material. In ref. [7], an exhaustive analysis of the different possibilities of PM is carried out for light propagating along the helix axis. The theory does not require the SVAA, and provides a numerical solution for the general regime of SHG propagation. Based on this theory, Hoshi et al. [8] obtained a general analytical description of the SHG phenomena in the SmC* phase. The theory can be easily adapted to account for the SHG performance of N\({}_{\mathrm{F}}\)* phases by modifying the local dielectric and \(d_{ijk}\) tensors according to the new symmetry. We have compared the SHG response obtained in a N\({}_{\mathrm{F}}\)* phase by using our procedure and the analytical theory by Hoshi et al. [8]. The comparison is an interesting test of the validity of our calculation method, because the final formulas in ref. [8] are very complicated, especially when the SVAA does not hold, i.e., at the PBGs. For the comparison we have considered a sample of thickness 30\(p\) (\(L\)=8.27 \(\upmu\)m) with a helical pitch \(p\)=0.2756 \(\upmu\)m. We have used the same optical parameters as described in the homogeneous case for the local dielectric tensor and second-order susceptibility. An input fundamental light circularly polarized with the same handedness as the helix, and propagating forward along the helix axis, has been used. Figure 3a represents the reflectance spectrum of the sample. It is worth mentioning that in N* and N\({}_{\mathrm{F}}\)* liquid crystals only one PBG appears, centered at a wavelength \(\lambda=\overline{n}p\), where \(\overline{n}\) is the average refractive index. Figs. 3b and c show the forward and backward SHG intensities vs. SH wavelength, respectively, using our procedure (blue dots) and Hoshi's analytical expression (red line). As can be seen, again an excellent agreement is found between both calculations. Two different PMs (A and B) are observed in the spectrum range depicted in the figure. The PM condition holds both for forward and backward propagation of the SHG wave. The intensity corresponding to PM A (343 nm) is dominant in backward propagation. In the case of PM B the wavelength of the PM (532 nm) coincides with that of the long-wavelength photonic edge (LWE) of the band, and the output intensity is the same for both directions of propagation (note the different scales in the ordinate axes in Figs. 3 b and c). We are now going to identify both PM processes.
To make this identification, we must recall that the dispersion relation of the light modes of frequency \(\omega\) in a N* phase is given by [42]:
\[l=\pm\left[\left(k_{0}^{2}\overline{n}^{2}+q^{2}\right)\pm\sqrt{4k_{0}^{2} \overline{n}^{2}q^{2}+\alpha^{4}k_{0}^{4}}\right]^{1/2} \tag{17}\]
where \(l\) is the wave vector, \(q=2\pi/p\), \(k_{0}=\omega/c\), \(\overline{n}^{2}=\frac{1}{2}\big{(}n_{e}^{2}+n_{o}^{2}\big{)}\), \(\alpha^{2}=\frac{1}{2}\big{(}n_{e}^{2}-n_{o}^{2}\big{)}\). A schematic representation of the dispersion relationship is depicted in Fig. 4. The branches with positive (negative) slope represent forward (backward) propagating waves. The eigenmodes are essentially circularly polarized except at the edges of the gap or for very high frequencies. At the low (high)-frequency edge of the gap the \(l\)=0 mode is a stationary wave whose electric field locally oscillates in a direction parallel (perpendicular) to the molecular director [42]. Blue branches correspond to polarizations with the same handedness as the helix whereas red branches stand for the opposite.
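To make the band structure concrete, the following short Python sketch evaluates the inner branch of Eq. (17) and locates the PBG as the wavelength range where \(l^{2}<0\). The Cauchy coefficients are our own illustrative choice, tuned only to roughly reproduce the band edges quoted below (approximately 480 and 532 nm); they are not the exact parameters of the calculation.

```python
import numpy as np

# Minimal sketch of Eq. (17): locate the PBG of the helix as the region where
# the inner branch gives l^2 < 0 (evanescent circular mode). The Cauchy
# coefficients are illustrative assumptions, not the paper's parameters.
p = 0.2756e-6                      # helical pitch (m)
q = 2 * np.pi / p                  # helix wave vector (rad/m)

lam = np.linspace(300e-9, 700e-9, 4000)   # vacuum wavelength (m)
n_o = 1.68 + 1.5e-14 / lam**2             # assumed ordinary index
n_e = 1.86 + 2.0e-14 / lam**2             # assumed extraordinary index

k0 = 2 * np.pi / lam
nbar2 = 0.5 * (n_e**2 + n_o**2)           # nbar^2
alpha2 = 0.5 * (n_e**2 - n_o**2)          # alpha^2

# Inner branch of Eq. (17): l^2 = (k0^2 nbar^2 + q^2) - sqrt(4 k0^2 nbar^2 q^2 + alpha^4 k0^4)
l2 = k0**2 * nbar2 + q**2 - np.sqrt(4 * k0**2 * nbar2 * q**2 + alpha2**2 * k0**4)

gap = lam[l2 < 0]                          # wavelengths inside the PBG
print(f"PBG from {gap.min()*1e9:.0f} nm to {gap.max()*1e9:.0f} nm")
```

With these illustrative indices the evanescent region spans roughly 480-532 nm, with edges at \(n_{o}p\) and \(n_{e}p\), in line with the reflection band of Fig. 3a.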
Figure 3: (a) Reflectance spectrum of the N\({}_{\mathrm{F}}\)* sample. Forward (b) and backward (c) propagating SHG intensities vs the SH wavelength. The input light is circularly polarized with the same handedness as the helix, and propagates forward with an intensity of \(10^{6}\) W/m\({}^{2}\). The blue dots and the red line correspond to our numerical method and Hoshi’s analytical expression respectively. In (b) and (c) the different PM profiles (A and B) have been enlarged.
We can now easily identify the PMs depicted in Fig. 3 as combinations of modes in the dispersion curve. The PM A in Fig. 3c (343 nm) corresponds to the combination denoted as A in Fig. 4 (green arrow). In our case, the input light is a circularly polarized mode travelling forward, giving rise to a counter-propagating SHG light of the same handedness. Since the PM condition can be written as \(l^{2\omega}=2l^{\omega}\), Eq. (17) implies the following relation [7]:
\[\frac{2\omega}{c}[\overline{n}(2\omega)+\overline{n}(\omega)]\approx 3q \tag{18}\]
which is, indeed, satisfied for \(\lambda/2=343\) nm (\(\overline{n}(343\) nm) = 1.945 and \(\overline{n}(686\) nm) = 1.791 according to the Cauchy formula). It can be shown that the SHG intensity scales as \(L^{2}\). It is worth mentioning that a much smaller intensity is observed in forward propagation (left peak in Fig. 3b). This SH light arises from the residual fundamental light that is always reflected inside the sample, which gives rise to a PM configuration equivalent to A in Fig. 4, but with the waves involved in the process propagating in the opposite directions.
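Condition (18) can be verified directly with the values quoted above; the following one-liner is our own numerical sanity check, not part of the original calculation:

```python
import numpy as np

# Check of the PM A condition (18): (2w/c)[nbar(2w)+nbar(w)] ~ 3q.
# Values quoted in the text, in nm units; 2w/c = 4*pi/686 nm = 2*pi/343 nm.
p = 275.6                                    # helical pitch (nm)
lam_sh = 343.0                               # SH wavelength (nm); fundamental at 686 nm
lhs = (2 * np.pi / lam_sh) * (1.945 + 1.791)
rhs = 3 * (2 * np.pi / p)
print(f"lhs = {lhs:.5f} nm^-1, rhs = {rhs:.5f} nm^-1")   # ~0.06844 vs ~0.06839
```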
Finally, the PM B (right peak in Figs. 3b, c) corresponds to the mode combination depicted as B in Fig. 4 (black arrows). The PM condition takes place at the LWE of the gap and occurs for two counter-propagating fundamental input lights. The PM condition can be written as [7]:
\[\frac{2\omega}{c}n_{e}(2\omega)=q \tag{19}\]
Figure 4: PM combinations schematized on the dispersion curves of the optical eigenmodes of the N\({}_{\mathrm{F}}\)* phase. Points of the branches with positive (negative) slope represent modes propagating forward (backward). Except at the band edges the polarization of all modes is essentially circular, with the same handedness as the helix for blue branches and with opposite handedness for red branches. In the PM process denoted by A, two photons of frequency \(\omega\) on the origin of the green arrow give rise to a photon of frequency \(2\omega\) on the point indicated by the arrowhead. In the B process, the two photons on the origins of the black arrows, both of frequency \(\omega\) and with opposite wave vectors \(l\), combine to give a \(2\omega\) photon with wave vector \(l\)=0 at the LWE of the gap (head of the black arrows).
which is satisfied for \(\lambda/2=532\) nm (\(n_{e}\)(532 nm) = 1.930).
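Again, a quick numerical check of Eq. (19) with the quoted values (our own sanity check):

```python
import numpy as np

# Check of the PM B condition (19): (2w/c)*n_e(2w) = q at the LWE.
p, lam_sh, n_e = 275.6, 532.0, 1.930          # nm units, values from the text
print((2 * np.pi / lam_sh) * n_e, 2 * np.pi / p)   # 0.022795 vs 0.022798 nm^-1
```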
This kind of PM gives rise to very high performance when the sample is illuminated from both the left and right sides simultaneously, so that the two counter-propagating input lights present similar intensities. In fact, it can be shown that the SHG intensity then scales as \(L^{4}\). However, in our case, the fundamental light travelling backward is only the residual counter-propagating mode due to the small reflection that necessarily occurs inside the modulated sample. Note that the reflectivity is very small at the wavelength of the fundamental light (\(\lambda\approx 1064\) nm) but it is, nevertheless, high enough to give rise to a sizable SHG intensity.
It is interesting to note that there is no PM similar to B at the short-wavelength edge (SWE) of the band (480 nm, see Fig. 3a). This absence can be understood if we recall that the SH electric field for this mode would oscillate in a direction perpendicular to the molecular director and, therefore, according to the proposed form of the \(d_{ijk}\) tensor, there is no possibility for SHG.
The PMs discussed here are just two examples of a much larger variety of PM combinations existing in this kind of structure [7]. In fact, we have checked that more PMs are obtained if a larger spectral range is examined or other input polarizations are used, although the SHG intensities are in general smaller. In all cases we have found an excellent agreement between both calculation procedures which, given the complexity of this case, gives us full confidence in the validity of our method.
#### - SHG in the N\({}_{\mathrm{F}}\)* phase under electric field
Now we calculate the SHG response of a sample with a more complicated modulation. We will consider again the N\({}_{\mathrm{F}}\)* sample, but now subjected to an electric field perpendicular to the helix. The linear optical properties in this configuration have been previously studied in refs. [28-31]. For this configuration, no procedure to calculate the SHG intensity has been available so far. However, the versatility of our calculation method allows us to address this problem.
Firstly, we will briefly describe the structure of the N\({}_{\mathrm{F}}\)* phase under electric field. In a classical nematic material, due to the head-to-tail invariance, the true periodicity of the structure is \(p/2\). This fact gives rise to the appearance of just a single PBG centered at a wavelength \(\lambda=2\pi\overline{n}/q=\overline{n}p\). In the polar N\({}_{\mathrm{F}}\)* phase, an electric field perpendicular to the helix couples to the spontaneous polarization and distorts the helical arrangement, so that the azimuthal angle \(\phi\) is no longer a linear function of \(z\) (see Fig. 5, computed for a field of 8 V/mm). The distortion introduces additional Fourier components in the modulation of the dielectric tensor, with the full pitch \(p\) as the true periodicity.
Fig. 6a shows the reflectance spectrum for a fundamental light circularly polarized with the same handedness as that of the helix at normal incidence. As can be seen, three PBGs can be observed in the depicted spectral range. They correspond to the different harmonics of the fundamental reflection band (1 in Fig. 6a), and are related to the different Fourier components of the helix distortion. This spectral configuration is very interesting for SHG purposes, since photonic materials with multiple PBGs can present high performance when the fundamental and SH lights are simultaneously enhanced by resonance.
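The connection between the helix distortion and the extra bands can be visualized with a small numerical experiment. The sketch below assumes a hypothetical lowest-order distortion profile \(\phi(z)=qz+\delta\sin(qz)\) (an assumption for illustration, not the computed profile of Fig. 5) and examines the Fourier content of a dielectric-tensor component varying as \(\cos 2\phi\):

```python
import numpy as np

# Fourier content of a dielectric-tensor component ~ cos(2*phi) for a
# field-distorted helix; phi(z) = q z + delta*sin(q z) is an assumed profile.
p = 0.2756e-6
q = 2 * np.pi / p
delta = 0.3                                    # assumed distortion amplitude
z = np.linspace(0, 64 * p, 2**15, endpoint=False)
f = np.cos(2 * (q * z + delta * np.sin(q * z)))

amp = 2 * np.abs(np.fft.rfft(f)) / len(z)      # cosine amplitudes
freq = np.fft.rfftfreq(len(z), d=z[1] - z[0])  # spatial frequency (cycles/m)
for m in (1, 2, 3):
    idx = np.argmin(np.abs(freq - m / p))      # bin of the m-th harmonic of q
    print(f"amplitude of the {m}q component: {amp[idx]:.3f}")
```

The dominant \(2q\) component gives the usual half-pitch band, while the field-induced \(q\) and \(3q\) sidebands account for the full-pitch and third-harmonic bands of Fig. 6a.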
Although the new structure presents a complex SHG response due to the rich variety of PBGs that it exhibits, we will focus on the SHG processes in the vicinity of the half-pitch PBG (2 in Fig. 6a). Figs. 6b and c represent the SHG intensity in the forward and backward directions, respectively. As can be seen, in both cases two different PMs can be observed, corresponding to the two edges of the band. The highest intensity occurs at the LWE of the band (532 nm). Remarkably, the small distortion induced by the electric field triples the SHG signal obtained under no field. In contrast with the field-free case, an SHG peak now also appears at the SWE of the band (480 nm). Both PMs scale as \(L^{4}\).
Figure 5: \(\phi/(2\pi)\) versus \(z\) for the N\({}_{\mathrm{F}}\)* phase under an electric field of 8 V/mm, perpendicular to the helix.
Let us now interpret the obtained results qualitatively. As previously mentioned when studying the SHG performance under no electric field, a very efficient PM at the PBG edge appears if there are two fundamental light beams in the sample propagating in opposite directions. In our case, the fundamental input light propagates essentially forward, but a counter-propagating residual light with the same polarization always appears inside the modulated material. This residual contribution becomes more relevant under field, since the fundamental light is now in the vicinity of the full-pitch PBG (1 in Fig. 6a), which results in a more intense reflection of the fundamental light. Obviously, the most efficient SHG conversion would happen when the fundamental and SHG lights coincide with the edges of the PBGs simultaneously, but this situation requires a particularly favorable index dispersion law and is impossible to attain in the present case.
Finally, we give a brief account of the appearance of the PM at the SWE of the gap (left peak in Figs. 6b and c). In this case, it can be shown that the polarization of the corresponding \(2\omega\) light mode is not perfectly perpendicular to the local director. Thus, a small SHG response is possible through the \(d_{11}\) coefficient.
## 4 Conclusions
We have presented a numerical procedure for calculating the SHG field generated by an anisotropic material with a 1D modulation of the dielectric permittivity. The method can be applied for arbitrary modulations and even when the input light has an oblique incidence. The method is based on the Berreman 4x4 matrix formalism and, in fact, it is a generalization of that procedure to incorporate SHG processes. The strategy proves to be a powerful tool to explore the SHG performance of complex optical
Figure 6: (a) Reflectance spectrum of the N\({}_{\mathrm{F}}\)* sample under an electric field of 8 V/mm. The full-pitch, half-pitch, and third-harmonic bands are denoted by 1, 2, and 3, respectively. (b) Forward SHG intensity vs wavelength of the SH light. (c) The equivalent case to (b) but for the SH light travelling backward. The intensity of the input fundamental light is 10\({}^{6}\) W/m\({}^{2}\).
systems. Compared to existing models, the present approach represents a significant step forward in the study of the SHG response of nonlinear anisotropic inhomogeneous media.
As an example of application of the method, the SHG response of a N\({}_{\text{F}}\)* liquid crystal, subjected to electric fields, has been studied. The obtained results show the attractive potential of these materials for developing cheap and high-performance optical devices. In the near future we plan to explore experimentally the predictions of this work.
## Funding
Basque Government project IT1458-22
|
2305.09741 | Existence of the Shafarevich morphism for semisimple local systems on
quasi-projective varieties | Let X be a normal connected complex algebraic variety equipped with a
semisimple complex representation of its fundamental group. Then, under a
maximality assumption, we prove that the covering space of X associated to the
kernel of the representation has a proper surjective holomorphic map with
connected fibres onto a normal analytic space with no positive-dimensional
compact analytic subspace. | Yohan Brunebarbe | 2023-05-16T18:26:29Z | http://arxiv.org/abs/2305.09741v1 | # Existence of the Shafarevich morphism for semisimple local systems on quasi-projective varieties
###### Abstract
Let X be a normal connected complex algebraic variety equipped with a semisimple complex representation of its fundamental group. Then, under a maximality assumption, we prove that the covering space of X associated to the kernel of the representation has a proper surjective holomorphic map with connected fibres onto a normal analytic space with no positive-dimensional compact analytic subspace.
## 1 Introduction
In an attempt to understand which complex analytic spaces can be realised as the universal covering of a complex algebraic variety, Shafarevich asked whether the universal covering \(\tilde{X}\) of any smooth projective variety \(X\) is holomorphically convex [2, IX.4.3]. In other words, does there exist a proper holomorphic map \(sh_{\tilde{X}}\colon\tilde{X}\to\operatorname{Sh}(\tilde{X})\) to a Stein analytic space \(\operatorname{Sh}(\tilde{X})\)? If such a map \(sh_{\tilde{X}}\) exists, then one may assume in addition that it is surjective with connected fibres, and then it is unique: it is the so-called Cartan-Remmert reduction of \(\tilde{X}\). Shafarevich's question, nowadays known as the Shafarevich conjecture, has been established when \(\pi_{1}(X)\) is virtually nilpotent [10] or when \(\pi_{1}(X)\) has a faithful complex linear representation [1, 2]. See also [14, 1, 15, 16, 17, 18]. The general case is however still open, due to the lack of methods to deal with non-linear fundamental groups. On the contrary, when the fundamental group admits many linear representations, one can use the whole apparatus of tools coming from classical and non-abelian Hodge theory and the theory of harmonic maps towards buildings [1, 2, 3, 4, 5, 6].
The goal of this paper is to study the following natural generalization of Shafarevich question.
**Question**.: _Let \(X\) be a (non-necessarily proper) normal complex algebraic variety and \(H\subset\pi_{1}(X)\) a normal subgroup. Under which conditions the corresponding Galois etale covering \(\tilde{X}^{H}\) of \(X\) is holomorphically convex?_
When \(H=\{1\}\), this question is answered positively in [1] and [1], assuming either that \(\pi_{1}(X)\) is residually nilpotent and that the quasi-Albanese map of \(X\) is proper, or that \(X\) admits an admissible integral variation of mixed Hodge structures with a proper period map.
In order to both remove this properness restriction and deal with the general case, we propose the following definition.
**Definition**.: _Let \(X\) be a connected complex algebraic variety. Let \(H\subset\pi_{1}(X)\) be a normal subgroup. The pair \((X,H)\) is called maximal if, for any connected complex algebraic variety \(\bar{X}\) equipped with an open immersion \(X\to\bar{X}\) and every holomorphic map \(v\colon\Delta\to\bar{X}\) such that \(v(\Delta^{*})\subset X\) and \(v(0)\notin X\), the composite homomorphism \(\mathbb{Z}=\pi_{1}(\Delta^{*})\to\pi_{1}(X)\to\pi_{1}(X)/H\) is non-trivial._
If \(X\) is a connected smooth complex algebraic variety and \(H\subset\pi_{1}(X)\) is a normal subgroup such that \(\pi_{1}(X)/H\) is torsion-free, then the pair \((X,H)\) extends as a maximal pair on a smooth partial compactification of \(X\), see Proposition 3.5.
Our main result in this paper is the following:
**Theorem A**.: _Let \(X\) be a normal connected complex algebraic variety. Let \(H\subset\pi_{1}(X)\) be a normal subgroup which is commensurable to the intersection of the kernels of the monodromy representations of a collection of semisimple complex local systems on \(X\) of bounded rank. Let \(\tilde{X}^{H}\to X\) be the Galois etale cover corresponding to \(H\). Assume that the pair \((X,H)\) is maximal. Then,_
1. _There exists a normal complex space_ \(\tilde{\mathrm{Sh}}_{X}^{H}\) _with no positive-dimensional compact analytic subspace and a surjective proper holomorphic map with connected fibers_ \(\tilde{\mathrm{sh}}_{X}^{H}\colon\tilde{X}^{H}\to\tilde{\mathrm{Sh}}_{X}^{H}\)_._
2. _There exists a connected complex analytic space_ \(\mathrm{Sh}_{X}^{H}\) _and a surjective proper holomorphic map with connected fibers_ \(\mathrm{sh}_{X}^{H}\colon X\to\mathrm{Sh}_{X}^{H}\) _such that the following property holds:_ _For any connected proper complex algebraic variety_ \(Z\) _equipped with an algebraic morphism_ \(f\colon Z\to X\)_,_ \(\mathrm{sh}_{X}^{H}(f(Z))\) _is a point if and only if the image of_ \(\pi_{1}(Z)\) _in_ \(\pi_{1}(X)/H\) _is finite._
It is easily seen that the morphisms \(\tilde{\operatorname{sh}}_{X}^{H}\) and \(\operatorname{sh}_{X}^{H}\) are uniquely characterized (up to unique isomorphism) by their respective defining properties.
**Theorem B** (see Theorem 10.2).: _Let \(X\) be a normal connected complex algebraic variety. Let \(\mathcal{L}\) be a semisimple complex local system on \(X\). Let \(H\) be the kernel of the monodromy representation of \(\mathcal{L}\). Assume that the pair \((X,H)\) is maximal. Assume also that the image of the monodromy representation of \(\mathcal{L}\) is torsion-free. Then there exists a complex local system \(\mathcal{M}\) on \(\operatorname{Sh}_{X}^{H}\) such that \(\mathcal{L}=(\operatorname{sh}_{X}^{H})^{-1}\mathcal{M}\)._
When \(X\) is a smooth projective variety, Theorem A was proved by Katzarkov-Ramachandran [10] when \(\dim X=2\) and by Eyssidieux [11] in general. Our proof follows a similar strategy as in [11], but with a twist. A crucial new ingredient is provided by the following result.
**Theorem C** (= Theorem 6.8).: _Let \(\Gamma\) be a finitely generated group, \(G\) be a reductive \(\mathbb{Q}\)-algebraic group and \(\Sigma\) be a \(\mathbb{Q}\)-Zariski constructible subset of the character variety \(M_{B}(\Gamma,G)\). Assume that for a prime number \(p\), all the conjugacy classes of representations corresponding to elements in the set \(\Sigma(\bar{\mathbb{Q}})\) have bounded image when seen as representations with values in \(G(\bar{\mathbb{Q}}_{p})\). Then \(\Sigma\) consists of finitely many points._
### Acknowledgements.
I warmly thank Ben Bakker, Jeremy Daniel and Marco Maculan for interesting discussions.
## 2 Holomorphic fibrations and Stein factorization
A holomorphic fibration is a proper surjective holomorphic map \(g\colon S\to T\) between two complex analytic spaces such that \(g_{*}\mathcal{O}_{S}=\mathcal{O}_{T}\) (in particular, the fibres of a fibration are connected).
**Theorem 2.1** (Stein, Cartan; cf. [1, Theorem 3]).: _Let \(f_{\alpha}\colon X\to Y_{\alpha}\) be a collection of holomorphic maps between complex analytic spaces. Let \(f\colon X\to\prod_{\alpha}Y_{\alpha}\) be the product map. Assume that the connected components of the fibres of \(f\) are compact. Then the equivalence relation \(R\) defined by these connected components is proper and the quotient ringed space \(X/R\) is a complex analytic space._
The proper surjective holomorphic map with connected fibers \(X\to X/R\) is called the Stein factorization of the collection of holomorphic maps \(f_{\alpha}\).
## 3 Maximal pairs
**Definition 3.1**.: _Let \(X\) be a connected complex algebraic variety and \(\rho\colon\pi_{1}(X)\to\Gamma\) a homomorphism of groups. The pair \((X,\rho)\) is called maximal if, for any connected complex algebraic variety \(\bar{X}\) equipped with an open immersion \(X\to\bar{X}\) and every holomorphic map \(v\colon\Delta\to\bar{X}\) such that \(v(\Delta^{*})\subset X\) and \(v(0)\notin X\), the composite homomorphism \(\mathbb{Z}=\pi_{1}(\Delta^{*})\to\pi_{1}(X)\to\Gamma\) is non-trivial._
If \(H\subset\pi_{1}(X)\) is a normal subgroup, we recover the definition of the introduction by considering the homomorphism \(\pi_{1}(X)\to\pi_{1}(X)/H\).
**Remark 3.2**.: _It is easily seen that it is sufficient to check the condition in the definition only for one compactification \(\bar{X}\) of \(X\). Also, keeping the notations of the definition, it follows immediately that if the pair \((X,\rho)\) is maximal, then for every holomorphic map \(v\colon\Delta\to\bar{X}\) such that \(v(\Delta^{*})\subset X\) and \(v(0)\notin X\), the composite homomorphism \(\mathbb{Z}=\pi_{1}(\Delta^{*})\to\pi_{1}(X)\to\Gamma\) has an infinite image._
**Proposition 3.3**.: _Let \(f\colon Y\to X\) be a proper algebraic morphism between two connected complex algebraic varieties. Let \(\rho\colon\pi_{1}(X)\to\Gamma\) be a homomorphism of groups._
* _If the pair_ \((X,\rho)\) _is maximal, then the pair_ \((Y,f^{-1}\rho)\) _is maximal._
* _If_ \(f\) _is surjective and the pair_ \((Y,f^{-1}\rho)\) _is maximal, then the pair_ \((X,\rho)\) _is maximal._
Proof.: Since the algebraic map \(f\colon Y\to X\) is proper, there exist \(\bar{Y}\) and \(\bar{X}\) some connected compactifications of \(Y\) and \(X\) respectively, such that \(f\) extends to an algebraic map \(\bar{f}\colon\bar{Y}\to\bar{X}\) with \(\bar{f}^{-1}(X)=Y\).
Assume that the pair \((X,\rho)\) is maximal. Let \(v\colon\Delta\to\bar{Y}\) be a holomorphic map such that \(v(\Delta^{*})\subset Y\) and \(v(0)\notin Y\). Then, the composite map \(\bar{f}\circ v\colon\Delta\to\bar{X}\) is a holomorphic map such that \((\bar{f}\circ v)(\Delta^{*})\subset X\) and \((\bar{f}\circ v)(0)\notin X\). Since the pair \((X,\rho)\) is maximal, the composite homomorphism \(\mathbb{Z}=\pi_{1}(\Delta^{*})\to\pi_{1}(Y)\to\pi_{1}(X)\to\Gamma\) has infinite image. This shows that the pair \((Y,f^{-1}\rho)\) is maximal.
Conversely, assume that \(f\) is surjective and that the pair \((Y,f^{-1}\rho)\) is maximal. It is harmless to assume that \(\bar{f}\) is surjective too. Let \(v\colon\Delta\to\bar{X}\) be a holomorphic map such that \(v(\Delta^{*})\subset X\) and \(v(0)\notin X\). By shrinking \(\Delta\)
one may assume that \(v\) is injective and extends continuously to \(\bar{\Delta}\).
The preimage of \(v(\bar{\Delta}\backslash\{0\})\) by \(f\) is not closed in the preimage of \(v(\bar{\Delta})\) by \(\bar{f}\), since otherwise its image \(f\left(f^{-1}\left(v(\bar{\Delta}\backslash\{0\})\right)\right)=v(\bar{\Delta}\backslash\{0\})\) by the proper map \(\bar{f}\) would be closed in \(\bar{f}\left(\bar{f}^{-1}\left(v(\bar{\Delta})\right)\right)=v(\bar{\Delta})\). Therefore, there exists \(y\in\bar{Y}-Y\) in the closure of the preimage of \(v(\bar{\Delta}\backslash\{0\})\). Let \(w\colon\Delta\to\bar{Y}\) be a holomorphic map such that \(w(\Delta^{*})\subset f^{-1}(v(\Delta^{*}))\) and \(w(0)=y\). Therefore, we get a commutative diagram of holomorphic maps:
in which \(g\colon\Delta\to\Delta\) denotes the induced map satisfying \(v\circ g=\bar{f}\circ w\) and \(g(0)=0\). Moreover, up to shrinking both \(\Delta\)'s, one can assume that \(g^{-1}(0)=\{0\}\). By assumption, the image of the homomorphism \(\rho\circ f_{*}\circ(w_{|\Delta^{*}})_{*}\colon\pi_{1}(\Delta^{*})\to\Gamma\) is infinite. It follows that the image of the homomorphism \(\rho\circ(v_{|\Delta^{*}})_{*}\colon\pi_{1}(\Delta^{*})\to\Gamma\) is infinite.
**Proposition 3.4**.: _Let \(X\) be a connected smooth complex algebraic variety equipped with a variation of Hodge structure whose monodromy representation \(\rho\) has discrete image \(\Gamma\). Let \(\tilde{X}^{\rho}\to\mathcal{D}\) be the corresponding period map. Then the following are equivalent:_
1. _The pair_ \((X,\rho)\) _is maximal,_
2. _The period map_ \(\tilde{X}^{\rho}\to\mathcal{D}\) _is proper._
Proof.: Since Proposition 3.3 shows that it is harmless to replace \(X\) by a finite etale cover, one can assume thanks to Selberg's lemma that \(\Gamma\) is torsion-free. In that case, the period map \(\tilde{X}^{\rho}\to\mathcal{D}\) induces a holomorphic map on the quotients \(X\to\Gamma\backslash\mathcal{D}\), and the former is the base-change of the latter along the covering map \(\mathcal{D}\to\Gamma\backslash\mathcal{D}\). Therefore, the period map \(\tilde{X}^{\rho}\to\mathcal{D}\) is proper if and only if the induced map \(X\to\Gamma\backslash\mathcal{D}\) is proper. But the latter is proper if and only if the pair \((X,\rho)\) is maximal by Griffiths' criterion [10, Theorem 9.5] (see also [1, section 2.2]).
**Proposition 3.5** (Compare with [1, Proposition 2.4]).: _Let \(X\) be a smooth algebraic variety. Let \(\rho\colon\pi_{1}(X)\to\Gamma\) be a homomorphism of groups with torsion-free image. Then there exists \(i\colon X\hookrightarrow X^{\prime}\) an open embedding in a smooth algebraic variety and a homomorphism \(\rho^{\prime}\colon\pi_{1}(X^{\prime})\to\Gamma\) such that \(i^{-1}\rho^{\prime}=\rho\) and the pair \((X^{\prime},\rho^{\prime})\) is maximal._
Proof.: Let \(X\subset\bar{X}\) be a smooth compactification such that \(\bar{X}\backslash X=D\) is a normal crossing divisor. Fix a covering of \(\bar{X}\) by polydisks \(P\simeq\Delta^{n_{p}}\) such that \(P^{*}:=P\cap X\simeq(\Delta^{*})^{r_{P}}\times\Delta^{s_{P}}\). For any polydisk \(P\), consider the positive octant \(\mathbb{R}_{\geq 0}^{r_{P}}\). The kernel of the restriction of \(\rho\) to \(\pi_{1}(P^{*})=\mathbb{Z}^{r_{P}}\) defines an integral linear subspace of \(\mathbb{R}^{r_{P}}\), and we denote by \(K\) its intersection with \(\mathbb{R}_{\geq 0}^{r_{P}}\). We may find an integral simplicial subdivision of the standard fan on \(\mathbb{R}_{\geq 0}^{r_{P}}\) for which \(K\) is a union of facets. This subdivision corresponds to a (global) monomial modification \(\bar{X}_{P}\to\bar{X}\) such that the restriction of \(\rho\) to the preimage of \(P^{*}\) is maximal, once we extend the pull-back representation over the boundary components of \(\bar{X}_{P}\backslash X\) with no monodromy. Notice that any further monomial modification \(\bar{Z}\to\bar{X}_{P}\) will also satisfy the condition above \(P\). Thus, taking \(\bar{Z}\) to be a monomial modification of \(\bar{X}\) that dominates each of the \(\bar{X}_{P}\) and extending the representation over the boundary components with no monodromy, we get a maximal representation.
## 4 Shafarevich morphisms
**Definition 4.1**.: _Let \(X\) be a normal connected complex algebraic variety and \(H\subset\pi_{1}(X)\) be a normal subgroup. A Shafarevich morphism for the pair \((X,H)\) is the data of a connected complex analytic space \(\mathrm{Sh}_{X}^{H}\) and a holomorphic fibration \(\mathrm{sh}_{X}^{H}\colon X\to\mathrm{Sh}_{X}^{H}\) such that the following property holds: For any connected proper complex algebraic variety \(Z\) equipped with an algebraic morphism \(f\colon Z\to X\), \(\mathrm{sh}_{X}^{H}(f(Z))\) is a point if and only if the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H\) is finite._
Since a fibration is determined by the equivalence relation it induces on the source, a Shafarevich morphism for the pair \((X,H)\), if existing, is unique up to unique isomorphism.
If \(\Gamma\) is a group and \(\rho\colon\pi_{1}(X)\to\Gamma\) is a homomorphism, then a Shafarevich morphism \(\mathrm{sh}_{X}^{\rho}\colon X\to\mathrm{Sh}_{X}^{\rho}\) for the pair \((X,\rho)\) is by definition a Shafarevich morphism for the pair \((X,\ker\rho)\). Similarly, if \(\Sigma=\{\mathcal{L}_{i}\}_{i\in I}\) is a collection of complex local systems on \(X\), then a \(\Sigma\)-Shafarevich morphism \(\mathrm{sh}_{X}^{\Sigma}\colon X\to\mathrm{Sh}_{X}^{\Sigma}\) is by definition a Shafarevich morphism for the pair \((X,H)\), where \(H\) is the intersection of the kernels of the monodromy representations of the \(\mathcal{L}_{i}\)'s.
**Theorem 4.2**.: _Let \(X\) be a connected normal complex algebraic variety. Let \(H\subset\pi_{1}(X)\) be a normal subgroup and \(\tilde{X}^{H}\to X\) the corresponding Galois etale cover. The following assertions are equivalent:_
1. _There exists a normal complex space_ \(\tilde{\mathrm{Sh}}_{X}^{H}\) _with no positive-dimensional compact analytic subspace and a holomorphic fibration_ \(\tilde{\mathrm{sh}}_{X}^{H}\colon\tilde{X}^{H}\to\tilde{\mathrm{Sh}}_{X}^{H}\)_._
2. _There exists a connected complex analytic space_ \(\mathrm{Sh}_{X}^{H}\) _and a holomorphic fibration_ \(\mathrm{sh}_{X}^{H}\colon X\to\mathrm{Sh}_{X}^{H}\) _such that the following property holds:_ _For any connected proper complex algebraic variety_ \(Z\) _equipped with an algebraic morphism_ \(f\colon Z\to X\)_,_ \(\mathrm{sh}_{X}^{H}(f(Z))\) _is a point if and only if the image of_ \(\pi_{1}(Z)\) _in_ \(\pi_{1}(X)/H\) _is finite._
3. _There exists a connected complex analytic space_ \(\mathrm{Sh}_{X}^{H}\) _and a holomorphic fibration_ \(\mathrm{sh}_{X}^{H}\colon X\to\mathrm{Sh}_{X}^{H}\) _such that the following properties hold:_ _For every (connected) fiber_ \(F\) _of_ \(\mathrm{sh}_{X}^{H}\)_, the image of_ \(\pi_{1}(F)\) _in_ \(\pi_{1}(X)/H\) _is finite; for every irreducible closed complex algebraic subvariety_ \(Z\subset X\) _with normalization_ \(\bar{Z}\to Z\)_, if the image of_ \(\pi_{1}(\bar{Z})\) _in_ \(\pi_{1}(X)/H\) _is finite, then_ \(\mathrm{sh}_{X}^{H}(Z)\) _is a point._
4. _There exists a collection of holomorphic maps_ \(\varphi_{\alpha}\colon\tilde{X}^{H}\to S_{\alpha}\) _such that the connected components of the fibers of the map_ \(\varphi:=\prod_{\alpha}\varphi_{\alpha}\colon\tilde{X}^{H}\to\prod_{\alpha}S_{\alpha}\) _are compact, and such that any connected compact analytic subspace of_ \(\tilde{X}^{H}\) _is contained in a fiber of_ \(\varphi\)_._
The maps in 1, 2 and 3, if existing, are unique (up to unique isomorphism).
Proof.: Let \(\rho\colon\pi_{1}(X)\to\Gamma:=\pi_{1}(X)/H\) be the quotient map.
The assertion 1 clearly implies the assertion 4. Conversely, assuming 4, let \(\psi\colon\tilde{X}^{H}\to S\) be the Stein factorisation of the collection of holomorphic maps \(\varphi_{\alpha}\colon\tilde{X}^{H}\to S_{\alpha}\), cf. Theorem 2.1. Since \(\tilde{X}^{H}\) is normal and \(\psi\) is a proper surjective holomorphic map with connected fibers, \(\psi\) is a holomorphic fibration. If \(T\subset S\) is a connected compact analytic subspace, then \(\psi^{-1}(T)\) is a connected compact analytic subspace of \(\tilde{X}^{H}\). Therefore \(\psi^{-1}(T)\) is contained in a fiber of \(\varphi\), hence in a fiber of \(\psi\), so that \(T\) is a point. This shows 1.
\(1\Rightarrow 2\). By unicity of the map \(\tilde{\mathrm{sh}}_{X}^{H}\), the Galois action of \(\Gamma\) on \(\tilde{X}^{H}\) descends to an action on \(\tilde{\mathrm{Sh}}_{X}^{H}\) which is still properly discontinuous. We claim that the map induced on the quotients \(\mathrm{sh}_{X}^{H}\colon X\to\mathrm{Sh}_{X}^{H}\) is the \(H\)-Shafarevich morphism.
First note that \(\mathrm{sh}_{X}^{H}\) is proper: if \(K\subset\mathrm{Sh}_{X}^{H}\) is a sufficiently small compact subset that lifts to \(\tilde{\mathrm{Sh}}_{X}^{H}\), then \(\left(\mathrm{sh}_{X}^{H}\right)^{-1}(K)\) is the image of the compact set \(\left(\tilde{\operatorname{sh}}_{X}^{H}\right)^{-1}(K)\), hence it is compact. In particular, the fibers of \(\operatorname{sh}_{X}^{H}\) are images of the fibers of \(\tilde{\operatorname{sh}}_{X}^{H}\), hence they are connected.
Let \(Z\) be a connected proper complex algebraic variety equipped with an algebraic map \(f\colon Z\to X\). If the image of \(\pi_{1}(Z)\) in \(\Gamma\) is finite, then the connected components of the analytic space \(Z\times_{X}\tilde{X}^{H}\) are compact. Therefore, each of them is sent to a point in \(\tilde{\operatorname{Sh}}_{X}^{H}\), hence the image of \(Z\) in \(\operatorname{Sh}_{X}^{H}\) is a point.
Conversely, assume that the composite map \(\operatorname{sh}_{X}^{H}\circ f\colon Z\to\operatorname{Sh}_{X}^{H}\) is constant, or equivalently that \(f(Z)\) is contained in a fiber \(F\) of \(\operatorname{sh}_{X}^{H}\). Since \(\tilde{\operatorname{sh}}_{X}^{H}\) is a fibration, the preimage of \(F\) in \(\tilde{X}^{H}\) is a disjoint union of connected compact analytic subspaces. Therefore, the image of \(\pi_{1}(F)\) in \(\Gamma\) is finite, hence so is the image of \(\pi_{1}(Z)\) in \(\Gamma\).
\(2\Rightarrow 3\) is immediate.
\(3\Rightarrow 1\). Let \(F\) be a (connected) fiber of \(\operatorname{sh}_{X}^{H}\). Since the image of \(\pi_{1}(F)\) in \(\Gamma\) is finite, the connected components of \(F\times_{X}\tilde{X}^{H}\) are compact. A fortiori, the connected components of the fibers of the composite map \(\tilde{X}^{H}\to X\to\operatorname{Sh}_{X}^{H}\) are compact. Define \(\tilde{\operatorname{sh}}_{X}^{H}\colon\tilde{X}^{H}\to\tilde{\operatorname{ Sh}}_{X}^{H}\) as its Stein factorization, cf. Theorem 2.1. To prove that \(\tilde{\operatorname{Sh}}_{X}^{H}\) has no positive-dimensional compact analytic subspace, it is equivalent to prove that every connected compact analytic subspace \(Z\subset\tilde{X}^{H}\) is contracted by \(\tilde{\operatorname{sh}}_{X}^{H}\). One may assume that \(Z\) is irreducible. Let \(\bar{Z}\to Z\) be its normalization, so that \(\bar{Z}\) is also a compact irreducible complex analytic space. Let \(Y\subset X\) be the image of \(Z\) by the map \(\tilde{X}^{H}\to X\). It is a compact analytic subspace of \(X\), hence it is an algebraic subvariety by applying Chow's theorem to a compactification of \(X\). Let \(\bar{Y}\to Y\) be the normalization of \(Y\). Since the map \(Z\to Y\) is surjective, the universal property of the normalization implies that the composite map \(\bar{Z}\to Z\to Y\) factorizes through \(\bar{Y}\). Therefore, one has a commutative diagram:
The compact irreducible analytic space \(\bar{Z}\) surjects onto one of the irreducible (= connected) components of the normal analytic space \(\bar{Y}\times_{Y}\tilde{X}^{H}\). Therefore, the image of \(\pi_{1}(\bar{Y})\) in \(\Gamma\) is finite, so that \(\operatorname{sh}_{X}^{H}(\bar{Y})\) is a point and \(\tilde{\operatorname{sh}}_{X}^{H}(Z)\) is a point.
**Proposition 4.3**.: _Let \(X\) be a connected normal complex algebraic variety and \(H\subset\pi_{1}(X)\) be a normal subgroup. Assume that \(\operatorname{sh}_{X}^{H}\colon X\to\operatorname{Sh}_{X}^{H}\) is a Shafarevich morphism for the pair \((X,H)\). Let \(Y\) be a connected normal complex algebraic variety equipped with a proper morphism \(f\colon Y\to X\), and \(f_{*}^{-1}(H)\subset\pi_{1}(Y)\) be the preimage of \(H\) by the induced homomorphism \(f_{*}\colon\pi_{1}(Y)\to\pi_{1}(X)\). Then the Stein factorization of the composition of the maps \(Y\to X\) and \(X\to\operatorname{Sh}_{X}^{H}\) is a Shafarevich morphism for the pair \((Y,f_{*}^{-1}(H))\)._
Proof.: Direct consequence of the definitions.
**Proposition 4.4**.: _Let \(X\) be a connected normal complex algebraic variety and \(H\subset\pi_{1}(X)\) be a normal subgroup. Let \(\pi\colon X^{\prime}\to X\) be a finite etale morphism, and set \(H^{\prime}:=\pi_{*}^{-1}(H)\subset\pi_{1}(X^{\prime})\). Then, the \((X,H)\)-Shafarevich morphism exists if and only if the \((X^{\prime},H^{\prime})\)-Shafarevich morphism exists._
Proof.: If the Shafarevich morphism for the pair \((X,H)\) exists, then the Stein factorization of the composite morphism \(\operatorname{sh}_{X}^{H}\circ\pi\colon X^{\prime}\to\operatorname{Sh}_{X}^{H}\) is the Shafarevich morphism of the pair \((X^{\prime},H^{\prime})\).
Let us now prove the converse. By going to a further finite etale cover of \(X^{\prime}\) and using the first part of the proposition, one can assume that the finite etale cover \(\pi\colon X^{\prime}\to X\) is Galois, with Galois group \(\Gamma\). The Shafarevich morphism for the pair \((X^{\prime},H^{\prime})\) is \(\Gamma\)-equivariant, hence by quotienting by \(\Gamma\) one gets a proper holomorphic map \(X\to\Gamma\setminus\operatorname{Sh}_{X^{\prime}}^{H^{\prime}}\). This map is easily seen to be the Shafarevich morphism for the pair \((X,H)\).
**Proposition 4.5**.: _Let \(Y\to X\) be a surjective proper morphism with connected fibres between normal complex algebraic varieties. Let \(H\subset\pi_{1}(X)\) be a normal subgroup. Let \(G\subset\pi_{1}(Y)\) be the preimage of \(H\) by the homomorphism of groups \(\pi_{1}(Y)\to\pi_{1}(X)\). Assume that the \(G\)-Shafarevich morphism \(\operatorname{sh}_{Y}^{G}\colon Y\to\operatorname{Sh}_{Y}^{G}\) exists. Then there is a (unique) factorization_
_and the induced morphism \(X\to\operatorname{Sh}_{Y}^{G}\) is the \(H\)-Shafarevich morphism._
In particular, this result shows that to prove the existence of Shafarevich morphisms, one needs only to consider the case where the underlying variety is smooth quasi-projective.
Proof.: Let \(\tilde{X}^{H}\to X\) and \(\tilde{Y}^{G}\to Y\) be the Galois etale covers corresponding to \(H\) and \(G\) respectively. Since \(Y\to X\) is a fibration between normal varieties, the induced homomorphism between their fundamental groups is surjective, hence the following commutative diagram is cartesian:
In particular, the map \(\tilde{Y}^{G}\to\tilde{X}^{H}\) is a holomorphic fibration. Thanks to Theorem 4.2, the existence of the \(G\)-Shafarevich morphism is equivalent to the existence of a normal complex space \(\tilde{\operatorname{Sh}}^{G}_{Y}\) with no positive-dimensional compact analytic subspace and a holomorphic fibration \(\tilde{\operatorname{sh}}^{G}_{Y}\colon\tilde{Y}^{G}\to\tilde{\operatorname{Sh}}^{G}_{Y}\). Since the map \(\tilde{Y}^{G}\to\tilde{X}^{H}\) is a holomorphic fibration, its fibers are contracted by \(\tilde{\operatorname{sh}}^{G}_{Y}\). Since \(\tilde{X}^{H}\) is normal, we obtain a factorization
Applying again Theorem 4.2, the existence of the holomorphic fibration \(\tilde{X}^{H}\to\tilde{\operatorname{Sh}}^{G}_{Y}\) implies the existence of the \(H\)-Shafarevich morphism. Moreover, one gets the diagram from the statement by quotienting by the equivariant action of \(\pi_{1}(X)/H=\pi_{1}(Y)/G\).
**Proposition 4.6**.: _Let \(\{\rho_{i}\colon\pi_{1}(X)\to\Gamma_{i}\}_{i\in I}\) be a finite collection of homomorphisms of groups with torsion-free image on a smooth complex algebraic variety \(X\). Assume that the homomorphism \(\prod_{i\in I}\rho_{i}\colon\pi_{1}(X)\to\prod_{i\in I}\Gamma_{i}\) is maximal on \(X\). Assume moreover that for every \(i\), there exists a partial smooth compactification \(X_{i}\) of \(X\) to which \(\rho_{i}\) extends and such that the Shafarevich morphism for the pair \((X_{i},\rho_{i})\) exist. Then the Shafarevich morphism for the pair \((X,\prod_{i\in I}\rho_{i})\) exists._
Proof.: First note that \(\prod_{i\in I}\operatorname{sh}^{\rho_{i}}_{X_{i}}\colon\prod_{i\in I}X_{i}\to\prod_{i\in I}\operatorname{Sh}^{\rho_{i}}_{X_{i}}\) is the Shafarevich morphism for the representation \(\prod_{i\in I}\rho_{i}\colon\pi_{1}(\prod_{i\in I}X_{i})\to\prod_{i\in I}\Gamma_{i}\). On the other hand, since \(\prod_{i\in I}\rho_{i}\) is maximal on \(X\), one has \(\cap_{i\in I}X_{i}=X\). In particular, the diagonal embedding \(X\to\prod_{i\in I}X_{i}\) is proper. By applying Proposition 4.3, it follows that the Shafarevich morphism for the pair \((X,\prod_{i\in I}\rho_{i})\) exists and is equal to the Stein factorization of the composition of the maps \(X\to\prod_{i\in I}X_{i}\) and \(\prod_{i\in I}\operatorname{sh}^{\rho_{i}}_{X_{i}}\).
## 5 Katzarkov-Zuo reductions
**Definition 5.1**.: _Let \(X\) be a connected complex algebraic variety. Let \(G\) be an algebraic group defined over a non-archimedean local field \(K\). Let \(\rho\colon\pi_{1}(X)\to G(K)\) be a representation whose image is Zariski-dense in a reductive group. A Katzarkov-Zuo reduction of the pair \((X,\rho)\) is an algebraic morphism \(\sigma_{X}^{\rho}\colon X\to S_{X}^{\rho}\) such that the following property holds: For any connected complex algebraic variety \(Z\) and any algebraic map \(f\colon Z\to X\), the composite map \(\sigma_{X}^{\rho}\circ f\colon Z\to S_{X}^{\rho}\) is constant if, and only if, the representation \(f^{-1}\rho\) has bounded image._
The following result is proved in [10, Proposition 1.4.7] in the proper case and in [1] and [1, Theorem 0.6] in the general case.
**Theorem 5.2**.: _Let \(X\) be a connected smooth quasi-projective complex algebraic variety. Let \(G\) be an algebraic group defined over a non-archimedean local field \(K\). Let \(\rho\colon\pi_{1}(X)\to G(K)\) be a representation whose image is Zariski-dense in a reductive group. Then the pair \((X,\rho)\) admits a Katzarkov-Zuo reduction._
**Remark 5.3**.: _In the non-proper case, the statement of [1, Theorem 0.6] is slightly weaker than the statement of Theorem 5.2, since \(Z\) is assumed to be normal in loc. cit.. However, the proof of Theorem 5.2 is essentially contained in the proof of Claim 5.21 in [1]. Namely, if \(F\) is a connected component of a fiber of \(\sigma_{X}^{\rho}\), then the arguments in loc. cit. show that every connected component of \(F\times_{X}\tilde{X}^{\rho}\) is sent to a point by the pluriharmonic map with values in the building \(\Delta(G)\) associated to \(\rho\). This point is fixed by the image of \(\pi_{1}(F)\) in \(G(K)\). Since the stabilizer of a point in an Euclidean Bruhat-Tits building is compact, this shows that the image of \(\pi_{1}(F)\) in \(G(K)\) is bounded._
**Proposition 5.4**.: _Let \(X\) be a connected normal complex algebraic variety. Let \(G\) be an algebraic group defined over a non-archimedean local field \(K\). Let \(\rho\colon\pi_{1}(X)\to G(K)\) be a representation whose image is Zariski-dense in a reductive group. Then the pair \((X,\rho)\) admits a Katzarkov-Zuo reduction._
Proof.: Let \(\nu\colon X^{\prime}\to X\) be a surjective proper birational morphism from a connected smooth quasi-projective complex algebraic variety \(X^{\prime}\). In particular, \(\nu\) has connected fibres. Let \(\rho^{\prime}:=\nu^{-1}(\rho)\). Thanks to Theorem 5.2, the pair \((X^{\prime},\rho^{\prime})\) admits a Katzarkov-Zuo reduction \(\sigma_{X^{\prime}}^{\rho^{\prime}}\colon X^{\prime}\to S_{X^{\prime}}^{\rho^{\prime}}\). Let us show that \(\sigma_{X^{\prime}}^{\rho^{\prime}}\) factorizes through \(\nu\). Since \(X\) is normal, it is sufficient to prove
that \(\sigma^{\rho^{\prime}}_{X^{\prime}}\) is constant on every (necessarily connected) fiber \(F\) of \(\nu\). But this follows from the definition of a Katzarkov-Zuo reduction, since the image of \(\pi_{1}(F)\) by \(\rho^{\prime}\) is trivial.
## 6 Moduli spaces of representations
### Definitions
Let \(\Gamma\) be a finitely generated group and \(G\) a reductive \(\mathbb{Q}\)-algebraic group. We denote by \(R(\Gamma,G)\) the \(\mathbb{Q}\)-affine scheme of finite type that represents the functor that associates to any \(\mathbb{Q}\)-algebra \(A\) the set of group homomorphisms from \(\Gamma\) to \(G(A)\). We denote by \(M(\Gamma,G)\) the affine scheme corresponding to the \(\mathbb{Q}\)-algebra \(\mathbb{Q}[R(\Gamma,G)]^{G}\). It is a \(\mathbb{Q}\)-scheme of finite type.
If \(k\supset\mathbb{Q}\) is an algebraically closed field, then a representation \(\rho\colon\Gamma\to G(k)\) is called reductive if the Zariski-closure of its image is a reductive subgroup of \(G(k)\). The orbit under conjugation of a representation \(\rho\colon\Gamma\to G(\mathbb{C})\) is closed if and only if \(\rho\) is reductive. On the other hand, every fiber of the projection \(R(\Gamma,G)\to M(\Gamma,G)\) contains a unique closed orbit. Therefore, the \(\mathbb{C}\)-points of \(M(\Gamma,G)\) are naturally in bijection with the elements in \(R(\Gamma,G)(\mathbb{C})\) corresponding to reductive representations. See [10] and [11] for details.
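To illustrate the last point with a standard example (included here for illustration only): for \(\Gamma=\mathbb{Z}\) and \(G=\operatorname{SL}_{2}\), the orbit of the non-reductive representation \(1\mapsto\left(\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\right)\) is not closed, since conjugating by \(\operatorname{diag}(t,t^{-1})\) yields \(\left(\begin{smallmatrix}1&t^{2}\\ 0&1\end{smallmatrix}\right)\), which tends to the trivial representation as \(t\to 0\). Accordingly, both representations define the same point of \(M(\mathbb{Z},\operatorname{SL}_{2})\), the trivial representation being the unique closed orbit in that fiber.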
### The \(\operatorname{SL}_{n}\)-character variety
Let \(\Gamma\) be a finitely generated group and \(n\) a positive integer. For any \(\gamma\in\Gamma\), we denote by \(tr_{\gamma}\) the character function on \(R(\Gamma,\operatorname{SL}_{n})\) defined by \(tr_{\gamma}(\rho):=tr(\rho(\gamma))\). These character functions are invariant under the \(\operatorname{SL}_{n}\)-action by conjugation. In fact, they generate the algebra of invariant functions:
**Proposition 6.1**.: _The \(\mathbb{Q}\)-algebra \(\mathbb{Q}[R(\Gamma,\operatorname{SL}_{n})]^{\operatorname{SL}_{n}}\) is generated by the functions \(tr_{\gamma},\gamma\in\Gamma\)._
This result is an easy consequence of a well-known result of Procesi. We give the details for the reader's convenience.
Proof.: Since the group \(\Gamma\) is finitely generated, the affine \(\mathbb{Q}\)-scheme \(R(\Gamma,\operatorname{SL}_{n})\) can be realized as a closed \(\mathbb{Q}\)-subscheme of \((\operatorname{SL}_{n})^{r}\) for some positive integer \(r\). If \(M_{n}\) denote the affine \(\mathbb{Q}\)-scheme of \(n\times n\) square matrices, then the group \(\operatorname{SL}_{n}\) acts by conjugation on the affine \(\mathbb{Q}\)-scheme \((M_{n})^{r}\), and this action preserves the closed subscheme \((\operatorname{SL}_{n})^{r}\subset(M_{n})^{r}\). Therefore, the composition of
the \(\mathbb{Q}\)-algebra homomorphisms \(\mathbb{Q}[(M_{n})^{r}]\to\mathbb{Q}[(\mathrm{SL}_{n})^{r}]\to\mathbb{Q}[R(\Gamma, \mathrm{SL}_{n})]\) is surjective. Since \(\mathrm{SL}_{n}\) is reductive, the induced \(\mathbb{Q}\)-algebra homomorphism \(\mathbb{Q}[(M_{n})^{r}]^{\mathrm{SL}_{n}}\to\mathbb{Q}[R(\Gamma,\mathrm{SL}_{n} )]^{\mathrm{SL}_{n}}\) is surjective too, cf. [10, Chapter 1 SS2]. Finally, our claim is a consequence of the following result of Procesi [11]: the \(\mathbb{Q}\)-algebra \(\mathbb{Q}[(M_{n})^{r}]^{\mathrm{SL}_{n}}\) is generated by the functions that associate to an r-tuple of matrices the trace of a monomial in these matrices.
Since the \(\mathbb{Q}\)-algebra \(\mathbb{Q}[R(\Gamma,\mathrm{SL}_{n})]^{\mathrm{SL}_{n}}\) is finitely generated, one can choose finitely many elements in \(\Gamma\) such that the associated character functions yield a closed embedding \(M_{B}(\Gamma,\mathrm{SL}_{n})\hookrightarrow\mathbb{A}_{\mathbb{Q}}^{r}\) defined over \(\mathbb{Q}\).
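For instance — a classical example, not needed in the sequel — when \(\Gamma\) is the free group on two generators \(a\) and \(b\), a theorem going back to Fricke and Vogt asserts that the three character functions \(tr_{a}\), \(tr_{b}\) and \(tr_{ab}\) already suffice, and induce an isomorphism

\[M_{B}(\Gamma,\operatorname{SL}_{2})\stackrel{{\sim}}{{\to}}\mathbb{A}_{\mathbb{Q}}^{3},\qquad[\rho]\mapsto\big{(}tr(\rho(a)),\,tr(\rho(b)),\,tr(\rho(ab))\big{)}.\]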
### Representations with bounded image
Fix a prime number \(p\). We say that an element of \(M_{B}(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})\) has bounded image if it corresponds to a conjugacy class of (reductive) representations \(\Gamma\to\mathrm{SL}_{n}(\bar{\mathbb{Q}}_{p})\) with bounded image with respect to the topology induced by the topology of \(\bar{\mathbb{Q}}_{p}\).
Since \(\Gamma\) is finitely generated, for any representation \(\rho\colon\Gamma\to\mathrm{SL}_{n}(\bar{\mathbb{Q}}_{p})\), there exists a finite extension \(K\) of \(\mathbb{Q}_{p}\) such that the image of \(\rho\) is contained in \(\mathrm{SL}_{n}(K)\). Then \(\rho\) has bounded image if and only if it is conjugated to a subgroup of \(\mathrm{SL}_{n}(\mathcal{O}_{K})\). In that case, the character functions evaluated at \(\rho\) take their values in \(\mathcal{O}_{K}\).
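For instance (an illustration of the boundedness criterion, not used later), the reductive representation \(\rho\colon\mathbb{Z}\to\mathrm{SL}_{2}(\mathbb{Q}_{p})\) sending \(1\) to \(\mathrm{diag}(p,p^{-1})\) is unbounded, and indeed \(tr_{1}(\rho)=p+p^{-1}\) does not lie in \(\mathbb{Z}_{p}\), since \(v_{p}(p+p^{-1})=-1\). The following definition isolates the representations that pass this trace test.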
**Definition 6.2**.: _We denote by \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) the subset of \(M_{B}(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})\) corresponding to (conjugacy classes of) reductive representations on which all character functions take values in \(\bar{\mathbb{Z}}_{p}\)._
**Proposition 6.3**.: _Fix a prime number \(p\). Then:_
1. \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) _is a closed subset of_ \(M_{B}(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})\) _with respect to the topology induced by_ \(\bar{\mathbb{Q}}_{p}\)_,_
2. _The intersection of_ \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) _with_ \(M_{B}(\Gamma,\mathrm{SL}_{n})(K)\) _is compact for every finite extension_ \(K\) _of_ \(\mathbb{Q}_{p}\)_,_
3. \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) _contains the points that correspond to the conjugacy classes of reductive representations_ \(\Gamma\to\mathrm{SL}_{n}(\bar{\mathbb{Q}}_{p})\) _with bounded image._
**Remark 6.4**.: _One can show that an absolutely irreducible representation belongs to \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) if and only if it has bounded image._
Proof.: Since the character functions are polynomial, the induced functions \(M_{B}(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})\to\bar{\mathbb{Q}}_{p}\) are continuous. The first assertion follows immediately, since \(\bar{\mathbb{Z}}_{p}\) is closed in \(\bar{\mathbb{Q}}_{p}\).
For the second assertion, choose finitely many elements in \(\Gamma\) such that the associated character functions yield a closed embedding \(M_{B}(\Gamma,\mathrm{SL}_{n})\hookrightarrow\mathbb{A}_{\mathbb{Q}}^{r}\) defined over \(\mathbb{Q}\). If \(K\) is a finite extension of \(\mathbb{Q}_{p}\) and \(\mathcal{O}_{K}\) is its ring of integers, then the intersection of \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) with \(M_{B}(\Gamma,\mathrm{SL}_{n})(K)\) coincides with the preimage of \(\mathbb{A}_{\mathbb{Q}}^{r}(\mathcal{O}_{K})\); therefore it is compact.
The last assertion was proved above.
### Applications
Let \(k\) be a field and \(X\) be a scheme of finite type over \(k\). A subset \(\Sigma\) of \(X\) is called constructible (with respect to the Zariski topology) if it is in the Boolean algebra generated by the closed subsets; or equivalently if \(\Sigma\) is a disjoint finite union of locally closed subsets. For any field extension \(l\supset k\), we denote by \(\Sigma(l)\) the subset of \(X(l)\) that corresponds to \(k\)-morphisms \(\mathrm{Spec}(l)\to X\) whose image belongs to \(\Sigma\). If \(X\to Y\) is a \(k\)-morphism between two \(k\)-schemes of finite type and \(\Sigma\) is a constructible subset of \(X\), then by a classical result of Chevalley the image of \(\Sigma\) is a constructible subset of \(Y\).
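A standard example: the image of the \(k\)-morphism \(\mathbb{A}^{2}\to\mathbb{A}^{2}\), \((x,y)\mapsto(x,xy)\), is the constructible set \(\{x\neq 0\}\cup\{(0,0)\}\), which is neither open nor closed in \(\mathbb{A}^{2}\).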
**Lemma 6.5**.: _Let \(X\) be a scheme of finite type over \(\mathbb{Q}\). Let \(\Sigma\) be a constructible subset of \(X\). If \(\Sigma\) is Zariski-dense in \(X\), then the set \(\Sigma(\bar{\mathbb{Q}})\) is dense in \(X(\bar{\mathbb{Q}}_{p})\) for the p-adic topology._
Proof.: We will prove that \(\Sigma(\bar{\mathbb{Q}})\) is even dense in \(X(\mathbb{C}_{p})\) for the p-adic topology. If \(U\subset X\) is a smooth and dense open subset contained in \(\Sigma\), then \(U(\mathbb{C}_{p})\) is dense in \(X(\mathbb{C}_{p})\) for the p-adic topology thanks to [1, Proposition 3.4.4]. Therefore, it is sufficient to prove that there exists such an \(U\) such that \(U(\bar{\mathbb{Q}})\) is dense in \(U(\mathbb{C}_{p})\) for the p-adic topology. By working on each connected component of \(U\), one reduces to the case where \(U\) is smooth and integral. Moreover, up to shrinking \(U\), one can assume that there exists a finite etale morphism \(U\to V\) onto an open subset \(V\) of \(\mathbb{A}^{n}\) for some \(n\). Since \(\bar{\mathbb{Q}}\) is dense in \(\mathbb{C}_{p}\) for the \(p\)-adic topology, it is clear that \(V(\bar{\mathbb{Q}})\) is dense in \(V(\mathbb{C}_{p})\) for the p-adic topology, from which it follows that \(U(\bar{\mathbb{Q}})\) is dense in \(U(\mathbb{C}_{p})\) for the p-adic topology.
**Lemma 6.6**.: _Let \(p\) be a prime number and \(M\) be an affine scheme of finite type over \(\mathbb{Q}_{p}\). If the set \(M(K)\) equipped with its canonical p-adic topology is compact for every finite extension \(K\) of \(\mathbb{Q}_{p}\), then \(M\) has dimension zero._
Proof.: Since the irreducible components of \(M^{red}\) satisfy the assumptions of the lemma, it is sufficient to treat the case where \(M\) is integral. In that case, by Noether normalization lemma there exists a finite surjective morphism \(f\colon M\to\mathbb{A}_{\mathbb{Q}_{p}}^{n}\) for some non-negative integer \(n\). If \(f\) has degree \(d\), then every element of \(\mathbb{A}_{\mathbb{Q}_{p}}^{n}(\mathbb{Q}_{p})\) is the image of an element of \(M(K)\) for some extension \(K\) of \(\mathbb{Q}_{p}\) of degree \(\leq d\). Since there are finitely many extensions of \(\mathbb{Q}_{p}\) of degree \(\leq d\) in a fixed algebraic closure of \(\mathbb{Q}_{p}\), by taking their compositum one gets that \(\mathbb{A}_{\mathbb{Q}_{p}}^{n}(\mathbb{Q}_{p})\) is contained in the image of \(M(L)\) for a finite extension \(L\) of \(\mathbb{Q}_{p}\). Since \(M(L)\) is compact by assumption and \(f\) is proper, its image \(f(M(L))\) is a compact subset of \(\mathbb{A}_{\mathbb{Q}_{p}}^{n}(L)\). But \(\mathbb{A}_{\mathbb{Q}_{p}}^{n}(\mathbb{Q}_{p})\) is closed in \(\mathbb{A}_{\mathbb{Q}_{p}}^{n}(L)\), so that \(\mathbb{A}_{\mathbb{Q}_{p}}^{n}(\mathbb{Q}_{p})=\mathbb{A}_{\mathbb{Q}_{p}}^{ n}(\mathbb{Q}_{p})\cap f(M(L))\) must be compact and necessarily \(n=0\).
**Lemma 6.7**.: _Let \(\Gamma\) be a finitely generated group, \(n\) a positive integer and \(\Sigma\) a \(\mathbb{Q}\)-Zariski constructible subset of \(M_{B}(\Gamma,\mathrm{SL}_{n})\). Assume that for a prime number \(p\), the set \(\Sigma(\bar{\mathbb{Q}})\) is contained in \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\). Then \(\Sigma\) consists of finitely many points._
Proof.: Let \(M\) be the \(\mathbb{Q}\)-Zariski closure of \(\Sigma\) in \(M_{B}(\Gamma,\mathrm{SL}_{n})\). Since \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\) is closed in \(M_{B}(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})\), it follows from Lemma 6.5 that \(M(\bar{\mathbb{Q}}_{p})\) is contained in \(M(\Gamma,\mathrm{SL}_{n})(\bar{\mathbb{Q}}_{p})^{o}\). Therefore, thanks to Proposition 6.3, \(M(K)\) is compact for every finite extension \(K\) of \(\mathbb{Q}_{p}\), and the conclusion follows from Lemma 6.6.
**Theorem 6.8**.: _Let \(\Gamma\) be a finitely generated group, \(G\) be a reductive \(\mathbb{Q}\)-algebraic group and \(\Sigma\) be a \(\mathbb{Q}\)-Zariski constructible subset of \(M_{B}(\Gamma,G)\). Assume that for a prime number \(p\), all the conjugacy classes of representations corresponding to elements in the set \(\Sigma(\bar{\mathbb{Q}})\) have bounded image when seen as representations with values in \(G(\bar{\mathbb{Q}}_{p})\). Then \(\Sigma\) consists of finitely many points._
Proof.: Since \(G\) can be realized as a closed subgroup of \(\mathrm{GL}_{n}\left/\mathbb{Q}\right.\) for some integer \(n\) and since there is an embedding \(\mathrm{GL}_{n}\subset\mathrm{SL}_{n+1}\), \(G\) is a closed subgroup of \(\mathrm{SL}_{n+1}\left/\mathbb{Q}\right.\). The conjugacy class in \(M_{B}(\Gamma,G)(\bar{\mathbb{Q}}_{p})\) of a representation \(\Gamma\to G(\bar{\mathbb{Q}}_{p})\) with bounded image is sent to the class in \(M_{B}(\Gamma,\mathrm{SL}_{n+1})(\bar{\mathbb{Q}}_{p})\) of a representation \(\Gamma\to\mathrm{SL}_{n+1}(\bar{\mathbb{Q}}_{p})\) with bounded image through the natural map \(M_{B}(\Gamma,G)(\bar{\mathbb{Q}}_{p})\to M_{B}(\Gamma,\mathrm{SL}_{n+1})(\bar {\mathbb{Q}}_{p})\). Since this map is finite by Proposition 6.9, it is sufficient to consider the case where \(G=\mathrm{SL}_{n+1}\). But in that case, the assumptions imply that \(\Sigma(\bar{\mathbb{Q}})\) is contained in \(M(\Gamma,\mathrm{SL}_{n+1})(\bar{\mathbb{Q}}_{p})^{o}\) and Lemma 6.7 gives the conclusion.
**Proposition 6.9** (Simpson [25, Corollary 9.18]).: _Let \(\Gamma\) be a finitely generated group and \(G\to H\) be a homomorphism of complex reductive groups with finite kernel. Then the natural induced morphism \(M_{B}(\Gamma,G)\to M_{B}(\Gamma,H)\) is finite._
## 7 The Betti moduli space
Let \(X\) be a connected complex algebraic variety. For any reductive \(\mathbb{Q}\)-algebraic group \(G\) and any \(x\in X\), we set \(M_{B}(X,x,G):=M(\pi_{1}(X,x),G)\). This is an affine \(\mathbb{Q}\)-scheme of finite type. For another choice of base-point \(x^{\prime}\in X\), the \(\mathbb{Q}\)-scheme \(M_{B}(X,x^{\prime},G)\) is canonically isomorphic to \(M_{B}(X,x,G)\), hence we won't specify the base-point in the sequel. For any positive integer \(n\), we let \(M_{B}(X,n):=M_{B}(X,\mathrm{GL}_{n})\).
When \(X\) is smooth and quasi-projective, it follows from the Corlette-Simpson-Mochizuki non-abelian Hodge correspondence that \(M_{B}(X,n)(\mathbb{C})\) is equipped with a functorial continuous action of \(\mathbb{C}^{*}\). The points fixed by this action are exactly the isomorphism classes of complex local systems underlying a complex polarized variation of Hodge structure (\(\mathbb{C}\)-VHS). See [25, 19].
**Theorem 7.1**.: _Let \(X\) be a connected smooth quasi-projective algebraic variety and \(n\) a positive integer. Let \(M\) be a closed algebraic subset of \(M_{B}(X,n)\). If its set of complex points \(M(\mathbb{C})\) is \(\mathbb{C}^{*}\)-invariant, then it contains a point that corresponds to a complex local system underlying a \(\mathbb{C}\)-VHS._
In particular, every rigid semisimple local system underlies a \(\mathbb{C}\)-VHS.
Proof.: Thanks to Mochizuki, every complex local system can be deformed continuously to a \(\mathbb{C}\)-VHS [19, Theorem 10.5]. More precisely, he proves the following: if \(\rho\in M_{B}(X,n)(\mathbb{C})\), then \(\Delta^{*}\cdot\rho\) is a relatively compact subset of \(M_{B}(X,n)(\mathbb{C})\), so that there exists a sequence \(t_{i}\in\Delta^{*}\) converging to zero such that \(\rho_{1}=\lim_{i\to\infty}t_{i}\cdot\rho\) exists in \(M_{B}(X,n)(\mathbb{C})\). Moreover, after iterating this procedure a finite number of times, one gets a representation which is fixed by the \(\mathbb{C}^{*}\)-action. Clearly, if one starts with \(\rho\in M(\mathbb{C})\), the constructed \(\mathbb{C}\)-VHS belongs to \(M(\mathbb{C})\) too.
**Proposition 7.2**.: _Let \(X\) be a connected algebraic variety and \(G\) a reductive \(\mathbb{Q}\)-group. Let \(\Sigma\) be a subset of \(M_{B}(X,G)(\mathbb{C})\). Let \(M\subset M_{B}(X,G)\) be the \(\mathbb{Q}\)-Zariski closure of \(\Sigma\), i.e. the smallest closed \(\mathbb{Q}\)-subscheme of \(M_{B}(X,G)\)
_whose set of complex points contains \(\Sigma\). Then, the pair \((X,\Sigma)\) is maximal if and only if the pair \((X,M)\) is maximal._
Proof.: Let \(\bar{X}\) be a connected complex algebraic variety equipped with an open immersion \(X\to\bar{X}\). Let \(v\colon\Delta\to\bar{X}\) be a holomorphic map such that \(v(\Delta^{*})\subset X\) and \(v(0)\notin X\). Let \(x:=v(\frac{1}{2})\). The continuous map \(v_{|\Delta^{*}}\) induces a homomorphism of groups \(\mathbb{Z}=\pi_{1}(\Delta^{*},\frac{1}{2})\to\pi_{1}(X,x)\) and a \(G\)-equivariant \(\mathbb{Q}\)-algebraic map
\[R(\pi_{1}(X,x),G)\to R(\mathbb{Z},G)=G.\]
Its fiber at \(1_{G}\in G\) is a \(G\)-invariant closed \(\mathbb{Q}\)-subscheme of \(R(\pi_{1}(X,x),G)\), hence it is the preimage of a closed \(\mathbb{Q}\)-subscheme \(M_{v}\subset M_{B}(X,G)\), cf. [13, Theorem 1.1].
Let us now proceed to the proof of the proposition. If \((X,\Sigma)\) is maximal, then a fortiori the pair \((X,M)\) is maximal. If \((X,\Sigma)\) is not maximal, then by definition there exists a holomorphic map \(v\colon\Delta\to\bar{X}\) as above such that \(\Sigma\subset M_{v}(\mathbb{C})\). Since \(M_{v}\) is a closed \(\mathbb{Q}\)-subscheme of \(M_{B}(X,G)\), it follows from the definition of \(M\) that \(M\subset M_{v}\). Therefore, the pair \((X,M)\) is not maximal.
## 8 A generalization of a result of Lasell and Ramachandran
Let \(X\) be a connected smooth quasi-projective variety. Let \(Z\) be a smooth projective (but not necessarily connected) variety equipped with a morphism \(Z\to X\). We assume that the image \(Y\) of \(Z\to X\) is a connected proper variety. (For example, \(Y\) could be a connected proper subvariety of \(X\) and \(Z\to Y\) be the disjoint union of a desingularization of every irreducible component of \(Y\).) We will keep these notations for the remainder of this section.
**Lemma 8.1** (Compare with [12, Lemma 2.1]).:
1. _Let_ \(\rho\colon\pi_{1}(X)\to\operatorname{GL}_{n}(\mathbb{C})\) _be a reductive representation. Assume that for every connected component_ \(Z_{i}\) _of_ \(Z\)_, the induced representation_ \(\rho_{Z_{i}}\colon\pi_{1}(Z_{i})\to\operatorname{GL}_{n}(\mathbb{C})\) _has finite image. Then_ \(\rho_{Y}\colon\pi_{1}(Y)\to\operatorname{GL}_{n}(\mathbb{C})\) _has bounded image._
2. _Let_ \(p\) _be a prime number. Let_ \(\rho\colon\pi_{1}(X)\to\operatorname{GL}_{n}(\bar{\mathbb{Q}}_{p})\) _be a reductive representation. Assume that for every connected component_ \(Z_{i}\) _of_ \(Z\)_, the induced representation_ \(\rho_{Z_{i}}\colon\pi_{1}(Z_{i})\to\operatorname{GL}_{n}(\bar{\mathbb{Q}}_{p})\) _has bounded image. Then_ \(\rho_{Y}\colon\pi_{1}(Y)\to\operatorname{GL}_{n}(\bar{\mathbb{Q}}_{p})\) _has bounded image._
Proof.:
1. Let \(\Omega\) denote the symmetric space of maximal compact subgroups of \(\mathrm{GL}_{n}(\mathbb{C})\), on which \(\mathrm{GL}_{n}(\mathbb{C})\) acts by conjugation. Thanks to Mochizuki [14, Theorem 25.28], there exists a \(\rho\)-equivariant pluriharmonic map \(f\colon\tilde{X}^{\rho}\to\Omega\). By assumption, the irreducible components of \(Z_{i}\times_{X}\tilde{X}^{\rho}\) are compact, therefore each of them is sent to a point in \(\Omega\) by the pluriharmonic map \(f\). It follows that the connected components of \(Y\times_{X}\tilde{X}^{\rho}\) are also sent to a point in \(\Omega\) by the pluriharmonic map \(f\). Moreover, due to the \(\rho\)-equivariance of \(f\), this point must be a fixed point for the induced action of \(\pi_{1}(Y)\) on \(\Omega\). This shows that \(\rho_{Y}\) is bounded.
2. Let \(\sigma_{X}^{\rho}\colon X\to S_{X}^{\rho}\) be a Katzarkov-Zuo reduction of the pair \((X,\rho)\), which exists thanks to Theorem 5.2. By assumption, the composite map \(Z_{i}\to X\to S_{X}^{\rho}\) is constant for every \(i\), hence the composite map \(Y\to X\to S_{X}^{\rho}\) is also constant. This implies in turn that \(\rho_{Y}\) has bounded image.
**Theorem 8.2** (Compare with [13, Theorem 4.1]).: _For every positive integer \(n\), there exists a finite quotient \(\Delta_{n}\) of \(\pi_{1}(Y)\) such that, if \(\rho\colon\pi_{1}(X)\to\mathrm{GL}_{n}(\mathbb{C})\) is a reductive representation whose pull-back to every connected component of \(Z\) is the trivial representation, then the induced representation \(\rho_{Y}\colon\pi_{1}(Y)\to\mathrm{GL}_{n}(\mathbb{C})\) factorizes through \(\Delta_{n}\)._
Proof.: By Lemma 8.1, if \(\rho\colon\pi_{1}(X)\to\mathrm{GL}_{n}(\mathbb{C})\) is a reductive representation whose pull-back to every connected component \(Z_{i}\) of \(Z\) is the trivial representation, then \(\rho_{Y}\colon\pi_{1}(Y)\to\mathrm{GL}_{n}(\mathbb{C})\) has bounded image. In particular, \(\rho_{Y}\) is a reductive representation. Let \(M\subset M_{B}(X,n)\) be the intersection of the fibers at the trivial representation of the morphisms \(M_{B}(X,n)\to M_{B}(Z_{i},n)\). Then \(M\) is a closed subscheme of \(M_{B}(X,n)\), whose \(\mathbb{C}\)-points parametrize the conjugacy classes of reductive representations \(\rho\colon\pi_{1}(X)\to\mathrm{GL}_{n}(\mathbb{C})\) that pull-back to the trivial representation on every \(Z_{i}\). Let \(N\) denote the image of \(M\) by the algebraic morphism \(M_{B}(X,n)\to M_{B}(Y,n)\). By Chevalley's Theorem, it is a constructible subset of \(M_{B}(Y,n)\). A priori, \(N(\mathbb{C})\) is the set of conjugacy classes of semisimplifications of representations pulled-back from \(M(\mathbb{C})\). However, it follows from the previous paragraph that \(N(\mathbb{C})\) is in fact exactly the set of conjugacy classes of representations of \(\pi_{1}(Y)\) pulled-back from \(M(\mathbb{C})\). On the other hand, it follows from Lemma 8.1 that for every prime \(p\) one has \(N(\bar{\mathbb{Q}}_{p})\subset M_{B}(Y,n)(\bar{\mathbb{Q}}_{p})^{o}\). Therefore, thanks to Theorem 6.8, \(N\) is a closed subscheme of \(M_{B}(Y,n)\) of dimension zero. In particular, \(N(\mathbb{C})\) is a finite
set, showing that there are only finitely many conjugacy classes of reductive representations \(\rho\colon\pi_{1}(X)\to\operatorname{GL}_{n}(\mathbb{C})\) that pull-back to the trivial representation on every \(Z_{i}\). Finally, let \(\{\rho_{i}\colon\pi_{1}(Y)\to\operatorname{GL}_{n}(\bar{\mathbb{Q}})\}_{i=1\ldots r}\) be a finite set of reductive representations stable by \(\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\)-conjugation that lifts the finitely many points of \(N(\mathbb{C})=N(\bar{\mathbb{Q}})\). Let \(\rho_{N}\colon\pi_{1}(Y)\to\operatorname{GL}_{nr}(\bar{\mathbb{Q}})\) be the sum of the \(\rho_{i}\). It follows from the \(\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\)-invariance that \(\rho_{N}\) takes its values in \(\operatorname{GL}_{nr}(\mathbb{Q})\). Moreover, thanks to Lemma 8.1, the image of \(\rho_{N}\) is bounded and conjugate to a subgroup of \(\operatorname{GL}_{nr}(\mathbb{Z})\). Therefore, the image \(\Delta_{n}\) of \(\rho_{N}\) is the desired finite quotient of \(\pi_{1}(Y)\).
**Corollary 8.3**.: _For every positive integer \(n\), let \(H_{n}\) be the intersection of the kernels of all reductive representations \(\rho\colon\pi_{1}(X)\to\operatorname{GL}_{n}(\mathbb{C})\) that pull-back to the trivial representation on every connected component of \(Z\). Then the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{n}\) is finite._
## 9 Proof of Theorem A
### A special case
Let \(X\) be a connected smooth quasi-projective algebraic variety. Let \(n\) be a positive integer. Let \(M\subset M_{B}(X,n)\) be a closed algebraic subvariety defined over \(\mathbb{Q}\), whose set of complex points \(M(\mathbb{C})\) is invariant under the \(\mathbb{C}^{*}\)-action on \(M_{B}(X,n)(\mathbb{C})\).
Assuming that the pair \((X,M)\) is maximal, we prove in the sequel the existence of the \(M\)-Shafarevich morphism.
For every \(\rho\in M(\bar{\mathbb{Q}})\) and every prime number \(p\), let \(\rho_{p}\) denote the image of \(\rho\) by the inclusion \(M(\bar{\mathbb{Q}})\subset M(\bar{\mathbb{Q}}_{p})\), and let \(\sigma_{X}^{\rho_{p}}\colon X\to S_{X}^{\rho_{p}}\) be a Katzarkov-Zuo reduction of \((X,\rho_{p})\). We denote by \(\sigma_{X}^{M}\colon X\to S_{X}^{M}\) the map \(X\to\prod_{\rho\in M(\bar{\mathbb{Q}})}\prod_{\operatorname{p\;prime}}S_{X}^{ \rho_{p}}\). Note that the fibers of \(\sigma_{X}^{M}\) are algebraic (since they are the possibly infinite intersection of fibers of algebraic maps). In fact, by Noetherianity, the map obtained by composing \(\sigma_{X}^{M}\) with the projection onto finitely many well-chosen factors will have the same fibers as \(\sigma_{X}^{M}\).
Let \(M(\mathbb{C})^{\mathrm{VHS}}\) denote the subset of \(M(\mathbb{C})\) consisting of the fixed points for the \(\mathbb{C}^{*}\)-action. It is non-empty thanks to Theorem 7.1. Fix a point \(x\in X\), and for every element of \(M(\mathbb{C})^{\mathrm{VHS}}\), choose a representative \(\rho\colon\pi_{1}(X,x)\to\operatorname{GL}_{n}(\mathbb{C})\). Let \(W_{\rho}/\mathbb{R}\) be the real Zariski closure of the image of \(\pi_{1}(X,x)\), so that \(\rho\) factorizes as \(\rho\colon\pi_{1}(X,x)\to W_{\rho}(\mathbb{R})\subset\operatorname{GL}_{n}( \mathbb{C})\). By assumption, the
real Lie group \(W_{\rho}(\mathbb{R})\) acts transitively on a period domain \(\mathcal{D}_{\rho}\), so that the stabilizer of a point in \(\mathcal{D}_{\rho}\) is a compact subgroup, and there exists a period map \(\tilde{X}\to\mathcal{D}_{\rho}\) which is equivariant with respect to \(\rho\colon\pi_{1}(X,x)\to W_{\rho}(\mathbb{R})\). By taking their products, we obtain a map \(\tilde{X}\to\prod_{[\rho]\in M(\mathbb{C})^{\mathrm{VHS}}}\mathcal{D}_{\rho}\), which is equivariant with respect to the homomorphism \(\pi_{1}(X,x)\to\prod_{[\rho]\in M(\mathbb{C})^{\mathrm{VHS}}}W_{\rho}( \mathbb{R})\).
Finally, putting the archimedean places and the non-archimedean places together, we get a map \(\tilde{X}\to\prod_{[\rho]\in M(\mathbb{C})^{\mathrm{VHS}}}\mathcal{D}_{\rho} \times S_{X}^{M}\). Letting \(H:=\cap_{[\rho]\in M(\mathbb{C})}\ker\rho\) and \(\tilde{X}^{M}:=\tilde{X}^{H}\), the preceding map factorizes through \(\tilde{X}\to\tilde{X}^{M}\), so that we obtain a map
\[\phi_{(X,x)}^{M}\colon\tilde{X}^{M}\to\prod_{[\rho]\in M(\mathbb{C})^{ \mathrm{VHS}}}\mathcal{D}_{\rho}\times S_{X}^{M}.\]
Observe that for another choice of base point \(x^{\prime}\in X\), the choice of a (homotopy class of a) path from \(x\) to \(x^{\prime}\) induces an isomorphism between \(\pi_{1}(X,x)\) and \(\pi_{1}(X,x^{\prime})\), and an automorphism of \(\tilde{X}^{M}\) over \(X\), such that the \(\pi_{1}(X,x)\)-equivariant map \(\phi_{(X,x)}\) gets identified with the \(\pi_{1}(X,x^{\prime})\)-equivariant map \(\phi_{(X,x^{\prime})}\).
In view of Theorem 4.2, Theorem A is a consequence of the following result.
**Theorem 9.1**.: _Consider the map_
\[\phi_{X}^{M}\colon\tilde{X}^{M}\to\prod_{[\rho]\in M(\mathbb{C})^{\mathrm{VHS }}}\mathcal{D}_{\rho}\times S_{X}^{M}.\]
_Then:_
1. _Every connected compact analytic subspace of_ \(\tilde{X}^{M}\) _is contained in a fibre of_ \(\phi_{X}^{M}\)_._
2. _The connected components of the fibres of_ \(\phi_{X}^{M}\) _are compact._
Proof of Theorem 9.1.: We start by proving the first assertion. Let \(Y\) be a connected compact analytic subspace of \(\tilde{X}^{M}\). It is harmless to assume that \(Y\) is irreducible. Let \(f\colon Y\to X\) be the holomorphic map defined as the composition of the inclusion \(Y\subset\tilde{X}^{M}\) and the projection \(\tilde{X}^{M}\to X\). Then \(f\) is proper (since \(Y\) is compact) and has discrete fibres, hence it is a finite holomorphic map. Its image \(Z:=f(Y)\) is a compact closed analytic subspace of the algebraic variety \(X\), hence it is a closed algebraic subvariety of
\(X\) by applying Chow's theorem to an algebraic compactification of \(X\). By GAGA, there exists a unique algebraic structure on \(Y\) such that the map \(Y\to Z\) is algebraic.
Let \(W\to Y\) be a desingularization of \(Y\). Since by definition the map \(f\colon Y\to X\) factorizes through \(\tilde{X}^{M}\), the image of \(M(\mathbb{C})\) under the natural algebraic map \(M_{B}(X,n)\to M_{B}(W,n)\) is trivial. It follows that for every (conjugacy class of) representation \(\rho\in M(\bar{\mathbb{Q}})\) and every prime number \(p\), the Katzarkov-Zuo reduction of \((X,\rho_{p})\) is constant on \(W\), hence on \(Y\). Moreover, since every period map with trivial monodromy on a connected compact complex manifold is constant, the first assertion follows.
Let us now prove the second assertion. Let \(F\) be a connected component of a fibre of \(\phi_{X}^{M}\); we want to prove that \(F\) is compact. Observe that \(F\) is a fortiori contained in a fibre of \(\tilde{X}^{M}\to S_{X}^{M}\). In fact, since \(F\) is connected, it is contained in the preimage of a connected component \(Y\) of a fibre of \(X\to S_{X}^{M}\). Note that \(Y\) is a connected closed algebraic subvariety of \(X\). As observed before, we can freely move the base point \(x\in X\) and assume that \(x\in Y\).
Let \(\{Y_{i}\}\) denote the irreducible components of \(Y\). For every \(i\), let \(Z_{i}\to Y_{i}\) be a proper desingularization of \(Y_{i}\). Let \(N_{i}\) be the image of \(M(\bar{\mathbb{Q}})\) under the natural algebraic map \(M_{B}(X,n)\to M_{B}(Z_{i},n)\). It is a \(\mathbb{Q}\)-constructible subset of \(M_{B}(Z_{i},n)\) by Chevalley's Theorem. Moreover, by definition of \(Y\), the representations corresponding to points in \(N_{i}\) have bounded image when seen as representations with values in \(\operatorname{GL}_{n}(\bar{\mathbb{Q}}_{p})\) for every prime \(p\). Therefore, thanks to Theorem 6.8, the \(\mathbb{Q}\)-constructible set \(N_{i}\) has dimension zero. It follows that there exists a number field \(K\subset\bar{\mathbb{Q}}\) and a finite set of representations \(\{\rho_{i}^{j}\colon\pi_{1}(Z_{i})\to\operatorname{GL}_{n}(K)\}_{j=1\dots r_{i}}\) stable by \(\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\)-conjugation that lifts the finitely many points of \(N_{i}\).
Every point of \(N_{i}\) is the image of at least one connected component of \(M(\mathbb{C})\). Since \(M(\mathbb{C})\) is closed in \(M_{B}(X,n)(\mathbb{C})\) and \(\mathbb{C}^{*}\)-invariant, the connected components of \(M(\mathbb{C})\) are also closed in \(M_{B}(X,n)(\mathbb{C})\) and \(\mathbb{C}^{*}\)-invariant (since \(\mathbb{C}^{*}\) is connected). It follows from Theorem 7.1 that they all contain a \(\mathbb{C}^{*}\)-fixed point. Therefore, for every \(\rho_{i}^{j}\), there exists a representation \({}^{X}\rho_{i}^{j}\colon\pi_{1}(X)\to\operatorname{GL}_{n}(\mathbb{C})\) underlying a polarized complex variation of Hodge structures such that the composition with \(\pi_{1}(Z_{i})\to\pi_{1}(X)\) is equal to the composition of \(\rho_{i}^{j}\colon\pi_{1}(Z_{i})\to\operatorname{GL}_{n}(K)\) with the inclusion \(\operatorname{GL}_{n}(K)\subset\operatorname{GL}_{n}(\mathbb{C})\).
The image \(\Gamma_{i}\) of the representation \(\prod_{j=1}^{r_{i}}\rho_{i}^{j}\) is discrete. Indeed, it is contained in \(\prod_{j=1}^{r_{i}}\operatorname{GL}_{n}(K)\), stable by \(\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\)-conjugation and bounded at every non-archimedean place. Therefore, through the canonical inclusion \(\prod_{j=1}^{r_{i}}\operatorname{GL}_{n}(\mathbb{C})\subset\operatorname{GL}_ {nr_{i}}(\mathbb{C})\), it is contained in a subgroup conjugate to \(\operatorname{GL}_{nr_{i}}(\mathbb{Z})\) [1]. Hence the maps above fit into a commutative diagram, yielding an induced map \(Z_{i}\to\Gamma_{i}\setminus\left(\prod_{j=1}^{r_{i}}\mathcal{D}_{{}^{X}\rho_{i}^{j}}\right)\).
Let \(W_{i}\) denote the fibre of the map \(Z_{i}\to\Gamma_{i}\setminus\left(\prod_{j=1}^{r_{i}}\mathcal{D}_{{}^{X}\rho_{i}^{j}}\right)\) that contains the projection of \(F\times_{\tilde{X}^{M}}Z_{i}\subset Z_{i}\times_{X}\tilde{X}^{M}\). Since the pair \((Z_{i},N_{i})\) is maximal thanks to Proposition 3.3, \(W_{i}\) is proper thanks to Griffiths' criterion [10, Theorem 9.5]. By construction, the image of \(\pi_{1}(W_{i})\) in \(\pi_{1}(X)/H_{M}\) is finite. Let \(T_{i}\) be a (not necessarily connected) smooth projective variety equipped with a surjective morphism \(T_{i}\to W_{i}\) such that the image of \(\pi_{1}(T_{i})\) in \(\pi_{1}(X)/H_{M}\) is trivial.
Let \(W\subset Y\) be the union of the images of the \(W_{i}\)'s. It is a proper subvariety of \(Y\) that contains the projection of \(F\) in \(X\). Up to replacing \(W\) by its connected component that contains the projection of \(F\), one can assume that \(W\) is connected. Thanks to Corollary 8.3, it follows that the image of \(\pi_{1}(W)\) in \(\pi_{1}(X)/H_{M}\) is finite. Therefore, the connected components of \(W\times_{X}\tilde{X}^{H}\) are compact. Since \(F\) is closed in one of the latter, it follows that \(F\) is compact.
### The general case
We start with a result of independent interest. Its proof is a direct consequence of Lemma 9.4 below.
**Proposition 9.2**.: _Let \(X\) be a connected smooth quasi-projective algebraic variety. Let \(n\) be a positive integer. Let \(\Sigma\) be a subset of \(M_{B}(X,n)(\mathbb{C})\). Let \(M\subset M_{B}(X,n)\) be the smallest closed subscheme defined over \(\mathbb{Q}\), whose set of complex points \(M(\mathbb{C})\) contains \(\Sigma\) and is \(\mathbb{C}^{*}\)-invariant. Then, the \(\Sigma\)-Shafarevich morphism \(\operatorname{sh}_{X}^{\Sigma}\) exists if and only if the \(M\)-Shafarevich morphism \(\operatorname{sh}_{X}^{M}\) exists. In that case, \(\operatorname{sh}_{X}^{\Sigma}=\operatorname{sh}_{X}^{M}\)._
**Lemma 9.3**.: _Let \(Y\) be a connected smooth projective variety equipped with a morphism \(Y\to X\). Then, the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{\Sigma}\) is trivial if and only if the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{M}\) is trivial._
Proof.: Assume that the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{\Sigma}\) is trivial. Let \(N\) be the preimage of the trivial representation by the algebraic morphism \(M_{B}(X,n)\to M_{B}(Y,n)\). Then \(N\) is a closed subscheme of \(M_{B}(X,n)\), whose set of complex points \(N(\mathbb{C})\) contains \(\Sigma\) and is \(\mathbb{C}^{*}\)-invariant. Therefore, \(M\subset N\). In other words, the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{M}\) is trivial. Since \(H_{M}\subset H_{\Sigma}\), the converse is obvious.
**Lemma 9.4**.: _Let \(Z\) be a connected proper variety equipped with a morphism \(Z\to X\). Then, the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H_{\Sigma}\) is finite if and only if the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H_{M}\) is finite._
Proof.: If the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H_{M}\) is finite, since \(H_{M}\subset H_{\Sigma}\), the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H_{\Sigma}\) is a fortiori finite.
Conversely, assume that the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H_{\Sigma}\) is finite. Let \(Z^{\prime}\to Z\) be a connected finite etale cover such that the image of \(\pi_{1}(Z^{\prime})\) in \(\pi_{1}(X)/H_{\Sigma}\) is trivial. Let \(Y\to Z^{\prime}\) be a proper desingularization of an irreducible component of \(Z^{\prime}\). Then the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{\Sigma}\) is trivial. It follows from Lemma 9.3 that the image of \(\pi_{1}(Y)\) in \(\pi_{1}(X)/H_{M}\) is trivial. Finally, thanks to Corollary 8.3, the image of \(\pi_{1}(Z)\) in \(\pi_{1}(X)/H_{M}\) is finite.
Let \(X\) be a normal connected complex algebraic variety. Let \(H\subset\pi_{1}(X)\) be a normal subgroup which is commensurable to the intersection of the kernels of the monodromy representations of a collection of semisimple complex local systems \(\mathcal{L}_{i}\) on \(X\) of bounded rank. Assume that the pair \((X,H)\) is maximal. In the remainder of this section, we prove the existence of the \(H\)-Shafarevich morphism, thereby completing the proof of Theorem A.
Thanks to Proposition 3.3 and Proposition 4.4, we can assume that \(H\) is equal to the intersection of the kernels of the monodromy representations of the \(\mathcal{L}_{i}\)'s. Let \(X^{\prime}\to X\) be a birational fibration from a smooth quasi-projective variety and let \(H^{\prime}\subset\pi_{1}(X^{\prime})\) be the intersection of the kernels of the monodromy representations of the pull-back of the \(\mathcal{L}_{i}\)'s on \(X^{\prime}\). Thanks to Proposition 3.3 and Proposition 4.5, it is sufficient to prove the existence of the Shafarevich morphism for the pair \((X^{\prime},H^{\prime})\). Therefore, we can assume from the beginning that \(X\) is smooth quasi-projective. Up to adding some trivial local systems to some of the \(\mathcal{L}_{i}\)'s, one can assume in addition that all
the \(\mathcal{L}_{i}\)'s have the same rank \(n\geq 1\). Let \(\Sigma\subset M_{B}(X,n)(\mathbb{C})\) be the subset of the corresponding conjugacy classes of monodromy representations. Let \(M\subset M_{B}(X,n)\) be the smallest closed \(\mathbb{Q}\)-subscheme whose set of complex points \(M(\mathbb{C})\) contains \(\Sigma\) and is \(\mathbb{C}^{*}\)-invariant. Since the pair \((X,\Sigma)\) is maximal by assumption, the pair \((X,M)\) is maximal a fortiori. Therefore, the results of section 9.1 imply the existence of the \(M\)-Shafarevich morphism. Finally, by Proposition 9.2, the \(M\)-Shafarevich morphism is also a \(\Sigma\)-Shafarevich morphism.
## 10 Additional results
**Proposition 10.1**.: _Let \(X\) be a normal connected complex algebraic variety. Let \(\Sigma\) be a collection of semisimple complex local systems of bounded rank on \(X\) and let \(\operatorname{sh}_{X}^{\Sigma}\) be the corresponding \(\Sigma\)-Shafarevich morphism. Then there exists a semisimple complex local system \(\mathcal{L}\) on \(X\) such that the \(\mathcal{L}\)-Shafarevich morphism is equal to \(\operatorname{sh}_{X}^{\Sigma}\)._
Proof.: Arguing as in section 9.2, one can assume in addition that \(X\) is smooth quasi-projective and that all the elements in \(\Sigma\) have the same rank \(n\geq 1\). Let \(M\) be the smallest closed subscheme of \(M_{B}(X,n)\) defined over \(\mathbb{Q}\), whose set of complex points \(M(\mathbb{C})\) contains \(\Sigma\) and is \(\mathbb{C}^{*}\)-invariant. Let \(\Phi\subset M(\mathbb{C})\) be a finite subset which is \(\mathbb{Q}\)-Zariski-dense. Then, thanks to Proposition 9.2, the pair \((X,\Phi)\) is maximal. Moreover, the \(\Phi\)-Shafarevich morphism and the \(\Sigma\)-Shafarevich morphism are both equal to the \(M\)-Shafarevich morphism. Therefore, one can take for \(\mathcal{L}\) the sum of one local system in each conjugacy class that belongs to \(\Phi\).
The following result seems to have gone unnoticed, even in the compact case.
**Theorem 10.2**.: _Let \(X\) be a normal connected complex algebraic variety. Let \(\rho\colon\pi_{1}(X)\to\Gamma\) be a surjective homomorphism of groups with torsion-free image. Assume that the Shafarevich morphism \(\operatorname{sh}_{X}^{\rho}\colon X\to\operatorname{Sh}_{X}^{\rho}\) for the pair \((X,\rho)\) exists. Then the representation \(\rho\) factorizes through the homomorphism \(\pi_{1}(X)\to\pi_{1}(\operatorname{Sh}_{X}^{\rho})\)._
Proof.: Let \(\tilde{X}^{\rho}\to X\) be the Galois etale cover associated to \(\rho\), equipped with its \(\Gamma\)-action. Thanks to Proposition 4.2 and its proof, there exists a holomorphic fibration \(\tilde{X}^{\rho}\to S\), with \(S\) a normal complex analytic space which does not have any positive-dimensional compact analytic subspace. By uniqueness, the Galois action of \(\Gamma\) on \(\tilde{X}^{\rho}\) descends to an action on \(S\) which
is still properly discontinuous. Moreover, the map induced on the quotients \(\operatorname{sh}_{X}^{\rho}\colon X\to\operatorname{Sh}_{X}^{\rho}\) is the \(\rho\)-Shafarevich morphism. Since \(\Gamma\) is torsion-free and the action of \(\Gamma\) on \(S\) is proper, the action of \(\Gamma\) on \(S\) is also free. Therefore, the map \(S\to\operatorname{Sh}_{X}^{\rho}\) is a Galois etale cover with Galois group \(\Gamma\). To conclude the proof, we need to check that the commutative square formed by the maps \(\tilde{X}^{\rho}\to S\), \(\tilde{X}^{\rho}\to X\), \(S\to\operatorname{Sh}_{X}^{\rho}\) and \(X\to\operatorname{Sh}_{X}^{\rho}\) is cartesian.
But the map \(\tilde{X}^{\rho}\to X\times_{\operatorname{Sh}_{X}^{\rho}}S\) is a \(\Gamma\)-equivariant etale map between two connected \(\Gamma\)-Galois etale covers of \(X\), hence it is an isomorphism.
## 11 The abelian case
In this section, we describe an alternative construction of the Shafarevich morphism associated to a representation whose image is abelian.
Let \(X\) be a connected smooth algebraic variety. Let \(\rho\colon\pi_{1}(X)\to\Gamma\) be a homomorphism of groups, with \(\Gamma\) abelian. Up to replacing \(\Gamma\) by the quotient of the image of \(\rho\) by its torsion subgroup, one can assume that \(\Gamma\) is a free \(\mathbb{Z}\)-module. Since \(\Gamma\) is abelian, \(\rho\) factorizes through the abelianization \(\pi_{1}(X)\to\operatorname{H}_{1}(X,\mathbb{Z})\) of \(\pi_{1}(X)\). Let \(\operatorname{alb}\colon X\to\operatorname{Alb}(X)\) denote the Albanese mapping, where \(\operatorname{Alb}(X)\) is a semi-abelian variety (cf. [10]). If \(\bar{X}\) is a smooth compactification of \(X\) such that \(D:=\bar{X}-X\) is a simple normal crossing divisor, then \(\operatorname{Alb}(X):=\operatorname{H}^{0}(\bar{X},\Omega^{1}_{\bar{X}}( \log D))^{\vee}/\operatorname{H}_{1}(X,\mathbb{Z})\). Let \(\operatorname{Alb}(X)\to A_{\rho}\) be the quotient of \(\operatorname{Alb}(X)\) by the biggest semi-abelian subvariety \(T\) of \(\operatorname{Alb}(X)\) such that \(H_{1}(T,\mathbb{Z})\subset\ker(H_{1}(X,\mathbb{Z})\to\Gamma)\).
**Theorem 11.1**.: _If the pair \((X,\rho)\) is maximal, then the composition of the Albanese morphism \(X\to\operatorname{Alb}(X)\) with \(\operatorname{Alb}(X)\to A_{\rho}\) is a proper morphism, and its Stein factorization is the \(\rho\)-Shafarevich morphism._
Proof.: Let \(K\) be the biggest sub-\(\mathbb{Z}\)-mixed Hodge structure of \(H_{1}(X,\mathbb{Z})\) such that the underlying \(\mathbb{Z}\)-module is contained in the kernel of the morphism \(H_{1}(X,\mathbb{Z})\to\Gamma\) induced by \(\rho\). Let \(H\) be the quotient of \(H_{1}(X,\mathbb{Z})\) by \(K\), so that \(H\) is a \(\mathbb{Z}\)-mixed Hodge structure, whose underlying \(\mathbb{Z}\)-module is torsion-free. Since the pair \((X,\rho)\) is maximal, the homomorphism \(\pi_{1}(X)\to H_{1}(X,\mathbb{Z})\to H\) is also maximal. Moreover, for every connected complex algebraic variety \(Z\) equipped with a morphism \(Z\to X\), the induced homomorphism \(\pi_{1}(Z)\to\Gamma\) factorizes as \(\pi_{1}(Z)\to H_{1}(Z,\mathbb{Z})\to H_{1}(X,\mathbb{Z})\to\Gamma\). Therefore, the homomorphism \(\pi_{1}(Z)\to\Gamma\) is zero if and only if the homomorphism \(\pi_{1}(Z)\to H\) is zero. In other words, the \(\rho\)-Shafarevich morphism exists if and only if the Shafarevich morphism of the homomorphism \(\pi_{1}(X)\to H\) exists. Therefore, one can assume that \(\Gamma\) is the underlying \(\mathbb{Z}\)-module of a quotient \(\mathbb{Z}\)-mixed Hodge structure of \(H_{1}(X,\mathbb{Z})\).
Fix \(x\in X\). For every \(y\in X\) distinct from \(x\), there is an exact sequence of \(\mathbb{Z}\)-mixed Hodge structures:
\[0\to H_{1}(X,\mathbb{Z})\to H_{1}(X,\{x,y\},\mathbb{Z})\to H_{0}(\{x,y\}, \mathbb{Z})\to H_{0}(X,\mathbb{Z}).\]
The kernel of the morphism \(H_{0}(\{x,y\},\mathbb{Z})\to H_{0}(X,\mathbb{Z})\) is isomorphic to \(\mathbb{Z}(0)\), by sending \(1\) to \(y-x\). Therefore, the preceding exact sequence yields the exact sequence:
\[0\to H_{1}(X,\mathbb{Z})\to H_{1}(X,\{x,y\},\mathbb{Z})\to\mathbb{Z}(0)\to 0. \tag{1}\]
This provides a map
\[a\colon X-\{x\}\to\operatorname{Ext}^{1}_{\mathbb{Z}-MHS}(\mathbb{Z}(0),H_{1} (X,\mathbb{Z}))=J(H_{1}(X,\mathbb{Z})),\]
where, for any \(\mathbb{Z}\)-mixed Hodge structure \(H\) whose weights are non-positive, \(J(H):=H_{\mathbb{C}}/(F^{0}+H_{\mathbb{Z}})\) is the corresponding intermediate Jacobian [10]. Therefore, the map \(a\) is the period map associated to a \(\mathbb{Z}\)-variation of mixed Hodge structure of geometric origin (hence admissible). In particular, it is holomorphic. If \(\dim X\geq 2\), then this implies that the map extends as a holomorphic map \(a\colon X\to J(H_{1}(X,\mathbb{Z}))\). An easy computation shows that this is also the case if \(\dim X=1\).
The push-out along the morphism \(H_{1}(X,\mathbb{Z})\to H\) yields a map
\[\operatorname{Ext}^{1}_{\mathbb{Z}-MHS}(\mathbb{Z}(0),H_{1}(X,\mathbb{Z}))\to \operatorname{Ext}^{1}_{\mathbb{Z}-MHS}(\mathbb{Z}(0),H),\]
or equivalently a map \(J(H_{1}(X,\mathbb{Z}))\to J(H)\) which is holomorphic. The composite map \(X\to J(H)\) is the period map associated to an admissible \(\mathbb{Z}\)-variation of mixed Hodge structure with monodromy representation
\(\pi_{1}(X)\to H\). Thanks to [1, Lemma 2.3], the period map \(X\to J(H)\) is proper if and only if the pair \((X,\rho)\) is maximal. Moreover, by definition of \(H\), for every connected complex algebraic variety \(Z\) equipped with a morphism \(Z\to X\), the induced homomorphism \(\pi_{1}(Z)\to H\) is zero if and only if the composite map \(Z\to X\to J(H)\) is constant. This completes the proof of Theorem 11.1.
|
2306.01895 | Topological comparison of some dimension reduction methods using
persistent homology on EEG data | In this paper, we explore how to use topological tools to compare dimension
reduction methods. We first make a brief overview of some of the methods often
used for dimension reduction, such as Isometric Feature Mapping, Laplacian
Eigenmaps, Fast Independent Component Analysis, Kernel Ridge Regression,
t-distributed Stochastic Neighbor Embedding. We then give a brief overview of
some topological notions used in topological data analysis, such as barcodes,
persistent homology, and Wasserstein distance. Theoretically, these methods
applied to a data set can be interpreted differently. From EEG data embedded
into a manifold of high dimension, we apply these methods and we compare them
across persistent homologies of dimension 0, 1, and 2, that is, across
connected components, tunnels and holes, shells around voids or cavities. We
find that from three-dimensional clouds of points, it is not clear how distinct
from each other the methods are, but Wasserstein and Bottleneck distances,
topological tests of hypothesis, and various methods show that the methods
qualitatively and significantly differ across homologies. | Eddy Kwessi | 2023-06-02T20:01:04Z | http://arxiv.org/abs/2306.01895v1 | # Topological comparison of some dimension reduction methods using persistent homology on EEG data
###### Abstract
In this paper, we explore how to use topological tools to compare dimension reduction methods. We first make a brief overview of some of the methods often used for dimension reduction, such as Isometric Feature Mapping, Laplacian Eigenmaps, Fast Independent Component Analysis, Kernel Ridge Regression, and t-distributed Stochastic Neighbor Embedding. We then give a brief overview of some topological notions used in topological data analysis, such as barcodes, persistent homology, and Wasserstein distance. Theoretically, these methods applied to a data set can be interpreted differently. From EEG data embedded into a manifold of high dimension, we apply these methods and we compare them across persistent homologies of dimension 0, 1, and 2, that is, across connected components, tunnels and holes, and shells around voids or cavities. We find that from three-dimensional clouds of points, it is not clear how distinct from each other the methods are, but Wasserstein and Bottleneck distances, topological tests of hypothesis, and various other methods show that the methods qualitatively and significantly differ across homologies.
## 1 Introduction
In topological data analysis, one is interested in understanding high dimensional structures from low dimensional ones and how discrete structures can be aggregated to form a global structure. It can be difficult to even conceive that high dimensional objects exist beyond three dimensions since we cannot visualize objects beyond a three-dimensional space. However, embedding theorems, for instance the Whitney (1936) and Takens (1981) embedding theorems, clearly show that these high dimensional structures do in fact exist. From a practical point of view, to make inferences on structures embedded in high dimensional ambient spaces, some kind of dimension reduction needs to occur. From a data analysis point of view, dimension reduction amounts to data compression where a certain amount of information may be lost. This dimension reduction is part of manifold learning, which can be understood as a collection of algorithms for recovering low dimensional manifolds embedded into high dimensional ambient spaces, while preserving meaningful information, see Ma and Fu (2012). The algorithms for dimension
reduction may be classified into linear and nonlinear methods, or parametric and nonparametric methods, where the goal is to select or extract coarse features from high dimensional data. Among the pioneering linear methods is the Principal Component Analysis (PCA) introduced by Hotelling (1933). Its primary goal is to reduce the data to a set of orthogonal linear projections ordered by decreasing variances. Another linear method is the multidimensional scaling (MDS) where the data are aggregated using a measure of proximity, which could be a distance, or a measure of association such as correlation, or any other method describing how close entities can be. Linear Discriminant Analysis (LDA) is a linear method similar to PCA consisting of writing a categorical dependent variable as a linear combination of continuous independent variables. As such, it is opposite to an Analysis of Variance (ANOVA) where the dependent variable is continuous and the independent variables are categorical. The focus of this paper will be on nonlinear techniques, which, like their linear counterparts, aim to extract or select low dimensional features while preserving important information. Since there are many such methods, our focus will be on Isometric Feature Mapping (ISOMAP), Laplacian Eigenmaps, Fast Independent Component Analysis (Fast-ICA), Kernel Ridge Regression (KRR), and t-distributed Stochastic Neighbor Embedding (t-SNE). We will compare them using Persistent Homology (PH). PH is one of the many techniques of topological data analysis (TDA) that can be used to identify features in data that remain persistent over multiple and different scales. This tool can provide new insights into seemingly known or unknown data and has the potential to uncover interesting hidden information embedded within data. For instance, PH has been used to provide new insights on the topology of deep neural networks, see Naizait et al. (2020). PH has successfully been used to provide new perspectives on viral evolution, see Chan et al. (2013). Further examples of successful applications can be found in Otter et al. (2017), including but not limited to a better understanding of sensor-network coverage, see de Silva and Ghrist (2007); proteins, see Gameiro et al. (2015); Xia and Wei (2014); the dimensional structure of DNA, see Emmett et al. (2016); cell development, see Rizvi et al. (2017); robotics, see Bhattacharya et al. (2015); Pokorny et al. (2016); Vasudevan et al. (2013); signal processing, see Chung et al. (2009); Guillemard et al. (2013); the spread of contagions, see Taylor et al. (2015); financial networks, see Leibon et al. (2008); applications in neuroscience, see Giusti et al. (2016); Sizemore et al. (2019); time-series output of dynamical systems, see Maletic et al. (2016); and EEG epilepsy, see Chung et al. (2023). The approach in the last reference is of particular interest to us. Indeed, in that paper, the authors considered EEG measured on a healthy person during sleep. They used the method of false nearest neighbors to estimate the embedding dimension. From there, persistent barcode diagrams are obtained, revealing that topological noise persists at certain dimensions and vanishes at others. This paper has a similar approach and is organized as follows: in Section 2, we review theories behind some dimension reduction methods; in Section 3, we give an overview of some topological notions used in topological data analysis; in Section 4, we discuss how to apply persistent homology to data and compare the methods on an EEG dataset.
Finally, in Section 5, we make some concluding remarks.
## 2 Review of selected dimension reduction methods
Let us note that some of the methods reviewed below are extensively described in Ma and Fu (2012). To keep our ideas self-contained, let us re-introduce a few concepts. In the sequel, \(\left\|\cdot\right\|\) is the Euclidean norm in \(\mathbb{R}^{d}\), for some \(d\geq 3\), and topological spaces \(\mathscr{M}\) will be considered to be second-countable Hausdorff, that is, (a) every pair of distinct points has a corresponding pair of disjoint neighborhoods; (b) the topology has a countable basis of open sets. This assumption is satisfied in most topological spaces of interest and seems reasonable.
### Preliminaries
**Definition 1**.: _A topological space \(\mathscr{M}\) is called a (topological) manifold if, locally, it resembles a real \(n\)-dimensional Euclidean space, that is, there exists \(n\in\mathbb{N}\) such that for all \(x\in\mathscr{M}\), there exists a neighborhood \(U_{x}\) of \(x\) and a homeomorphism \(f:U_{x}\to\mathbb{R}^{n}\). The pair \((U_{x},f)\) is referred to as a chart on \(\mathscr{M}\) and \(f\) is called a parametrization at \(x\)._
**Definition 2**.: _Let \(\mathscr{M}\) be a manifold. \(\mathscr{M}\) is said to be smooth if given \(x\in\mathscr{M}\), the parametrization \(f\) at \(x\) has smooth or continuous partial derivatives of any order and can be extended to a smooth function \(F:\mathscr{M}\to\mathbb{R}^{n}\) such that \(F\big{|}_{\mathscr{M}\cap U_{x}}=f\)._
**Definition 3**.: _Let \(\mathscr{M}\) and \(\mathscr{N}\) be differentiable manifolds and consider \(\psi:\mathscr{M}\to\mathscr{N}\) a function. \(\psi\) is said to be an immersion if \(\psi\) is a differentiable function and its derivative is everywhere injective. In other words, \(\psi:\mathscr{M}\to\mathscr{N}\) is an immersion if \(D_{x}\psi:T_{x}\mathscr{M}\to T_{\psi(x)}\mathscr{N}\) is an injective function at every point \(x\) of \(\mathscr{M}\), where \(T_{x}\mathscr{M}\) represents the tangent plane to \(\mathscr{M}\) at \(x\)._
**Definition 4**.: _Let \(\mathscr{M}\) and \(\mathscr{N}\) be differentiable manifolds. A function \(\psi:\mathscr{M}\to\mathscr{N}\) is an embedding if \(\psi\) is an injective immersion._
Let us introduce the notion of the boundary of a topological manifold, which will be important in the sequel.
**Definition 5**.: _Consider a Hausdorff topological manifold \(\mathscr{M}\) locally homeomorphic to open subsets of the Euclidean half-space \(\mathbb{R}^{n}_{+}\). Let the interior \(Int(\mathscr{M})\) of \(\mathscr{M}\) be the subspace of \(\mathscr{M}\) formed by all points \(s\) that have a neighborhood homeomorphic to \(\mathbb{R}^{n}\). Then the boundary of \(\mathscr{M}\) is defined as the complement of \(Int(\mathscr{M})\) in \(\mathscr{M}\), that is, \(\mathscr{M}\setminus Int(\mathscr{M})\), which is an \(n-1\)-dimensional topological manifold._
### Isomap
Isometric Feature Mapping (Isomap) was introduced by Tenenbaum et al. (2000). The data are considered to be a finite sample \(\{\mathbf{v}_{i}\}\) from a smooth manifold \(\mathscr{M}\). The two key assumptions are: (a) there exists an isometric embedding \(\psi:\mathscr{M}\to\mathscr{X}\) where \(\mathscr{X}=\mathbb{R}^{d}\), where the distance on \(\mathscr{M}\) is the geodesic distance, that is, the length of a shortest curve connecting two points; (b) the smooth manifold \(\mathscr{M}\) is a convex region of \(\mathbb{R}^{m}\), where \(m<<d\). The implementation phase has three main steps.
1. For a fixed integer \(K\) and real number \(\epsilon>0\), perform an \(\epsilon-K\)-nearest neighbor search using the fact that the geodesic distance \(D^{\mathscr{M}}(v_{i},v_{j})\) between two points on \(\mathscr{M}\) is the same (by isometry) as their Euclidean distance \(\left\|v_{i}-v_{j}\right\|\) in \(\mathbb{R}^{d}\). \(K\) is the number of data points selected within a ball of radius \(\epsilon\).
2. Having calculated the distance between points as above, the entire data set can be considered as a weighted graph with vertices \(\mathbf{v}=\{v_{i}\}\) and edges \(\mathbf{e}=\{e_{ij}\}\), where \(e_{ij}\) connects \(v_{i}\) with \(v_{j}\) with a distance \(w_{ij}=D^{\mathscr{M}}(v_{i},v_{j})\) considered as an associated weight. The geodesic distance between two data points \(v_{i}\) and \(v_{j}\) is estimated as the graph distance between the corresponding vertices, that is, the length of a shortest path connecting them. We observe that this shortest path is found by minimizing the sum of the weights of its constituent edges.
3. Having calculated the geodesic distances \(D^{G}=\{w_{ij}\}\) as above, we observe that \(D^{G}\) is a symmetric matrix, so we can apply the classical Multidimensional Scaling algorithm (MDS) (see Torgerson (1952)) to \(D^{G}\) by mapping (embedding) the points into a feature space \(\mathscr{Y}\) of dimension \(m\) while preserving the geodesic distance on \(\mathscr{M}\). \(\mathscr{Y}\) is generated by an \(m\times n\) matrix whose \(i\)-th column represents the coordinates of \(v_{i}\) in \(\mathscr{Y}\). A minimal implementation sketch follows the list.
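For concreteness, the code below runs these steps with scikit-learn's `Isomap` class; the point cloud `X`, the neighborhood size and the target dimension are hypothetical choices made for illustration, not values used in the paper.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # hypothetical sample of n=200 points in R^10

# n_neighbors plays the role of K in step 1; n_components is the target
# dimension m of the feature space Y obtained by MDS on graph distances.
iso = Isomap(n_neighbors=8, n_components=3)
Y = iso.fit_transform(X)         # row i = coordinates of v_i in Y
print(Y.shape)                   # (200, 3)
```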
### Laplacian Eigenmaps
The Laplacian Eigenmaps (LEIM) algorithm was introduced by Belkin and Niyogi (2002). As above, the data \(\mathbf{v}=\{v_{i}\}\) are supposed to be from a smooth manifold \(\mathscr{M}\). It also has three main steps:
1. For a fixed integer \(K\) and real number \(\epsilon>0\), perform an \(\epsilon-K\)-nearest neighbor search on symmetric neighborhoods. Note that given two points \(v_{i},v_{j}\), their respective \(K\)-neighborhoods \(N_{i}^{K}\) and \(N_{j}^{K}\) are symmetric if and only if \(v_{i}\in N_{j}^{K}\Longleftrightarrow v_{j}\in N_{i}^{K}\).
2. For a given real number \(\sigma>0\) and each pair of points \((v_{i},v_{j})\), calculate the weight \(w_{ij}=e^{-\frac{\left\|v_{i}-v_{j}\right\|^{2}}{2\sigma^{2}}}\) if \(v_{i}\in N_{j}^{K}\) and \(w_{ij}=0\) if \(v_{i}\notin N_{j}^{K}\). Obtain the adjacency matrix \(\mathbf{W}=(w_{ij})\). The data now form a weighted graph with vertices \(\mathbf{v}\), with edges \(\mathbf{e}=\{e_{ij}\}\), and weights \(\mathbf{W}=\{w_{ij}\}\), where \(e_{ij}\) connects \(v_{i}\) with \(v_{j}\) with distance \(w_{ij}\).
3. Let \(\mathbf{\Lambda}=\{\lambda_{ij}\}\) be a diagonal matrix with \(\lambda_{ii}=\sum_{j}w_{ij}\) and define the graph Laplacian as \(\mathbf{L}=\mathbf{\Lambda}-\mathbf{W}\). Then \(\mathbf{L}\) is positive semi-definite, so let \(\widehat{\mathbf{Y}}\) be the \(d\times n\) matrix that minimizes \(\sum_{i,j}w_{ij}\left\|\mathbf{y}_{i}-\mathbf{y}_{j}\right\|^{2}=2\,tr(\mathbf{Y}\mathbf{L}\mathbf{Y}^{T})\). Then \(\widehat{\mathbf{Y}}\) can be used to embed \(\mathscr{M}\) into a \(d\)-dimensional space \(\mathscr{Y}\), whose \(i\)-th column represents the coordinates of \(v_{i}\) in \(\mathscr{Y}\). A minimal implementation sketch follows the list.
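The code below performs this construction with scikit-learn's `SpectralEmbedding`, whose spectral embedding coincides with the Laplacian Eigenmaps construction; the data and parameter values are again hypothetical.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # hypothetical point cloud

# affinity='rbf' builds the Gaussian weights w_ij = exp(-gamma*||v_i - v_j||^2)
# of step 2, with gamma = 1/(2*sigma^2); the embedding coordinates come from
# the bottom eigenvectors of the graph Laplacian L of step 3.
emb = SpectralEmbedding(n_components=3, affinity='rbf', gamma=0.05)
Y = emb.fit_transform(X)
print(Y.shape)                   # (200, 3)
```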
### Fast ICA
The Fast Independent Component Analysis (Fast-ICA) algorithms were introduced by Hyvarinen (1999). As above, the data \(\mathbf{v}\) is considered to be from a smooth manifold
\(\mathscr{M}\). It is assumed that the data \(\mathbf{v}\) is represented as an \(n\times m\) matrix \((v_{ij})\) that can be flattened into a vector of length \(nm\). As in Principal Component Analysis (PCA), Factor Analysis, Projection Pursuit, or Independent Component Analysis (ICA), by considering the data as an \(n\times m\)-dimensional observed random variable, the goal is to determine a matrix \(\mathbf{W}\) such that \(\mathbf{s=W^{T}v}\), where \(\mathbf{s}\) is an \(n\times m\)-dimensional random variable having desirable properties such as optimal dimension reduction, or other interesting statistical properties such as minimal variance. Optimally, the components of \(\mathbf{s}\) should provide source separation (the original data source \(\mathbf{v}\) is assumed corrupted with noise) and feature extraction, and be independent of each other. In regular ICA, the matrix \(\mathbf{W}\) is found by minimizing the mutual information, a measure of dependence between given random variables. In fast ICA algorithms, the matrix \(\mathbf{W}\) is found by using a Newton fixed point approach, with an objective function taken as the negentropy approximation \(J_{G}(\mathbf{W})=\left(\mathbb{E}[G(\mathbf{W^{T}v})]-\mathbb{E}[G(z)]\right)^{2}\), where it is assumed that \(\mathbf{W}\) is such that \(\mathbb{E}[(\mathbf{W^{T}v})^{2}]=1\), and \(z\) is a standard normal random variable. \(G\) is a function referred to as the contrast function, whose common choices include but are not limited to \(G(u)=\alpha^{-1}\log(\cosh(\alpha u)),G(u)=-\sigma^{-1}e^{-0.5\sigma u^{2}},G(u )=0.25u^{4}\), where \(\alpha\in[1,2]\) and \(\sigma\approx 1\). From a dynamical system point of view, the fixed point is locally asymptotically stable with the exception of \(G(u)=0.25u^{4}\) where stability becomes global. For simplification purposes, let \(g(x)=G^{\prime}(x)\). The key steps are:
1. Data preparation: it consists of centering the data \(\mathbf{v}\) with respect to the columns to obtain \(\mathbf{v}^{c}\). That is, \(v^{c}_{ij}=v_{ij}-\dfrac{1}{m}\sum_{k=1}^{m}v_{ik}\), for \(i=1,2,\cdots,n\). The centered data are then whitened, that is, \(\mathbf{v}^{c}\) is linearly transformed into \(\mathbf{v}^{c}_{w}\), a matrix of uncorrelated components. This is accomplished through an eigenvalue decomposition of the covariance matrix \(\mathbf{C=v^{c}(v^{c})^{T}}\) to obtain two matrices \(\mathbf{V},\mathbf{E}\), respectively of eigenvectors and eigenvalues, so that \(\mathbf{C}=\mathbf{VEV^{T}}\). The whitened data are found as \(\mathbf{v^{c}_{w}=E^{-1/2}V^{T}v^{c}}\) and simply referred to again as \(\mathbf{v}\) for simplicity.
2. Component extraction: Let \(F(\mathbf{W})=\mathbb{E}[\mathbf{v}g(\mathbf{W^{T}v})]-\beta\mathbf{W}\) for the constant \(\beta=\mathbb{E}[\mathbf{W}_{a}^{T}\mathbf{v}\,g(\mathbf{W}_{a}^{T}\mathbf{v})]\), where \(\mathbf{W}_{a}\) is the optimal weight matrix. Applying the Newton scheme \((x_{n+1}=x_{n}-F(x_{n})[F^{\prime}(x_{n})]^{-1})\) to the differentiable function \(J_{G}\), we have:
   * Select a random starting vector \(\mathbf{W}_{0}\).
   * For \(n\geq 0\), \(\mathbf{W}_{n+1}=\mathbb{E}[\mathbf{v}g(\mathbf{W^{T}_{n}v})]-\mathbb{E}[g^{\prime}(\mathbf{W^ {T}_{n}v})]\mathbf{W}_{n}\).
   * We then normalize \(\mathbf{W}_{n+1}\) as \(\dfrac{\mathbf{W}_{n+1}}{\left\lVert\mathbf{W}_{n+1}\right\rVert}\).
   * We repeat until a suitable convergence level is reached.
   * From the last matrix \(\mathbf{W}\) obtained, we let \(\mathbf{s=W^{T}v}\). A minimal implementation sketch follows the list.
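The code below applies scikit-learn's `FastICA`, which carries out the centering and whitening of step 1 internally and runs the fixed-point iteration of step 2; the two sources and the mixing matrix are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]   # two hypothetical sources
A = np.array([[1.0, 0.5], [0.4, 1.0]])             # hypothetical mixing matrix
V = S @ A.T                                        # observed mixtures

# fun='logcosh' corresponds to the contrast function G(u) = log(cosh(alpha*u))/alpha.
ica = FastICA(n_components=2, fun='logcosh', random_state=0)
S_hat = ica.fit_transform(V)                       # estimated sources s = W^T v
print(S_hat.shape)                                 # (2000, 2)
```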
### Kernel Ridge Regression
The Kernel Ridge Regression (KRR) is constructed as follows: as above, the data \(\mathbf{v}\) is considered to be from a smooth manifold \(\mathscr{M}\) of dimension say \(d\). It is assumed that the data \(\mathbf{v}\) is represented as an \(n\times m\) matrix \(\{v_{ij}\}\) that can be flattened into a vector of length \(nm\). Suppose we are in possession of data \(\mathbf{u}=(u_{1},u_{2},\cdots,u_{n})\) corresponding to a
response variable and covariates given as \(\mathbf{v}=\left(\mathbf{v_{1},v_{2},\cdots,v_{n}}\right)\) where \(\mathbf{v_{i}}=(v_{ij})^{T}\) for \(j=1,2,\cdots,m\). With the Least Squares method, one can find the best linear model between the covariates \(\mathbf{v}=(v_{i})\) and the response \(\mathbf{u}=(u_{i})\) by minimizing the objective function \(L(\mathbf{W})=\dfrac{1}{2}\sum_{i=1}^{n}(u_{i}-\mathbf{W^{T}}\mathbf{v}_{i})^{2}\), where \(\mathbf{W}\) is an \(m\times 1\) vector. Least squares methods are notorious for overfitting. The Ridge regression is a compromise that uses a penalized objective function such as \(L(\mathbf{W})=\dfrac{1}{2}\sum_{i=1}^{n}(u_{i}-\mathbf{W^{T}}\mathbf{v}_{i})^{2}+ \dfrac{\lambda}{2}\left\|\mathbf{W}\right\|^{2}\). The solution can be found as \(\mathbf{W}=\left(\lambda I+\sum_{i=1}^{n}\mathbf{v_{i}}\mathbf{v}_{i}^{T} \right)^{-1}\left(\sum_{i=1}^{n}u_{i}\mathbf{v}_{i}\right)\). In case the true nature of the relationship between the response and covariates is nonlinear, we can replace \(\mathbf{v}_{i}\) with \(\varphi(\mathbf{v}_{i})\) where \(\varphi\) is a nonlinear function \(\mathbb{R}^{m}\rightarrow\mathbb{R}^{M}\), typically with \(M\gg m\). In particular, if the response is qualitative, that is, say labels, then we have a classification problem and \(\varphi\) is referred to as a feature map. Note that when using \(\varphi\), the number of dimensions of the problem is considerably high. Put \(\Phi=\varphi(\mathbf{v})=(\varphi(v_{1}),\varphi(v_{2}),\cdots,\varphi(v_{n}))\). Replacing \(\mathbf{v_{i}}\) with \(\varphi(\mathbf{v_{i}})\), the solution above becomes \(\mathbf{W}=\left(\lambda I+\sum_{i=1}^{n}\varphi(\mathbf{v_{i}})\varphi(\mathbf{v_{i}})^{ T}\right)^{-1}\left(\sum_{i=1}^{n}u_{i}\varphi(\mathbf{v}_{i})\right)=(\lambda I+ \Phi\Phi^{T})^{-1}\Phi\mathbf{u}^{T}\). Consider the following identity \(AB^{T}(C+BAB^{T})^{-1}=(A^{-1}+B^{T}C^{-1}B)^{-1}B^{T}C^{-1}\) for given invertible matrices \(A,C\) and a matrix \(B\). Applying this with \(A=I\), \(C=\lambda I\) and \(B=\Phi\) we have \(\mathbf{W^{T}}=\mathbf{u}\left[\Phi^{T}(\lambda I+\Phi\Phi^{T})^{-1}\right]=\mathbf{u} \left[(\lambda I+\Phi^{T}\Phi)^{-1}\Phi^{T}\right]\). Therefore, given a new value \(\mathbf{v}_{\mathrm{new}}\), the predicted value is \(y_{\mathrm{new}}=\mathbf{W}^{T}\varphi(\mathbf{v}_{\mathrm{new}})=\mathbf{u}(\Phi^{T}\Phi+\lambda I)^{-1}\Phi^{T} \varphi(\mathbf{v}_{\mathrm{new}})=\mathbf{u}(K+\lambda I)^{-1}\kappa(\mathbf{v}_{\mathrm{new}})\), where \(K=(K(\mathbf{v}_{i},\mathbf{v}_{j}))_{i,j=1}^{n}=\Phi^{T}\Phi\) with entries \(K(\mathbf{v}_{i},\mathbf{v}_{j})=\varphi(\mathbf{v}_{i})^{T}\varphi(\mathbf{v}_{j})\), and \(\kappa(\mathbf{v}_{\mathrm{new}})=\left(K(\mathbf{v}_{i},\mathbf{v}_{\mathrm{new}})\right)_{i=1}^{n}\). \(K\) is referred to as the kernel matrix, which is the only quantity that needs to be calculated, thereby significantly reducing the computational time and dimensionality of the problem. In practice, we may use a linear kernel \(K(\mathbf{x},\mathbf{y})=\mathbf{x}^{T}\mathbf{y}\), or a Gaussian kernel \(K(\mathbf{x},\mathbf{y})=e^{-\sigma\left\|\mathbf{x}-\mathbf{y}\right\|^{2}}\), where \(\left\|\cdot\right\|\) is a norm in \(\mathbb{R}^{m}\) and \(\sigma>0\) is a given real constant. A minimal implementation sketch follows.
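The code below fits scikit-learn's `KernelRidge` with the Gaussian kernel on hypothetical one-dimensional data; `alpha` plays the role of the penalty \(\lambda\) and `gamma` that of \(\sigma\).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))               # hypothetical covariates
u = np.sin(X).ravel() + 0.1 * rng.normal(size=200)  # hypothetical response

# kernel='rbf' gives K(x, y) = exp(-gamma * ||x - y||^2), the Gaussian
# kernel above; alpha is the ridge penalty lambda.
krr = KernelRidge(alpha=0.1, kernel='rbf', gamma=0.5)
krr.fit(X, u)
u_new = krr.predict(np.array([[0.5]]))              # prediction at a new point
print(float(u_new[0]))
```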
### t-SNE
Stochastic Neighbor Embedding (SNE) was proposed by Hinton and Roweis (2002). t-SNE later followed and was proposed by van der Maaten and Hinton (2008). t-distributed stochastic neighbor embedding (t-SNE) is a dimension reduction method that amounts to assigning the data points locations in a two- or three-dimensional map. As above, we consider the data \(\mathbf{v}=(v_{ij})=(v_{k})\) (\(k=1,2,\cdots,N\) with \(N=n\times m\)) to be from a smooth manifold \(\mathscr{M}\) of high dimension, say \(d\). The main steps of the method are:
* Calculate the asymmetrical probabilities \(p_{kl}\) as \(p_{kl}=\frac{e^{-\delta_{kl}}}{\sum_{s\neq k}e^{-\delta_{ks}}}\), where \(\delta_{kl}=\frac{\left\|v_{k}-v_{l}\right\|^{2}}{2\sigma_{k}^{2}}\) represents the dissimilarity between \(v_{k}\) and \(v_{l}\) and \(\sigma_{k}\) is a parameter selected by the experimenter or by binary search. \(p_{kl}\) represents the conditional probability that datapoint \(v_{l}\) is in the neighborhood of datapoint \(v_{k}\), if neighbors were selected proportionally to their probability density under a normal distribution centered at \(v_{k}\) with variance \(\sigma_{k}^{2}\).
* Assuming that the low dimensional data are \(\mathbf{u}=(u_{k}),\ k=1,2,\cdots,N\), the corresponding dissimilarity probabilities \(q_{kl}\) are calculated under constant variance as \(q_{kl}=\frac{e^{-d_{kl}}}{\sum_{k\neq l}e^{-d_{kl}}}\), where \(d_{kl}=\left\|u_{k}-u_{l}\right\|^{2}\) in the case of SNE and as \(q_{kl}=\frac{(1+d_{kl})^{-1}}{\sum_{k\neq l}\left(1+d_{kl}\right)^{-1}}\), for t-SNE.
* Then, we minimize the Kullback-Leibler divergence between \(p_{kl}\) and \(q_{kl}\) given as \(L=\sum_{k=1}^{N}\sum_{l=1}^{N}p_{kl}\log\left(\frac{p_{kl}}{q_{kl}}\right)\), using the gradient descent method with a momentum term with the scheme \(\mathbf{w}^{t}=\mathbf{w}^{t-1}+\eta\frac{\partial L}{\partial\mathbf{u}}+\alpha(t)(\mathbf{ w}^{t-1}-\mathbf{w}^{t-2})\) for \(t=2,3,\cdots,T\) for some given \(T\). Note that \(\mathbf{w}^{0}=(u_{1},u_{2},\cdots,u_{N})\sim N(0,10^{-4}\mathbf{I})\), where \(\mathbf{I}\) is the \(N\times N\) identity matrix, \(\eta\) is a constant representing a learning rate, and \(\alpha(t)\) is \(t\)-th momentum iteration. We note that \(\frac{\partial L}{\partial\mathbf{u}}=\left(\frac{\partial L}{\partial u_{k}}\right)\) for \(k=1,2,\cdots,N\) where \(\frac{\partial L}{\partial u_{k}}=4\sum_{l=1}^{N}(p_{kl}-q_{kl})(u_{k}-u_{l}) (1+d_{kl})^{-1}\).
* Then we use \(\mathbf{u}=\mathbf{w}^{T}\) as the low dimensional representation of \(\mathbf{v}\).
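For illustration, the code below computes such a two-dimensional map with scikit-learn's `TSNE` on a hypothetical point cloud; `perplexity` governs the binary search for the bandwidths \(\sigma_{k}\) and `learning_rate` is the rate \(\eta\) of the gradient descent above.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))    # hypothetical high dimensional data

# n_components=2 asks for a two-dimensional map u of the data v.
tsne = TSNE(n_components=2, perplexity=30.0, learning_rate=200.0, random_state=0)
Y = tsne.fit_transform(X)         # the low dimensional representation u
print(Y.shape)                    # (300, 2)
```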
## 3 Persistent Homology
In the sequel, we will introduce the essential ingredients needed to understand and compute persistent homology.
### Simplicial complexes
**Definition 6**.: _A real \(d\)-simplex \(S\) is a topological manifold of dimension \(d\) that represents the convex hull of \(d+1\) affinely independent points. In other words:_

\[S=\left\{(t_{0},t_{1},\cdots,t_{d})\in\mathbb{R}^{d+1}:t_{i}\geq 0\quad\text{and }\sum_{i=0}^{d}t_{i}=1\right\}\;. \tag{3.1}\]
Example 1. A \(0\)-simplex is a point, a \(1\)-simplex is an edge, a \(2\)-simplex is a triangle, a \(3\)-simplex is a tetrahedron, a \(4\)-simplex is a pentachoron, etc.
**Remark 7**.: _We observe that a \(d\)-simplex \(S\) can also be denoted as_
\[S=[V_{0},V_{1},\cdots,V_{d}],\quad\text{where }V_{0},V_{1},\cdots,V_{d}\text{ are the vertices of }S\;.\]
_We also note that the face \([V_{0},\cdots,V_{i}]\) spanned by \(i+1\) of the vertices has dimension \(i\)._
**Definition 8**.: _Given a simplex \(S\), a face of \(S\) is another simplex \(R\) such that \(R\subseteq S\) and such that the vertices of \(R\) are also vertices of \(S\)._
**Example 2**.: Given a \(3\)-simplex (a tetrahedron), it has \(4\) different \(2\)-simplices or \(2\)-dimensional faces, each of them with three \(1\)-simplices or \(1\)-dimensional faces, each with two \(0\)-simplices or \(0\)-dimensional faces.
**Definition 9**.: _A simplicial complex \(\Sigma\) is a topological space formed by different simplices not necessarily of the same dimension which have to satisfy the gluing condition, that is:_
1. _Given_ \(S_{i}\in\Sigma\)_, its face_ \(R_{i}\in\Sigma\)_._
2. _Given_ \(S_{i},S_{j}\in\Sigma\)_, either_ \(S_{i}\cap S_{j}=\emptyset\) _or_ \(S_{i}\cap S_{j}=R_{i}=R_{j}\)_, the faces of_ \(S_{i}\) _and_ \(S_{j}\) _respectively._
Figure 1: An illustration of \(0\), \(1\), \(2\), \(3\), and \(4\)-simplices.
It is important to observe that a simplicial complex can be defined very abstractly. Indeed,
**Definition 10**.: _A simplicial complex \(\Sigma=\left\{S:S\subseteq\Omega\right\}\) is a collection of non-empty subsets of a set \(\Omega\) such that_
1. _For all_ \(\omega\in\Omega\)_, then_ \(\left\{\omega\right\}\in\Sigma\)_._
2. _For any non-empty set_ \(U\) _such that_ \(U\subseteq S\) _for some_ \(S\in\Sigma\)_, then_ \(U\in\Sigma\)_._
Example 3.Let \(\Omega=\left\{1,2,3,4\right\}\). We can define the following simplicial complexes on \(\Omega\).
1. \(\Sigma_{1}=\left\{\left\{1\right\},\left\{2\right\},\left\{3\right\},\left\{4 \right\},\left\{1,2\right\},\left\{1,3\right\},\left\{2,3\right\},\left\{1,2, 3\right\}\right\}\;.\)
2. \(\Sigma_{2}=\mathscr{P}(\Omega)\setminus\left\{\emptyset\right\}\), where \(\mathscr{P}(\Omega)\) is the set of all subsets of \(\Omega\).
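As an illustration, and assuming the GUDHI library is available, the sketch below builds the simplicial complex \(\Sigma_{1}\) of Example 3 (with \(0\)-indexed vertices); `insert` automatically adds all faces of a simplex, which enforces the downward-closure condition of Definition 10.

```python
import gudhi

st = gudhi.SimplexTree()
st.insert([0, 1, 2])   # the 2-simplex {1,2,3}, together with all its faces
st.insert([3])         # the isolated vertex {4}

print(st.num_simplices())                 # 8 simplices in Sigma_1
for simplex, _ in st.get_filtration():    # list every simplex of the complex
    print(simplex)
```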
### Homology and persistent homology
**Definition 11**.: _Let \(\Sigma\) be a simplicial complex. We define the Abelian group generated by the \(j\)-simplices of \(\Sigma\) as \(C_{j}(\Sigma)\). We define a boundary operator associated with \(C_{j}(\Sigma)\) as a homomorphism_
\[\partial_{j}:C_{j}(\Sigma)\to C_{j-1}(\Sigma)\;.\]
_We define the chain complex associated with \(\Sigma\) as the collection of pairs_
\[C(\Sigma)=\left\{(C_{j}(\Sigma),\partial_{j}),j\in\mathbb{Z}\right\}\;.\]
Now we can define a homology group associated with a simplicial complex.
Figure 2: Example of a simplicial complex. \(J\) is a \(0\)-simplex, \(A\) and \(D\) are \(1\)-simplices, \(B,C,G\), and \(H\) are \(2\)-simplices, \(E\) and \(F\) are \(3\)-simplices, and \(I\) is a \(4\)-simplex. We note that \(A\cap B\) is a \(0\)-simplex. \(B\cap C\) is a \(1\)-simplex and a face of \(B\) and \(C\) respectively. \(E\cap F\) is a \(2\)-simplex and a face of \(E\) and \(F\). \(G\cap H\) is a \(1\)-simplex and \(I\cap H\) is a \(1\)-simplex.
**Definition 12**.: _Given a simplicial complex \(\Sigma\), put \(A_{j}(\Sigma):=kern(\partial_{j})\) and \(B_{j}(\Sigma):=Im(\partial_{j+1})\). Then the \(j\)th homology group \(\mathbb{H}_{j}(\Sigma)\) of \(\Sigma\) is defined as the quotient group of \(A_{j}(\Sigma)\) by \(B_{j}(\Sigma)\), that is,_
\[\mathbb{H}_{j}(\Sigma)=\frac{A_{j}(\Sigma)}{B_{j}(\Sigma)}\;.\]
_What this reveals is the presence of "holes" in a given shape._
**Remark 13**.: _It is important to observe that \(\mathbb{H}_{j}(\Sigma)=\frac{<j\text{-dimensional cycles}>}{<j\text{-dimensional boundaries}>}\), where \(<U>\) stands for the span of \(U\), and a cycle is simply a shape similar to a loop but without necessarily a starting point. Another important remark is that the boundary operator can indeed be defined on a \(j\)-simplex as_
\[\partial_{j}[V_{0},\cdots,V_{j}]:=\sum_{k=0}^{j}(-1)^{k}[V_{0},\cdots,\widehat{V}_{k},\cdots,V_{j}]\;,\]
_where \(\widehat{V}_{k}\) means omitting the vertex \(V_{k}\). This shows that \(\partial_{j}\) sends a \(j\)-simplex to a sum of \((j-1)\)-simplices. Another remark is that \(\partial_{j-1}\circ\partial_{j}=0\) for \(0\leq j\leq d\)._
Now that we know that homology reveals the presence of "holes", we need to find a way of determining how to count these "holes".
**Definition 14**.: _Given a simplicial complex \(\Sigma\), the \(j\)th Betti number \(b_{j}(\Sigma)\) is the rank of \(\mathbb{H}_{j}(\Sigma)\) or_
\[b_{j}(\Sigma)=dim(A_{j}(\Sigma))-dim(B_{j}(\Sigma))\;.\]
_In other words, it is the smallest cardinality of a generating set of the group \(\mathbb{H}_{j}(\Sigma)\). In fact, since the elements of \(A_{j}(\Sigma)\) are \(j\)-dimensional cycles and those of \(B_{j}(\Sigma)\) are \(j\)-dimensional boundaries, the Betti number counts the number of independent \(j\)-cycles not representing the boundary of any collection of simplices of \(\Sigma\)._
**Example 4**.:
1. \(b_{0}\) is the number of connected components of the complex.
2. \(b_{1}\) is the number of tunnels and holes.
3. \(b_{2}\) is the number of shells around cavities or voids.
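As a sanity check on these definitions, here is a small Python sketch (our own illustration, not code from the paper) that computes Betti numbers over \(\mathbb{F}_{2}\) directly from the boundary matrices, using the identity \(b_{j}=\dim C_{j}-\operatorname{rank}\partial_{j}-\operatorname{rank}\partial_{j+1}\); over \(\mathbb{F}_{2}\) the signs in the boundary formula disappear.

```python
import numpy as np
from itertools import combinations

def betti_numbers(family, max_dim=2):
    """Betti numbers over F_2: b_j = dim C_j - rank(d_j) - rank(d_{j+1}),
    where d_j sends a j-simplex to the sum of its (j-1)-faces."""
    simplices = {d: sorted({tuple(sorted(s)) for s in family if len(s) == d + 1})
                 for d in range(max_dim + 2)}

    def rank(d):
        rows, cols = simplices.get(d - 1, []), simplices.get(d, [])
        if not rows or not cols:
            return 0
        M = np.zeros((len(rows), len(cols)), dtype=np.uint8)
        row_of = {s: i for i, s in enumerate(rows)}
        for j, s in enumerate(cols):
            for face in combinations(s, d):       # the (d-1)-faces of s
                M[row_of[face], j] = 1
        r = 0                                      # Gaussian elimination mod 2
        for c in range(M.shape[1]):
            piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
            if piv is None:
                continue
            M[[r, piv]] = M[[piv, r]]
            for i in range(M.shape[0]):
                if i != r and M[i, c]:
                    M[i] ^= M[r]
            r += 1
        return r

    return [len(simplices[d]) - rank(d) - rank(d + 1) for d in range(max_dim + 1)]

# Sigma_1 from Example 3: one filled triangle plus an isolated vertex.
sigma1 = [{1}, {2}, {3}, {4}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
print(betti_numbers(sigma1))   # [2, 0, 0]: two components, no holes or voids
```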
**Definition 15**.: _Let \(\Sigma\) be a simplicial complex and let \(N\) be a positive integer. A filtration of \(\Sigma\) is a nested family \(\Sigma_{N}^{F}:=\{\Sigma_{p},0\leq p\leq N\}\) of sub-complexes of \(\Sigma\) such that_
\[\Sigma_{0}\subseteq\Sigma_{1}\subseteq\Sigma_{2}\subseteq\cdots\subseteq \Sigma_{N}=\Sigma\;.\]
Now let \(\mathbb{F}_{2}\) be the field with two elements and let \(0\leq p\leq q\leq N\) be two integers. Since \(\Sigma_{p}\subseteq\Sigma_{q}\), the inclusion map \(Incl_{pq}:\Sigma_{p}\to\Sigma_{q}\) induces an \(\mathbb{F}_{2}\)-linear map \(g_{pq}:\mathbb{H}_{j}(\Sigma_{p})\to\mathbb{H}_{j}(\Sigma_{q})\). We can now define, for any \(0\leq j\leq d\), the \(j\)-th persistent homology of a simplicial complex \(\Sigma\).
**Definition 16**.: _Consider a simplicial complex \(\Sigma\) with filtration \(\Sigma_{N}^{F}\), for some positive integer \(N\). The \(j\)-th persistent homology \(\mathbb{H}_{j}^{p\to q}(\Sigma)\) of \(\Sigma\) is defined as the pair:_
\[\mathbb{H}_{j}^{p\to q}(\Sigma,\mathbb{F}_{2}):=\left(\left\{\mathbb{H}_{j}( \Sigma_{p}),0\leq p\leq N\right\},\left\{g_{pq},0\leq p\leq q\leq N\right\} \right)\,.\]
In a sense, the \(j\)-th persistent homology provides more refined information than the homology of the simplicial complex, in that it tracks how features such as connected components, tunnels and holes, and shells around voids change through the filtration process. It can be visualized using a "barcode" or a persistent diagram. The following definition is borrowed from Ghrist (2008):
**Definition 17**.: _Consider a simplicial complex \(\Sigma\), a positive integer \(N\), and two integers \(0\leq p\leq q\leq N\). The barcode of the \(j\)-th persistent homology \(\mathbb{H}_{j}^{p\to q}(\Sigma,\mathbb{F}_{2})\) of \(\Sigma\) is a graphical representation of \(\mathbb{H}_{j}^{p\to q}(\Sigma,\mathbb{F}_{2})\) as a collection of horizontal line segments in a plane whose horizontal axis corresponds to a parameter and whose vertical axis represents an arbitrary ordering of homology generators._
We finish this section with the introduction of the Wasserstein and Bottleneck distances, used for the comparison of persistent diagrams.
**Definition 18**.: _Let \(p>1\) be a real number. Given two persistent diagrams \(X\) and \(Y\), the \(p\)-th Wasserstein distance \(W_{p}(X,Y)\) between \(X\) and \(Y\) is defined as_
\[W_{p}(X,Y):=\inf_{\eta:X\to Y}\left(\sum_{x\in X}\left\|x-\eta(x)\right\|_{\infty}^{p}\right)^{1/p}\,,\]
_where \(\eta\) ranges over perfect matchings between the intervals of \(X\) and \(Y\) (intervals may also be matched to their projections on the diagonal). The Bottleneck distance is obtained when \(p=\infty\), that is, it is given as_
\[W_{\infty}(X,Y):=\inf_{\eta:X\to Y}\sup_{x\in X}\left\|x-\eta(x)\right\|_{ \infty}\,.\]
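For small diagrams the Bottleneck distance can be computed directly; the sketch below assumes the `gudhi` Python package is available (an assumption on our part, since the paper's own computations were done with the R TDA package), whose `bottleneck_distance` routine takes two arrays of (birth, death) pairs.

```python
import numpy as np
import gudhi  # assumed available; exposes an exact bottleneck routine

# Two toy persistence diagrams: each row is a (birth, death) pair.
X = np.array([[0.0, 2.0], [1.0, 4.0]])
Y = np.array([[0.0, 2.5], [1.5, 4.0], [3.0, 3.1]])

# Points may also be matched to their projections on the diagonal,
# so the short-lived feature (3.0, 3.1) in Y is cheap to absorb.
print(gudhi.bottleneck_distance(X, Y))
```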
## 4 Applications to Data
In the presence of data, simplicial complexes will be replaced by sets of data indexed by a parameter, therefore transforming these sets into parametrized topological entities. On these parametrized topological entities, the notions of persistent homology introduced above can be computed, in particular the Betti numbers, in the form of a "barcode". To see how this could be done, let us consider the following definitions:
**Definition 19**.: _For a given collection of points \(\left\{s_{\delta}\right\}\) in a manifold \(\mathscr{M}\) of dimension \(n\), its Čech complex \(C_{\delta}\) is a simplicial complex formed by \(d\)-simplices obtained from a subcollection \(\left\{x_{\delta,k},0\leq k\leq d,0\leq d\leq n\right\}\) of points whose \(\delta/2\)-ball neighborhoods have a point in common (a non-empty joint intersection)._
**Definition 20**.: _For a given collection of points \(\left\{s_{\delta}\right\}\) in a manifold \(\mathscr{M}\) of dimension \(n\), its Rips complex \(R_{\delta}\) is a simplicial complex formed by \(d\)-simplices obtained from a subcollection \(\left\{x_{\delta,k},0\leq k\leq d,0\leq d\leq n\right\}\) of points which are pairwise within a distance of \(\delta\)._
**Remark 21**.: _1. It is worth noting that in practice, Rips complexes are easier to compute than Čech complexes, because the exact definition of the distance on \(\mathscr{M}\) may not be known. 2. More importantly, from a data analysis point of view, Rips complexes are good approximations (estimators) of Čech complexes. Indeed, a result from de Silva and Ghrist (2007) shows that given \(\delta>0\), there exists a chain of inclusions \(R_{\delta}\hookrightarrow C_{\delta\sqrt{2}}\hookrightarrow R_{\delta\sqrt{2}}\). 3. Though Rips complexes and barcodes seem like challenging objects to wrap one's head around, there is an ever-growing list of algorithms in various languages that can be used for their visualization. All the analysis below has been done using R, in particular the TDA package in R._
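The \(\mathbb{H}_{0}\) part of a Rips barcode can even be computed without any TDA library: every point is born at \(\delta=0\), and a bar dies exactly when a minimum-spanning-tree edge merges two components, so the finite death times are the MST edge lengths. The following self-contained sketch (our own, for illustration) uses Kruskal's algorithm with union-find.

```python
import numpy as np

def h0_barcode(points):
    """H_0 barcode of the Rips filtration of a point cloud.

    Each point spawns a component at delta = 0; a merge at an MST edge
    kills one bar, and one bar (the last component) lives forever."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    deaths = []
    for length, i, j in edges:              # Kruskal's algorithm
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(length)
        if len(deaths) == n - 1:
            break
    return [(0.0, t) for t in deaths] + [(0.0, np.inf)]

rng = np.random.default_rng(0)
pts = rng.uniform(-5, 5, size=(100, 2))    # mimics the sampling below
bars = h0_barcode(pts)
print(bars[:3], "...", bars[-1])
```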
### Randomly generated data
We generated 100 data points sampled randomly in the square \([-5,5]\times[-5,5]\). In Figure 3 and Figure 4 below, we illustrate how the Rips complexes and the barcodes change through a filtration.
Figure 3: Example of the evolution of Rips complexes \(\{R_{\delta}\}\) through a filtration with parameter \(\delta\). As we move from left to right, it shows how sample points (blue dots) first form 0-simplices, then 1-simplices, and so on. In particular, it shows how connected components progressively evolve to form different types of holes.
### EEG epilepsy data
#### 4.2.1 Data Description
The main purpose of the manuscript is to analyze EEG data. We will consider a publicly available (at http://www.meb.unibonn.de/epileptic/science/physik/eegdata.html) epilepsy data set called here EDATA for simplicity. The data consist of five sets A, B, C, D, and E, each containing 100 single-channel EEG segments of 23.6 seconds, each of which was selected after visual inspection for artifacts (such as acoustic and electrical shielding, separation of earth ground for the laboratory, interconnectivity of devices on the same phase and ground centrally and locally) and passed a weak stationarity criterion. Sets A and B were obtained from surface EEG recordings of five healthy subjects with eyes open and closed, respectively. Data were obtained in seizure-free intervals from five patients in the epileptogenic zone for set D and from the hippocampal formation of the opposite hemisphere of the brain for set C. Set E contains seizure activity, selected from all recording sites exhibiting ictal activity. Sets A and B were recorded extracranially, whereas sets C, D, and E were recorded intracranially. All EEG signals were recorded with the same 128-channel amplifier system, using an average common reference omitting electrodes containing pathological activity (C, D, and E) or strong eye movement artifacts (A and B). After 12-bit analog-to-digital conversion, the data were written continuously onto the disk of a data acquisition computer system at a sampling rate of 173.61 Hz. Band-pass filter settings were 0.53-40 Hz (12 dB/oct.).
Figure 4: Example of the evolution of barcodes through a filtration with parameter \(\delta\) for the same data as above. As we move from left to right, from top to bottom, it shows appearance and disappearance of lines (\(\mathbb{H}_{0}\)) and holes (\(\mathbb{H}_{1}\)) as the parameter \(\delta\) changes. It shows that certain lines and holes persist through the filtration process.
#### 4.2.2 Data analysis
The approach is to first embed the data into a manifold of high dimension. This was already done in Kwessi and Edwards (2021). The dimension \(d=12\) was found using the method of false nearest neighbors. Depending on the set used, the size of the data can be very large: for example \(4097\times 100\times 5=2{,}048{,}500\) data points, making it very challenging to analyze holistically. In Kwessi and Edwards (2021), we proposed to construct a complex structure (using all 100 channels for all 5 groups) whose volume changes per group. We would like to analyze the data further from a persistent homology point of view. This would mean analyzing 500 different persistent diagrams and making an inference. We note that the simplicial complexes of these data sets are very large (over 2 million simplices). Fortunately, we can use the Wasserstein distance to compare persistent diagrams. To clarify, we will use each of the dimension reduction methods introduced earlier, then proceed with the construction of persistent diagrams. We will then compare them by method and by sets (A, B, C, D, and E).
**Single-channel Analysis:**
Suppose we select at random one channel among the 100 from set D. Figure 5 below represents a 3-dimensional representation of the embedded data using Takens embedding method (Tak), plotted using the first three delayed coordinates \(x=x(t),y=x(t-\rho),z=x(t-2\rho)\) where \(\rho=\Delta t\), with \(\Delta t=\frac{1}{fs}=5.76\) ms, in Figure 5 (a), then the first three coordinates in the case of Kernel Ridge Regression (KRR\({}_{i}\)) in Figure 5 (b), Isomap (iso.\(i\)) in Figure 5 (c), Laplacian Eigen Maps (LEIM\({}_{i}\)) in Figure 5 (d), Fast Independent Component Analysis (ICA\({}_{i}\)) in Figure 5 (e), and t-Distributed Stochastic Neighbor Embedding (t-SNE\({}_{i}\)) in Figure 5 (f). From these three-dimensional scatter plots, we can visually observe that the t-SNE plot (Figure 5 (f)) is relatively different from the other five, since it seems to have larger voids. How different they are is difficult to tell with the naked eye. Figure 6 represents their corresponding barcodes. It is much clearer, looking at the persistent diagram for t-SNE (Figure 6 (f)), that it is very different from the other five when looking at \(\mathbb{H}_{0},\mathbb{H}_{1}\), and \(\mathbb{H}_{2}\). Now, a visual comparison is not enough to really assert a significant difference. Using the Bottleneck distance, we calculate the distance between the respective persistent diagrams, for \(\mathbb{H}_{0}\) in Table 1 (a) and for \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) in Table 1 (b) below. We observe from the first table that the Bottleneck distances at \(\mathbb{H}_{0}\) and \(\mathbb{H}_{2}\) for t-SNE are almost twice as large as for the other methods. They are comparable to that of LEIM at \(\mathbb{H}_{1}\). The other methods have comparable Bottleneck distances at \(\mathbb{H}_{0},\mathbb{H}_{1}\), and \(\mathbb{H}_{2}\), confirming what we already suspected visually in Figure 5 and Figure 6.
Figure 5: Scatterplots for a Takens projection method (a), KRR method (b), Isomap (c), LEIM (d), ICA (e), and t-SNE (f).
Figure 6: Barcodes for a Takens projection method (a), KRR method (b), Isomap (c), LEIM (d), ICA (e), and t-SNE (f).
The analysis above was done using a single channel, selected at random from set D. It seems to suggest that the t-SNE method is different from the other five dimension reduction methods discussed above. Strictly speaking, non-zero Bottleneck distances are an indication of structural topological differences. What they do not say, however, is whether the differences observed are significant. To address the issue of significance, we will perform a pairwise permutation test. Practically, from set \(j\) and channel \(i\), we will obtain a persistent diagram \(\mathscr{D}_{i}^{(j)}\sim\mathscr{P}^{(j)}\) where \(j\in\left\{1,2,3,4,5\right\},i\in\left\{1,2,\cdots,15\right\}\), and \(\mathscr{P}^{(j)}\) is the true underlying distribution of persistent diagrams; see Mileyko et al. (2011) for the existence of these distributions. We will conduct a pairwise permutation test with null hypothesis \(H_{0}:\mathscr{P}^{(j)}=\mathscr{P}^{(j^{\prime})}\) and alternative hypothesis \(H_{1}:\mathscr{P}^{(j)}\neq\mathscr{P}^{(j^{\prime})}\). We will use landscape functions (see Berry et al. (2020)) to obtain test statistics. The p-values obtained were found to be very small, suggesting that the differences above are indeed all significant across \(\mathbb{H}_{0},\mathbb{H}_{1}\), and \(\mathbb{H}_{2}\).
#### Multiple-channel Analysis:
(a) Within-set analysis
In each set, we make a random selection of 15 channels, and we compare the Bottleneck distances obtained. This means having 15 distance tables such as Table 1 (b) above. There will be consistency if the cell value \(k(i,j)\) in Table \(k\), where \(k\in\left\{1,2,\cdots,15\right\}\) and \(i,j\in\left\{1,2,\cdots,6\right\}\) index the methods, is barely different from \(k^{\prime}(i,j)\) of Table \(k^{\prime}\). Large differences will be an indication of topological differences between the methods within the sets. In Figure 7 below, the \(y\)-axis represents Bottleneck distances and the \(x\)-axis represents channel indices. The red color indicates the Bottleneck distance between persistent diagrams on \(\mathbb{H}_{1}\) and the blue color on \(\mathbb{H}_{2}\), from data generated by each of the methods above. We see that overall, while there are small fluctuations from channel to channel on \(\mathbb{H}_{1}\), the largest fluctuations actually occur on \(\mathbb{H}_{2}\). A deeper analysis reveals that, in fact, the large fluctuations are due to large distances between t-SNE and the other five methods. This confirms the earlier observations (refer to Figure 6 and Table 1 above) that the persistent diagrams are really different on \(\mathbb{H}_{2}\). Topologically, this means that the shells around cavities or voids that persist are not the same when using different dimension reduction methods. However, the small fluctuations on \(\mathbb{H}_{1}\) do not mean that the tunnels and holes that persist are the same. Rather, what they indicate is that they may not all be very different.
**Table 1 (a): Bottleneck distances between the persistent diagrams above at \(\mathbb{H}_{0}\).**

| \(\mathbb{H}_{0}\) | Tak | Iso | KRR | ICA | LEM |
|---|---|---|---|---|---|
| Iso | 0.0945019 | | | | |
| KRR | 0.0957546 | 0.0200305 | | | |
| ICA | 0.0882795 | 0.0157002 | 0.0071899 | | |
| LEM | 0.1678820 | 0.18128565 | 0.1247918 | 0.1205499 | |
| TSNE | 0.2238167 | 0.1730406 | 0.1817924 | 0.1759454 | 0.1162392 |

**Table 1 (b): Bottleneck distances at \(\mathbb{H}_{1}\) (lower triangle) and \(\mathbb{H}_{2}\) (upper triangle).**

| \(\mathbb{H}_{1}/\mathbb{H}_{2}\) | Tak | Iso | KRR | ICA | LEM | TSNE |
|---|---|---|---|---|---|---|
| Tak | | 0.0363205 | 0.0301992 | 0.0293631 | 0.0291247 | 0.0551774 |
| Iso | 0.0340282 | | 0.03303687 | 0.0290406 | 0.0236890 | 0.0598517 |
| KRR | 0.0317261 | 0.0273460 | | 0.0207599 | 0.0212138 | 0.0647935 |
| ICA | 0.0310771 | 0.0270919 | 0.0208086 | | 0.0242277 | 0.0611060 |
| LEM | 0.0607389 | 0.0725585 | 0.0702695 | 0.0682761 | | 0.0542615 |
| TSNE | 0.0757815 | 0.095921 | 0.0864587 | 0.0861522 | 0.0785030 | |
(b) Between-set analysis
To analyze the data of Bottleneck distances between sets, we need a summary statistic for each set from the data above. It is clear from Figure 7 that the mean would not be a great summary statistic on \(\mathbb{H}_{1}\), as there seem to be too many outliers. We will use the median instead and perform a pairwise Wilcoxon-Mann-Whitney test. Table 2 below shows the p-values on \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\). The take-away is that the last row of the table suggests that set E is statistically topologically different from the others on \(\mathbb{H}_{1}\), at significance level 0.05. In a way, this is a confirmation of the results obtained in Kwessi and Edwards (2021), where set E (seizure) was already shown to be statistically different from other sets.
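The pairwise tests of Table 2 can be reproduced schematically as follows; this is a sketch with randomly generated placeholder values (the real inputs are the per-channel median Bottleneck distances summarized in Figure 7), assuming `scipy` is available.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# medians[s] holds per-channel median Bottleneck distances for set s;
# random placeholders here stand in for the values behind Figure 7.
rng = np.random.default_rng(1)
medians = {s: rng.uniform(0.02, 0.08, size=15) for s in "ABCDE"}

sets = list(medians)
for a in range(len(sets)):
    for b in range(a + 1, len(sets)):
        stat, p = mannwhitneyu(medians[sets[a]], medians[sets[b]],
                               alternative="two-sided")
        print(f"{sets[a]} vs {sets[b]}: p = {p:.4f}")
```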
Figure 7: Bottleneck distances between the persistent diagrams for 15 channels within each set A, B, C, D, and E on \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) for each of the methods introduced above.
## 5 Concluding remarks
In this paper, we have revisited the mathematical descriptions of six dimension reduction methods. We have given a brief introduction to the very vast topic of persistent homology. We discussed how to apply persistent homology to data. In the presence of data (say in three dimensions) obtained either by projecting the data from high dimension into smaller dimension (as in Takens) or by performing some sort of dimension reduction, it is not always clear what we see or how different one method is compared to another. From their mathematical descriptions, they seem to represent different objects. Further, obtaining theoretically a clear discrimination procedure between these methods seems a daunting, if not outright impossible, task. Topology may offer a solution by looking at persistent artifacts through filtration. From Figure 5, it seems clear that the methods are different, but Figure 6 offers a different perspective. In the end, through calculation of Bottleneck distances and hypothesis tests, we can safely conclude that the methods are different topologically speaking, in that the connected components, the tunnels and holes, and the shells around cavities or voids do not match perfectly. Since these methods are indiscriminately used in many applications, the message is that replication of results from one method to the next may not be guaranteed in the grand scheme of things. It does not, however, render them useless. In fact, our analysis is limited to one data set, meaning that another data set may yield different conclusions. Further, due to the cost of the calculations, we were limited to only a handful of samples. Moreover, Wasserstein distances for \(p<\infty\) are extremely costly in time to calculate on a regular computer. Even for \(p=\infty\), the Bottleneck distance is also very costly in time to calculate, especially for \(\mathbb{H}_{0}\). This explains why, at some points, we did not provide the comparison on \(\mathbb{H}_{0}\). We can infer from this analysis that topological persistent homologies do change dramatically at seizure, a finding already obtained in previous analyses, see Kwessi and Edwards (2021). This suggests that looking at changes in homology landscapes could be a predictor of seizure. Given that some EEG epilepsy data are known to contain some deterministic chaos, it might be worthwhile to study whether persistent homology can also be used for a better understanding of chaotic data in dynamical systems.
**Table 2: P-values of Wilcoxon-Mann-Whitney tests between sets of median Bottleneck distances, on \(\mathbb{H}_{1}\) (lower triangle) and \(\mathbb{H}_{2}\) (upper triangle).**

| \(\mathbb{H}_{1}/\mathbb{H}_{2}\) | A | B | C | D | E |
|---|---|---|---|---|---|
| A | | 0.1975936 | 0.3049497 | 0.2467548 | 0.7432987 |
| B | 0.3202554 | | 0.3835209 | 0.5066311 | 0.1707835 |
| C | 0.0832231 | 0.1322987 | | 0.8356690 | 0.7088614 |
| D | 0.2012797 | 0.6292608 | 0.6292608 | | 0.5067258 |
| E | 0.0049325 | 0.0157855 | 0.0157855 | 0.0114901 | | |
2304.09468 | Secure Mobile Payment Architecture Enabling Multi-factor Authentication | The rise of smartphones has led to a significant increase in the usage of
mobile payments. Mobile payments allow individuals to access financial
resources and make transactions through their mobile devices while on the go.
However, the current mobile payment systems were designed to align with
traditional payment structures, which limits the full potential of smartphones,
including their security features. This has become a major concern in the
rapidly growing mobile payment market. To address these security concerns, in
this paper we propose new mobile payment architecture. This architecture
leverages the advanced capabilities of modern smartphones to verify various
aspects of a payment, such as funds, biometrics, location, and others. The
proposed system aims to guarantee the legitimacy of transactions and protect
against identity theft by verifying multiple elements of a payment. The
security of mobile payment systems is crucial, given the rapid growth of the
market. Evaluating mobile payment systems based on their authentication,
encryption, and fraud detection capabilities is of utmost importance. The
proposed architecture provides a secure mobile payment solution that enhances
the overall payment experience by taking advantage of the advanced capabilities
of modern smartphones. This will not only improve the security of mobile
payments but also offer a more user-friendly payment experience for consumers. | Hosam Alamleh, Ali Abdullah S. AlQahtani, Baker Al Smadi | 2023-04-19T07:30:18Z | http://arxiv.org/abs/2304.09468v1 | # Secure Mobile Payment Architecture Enabling Multi-factor Authentication
###### Abstract
The rise of smartphones has led to a significant increase in the usage of mobile payments. Mobile payments allow individuals to access financial resources and make transactions through their mobile devices while on the go. However, the current mobile payment systems were designed to align with traditional payment structures, which limits the full potential of smartphones, including their security features. This has become a major concern in the rapidly growing mobile payment market. To address these security concerns, in this paper we propose a new mobile payment architecture. This architecture leverages the advanced capabilities of modern smartphones to verify various aspects of a payment, such as funds, biometrics, location, and others. The proposed system aims to guarantee the legitimacy of transactions and protect against identity theft by verifying multiple elements of a payment. The security of mobile payment systems is crucial, given the rapid growth of the market. Evaluating mobile payment systems based on their authentication, encryption, and fraud detection capabilities is of utmost importance. The proposed architecture provides a secure mobile payment solution that enhances the overall payment experience by taking advantage of the advanced capabilities of modern smartphones. This will not only improve the security of mobile payments but also offer a more user-friendly payment experience for consumers.
Mobile payment, NFC,Architecture, Authentication
## I Introduction
Mobile payment is defined as using mobile devices such as smartphones, PDAs, smart watches, or any Near Field Communication (NFC) enabled devices to make payments [1]. It has also been defined as a financial process involving electronic mobile communication devices to initiate, authorize, and complete financial transactions [2]. A third definition is a successful economic transaction conducted as a business activity using an electronic device connected to a mobile network [3]. The third definition is closest to current digital-wallet-based mobile payment systems. Mobile payment systems are categorized based on proximity and business model. Proximity payments are grouped based on consumer location, e.g., in-store or remote, while the business model is differentiated by consumer-to-consumer or business-to-consumer relationships. This paper introduces an architecture that focuses on business-to-consumer mobile payment technologies. Other mobile payment systems include SMS and QR, but the most popular among digital wallets is NFC [4]. NFC usage is growing due to the increasing popularity of smartphones and their various applications. Unlike SMS payments, NFC payments are made in person at a store or compatible terminal by simply bringing the mobile device close to the terminal. This technology has garnered significant attention due to its ease of use for data exchange and its potential for integration into various features, as NFC technology allows for limitless possibilities [5].
Mobile payment systems allow customers to make electronic transactions using their smartphones or other mobile devices. These payments can be made in-store, online, or through mobile apps, and can include purchases made with credit or debit cards, as well as digital wallets and other forms of mobile money. Mobile payments have become increasingly popular in recent years, as they offer a convenient and secure alternative to traditional payment methods. They also enable new forms of commerce, such as mobile banking and peer-to-peer transactions. The utilization of mobile payment systems has been on the rise in the past few years. The mobile payment market was valued at USD 43.11 billion in 2021 and USD 55.34 billion in 2022, and it is expected to grow to USD 587.52 billion by 2030, at a compound annual growth rate of 37.1% during the forecast period [6]. Globally, East Asian countries have high adoption of mobile payment apps. In the US, 43.2% of the population use mobile payments, with Apple Pay, Google Pay, Samsung Pay, and Starbucks among the top vendors [7].
In general, mobile payment is considered more secure because of the extra protection added to the mobile app compared to standard credit card payments. To evaluate a payment system, there are several factors to consider, including authentication, encryption, and fraud detection. Authentication is essential to ensure that only authorized users can access the payment functionality. A system should use strong, multi-factor authentication methods, such as fingerprint or facial recognition. In turn, encryption ensures that payment data such as credit card numbers and account information are stored securely and transmitted over the network securely. Finally, a good payment system must have risk management features, such as fraud detection and prevention, to identify and stop suspicious activity.
Current mobile payment systems are built to be compatible with existing payment network infrastructure, designed primarily for physical card payments using chips or magnetic strips. However, these systems lack robust user authentication, relying mainly on possession of cards and, sometimes, PIN numbers. Security is crucial for mobile payments. Although standards like PCI DSS (Payment Card Industry Data Security Standard) are employed to maintain the CIA triad, security breaches can still occur, putting personal and payment card information at risk. To address this, a new payment architecture that incorporates authentication methods available in smartphones, such as location and biometrics, is introduced in this paper.
## II Background
The paper [8] highlights the growing attention that mobile commerce, specifically mPayment, is receiving from both business and academic communities. The authors note that the proliferation of mobile commerce, especially in the business-to-consumer sector, requires secure and easy-to-use payment methods that are widely available and globally accepted. The authors define mPayment as the ability to make payments using mobile devices such as smartphones and personal digital assistants that are equipped with radio frequency (RF) or near field communication (NFC) technology. The paper recognizes that while mPayment is still in its early stages, its acceptance is expected to grow rapidly in the coming years, particularly in Europe and Asia. However, the authors observe that the adoption of mPayment methods in the US has been slow, largely due to a lack of unified standards, security and privacy concerns, and the slow diffusion of mCommerce. The authors aim to provide a clear understanding of the state of mPayment and to explore the factors that will determine its adoption by US consumers. Moreover, the paper offers a blueprint for a cross-industry and cross-platform mPayment solution that offers consumers fast and convenient payment processes for both online and in-person transactions. This solution is intended to address the challenges currently facing mPayment adoption in the US and to promote its wider acceptance by consumers.
In recent years, the interest in mobile payment has grown significantly among consumers, leading to the potential for wider adoption in the near future. This PhD dissertation [9] brings together four studies that aim to understand the factors that influence the decision to adopt mobile payments and the variations across different environments and technologies. The research draws from the classic theories of new technology adoption, such as the Technology Acceptance Model (TAM), Theory of Planned Behavior (TPB), and the Theory of Reasoned Action (TRA), as well as more recent theories related to mobile services adoption. The study evaluates four models that incorporate various variables related to both the payments and the users. The research was conducted in Spain, Brazil, and Germany through self-administered web surveys with sample sizes of 168, 871, 423, and 2,210 individuals, respectively. The findings of this thesis reveal that the models have a high predictive power for the intention to use mobile payments, with values ranging from 56 to 71%. The results indicate that all variables play an important role in the adoption process, but attitude towards use and perceived usefulness stand out as the most significant. Personal innovation is relevant as an antecedent of intention and as a moderator of behavioral intention, and consumer interest in mobile services is also proven to be a relevant antecedent of mobile payment adoption. The originality and value of this thesis lie in its contribution to the academic field as one of the first studies to empirically test the determinants of consumer acceptance of payments through QR codes, NFC, and SMS, as well as the role of mobile marketing services in the adoption of mobile payment. Additionally, this study provides a consumer-focused perspective and offers recommendations and strategies to enhance the adoption of mobile payment and integrated mobile payment services.
The paper [10] explores the topic of mobile payment in virtual social networks (VSN) and the factors that determine its level of acceptance by consumers. The study employs a modified version of the classical technological acceptance models (TRA and TAM) to analyze the level of acceptance of mobile payment in VSN. The study proposes an integrated theoretical model named MPAM-VSN which takes into account the relative importance of external influences, ease of use, usefulness, attitude, trust, and risk in terms of the acceptance of mobile payment systems in VSN. The study also analyzes the potential moderating effect of users' experience with similar tools on their acceptance of mobile payment systems in VSN. The empirical results of the study showed that the proposed behavioral model MPAM-VSN was well-adjusted, suggesting that users' previous experience with similar tools increases their intention to use mobile payment systems in VSN. The results of the study have interesting implications for the diffusion of mobile payment systems in VSN.
There are several types of mobile payments, such as SMS, QR, and NFC [11, 12, 13, 14]; however, the most popular among digital wallets is NFC [15]. NFC, or Near Field Communication, is a type of wireless communication that utilizes RFID technology. It allows for two-way communication between devices within close proximity, usually less than 20 cm. This technology enables consumers to easily make payments by simply holding their mobile device close
to a merchant's POS terminal, allowing for the exchange of payment information [16, 17, 18]. A user may need to enter a secure PIN or password to approve the transaction. It is estimated that NFC mobile payments can be 15-30 seconds faster than swiping a traditional card and signing the receipt or entering a PIN [19]. NFC usage is growing due to the increasing popularity of smartphones and their various applications. Unlike SMS payments, NFC and QR payments are made in person at a store or compatible terminal by simply bringing the mobile device close to the terminal. This technology has garnered significant attention due to its ease of use for data exchange and its potential for integration into various features, as NFC technology allows for limitless possibilities [9]. NFC mobile payments offer several advantages over traditional mobile payment methods. For consumers, these advantages include reliability, security, ease of use, convenience, the ability to use the device as a digital wallet, wide acceptance, availability on a variety of devices, additional value-added applications [20], and economic attractiveness due to open standards with no licensing fees [15].
In mobile payment systems, a card is added to a digital wallet to be used for future transactions. When a credit or debit card is added to a digital wallet, as shown in Figure 1, the digital wallet app sends the primary account number (PAN) to the card network, which verifies the PAN with the card issuer. The card network then generates a token for the digital wallet app. This token is sent to the device to be used for future transactions.
When a transaction is initiated, the original token (OT) issued by the card network is used to generate a one-time transaction code (OTTC), as shown in Figure 2. This _OTTC_ is sent from the smartphone to the point of sale (POS) terminal via NFC. The POS then forwards the token to the card network for validation. The card network has a copy of the _OT_ issued to the digital wallet; thus, it is able to regenerate the _OTTC_ and verify whether the received _OTTC_ is authentic. After token validation, the card network verifies fund availability with the card issuer and sends the transaction approval decision to the POS terminal. In these systems, the _OTTC_ is generated using an HMAC function as shown in equation 1:
\[\textit{OTTC}=\textit{HMAC}(\textit{OT},\,t) \tag{1}\]
where _OT_ is the original token issued to the digital wallet by the card network and \(t\) is the time of the transaction. Including the time in the _OTTC_ generation gives the _OTTC_ a limited validity, to defend against replay attacks.
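As an illustration of equation 1, the sketch below (our own, not from any wallet implementation) quantizes the transaction time into fixed windows so that each code expires; the window length and token values are illustrative assumptions.

```python
import hmac, hashlib, time

WINDOW = 30  # seconds of validity per code; an illustrative choice

def make_ottc(ot, t=None):
    """OTTC = HMAC(OT, t) as in equation 1, with t quantized to a window."""
    step = int((time.time() if t is None else t) // WINDOW)
    return hmac.new(ot, str(step).encode(), hashlib.sha256).hexdigest()

def network_validates(ot, received):
    """The card network holds a copy of OT, so it regenerates the OTTC and
    compares; accepting the previous window tolerates small clock skew."""
    now = time.time()
    candidates = (make_ottc(ot, now), make_ottc(ot, now - WINDOW))
    return any(hmac.compare_digest(received, c) for c in candidates)

OT = b"token-issued-by-card-network"   # placeholder for the wallet's OT
code = make_ottc(OT)
print(network_validates(OT, code))     # True inside the validity window
```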
As seen from the process discussed above, current mobile payment systems are designed to integrate with existing infrastructure. Current payment system infrastructure was mainly built to support payments using physical cards with chips or magnetic strips. However, these systems do not provide extended user authentication. In other words, being in possession of the card itself and sometimes knowing the PIN number is how a user is authenticated [21]. Security is essential for mobile payment systems. Many security standards, such as PCI-DSS (Payment Card Industry Data Security Standard) [22], first released in 2004, are used to maintain the CIA triad. The people or merchants who use payment cards follow PCI-DSS standards, but security violations can still occur [23]. When security violations occur,
Fig. 1: Mobile payment system-generating a token
Fig. 2: Transaction overview
personal and payment card information can be exposed. With digital wallets, there are several ways to authenticate users, such as location [24] and biometrics [25]. Therefore, a new architecture that is tailored for mobile payment and utilizes smartphones' authentication capabilities is needed. There have been several research efforts proposing mobile payment systems that utilize a decentralized architecture such as blockchain [26, 27, 28]. However, such decentralized systems do not work well with current banking systems [29]. SWAPEROO [30] proposes a digital wallet architecture that can support more protocols and can be interfaced with several applications. However, it did not discuss adding more authentication factors.
In this paper, a new architecture is introduced that leverages user authentication capabilities in smartphones to increase the security of proximity-based mobile payment systems.
## III Proposed System
This section discusses the proposed architecture. As shown in Fig. 3, in the proposed architecture the digital wallet app generates a payment token that contains several pieces of information that can be used for authentication. This token is forwarded from the POS to the proposed system to be authenticated. In the proposed mobile payment system architecture, multiple authentication nodes are deployed, where each node receives the payment token and validates the portion of the token that it is specialized in. Then, it sends the validation outcome to the card network, which makes the final decision and sends it to the POS.
To explain the transaction process in more detail, the following steps take place:
1. When the digital wallet is set up, it receives authentication tokens from the participating authentication nodes. These tokens will be used in generating payment tokens.
2. The digital wallet generates a payment token that includes the time, biometrics, and location information, and possibly other verification data, as follows: \[\textit{PaymentToken}=\textit{FT}\,|\,\textit{BT}\,|\,\textit{LT} \tag{2}\] where _FT_ is the funds authentication token, _BT_ is the biometrics authentication token, _LT_ is the location authentication token, and \(|\) denotes concatenation. Details on the construction of these tokens are discussed below in separate subsections. This token will later be forwarded to the corresponding authentication nodes.
3. The digital wallet app sends the token generated in equation 2 to the POS via NFC.
4. This token is forwarded to the participating authentication nodes.
5. Each node verifies its corresponding portion of the token. As will be discussed below, each authentication node is able to verify its portion, as the information inside the token is protected using the HMAC function.
6. Each authentication node forwards its authentication result to the card network.
7. The card network makes the final decision of approving or declining the transaction and sends this decision to the POS.
More details about how each authentication node verifies its portion of the payment token are discussed below.
### _Fund verification_
Fund verification is a process that ensures that there are enough funds available in the account. Usually, fund verification is done by the card issuer. The _FT_ received in the payment token is as follows:
\[\textit{FT}=\textit{Amt}\,|\,\textit{HMAC}(\textit{Amt},\,\textit{HMAC}(t,\,\textit{FPS})) \tag{3}\] where _Amt_ is the transaction amount and _FPS_ is the funds pre-shared token, which is shared by the fund verification node with the digital wallet app to use in future transactions. To verify the funds at the time of payment, the fund verification node regenerates the token; if the regenerated token equals the token received, the node checks the availability of funds for the requested amount with the card issuer. The authentication result is sent to the card network.
### _Biometrics verification_
Biometrics on smartphones refers to the use of unique physical characteristics, such as fingerprints, facial features, or iris patterns, to authenticate a user's identity. Biometric authentication methods are becoming increasingly popular on smartphones as they provide a convenient and secure way for users to verify their identity. In some smartphones today, biometrics are stored locally [31]. In the proposed architecture, biometrics, or information that represents biometrics, are stored on a server. This is to verify that the biometrics submitted by the user in the payment token match the biometrics stored at the node. The biometrics authentication token is as shown below:
\[BT=HMAC(B,\,HMAC(t,\,BPS)) \tag{4}\]
where \(B\) is the submitted biometrics at the time of payment and _BPS_ is the biometrics pre-shared token, which is shared by the biometrics authentication node with the digital wallet app to use in future transactions. To verify the biometrics submitted at the time of payment, the biometrics authentication node regenerates the token using the biometrics submitted by the user during the sign-up process. If the token generated at the node equals the token received, the authentication passes; otherwise, the authentication fails. The authentication result is sent to the card network.
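Equations (4) and (5) share the same shape, so a single helper covers both; this sketch is our own illustration and assumes that the biometric template and the location are encoded as canonical byte strings agreed on by the wallet and the node (real biometric matching is fuzzier than a byte-for-byte comparison).

```python
import hmac, hashlib

def hmac_bytes(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def auth_token(value, t, pre_shared):
    """Equations (4)/(5): token = HMAC(value, HMAC(t, PS)), where `value`
    is the biometric template B or the encoded location L, and `pre_shared`
    is the BPS or LPS handed to the wallet at enrollment."""
    inner = hmac_bytes(str(t).encode(), pre_shared)   # HMAC(t, PS)
    return hmac_bytes(value, inner)                   # HMAC(value, .)

BPS = b"pre-shared-biometrics-token"   # illustrative placeholder values
t = 1700000000
wallet_bt = auth_token(b"fingerprint-template", t, BPS)

# Node side: regenerate from the biometrics recorded at sign-up and compare.
node_bt = auth_token(b"fingerprint-template", t, BPS)
print(hmac.compare_digest(wallet_bt, node_bt))   # True -> authentication passes
```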
### _Location_
There are several ways to calculate the location of smartphones (GPS, Wi-Fi, IP address, BLE, etc.). In the proposed architecture, the location node has access to the current and previous locations of the device, which are obtained using one or more of the localization methods discussed above. The location token is generated in the digital wallet as follows:
\[LT=HMAC(L,\,HMAC(t,\,LPS)) \tag{5}\]
where \(L\) is the submitted location at the time of payment and _LPS_ is the location pre-shared token, which is shared by the location authentication node with the digital wallet app to use in future transactions. To verify the location submitted at the time of payment, the location authentication node regenerates the token using the device location it obtains independently. If the token generated equals the token received, the authentication passes; otherwise, the authentication fails. The authentication result is sent to the card network.
### _Other verification_
The proposed architecture allows implementing other nodes, for example, behavioral authentication, smart keys, etc. The architecture allows submitting information to be verified in the payment token, which is then forwarded to a corresponding authentication node.
Finally, the card network receives the authentication results from the corresponding authentication nodes and makes the final decision of approving or declining the transaction based on the rules configured in the system.
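One possible decision rule at the card network is an all-must-pass policy over the reported results; the node names and the policy in this sketch are illustrative assumptions, since the paper leaves the exact rules configurable.

```python
# Every participating node must report success for approval.
REQUIRED_NODES = {"funds", "biometrics", "location"}

def final_decision(results):
    """`results` maps node name -> bool, as reported to the card network."""
    if not REQUIRED_NODES <= results.keys():
        return "DECLINE"          # a required verification never arrived
    return "APPROVE" if all(results[n] for n in REQUIRED_NODES) else "DECLINE"

print(final_decision({"funds": True, "biometrics": True, "location": True}))
print(final_decision({"funds": True, "biometrics": False, "location": True}))
```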
## IV Discussion
Secure mobile payments are important because they protect sensitive financial information, such as credit card numbers and bank account information, from being intercepted or stolen by hackers. This helps to prevent identity theft and financial fraud. Additionally, secure mobile payments can provide added convenience for consumers, as they can make purchases without having to carry cash or credit cards with them. The proposed architecture provides a methodology for secure verification of multiple elements of a mobile payment. The following elements are verified to prevent fraud:
1. Fund verification: Fund verification is important because it helps to ensure that transactions are completed smoothly and prevents issues like "bounced" payments or chargebacks, which occur when a transaction is completed but the funds are not available to cover it. Additionally, fund verification can also help to prevent fraud by alerting financial institutions if an account has been compromised or if there are any suspicious transactions.
2. Biometrics verification: The use of biometrics is crucial in linking transactions to the individual conducting them. There are various methods available on smartphones and wearable devices, such as FaceID, fingerprint, and blood pressure, to gather biometrics. During enrollment, the biometrics verification node is provided with the user's biometric information. Although "something you are" authentication is rarely utilized in payment systems, linking biometrics to payments ensures that only the payment method's rightful owner can use it, thereby reducing the risk of fraud and identity theft.
3. Location verification: Location verification for credit card transactions is important because it helps to prevent fraud by confirming that the person using the credit card is physically present at the location of the transaction. This can help to prevent "card-not-present" fraud, which occurs when someone uses a stolen credit card number to make a purchase over the phone or online. By verifying the location of the transaction, merchants and financial institutions can be more confident that the person using the card is the legitimate cardholder. Additionally, location verification can also help to prevent "card skimming", which is when a fraudster attaches a small device to a card reader in order to steal credit card information.
4. Other verifications: This allows implementing verification techniques that can optimize the payment process for specific scenarios and cases.
## V Conclusion
Mobile payment is a fast-growing alternative to traditional payment methods that uses mobile devices for transactions. The current mobile payment systems in digital wallets are designed to work with existing payment networks but may still pose security risks. A new payment architecture with improved authentication methods is proposed to address these risks. When evaluating mobile payment systems, authentication, encryption, and fraud detection should be considered for maximum security. The mobile payment market is expected to grow significantly in the next decade, highlighting the importance of ensuring security for customers. Secure mobile payments are crucial for protecting sensitive financial information from theft and fraud. The proposed architecture provides a methodology for secure verification of multiple elements of a mobile payment, including fund verification, biometrics verification, location verification, and other verification techniques. By verifying these elements, the proposed architecture helps to ensure the legitimacy of transactions, prevent identity theft, and provide added convenience for consumers. This shows that verifying multiple aspects of a payment is crucial to prevent fraud in mobile payments and enhance the overall payment process.
|
2304.11227 | Categories enriched over symmetric closed multicategories | We construct a machine which takes as input a locally small symmetric closed
complete multicategory $\mathsf V$. And its output is again a locally small
symmetric closed complete multicategory $\mathsf V\text-\mathcal{C}at$, the
multicategory of small $\mathsf V$-categories and multi-entry $\mathsf
V$-functors.
An example of such $\mathsf V$ is provided by short spaces (vector spaces
with a system of seminorms) and short maps. When the ground multicategory
$\mathsf V$ is $\mathsf{Set}$ we obtain strict 2-categories and their
surroundings by iterating twice the construction of categories. | Volodymyr Lyubashenko | 2023-04-21T19:28:56Z | http://arxiv.org/abs/2304.11227v5 | # Categories enriched over symmetric closed multicategories
###### Abstract
We construct a machine which takes as input a locally small symmetric closed complete multicategory \(\mathsf{V}\). And its output is again a locally small symmetric closed complete multicategory \(\mathsf{V}\)-\(\mathcal{C}\!at\), the multicategory of small \(\mathsf{V}\)-categories and multi-entry \(\mathsf{V}\)-functors. An example of such \(\mathsf{V}\) is provided by short spaces (vector spaces with a system of seminorms) and short maps. When the ground multicategory \(\mathsf{V}\) is \(\mathsf{Set}\) we obtain strict \(2\)-categories and their surroundings by iterating twice the construction of categories. 1
Footnote 1: _Key words:_ closed multicategories; complete multicategories; categories enriched in a multicategory; multi-entry functors.
2020 _Mathematics Subject Classification:_ 18M65
## 1 Introduction
A complete multicategory \(\mathsf{V}\) is a multicategory (=colored operad) which has all small products and all equalizers. Warning: to say that the underlying category \(\mathsf{V}_{1}\) has all small products and all equalizers is not enough. One has to take into account the multicategory structure (Definitions 1.3.1 and 1.3.2). In fact, we view multicategories as monoidal categories for which the monoidal product does not exist. Instead of monoidal products, finite sequences of objects are used as an input. Hence, conditions for products and equalizers have to be written for a finite sequence of objects, not only for a single object. This point of view is supported by an adjunction between symmetric multicategories and colored props, see Section 2.1. We assume also that \(\mathsf{V}\) is a closed multicategory (the one with internal homs, see around (1.3.3)). This notion was defined by Lambek [1, p. 106] (see also [1, Definition 4.7] for the enriched case). Furthermore, we assume that \(\mathsf{V}\) is a symmetric multicategory (see the beginning of Section 1.3).
We start with a symmetric closed complete multicategory \(\mathsf{V}\). There is a technical notion of a small \(\mathsf{V}\)-quiver, which is a small quiver where, instead of a set of arrows between two vertices, an object of \(\mathsf{V}\) is used (Definition 2.2.1). A multi-entry \(\mathsf{V}\)-quiver morphism has several \(\mathsf{V}\)-quivers as a source and one as a target (Definition 2.2.2). The collection of such morphisms forms a symmetric multicategory \(\mathsf{V}\)-\(\mathcal{Q}u\) (Proposition 2.2.3).
However, what we really need are small \(\mathsf{V}\)-categories, i.e., \(\mathsf{V}\)-quivers equipped with composition and identity morphisms (Definition 2.3.1). Using composition we construct the evaluation multi-entry \(\mathsf{V}\)-quiver morphism in Proposition 2.3.2 and Definition 2.3.3. The previously mentioned features (completeness and closedness of \(\mathsf{V}\) and composition in the target) are used to define the internal hom, a certain end in \(\mathsf{V}\), which replaces the set of natural transformations. When dealing with \(\mathsf{V}\)-categories, we use multi-entry \(\mathsf{V}\)-functors instead of multi-entry \(\mathsf{V}\)-quiver morphisms (Definition 2.4.1). They form a symmetric multicategory \(\mathsf{V}\)-\(\mathcal{C}\!at\) (Proposition 2.4.5). The multi-entry \(\mathsf{V}\)-functors are identified with \(\mathsf{FV}\)-functors \(\mathbb{S}^{i\in I}\mathcal{A}_{i}\to\mathcal{B}\) (Proposition 2.4.2), where \(\mathsf{FV}\) is the colored prop associated with the symmetric multicategory \(\mathsf{V}\) (Proposition 2.1.1). We also define natural \(\mathsf{V}\)-transformations (Definition 2.5.1) and show that their set can be recovered from the internal hom (Proposition 2.5.2).
In the case of \(\mathsf{V}\)-categories the evaluation morphism is a multi-entry \(\mathsf{V}\)-functor (Proposition 2.6.1). Furthermore, the symmetric multicategory \(\mathsf{V}\)-\(\mathcal{C}\!at\) is closed (Proposition 2.6.2).
We prove that the multicategory \(\mathsf{V}\)-\(\mathcal{C}\!at\) has small products (Proposition 2.7.1). It also has equalizers (Proposition 2.7.2), thus, it is complete. All mentioned results are summarized in
**Theorem 2.8.1**.: Let \(\mathsf{V}\) be a locally small symmetric closed complete multicategory. Then so is \(\mathsf{V}\)-\(\mathcal{C}at\), the multicategory of small \(\mathsf{V}\)-categories and multi-entry \(\mathsf{V}\)-functors.
We deduce whiskerings from the closed multicategory structure of \(\mathsf{V}\)-\(\mathcal{C}at\) in Section 3.1. The example of representable multicategory \(\mathsf{V}\) is discussed in Section 3.2. The examples of categories and strict \(2\)-categories are presented in Section 3.3.
An example of such multicategory \(\mathsf{V}\) is provided by short spaces (vector spaces over \(\mathbb{R}\) or \(\mathbb{C}\) with a system of seminorms) and short maps. Seminorms are indexed by an element of a commutative partially ordered monoid \(\mathbb{L}\). Further conditions on \(\mathbb{L}\) are listed in Section 4. There is symmetric multicategory \(\mathsf{Short}_{\mathbb{L}}\) with short spaces as objects. Morphisms are short multilinear maps (see Definition 4.1.4). This multicategory is closed (Proposition 4.1.5). The internal hom object is a vector space of multilinear maps. The symmetric multicategory \(\mathsf{Short}_{\mathbb{L}}\) has products (Proposition 4.2.2) and kernels (equalizers) (Proposition 4.2.3). Summing up, the multicategory \(\mathsf{Short}_{\mathbb{L}}\) is complete (Corollary 4.2.5).
We do not include explicitly in the definition the action of symmetric groups on symmetric multicategories. So we have to deduce it in Corollary A.1.2. Further interplay between the action of symmetric groups and the compositions in a symmetric multicategory is described in Proposition A.1.4.
###### Contents
* 1 Introduction
* 1.1 Conventions
* 1.2 Lax symmetric monoidal categories and functors: recollection
* 1.3 Multicategories: recollection
* 2 About \(\mathsf{V}\)-categories
* 2.1 Adjunction between symmetric multicategories and colored props
* 2.2 Multicategory of \(\mathsf{V}\)-quivers
* 2.3 \(\mathsf{V}\)-categories
* 2.4 Multicategory of \(\mathsf{V}\)-categories
* 2.5 Natural \(\mathsf{V}\)-transformations
* 2.6 Closedness of the multicategory of \(\mathsf{V}\)-categories
* 2.7 Completeness of the multicategory of \(\mathsf{V}\)-categories
* 2.8 Summary
* 3 First examples
* 3.1 Compositions and whiskerings
* 3.1.2 Compositions
* 3.1.3 Left whiskering
* 3.1.6 Right whiskering
* 3.2 Representable multicategories
* 3.3 Strict \(2\)-categories
* 4 Short spaces
* 4.1 First properties
* 4.2 Completeness of the multicategory of short spaces
* A Symmetric groups and symmetric multicategories
* A.1 Action of symmetric groups on a symmetric multicategory
**Acknowledgement.** I am cordially grateful to the staff of the University of Zurich who created an excellent environment for work. I am grateful for clarifying discussions to Prof. Dr. Anna Beliakova and to Prof. Dr. Alberto Cattaneo. All this was due to the generosity of the National Centre of Competence in Research SwissMAP of the Swiss National Science Foundation (grant number
205607), to whom I express my deep gratitude. The author is grateful for the financial support within the program of support for priority research and technical (experimental) developments of the Section of Mathematics of the NAS of Ukraine for 2022-2023 Project "Innovative methods in the theory of differential equations, computational mathematics and mathematical modeling", No. 7/1/241 (State Registration No. 0122U000670). I am really grateful to the Armed Forces of Ukraine who gave me the possibility to work quietly on the subject.
### Conventions
We work with a locally small closed symmetric multicategory \(\mathsf{V}\) in the sense of [1, Definitions 3.7, 4.7]. Locally small means that \(\mathsf{V}\big{(}(X_{i})_{i\in I};Y\big{)}\) are small.
When we write \(\mathsf{V}((X_{i})_{i\in I};Y)\), we mean that \(I\) is an object of \(\mathcal{O}_{\mathsf{sk}}\), the skeletal category of finite totally ordered sets with objects \(\mathbf{n}=\{1<2<\cdots<n\}\), \(n\geqslant 0\), whose morphisms are non-decreasing maps. A subset \(J\subset I\) means a monomorphism in \(\mathcal{O}_{\mathsf{sk}}\). We freely use the notation style of [1]. We also use the skeletal category \(\mathcal{S}_{\mathsf{sk}}\) of finite totally ordered sets, \(\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}=\operatorname{Ob}\mathcal{O}_{\mathsf{sk}}\cong\mathbb{N}\), whose morphisms are _all_ maps \(\mathbf{n}\to\mathbf{m}\) (ignoring the ordering). Let \(f:I\to J\in\mathcal{S}_{\mathsf{sk}}\). An element \(j\in J\) is a monomorphism \(\hat{j}:\mathbf{1}\to J\) (\(1\mapsto j\)). Its preimage \(f^{-1}(j)\) is the monomorphism \(\iota:\mathbf{k}\to I\in\mathcal{O}_{\mathsf{sk}}\), \(k=|f^{-1}(j)|\), which is the pullback of \(\hat{j}\) along \(f\) in the category \(\mathcal{S}_{\mathsf{sk}}\).
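For illustration (a routine instance of these conventions), take the map \(f:\mathbf{3}\to\mathbf{2}\in\mathcal{S}_{\mathsf{sk}}\) with \(f(1)=f(3)=1\), \(f(2)=2\). Its preimages are the monomorphisms

\[f^{-1}(1)=\big{(}\iota:\mathbf{2}\to\mathbf{3},\ 1\mapsto 1,\ 2\mapsto 3\big{)},\qquad f^{-1}(2)=\big{(}\mathbf{1}\to\mathbf{3},\ 1\mapsto 2\big{)},\]

each carrying the total ordering induced from \(\mathbf{3}\).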
### Lax symmetric monoidal categories and functors: recollection
We reproduce the definition of lax symmetric monoidal categories from [1, Definition 2.5] (see also [13, Definition 1.2.14] for symmetric monoidal categories and [11], [10, Definition 3.1.1]) in a simplified form. Namely, instead of considering all finite sets we content ourselves with the category \(\mathcal{S}_{\mathsf{sk}}\) of finite ordinals \(\mathbf{n}=\{1<\cdots<n\}\) and arbitrary maps of those.
**1.2.1 Definition**.: A _lax symmetric monoidal category_\((\mathcal{V},\otimes_{\mathcal{V}}^{I},\lambda_{\mathcal{V}}^{I})\) consists of the following data:
1. A category \(\mathcal{V}\).
2. A functor \(\otimes^{I}=\otimes_{\mathcal{V}}^{I}:\mathcal{V}^{I}\to\mathcal{V}\), for every set \(I\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\). In particular, a map \(\otimes_{\mathcal{V}}^{I}:\prod_{i\in I}\mathcal{V}(X_{i},Y_{i})\to\mathcal{V} (\otimes^{i\in I}X_{i},\otimes^{i\in I}Y_{i})\) is given. It is required that \(\otimes^{\mathbf{1}}=\otimes_{\mathcal{V}}^{\mathbf{1}}:\mathcal{V}^{\mathbf{ 1}}\to\mathcal{V}\) is the identification of \(\mathcal{V}^{\mathbf{1}}\) and \(\mathcal{V}\). For a map \(f:I\to J\) in \(\operatorname{Mor}\mathcal{S}_{\mathsf{sk}}\) introduce a functor \(\otimes^{f}=\otimes_{\mathcal{V}}^{f}:\mathcal{V}^{I}\to\mathcal{V}^{J}\) which to a function \(X:I\to\operatorname{Ob}\mathcal{V}\), \(i\mapsto X_{i}\) assigns the function \(J\to\operatorname{Ob}\mathcal{V}\), \(j\mapsto\otimes^{i\in f^{-1}(j)}X_{i}\). The linear order on \(f^{-1}(j)\) is induced by the embedding \(f^{-1}(j)\hookrightarrow I\). The functor \(\otimes_{\mathcal{V}}^{f}:\mathcal{V}^{I}\to\mathcal{V}^{J}\) acts on morphisms via the map \[\prod_{i\in I}\mathcal{V}(X_{i},Y_{i})\xrightarrow{\sim}\prod_{j\in J}\prod_{ i\in f^{-1}j}\mathcal{V}(X_{i},Y_{i})\xrightarrow{\prod_{j\in J}\otimes^{f^{-1}j}} \prod_{j\in J}\mathcal{V}(\otimes^{i\in f^{-1}j}X_{i},\otimes^{i\in f^{-1}j} Y_{i}).\]
3. A morphism of functors \[\lambda^{f}:\otimes^{I}\to\otimes^{J}\circ\otimes^{f}:\mathcal{V}^{I}\to \mathcal{V},\qquad\lambda^{f}:\otimes^{i\in I}X_{i}\to\otimes^{j\in J} \otimes^{i\in f^{-1}j}X_{i},\] for every map \(f:I\to J\) in \(\operatorname{Mor}\mathcal{S}_{\mathsf{sk}}\).
These data are subject to the following axioms:
1. for all sets \(I\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\): \(\lambda^{\operatorname{id}_{I}}=\operatorname{id}\) and \(\lambda^{I\to\mathbf{1}}=\operatorname{id}\);
2. for any pair of composable maps \(I\xrightarrow{f}J\xrightarrow{g}K\) from \(\mathcal{S}_{\mathsf{sk}}\) the following equation holds:
\[\begin{CD} \otimes^{i\in I}X_{i} @>{\lambda^{f}}>> \otimes^{j\in J}\otimes^{i\in f^{-1}j}X_{i}\\ @V{\lambda^{fg}}VV @VV{\lambda^{g}}V\\ \otimes^{k\in K}\otimes^{i\in(fg)^{-1}k}X_{i} @>{\otimes^{k\in K}\lambda^{f|:(fg)^{-1}k\to g^{-1}k}}>> \otimes^{k\in K}\otimes^{j\in g^{-1}k}\otimes^{i\in f^{-1}j}X_{i} \end{CD}\tag{1.2.1}\]
A symmetric monoidal category is a lax one for which all \(\lambda^{f}\) are isomorphisms. A symmetric strict monoidal category \((\mathcal{V},\otimes_{\mathcal{V}}^{I},\lambda_{\mathcal{V}}^{f})\) is a lax symmetric monoidal one where \(\lambda_{\mathcal{V}}^{f}:\otimes_{\mathcal{V}}^{I}\to\otimes_{\mathcal{V}}^{f}\cdot\otimes_{\mathcal{V}}^{J}\) are identity morphisms for all isotonic maps \(f:I\to J\).
**1.2.2 Definition** (cf. Definition 2.6 of [1]).: A _lax symmetric monoidal functor_ between lax symmetric monoidal categories
\[(F,\phi^{I}):(\mathcal{C},\otimes_{\mathcal{C}}^{I},\lambda_{\mathcal{C}}^{f} )\to(\mathcal{D},\otimes_{\mathcal{D}}^{I},\lambda_{\mathcal{D}}^{f})\]
consists of
1. a functor \(F:\mathcal{C}\to\mathcal{D}\),
2. a functorial morphism for each set \(I\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\) \[\phi^{I}:\otimes_{\mathcal{D}}^{I}\circ F^{I}\to F\circ\otimes_{\mathcal{C}}^{I}:\mathcal{C}^{I}\to\mathcal{D},\qquad\phi^{I}:\otimes_{\mathcal{D}}^{i\in I}FX_{i}\to F\otimes_{\mathcal{C}}^{i\in I}X_{i},\]
such that \(\phi^{\mathbf{1}}=\big{(}\otimes^{\mathbf{1}}FX=FX=F\otimes^{\mathbf{1}}X\big{)}=\operatorname{id}\), and for every map \(f:I\to J\) of \(\mathcal{S}_{\mathsf{sk}}\) and all families \((X_{i})_{i\in I}\) of objects of \(\mathcal{C}\) the following equation holds:
\[\big{[}\otimes_{\mathcal{D}}^{i\in I}FX_{i}\xrightarrow{\lambda_{\mathcal{D}}^{f}}\otimes_{\mathcal{D}}^{j\in J}\otimes_{\mathcal{D}}^{i\in f^{-1}j}FX_{i}\xrightarrow{\otimes_{\mathcal{D}}^{j\in J}\phi^{f^{-1}j}}\otimes_{\mathcal{D}}^{j\in J}F\otimes_{\mathcal{C}}^{i\in f^{-1}j}X_{i}\xrightarrow{\phi^{J}}F\otimes_{\mathcal{C}}^{j\in J}\otimes_{\mathcal{C}}^{i\in f^{-1}j}X_{i}\big{]}\\ =\big{[}\otimes_{\mathcal{D}}^{i\in I}FX_{i}\xrightarrow{\phi^{I}}F\otimes_{\mathcal{C}}^{i\in I}X_{i}\xrightarrow{F\lambda_{\mathcal{C}}^{f}}F\otimes_{\mathcal{C}}^{j\in J}\otimes_{\mathcal{C}}^{i\in f^{-1}j}X_{i}\big{]}.\]
A lax symmetric monoidal functor \((F,\phi^{I})\) is _strict_ if all \(\phi^{I}=\operatorname{id}\).
The category of lax symmetric monoidal categories with lax symmetric monoidal functors as morphisms is denoted \(lsm\mathcal{C}at\).
There is also an appropriate definition of a morphism of lax symmetric monoidal functors [1, Definition 2.7]. It is proven in [11, Proposition 1.2.15] that the \(2\)-categories of symmetric strict monoidal categories in the above sense and of symmetric strict monoidal categories in the conventional sense (aka permutative categories [1, Definition 3.1], a topological version is in [11, Definition 1]) are isomorphic when we consider strict symmetric monoidal functors. In particular, there is a correspondence assigning to each permutative category \(P=(P,\otimes,\mathds{1},c)\) a symmetric strict monoidal category \(P^{\clubsuit}=(P,\otimes^{I},\lambda^{f})\) with \(\otimes^{\varnothing}=\mathds{1}\), \(\otimes^{I}\) the iterated \(\otimes\), and \(\lambda^{f}=\operatorname{id}\) if the map \(f:I\to J\in\mathcal{S}_{\mathsf{sk}}\) is order-preserving. If \(f:I\to I\in\mathcal{S}_{\mathsf{sk}}\) is a bijection, then \(\lambda^{f}:\otimes^{i\in I}X_{i}\to\otimes^{i\in I}X_{f^{-1}i}\) is the corresponding element of the symmetric group, a composite of morphisms of the form \(1^{\otimes a}\otimes c\otimes 1^{\otimes b}\). A general map \(I\to K\in\mathcal{S}_{\mathsf{sk}}\) can be presented as \(fg\) where \(f:I\to I\) is a bijection and \(g:I\to K\) is order-preserving. Then \(\lambda^{fg}\) can be found from (1.2.1) as the composition
\[\otimes^{i\in I}X_{i}\xrightarrow{\lambda^{f}}\otimes^{i\in I}X_{f^{-1}i}=\otimes^{k\in K}\otimes^{i\in g^{-1}k}X_{f^{-1}i}\xrightarrow{\otimes^{k\in K}(\lambda^{f|:f^{-1}g^{-1}k\to g^{-1}k})^{-1}}\otimes^{k\in K}\otimes^{i\in f^{-1}g^{-1}k}X_{i}.\]
Being an isomorphism of \(2\)-categories, \(\clubsuit\) is also an isomorphism of categories.
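To illustrate the assignment \(P\mapsto P^{\clubsuit}\): for the transposition \(f=(12):\mathbf{2}\to\mathbf{2}\) the correspondence yields

\[\lambda^{(12)}=c_{X_{1},X_{2}}:X_{1}\otimes X_{2}\to X_{2}\otimes X_{1},\]

the symmetry of \(P\), while \(\lambda^{f}=\operatorname{id}\) for every order-preserving \(f\).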
### Multicategories: recollection
By [1, Definition 3.7] the structure maps of a symmetric multicategory \(\mathsf{V}\) are the following. This is an intermediate notion between the ordinary definition of a symmetric multicategory and Leinster's notion of fat symmetric multicategories [10, Definition A.2.1]. Of course, it is equivalent to both, being a skeletal version of Leinster's notion.
* for each map \(\phi:I\to J\) from \(\mathcal{S}_{\mathsf{sk}}\) and objects \(X_{i},Y_{j},Z\in\operatorname{Ob}\mathsf{V}\), \(i\in I\), \(j\in J\), the composition map \[\mu_{\phi}:\bigl{[}\prod_{j\in J}\mathsf{V}\bigl{(}(X_{i})_{i\in\phi^{-1}(j)}; Y_{j}\bigr{)}\bigr{]}\times\mathsf{V}\bigl{(}(Y_{j})_{j\in J};Z\bigr{)} \to\mathsf{V}\bigl{(}(X_{i})_{i\in I};Z\bigr{)};\]
* an element \(1_{X}\in\mathsf{V}(X;X)\).
The above data have to satisfy the associativity equation and two unitality equations, see [1, Definition 3.7].
* (Associativity) For each pair of composable maps \(I\xrightarrow{\phi}J\xrightarrow{\psi}K\) from \(\mathcal{S}_{\mathsf{sk}}\), the diagram shown on the following page commutes. Here \(\phi_{k}=\phi|_{(\phi\psi)^{-1}(k)}:(\phi\psi)^{-1}(k)\to\psi^{-1}(k)\), \(k\in K\), and \(\psi^{-1}(k)\) is understood as the pullback of the diagram \(\mathbf{1}=\{k\}\hookrightarrow K\xleftarrow{\psi}J\). We define an operation \(\sqcup:\mathcal{S}_{\mathsf{sk}}\times\mathcal{S}_{\mathsf{sk}}\to\mathcal{S}_ {\mathsf{sk}}\), \((\mathbf{m},\mathbf{n}\mapsto\mathbf{m}+\mathbf{n})\) (addition of finite ordinals) in an obvious way on morphisms. Thus, the set \(I\sqcup J\) is a disjoint union of sets \(I\) and \(J\). For all \(i\in I\) and \(j\in J\) we have \(i<j\) in \(I\sqcup J\), and the embeddings \(I\hookrightarrow I\sqcup J\hookhook J\) are increasing.
* (Identity) For \(\phi=\triangledown:I\to\mathbf{1}\) the equation \[\big{[}\mathsf{V}((X_{i})_{i\in I};Z)\xrightarrow{1\times\hat{1}_{Z}}\mathsf{V}((X_{i})_{i\in I};Z)\times\mathsf{V}(Z;Z)\xrightarrow{\mu_{\triangledown:I\to\mathbf{1}}}\mathsf{V}((X_{i})_{i\in I};Z)\big{]}=\operatorname{id}\tag{1.3.1}\] holds true. If \(\phi=\operatorname{id}:I\to I\), then the equation \[\big{[}\mathsf{V}((X_{i})_{i\in I};Z)\xrightarrow{(\prod_{i\in I}\hat{1}_{X_{i}})\times 1}\big{(}\prod_{i\in I}\mathsf{V}(X_{i};X_{i})\big{)}\times\mathsf{V}((X_{i})_{i\in I};Z)\xrightarrow{\mu_{\operatorname{id}_{I}}}\mathsf{V}((X_{i})_{i\in I};Z)\big{]}=\operatorname{id}\tag{1.3.2}\] holds true.
Here \(\hat{1}_{Z}:\mathbf{1}\to\mathsf{V}(Z;Z)\), \(1\mapsto 1_{Z}\), distinguishes the element \(1_{Z}\).
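For illustration, consider the symmetric multicategory of vector spaces and multilinear maps (the multicategory \(\mathsf{Short}_{\mathbb{L}}\) of Section 4 with the seminorms disregarded). For \(\phi:\mathbf{3}\to\mathbf{2}\), \(1,2\mapsto 1\), \(3\mapsto 2\), the composition \(\mu_{\phi}\) is the substitution of multilinear maps:

\[\mu_{\phi}\big{(}(f_{1},f_{2}),g\big{)}:(x_{1},x_{2},x_{3})\mapsto g\big{(}f_{1}(x_{1},x_{2}),f_{2}(x_{3})\big{)},\qquad f_{1}:(X_{1},X_{2})\to Y_{1},\quad f_{2}:(X_{3})\to Y_{2},\quad g:(Y_{1},Y_{2})\to Z,\]

and \(1_{X}\) is the identity linear map.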
Recall [1, p. 106] (see also [1, Definition 4.7] for \(\mathsf{V}\)-multicategories) that a plain multicategory \(\mathsf{V}\) is _closed_ if for any collection \(((X_{i})_{i\in I},Z)\), \(I\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\), of objects of \(\mathsf{V}\) there is an object \(\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\) of \(\mathsf{V}\) and an evaluation element
\[\operatorname{ev}_{(X_{i})_{i\in I};Z}\in\mathsf{V}\bigl{(}(X_{i})_{i\in I}, \underline{\mathsf{V}}((X_{i})_{i\in I};Z);Z\bigr{)},\]
such that the composition
\[\varphi_{(X_{i})_{i\in I};(Y_{j})_{j\in J};Z}=\bigl\{\mathsf{V}\bigl((Y_{j})_{j\in J};\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\bigr)\xrightarrow{(\hat{1}_{X_{1}}\times\cdots\times\hat{1}_{X_{I}})\times 1\times\widehat{\operatorname{ev}}_{(X_{i})_{i\in I};Z}}\\ \bigl[\prod_{i\in I}\mathsf{V}(X_{i};X_{i})\bigr]\times\mathsf{V}\bigl((Y_{j})_{j\in J};\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\bigr)\times\mathsf{V}\bigl((X_{i})_{i\in I},\underline{\mathsf{V}}((X_{i})_{i\in I};Z);Z\bigr)\\ \xrightarrow{\mu_{\operatorname{id}_{I}\sqcup\triangledown:I\sqcup J\to I\sqcup\mathbf{1}}}\mathsf{V}\bigl((X_{i})_{i\in I},(Y_{j})_{j\in J};Z\bigr)\bigr\}\tag{1.3.3}\]
is bijective for an arbitrary sequence \((Y_{j})_{j\in J}\), \(J\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\), of objects of \(\mathsf{V}\).
Let \(g:(X_{i})_{i\in I}\to Z\) be a morphism in a closed symmetric multicategory \(\mathsf{V}\). Generalizing the previous notation, denote by \(\dot{g}:()\to\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\) the morphism \(\varphi^{-1}_{(X_{i})_{i\in I};;Z}(g)\in\mathsf{V}\big{(};\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\big{)}\). Equation (1.3.3) for \(J=\varnothing\) implies that
\[\bigl[\prod_{i\in I}\mathsf{V}(X_{i};X_{i})\bigr]\times\mathsf{V}\bigl(;\underline{\mathsf{V}}\bigl((X_{i})_{i\in I};Z\bigr)\bigr)\times\mathsf{V}\bigl((X_{i})_{i\in I},\underline{\mathsf{V}}\bigl((X_{i})_{i\in I};Z\bigr);Z\bigr)\xrightarrow{\mu_{I\hookrightarrow I\sqcup\mathbf{1}}}\mathsf{V}\bigl((X_{i})_{i\in I};Z\bigr),\\ \bigl((1_{X_{i}})_{i\in I},\dot{g},\operatorname{ev}_{(X_{i})_{i\in I};Z}\bigr)\mapsto g.\tag{1.3.4}\]
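In the vector-space illustration above, the closed structure is the familiar one (cf. Proposition 4.1.5): \(\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\) is the vector space of multilinear maps \((X_{i})_{i\in I}\to Z\), the evaluation element is

\[\operatorname{ev}_{(X_{i})_{i\in I};Z}\big{(}(x_{i})_{i\in I},f\big{)}=f\big{(}(x_{i})_{i\in I}\big{)},\]

and \(\varphi\) is the usual currying bijection between maps \((Y_{j})_{j\in J}\to\underline{\mathsf{V}}((X_{i})_{i\in I};Z)\) multilinear in the \(y_{j}\) and maps multilinear in all the variables \(x_{i}\), \(y_{j}\) jointly.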
The associativity equation (the diagram at Figure 1) is the equality of the two composites
\[\bigl[\bigl[\prod_{j\in J}\mathsf{V}\bigl((X_{i})_{i\in\phi^{-1}j};Y_{j}\bigr)\bigr]\times\bigl[\prod_{k\in K}\mathsf{V}\bigl((Y_{j})_{j\in\psi^{-1}k};Z_{k}\bigr)\bigr]\times\mathsf{V}\bigl((Z_{k})_{k\in K};W\bigr)\xrightarrow{1\times\mu_{\psi}}\bigl[\prod_{j\in J}\mathsf{V}\bigl((X_{i})_{i\in\phi^{-1}j};Y_{j}\bigr)\bigr]\times\mathsf{V}\bigl((Y_{j})_{j\in J};W\bigr)\xrightarrow{\mu_{\phi}}\mathsf{V}\bigl((X_{i})_{i\in I};W\bigr)\bigr]\]
\[=\bigl[\ldots\xrightarrow{\sim}\bigl[\prod_{k\in K}\bigl(\bigl[\prod_{j\in\psi^{-1}k}\mathsf{V}\bigl((X_{i})_{i\in\phi^{-1}j};Y_{j}\bigr)\bigr]\times\mathsf{V}\bigl((Y_{j})_{j\in\psi^{-1}k};Z_{k}\bigr)\bigr)\bigr]\times\mathsf{V}\bigl((Z_{k})_{k\in K};W\bigr)\xrightarrow{[\prod_{k\in K}\mu_{\phi_{k}}]\times 1}\bigl[\prod_{k\in K}\mathsf{V}\bigl((X_{i})_{i\in(\phi\psi)^{-1}k};Z_{k}\bigr)\bigr]\times\mathsf{V}\bigl((Z_{k})_{k\in K};W\bigr)\xrightarrow{\mu_{\phi\psi}}\mathsf{V}\bigl((X_{i})_{i\in I};W\bigr)\bigr].\]
**1.3.1 Definition**.: A multicategory \(\mathsf{V}\) has small products if the underlying category \(\mathsf{V}_{1}\) has small products \(\operatorname{pr}_{j}:\prod_{k\in J}M_{k}\to M_{j}\in\mathsf{V}\), \(j\in J\in\mathcal{S}et\), and for each family of morphisms \(\big{(}f_{j}:(X_{i})_{i\in I}\to M_{j}\in\mathsf{V}\big{)}_{j\in J}\) there is a unique morphism \(f:(X_{i})_{i\in I}\to\prod_{j\in J}M_{j}\in\mathsf{V}\) such that for all \(j\in J\)
\[f_{j}=\big{[}(X_{i})_{i\in I}\xrightarrow{f}\prod_{j\in J}M_{j}\xrightarrow{ \operatorname{pr}_{j}}M_{j}\big{]}.\]
For \(I=\mathbf{1}\) this property is equivalent to \(\prod_{j\in J}M_{j}\) being a product in ordinary category \(\mathsf{V}_{1}\). In the following we **assume** that the multicategory \(\mathsf{V}\) has small products.
**1.3.2 Definition**.: A multicategory \(\mathsf{V}\) has equalizers (of pairs of parallel morphisms) if for all pairs of parallel morphisms \(f,g:A\to B\in\mathsf{V}\) there is an object \(K\) and a morphism \(e:K\to A\) which is an equalizer of \((f,g)\) in the ordinary category \(\mathsf{V}_{1}\) and, moreover, for each morphism \(h:(X_{i})_{i\in I}\to A\in\mathsf{V}\) such that \(h\centerdot f=h\centerdot g\) there exists a unique \(q:(X_{i})_{i\in I}\to K\) such that \(h=q\centerdot e\).
The equalizer for ordinary category \(\mathsf{V}_{1}\) is a particular case for \(I=\mathbf{1}\). In the following we **assume** that the multicategory \(\mathsf{V}\) has equalizers.
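Continuing the vector-space illustration: the equalizer of a pair of linear maps \(f,g:A\to B\) is the subspace

\[K=\{a\in A\mid f(a)=g(a)\}\xrightarrow{e}A,\]

and a multilinear \(h:(X_{i})_{i\in I}\to A\) with \(h\centerdot f=h\centerdot g\) takes its values in \(K\), so it factors through \(e\) by a unique multilinear map \(q\) (cf. Proposition 4.2.3).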
**1.3.3 Corollary**.: Let the multicategory \(\mathsf{V}\) have products and equalizers. For any diagram \(J\to\mathsf{V}_{1}\), \(j\mapsto M_{j}\) (here \(\mathsf{V}_{1}\) is the underlying ordinary category of \(\mathsf{V}\)), the limit \(\lim(J\to\mathsf{V}_{1})\in\operatorname{Ob}\mathsf{V}\) also satisfies the following: for any morphism \(h=(h_{j}):(X_{i})_{i\in I}\to\prod_{j\in\operatorname{Ob}J}M_{j}\) such that for all \(j\to k\in J\) the equation holds
\[h_{k}=\big{[}(X_{i})_{i\in I}\xrightarrow{h_{j}}M_{j}\to M_{k}\big{]}\]
there exists a unique morphism \(g:(X_{i})_{i\in I}\to\lim(J\to\mathsf{V}_{1})\) such that
\[h=\big{[}(X_{i})_{i\in I}\xrightarrow{g}\lim(J\to\mathsf{V}_{1})\to\prod_{j \in\operatorname{Ob}J}M_{j}\big{]}.\]
When the above holds, we say that multicategory \(\mathsf{V}\) is complete and **assume** this from now on.
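Corollary 1.3.3 can be obtained by the standard construction of the limit from products and equalizers, for instance

\[\lim(J\to\mathsf{V}_{1})=\operatorname{eq}\Big{(}\prod_{j\in\operatorname{Ob}J}M_{j}\rightrightarrows\prod_{u:j\to k\in\operatorname{Mor}J}M_{k}\Big{)},\]

where one map has the components \(\operatorname{pr}_{k}\) and the other \(\operatorname{pr}_{j}\centerdot M_{u}\) for each \(u:j\to k\); the stronger multi-entry universal property is then inherited from Definitions 1.3.1 and 1.3.2.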
## 2 About \(\mathsf{V}\)-categories
### Adjunction between symmetric multicategories and colored props
**2.1.1 Proposition** ([1, Theorem 4.2], [16, Proposition 11], see also [21, Theorem 2.3.2], [11, Proposition 9.2]).: There is an adjunction
\[\mathbb{F}:s\mathcal{M}\mathcal{C}at\rightleftarrows\operatorname{cProp}: \mathsf{U}.\]
It seems that in all cited sources the definition of symmetric multicategories uses the explicit action of symmetric groups. We use a different definition and give a different proof.
Proof.: As for any prop, the constructed \(\mathbb{F}\mathsf{V}\) has the monoid of objects \((\operatorname{Ob}\mathbb{F}\mathsf{V},\otimes)=(\operatorname{Ob}\mathsf{V})^{*}\), the monoid with the operation \(\otimes\) freely generated by \(\operatorname{Ob}\mathsf{V}\). Objects of \(\mathbb{F}\mathsf{V}\) are denoted \(\otimes^{i\in I}X_{i}=(X_{i})_{i\in I}\), \(I\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\).
The morphism sets are
\[\mathbb{F}\mathsf{V}\big{(}(X_{i})_{i\in I},(Y_{j})_{j\in J}\big{)}=\coprod_{\phi:I\to J\in\mathcal{S}_{\mathsf{sk}}}\prod_{j\in J}\mathsf{V}\big{(}(X_{i})_{i\in\phi^{-1}j};Y_{j}\big{)}.\]
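For instance, for sequences of lengths one and two this formula gives

\[\mathbb{F}\mathsf{V}\big{(}(X),(Y,Z)\big{)}=\big{[}\mathsf{V}(X;Y)\times\mathsf{V}(;Z)\big{]}\sqcup\big{[}\mathsf{V}(;Y)\times\mathsf{V}(X;Z)\big{]},\]

the two summands being indexed by the two maps \(\mathbf{1}\to\mathbf{2}\) in \(\mathcal{S}_{\mathsf{sk}}\).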
The composition is
\[\mathbb{F}\mathsf{V}\big{(}(X_{i})_{i\in I},(Y_{j})_{j\in J}\big{)}\times\mathbb{F}\mathsf{V}\big{(}(Y_{j})_{j\in J},(Z_{k})_{k\in K}\big{)}\\ \cong\coprod_{I\xrightarrow{\phi}J\xrightarrow{\psi}K\in\mathcal{S}_{\mathsf{sk}}}\prod_{k\in K}\Big{\{}\big{[}\prod_{j\in\psi^{-1}k}\mathsf{V}\big{(}(X_{i})_{i\in\phi^{-1}j};Y_{j}\big{)}\big{]}\times\mathsf{V}\big{(}(Y_{j})_{j\in\psi^{-1}k};Z_{k}\big{)}\Big{\}}\\ \xrightarrow{\coprod_{(\phi,\psi)\mapsto\phi\psi}\prod_{k\in K}\mu_{\phi|:\phi^{-1}\psi^{-1}k\to\psi^{-1}k}}\coprod_{\xi:I\to K\in\mathcal{S}_{\mathsf{sk}}}\prod_{k\in K}\mathsf{V}\big{(}(X_{i})_{i\in\xi^{-1}k};Z_{k}\big{)}=\mathbb{F}\mathsf{V}\big{(}(X_{i})_{i\in I},(Z_{k})_{k\in K}\big{)}.\]
Its associativity on the summand indexed by \(I\xrightarrow{\phi}J\xrightarrow{\psi}K\xrightarrow{\xi}L\) follows from the equation at Figure 1 written for the maps \(\phi^{-1}\psi^{-1}\xi^{-1}l\xrightarrow{\phi|}\psi^{-1}\xi^{-1}l\xrightarrow{\psi|}\xi^{-1}l\), \(l\in L\).
The identity morphism \(1\) in \(\mathbb{F}\mathsf{V}\big{(}(X_{i})_{i\in I},(X_{i})_{i\in I}\big{)}\) is \((1_{X_{i}})_{i\in I}\in\prod_{i\in I}\mathsf{V}(X_{i};X_{i})\) indexed by the identity map \(\operatorname{id}_{I}\). The right unit property of \(1\) on the summand indexed by \(\phi:I\to J\) follows from equation (1.3.1) for \(\triangledown:\phi^{-1}j\to\mathbf{1}\), \(j\in J\). The left unit property of \(1\) on the summand indexed by \(\phi:I\to J\) follows from equation (1.3.2) for \(\operatorname{id}:\phi^{-1}j\to\phi^{-1}j\), \(j\in J\).
The tensor multiplication on objects is the concatenation. On morphisms the tensor multiplication \(\otimes^{K}\) is the map (determined by maps \(I\xrightarrow{f}K\xleftarrow{g}J\in\mathcal{O}_{\mathsf{sk}}\))

\[\otimes^{K}:\prod_{k\in K}\mathbb{F}\mathsf{V}\big{(}(X_{i})_{i\in f^{-1}k},(Y_{j})_{j\in g^{-1}k}\big{)}\cong\coprod_{(\phi_{k}:f^{-1}k\to g^{-1}k)_{k\in K}}\prod_{k\in K}\prod_{j\in g^{-1}k}\mathsf{V}\big{(}(X_{i})_{i\in\phi_{k}^{-1}j};Y_{j}\big{)}\\ \xrightarrow{\coprod_{(\phi_{k})\mapsto\phi}1}\coprod_{\xi:I\to J\in\mathcal{S}_{\mathsf{sk}}}\prod_{j\in J}\mathsf{V}\big{(}(X_{i})_{i\in\xi^{-1}j};Y_{j}\big{)}=\mathbb{F}\mathsf{V}\big{(}(X_{i})_{i\in I},(Y_{j})_{j\in J}\big{)},\]

where \(\phi:I\to J\) is the only map which satisfies the condition \(\phi|_{f^{-1}k}=\phi_{k}\) for all \(k\in K\). All such maps \(\phi\) are characterized by the condition \((I\xrightarrow{\phi}J\xrightarrow{g}K)=f\). We shall see that the tensor multiplication is strictly associative.
The unit object \(\mathds{1}\) (the image of \(\otimes^{\varnothing}\)) is the empty sequence \(()=()_{\varnothing}\). The left and the right unitors for this unit object are identity maps. We are going to prove that \((\mathbb{F}\mathsf{V},\otimes,\mathds{1})\) is a strict monoidal category.
Let \(h:K\to J\in\mathcal{S}_{\mathsf{sk}}\). The set \(\coprod_{j\in J}h^{-1}j=\{(j,k)\in J\times K\ |\ h(k)=j\}\) has a lexicographic ordering (for all \(k,k^{\prime}\in K\) the inequality \(hk<hk^{\prime}\) implies \((hk,k)<(hk^{\prime},k^{\prime})\), and if \(hk=hk^{\prime}\), then \(k<k^{\prime}\) implies \((hk,k)<(hk^{\prime},k^{\prime})\)). It follows that the map
\[t(h)=\big{(}\coprod_{j\in J}h^{-1}j=\{(j,k)\ |\ h(k)=j\}\subset J\times K\xrightarrow{\operatorname{pr}_{1}}J\big{)}\]
preserves the ordering. On the other hand, the map
\[\big{(}\coprod_{j\in J}h^{-1}j=\{(j,k)\ |\ h(k)=j\}\subset J\times K\xrightarrow{\operatorname{pr}_{2}}K\big{)}\]
is a bijection. The bijection inverse to it is denoted \(\sigma(h):K\to\coprod_{j\in J}h^{-1}j\). We adopt the point of view on this bijection as a permutation of the elements of \(\{1<2<\cdots<n\}=K\), sending \(k\in K\) to \(k\in K\), where the second copy of \(K\) carries a different total ordering. Or we could view \(\sigma(h)\) as a self-bijection \(K\to K\), \(k\mapsto\sum_{j<h(k)}|h^{-1}j|+|\{k^{\prime}\leqslant k\ |\ h(k^{\prime})=h(k)\}|\), but we shall not do it. Clearly,
\[\big{(}K\xrightarrow{\sigma(h)}\ \coprod_{j\in J}h^{-1}j\xrightarrow{t(h)}J \big{)}=h. \tag{2.1.1}\]
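A small example: let \(h:\mathbf{3}\to\mathbf{2}\), \(h(1)=2\), \(h(2)=1\), \(h(3)=2\). Then

\[\coprod_{j\in\mathbf{2}}h^{-1}j=\{(1,2)<(2,1)<(2,3)\},\qquad t(h):(1,2)\mapsto 1,\ (2,1)\mapsto 2,\ (2,3)\mapsto 2,\]
\[\sigma(h):\mathbf{3}\to\coprod_{j\in\mathbf{2}}h^{-1}j,\qquad 1\mapsto(2,1),\ 2\mapsto(1,2),\ 3\mapsto(2,3),\]

and indeed \(\sigma(h)\centerdot t(h)=h\).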
For any colored prop \(P\) the identity (1.2.1) can be applied to the pair \((\sigma(h),t(h))\) from (2.1.1). Since \(\sigma(h)|:h^{-1}j=\sigma(h)^{-1}t(h)^{-1}j\to t(h)^{-1}j=h^{-1}j\) is an order-preserving bijection, it is the identity map. Hence, equation (1.2.1) can be written as \(\lambda_{P}^{h}\centerdot\otimes^{J}1=\lambda_{P}^{\sigma(h)}\centerdot 1\). We conclude that \(\lambda_{P}^{h}=\lambda_{P}^{\sigma(h)}\).
In order to make \(\mathbb{F}\mathsf{V}\) a lax symmetric monoidal category in the sense of Definition 1.2.1 we assume given maps \(K\xrightarrow{g}I\xrightarrow{f}J\), where \(g\in\mathcal{O}_{\mathsf{sk}}\) and \(f\in\mathcal{S}_{\mathsf{sk}}\), and we exhibit a natural transformation \(\lambda^{f}:(X_{k})_{k\in K}=\otimes^{i\in I}(X_{k})_{k\in g^{-1}i}\to\otimes^{j\in J}\otimes^{i\in f^{-1}j}(X_{k})_{k\in g^{-1}i}=\big{(}(X_{k})_{k\in g^{-1}f^{-1}j}\big{)}_{j\in J}\).
This is a morphism in \(\mathbb{F}\!\!\mathsf{V}\) indexed by bijection \(\sigma(g\centerdot f):K\to\coprod_{j\in J}g^{-1}f^{-1}j\). The element \(\lambda^{f}\in\prod_{j\in J}\prod_{k\in g^{-1}f^{-1}j}\mathsf{V}(X_{k};X_{k})\) is \(\lambda^{f}=\left((1_{X_{k}})_{k\in g^{-1}f^{-1}j}\right)_{j\in J}\).
Naturality of \(\lambda^{f}\), \(f\in\mathcal{S}_{\mathsf{sk}}\), amounts to the commutative square

\[\begin{CD} (X_{k})_{k\in K} @>{\lambda^{f}}>> \big{(}(X_{k})_{k\in g^{-1}f^{-1}j}\big{)}_{j\in J}\\ @V{(u_{i})_{i\in I}}VV @VV{((u_{i})_{i\in f^{-1}j})_{j\in J}}V\\ (Y_{l})_{l\in L} @>{\lambda^{f}}>> \big{(}(Y_{l})_{l\in h^{-1}f^{-1}j}\big{)}_{j\in J} \end{CD}\tag{2.1.2}\]

for each pair of maps \(g:K\to I\), \(h:L\to I\) from \(\mathcal{O}_{\mathsf{sk}}\)
and all collections of morphisms \(u_{i}:(X_{k})_{k\in g^{-1}i}\to(Y_{l})_{l\in h^{-1}i}\). Assume that \(u_{i}\) is indexed by \(\phi_{i}:g^{-1}i\to h^{-1}i\). There is a unique map \(\phi:K\to L\) such that \(\phi|_{g^{-1}i}=\phi_{i}\). Necessarily \(\phi\centerdot h=g\). Hence, \(u_{i}=(v_{l})_{l\in h^{-1}i}\in\prod_{l\in h^{-1}i}\mathsf{V}\big{(}(X_{k})_{k\in\phi^{-1}l};Y_{l}\big{)}\). The diagram formed by the indexing maps for diagram (2.1.2) commutes, since both compositions map \(k\in K\) to the same element \(f(gk)=f(h\phi k)\). This is the only diagonal map of this square, independently of the ordering of source and target. One can verify that the diagonal map in (2.1.2), represented by the family \(\big{(}(v_{l})_{l\in h^{-1}f^{-1}j}\big{)}_{j\in J}=\big{(}(u_{i})_{i\in f^{-1}j}\big{)}_{j\in J}\), equals the composition in the left-bottom path due to unitality (1.3.1) of the multicategory \(\mathsf{V}\), and equals the composition in the top-right path due to unitality property (1.3.2). Therefore, (2.1.2) commutes and \(\lambda^{f}\) is natural.
Assume given maps \(L\xrightarrow{h}I\xrightarrow{f}J\xrightarrow{g}K\), \(h\in\mathcal{O}_{\mathsf{sk}}\), \(f,g\in\mathcal{S}_{\mathsf{sk}}\). All vertices of the diagram
\[\begin{CD} L @>{\sigma(h\centerdot f)}>> \coprod_{j\in J}h^{-1}f^{-1}j\\ @V{\sigma(h\centerdot f\centerdot g)}VV @VV{\sigma(t(h\centerdot f)\centerdot g)}V\\ \coprod_{k\in K}h^{-1}f^{-1}g^{-1}k @= \coprod_{k\in K}\coprod_{j\in g^{-1}k}h^{-1}f^{-1}j \end{CD}\tag{2.1.3}\]
are \(L\) with various total orderings. All arrows map \(i\) to \(i\). Therefore diagram (2.1.3) commutes. Also diagram (1.2.1) commutes, since \(1\centerdot 1=1\).
In particular, \(\lambda^{\mathrm{id}_{I}}:(X_{i})_{i\in I}\to(X_{i})_{i\in I}\) equals \((1_{X_{i}})_{i\in I}\in\prod_{i\in I}\mathsf{V}(X_{i};X_{i})\), that is, \(\lambda^{\mathrm{id}_{I}}\) is the identity morphism of \((X_{i})_{i\in I}\). Similarly, \(\lambda^{\triangledown:I\to\mathbf{1}}:(X_{i})_{i\in I}\to(X_{i})_{i\in I}\) is indexed by \(\sigma(\triangledown)=\mathrm{id}_{I}\), hence, \(\lambda^{\triangledown}=(1_{X_{i}})_{i\in I}\in\prod_{i\in I}\mathsf{V}(X_{i};X_{i})\) is the identity morphism as well. Summing up, \((\mathbb{F}\mathsf{V},\otimes^{I},\lambda^{f})\) is a lax symmetric monoidal category.
Furthermore, if \(f\in\mathcal{O}_{\mathsf{sk}}\), then \(\lambda^{f}:(X_{k})_{k\in K}\to(X_{k})_{k\in K}\), determined by \(K\xrightarrow{g}I\xrightarrow{f}J\in\mathcal{O}_{\mathsf{sk}}\) is indexed by \(\mathrm{id}_{K}\) and equals \((1_{X_{k}})_{k\in K}\in\prod_{k\in K}\mathsf{V}(X_{k};X_{k})\). Therefore, \(\lambda^{f}=\mathrm{id}\) if \(f\) preserves ordering. Thus \(\mathbb{F}\!\mathsf{V}\) is a colored prop.
In particular, it is symmetric with the symmetry \(c:(X_{i})_{i\in I}\sqcup(Y_{j})_{j\in J}\to(Y_{j})_{j\in J}\sqcup(X_{i})_{i\in I}\) lying in the summand indexed by the block-wise permutation \(\sigma:I\sqcup J\to J\sqcup I\). For \(1\leqslant k\leqslant|I|+|J|\)
\[\sigma(k)=\begin{cases}|J|+k,&\text{ for }k\leqslant|I|,\\ k-|I|,&\text{ for }k>|I|.\end{cases}\]
The symmetry is \(\big{(}(1_{Y_{j}})_{j\in J},(1_{X_{i}})_{i\in I}\big{)}\in\big{[}\prod_{j\in J} \mathsf{V}(Y_{j};Y_{j})\big{]}\times\big{[}\prod_{i\in I}\mathsf{V}(X_{i};X_{i}) \big{]}\).
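For instance, for \(|I|=1\), \(|J|=2\) the block-wise permutation and the symmetry are

\[\sigma=\begin{pmatrix}1&2&3\\ 3&1&2\end{pmatrix},\qquad c=\big{(}1_{Y_{1}},1_{Y_{2}},1_{X_{1}}\big{)}\in\mathsf{V}(Y_{1};Y_{1})\times\mathsf{V}(Y_{2};Y_{2})\times\mathsf{V}(X_{1};X_{1}).\]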
The above construction being functorial, we get a functor \(\mathbb{F}:s\mathcal{MC}at\to\operatorname{cProp}\), where the latter category has symmetric strict monoidal functors \(F:P\to Q\) as morphisms such that \(\operatorname{Ob}F:\operatorname{Ob}P=(\operatorname{Col}P)^{*}\to( \operatorname{Col}Q)^{*}=\operatorname{Ob}Q\) is the morphism \((\operatorname{Col}F)^{*}\) of monoids induced by a map \(\operatorname{Col}F:\operatorname{Col}P\to\operatorname{Col}Q\).
A functor \(\mathbb{U}:\operatorname{cProp}\to s\mathcal{MC}at\) is constructed as the composition
\[\operatorname{cProp}\xrightarrow{\clubsuit}lsm\mathcal{C}at\xrightarrow{\widehat{(-)}}s\mathcal{MC}at,\]
where the last functor is constructed in [1, Proposition 3.22]. On an object (prop) \(P\) the functor \(\mathbb{U}\) takes the value with \(\operatorname{Ob}\mathbb{U}P=\operatorname{Col}P\), \(\mathbb{U}P\big{(}(X_{i})_{i\in I};Y\big{)}=P\big{(}(X_{i})_{i\in I};Y\big{)}\), the units \(1_{X}\in P(X;X)\) and the composition
\[\mu_{f}=\big{\{}\big{[}\prod_{j\in J}P\big{(}(X_{i})_{i\in f^{-1}j};Y_{j}\big{)}\big{]}\times P\big{(}(Y_{j})_{j\in J};Z\big{)}\xrightarrow{\lambda^{f}\times\otimes^{J}\times 1}\\ P\big{(}(X_{i})_{i\in I};((X_{i})_{i\in f^{-1}j})_{j\in J}\big{)}\times P\big{(}((X_{i})_{i\in f^{-1}j})_{j\in J};(Y_{j})_{j\in J}\big{)}\times P\big{(}(Y_{j})_{j\in J};Z\big{)}\\ \xrightarrow{\text{composition}}P\big{(}(X_{i})_{i\in I};Z\big{)}\big{\}}\]
for an arbitrary map \(f:I\to J\in\mathcal{S}_{\mathsf{sk}}\). Here \(\lambda^{f}\) is that of \(P^{\clubsuit}\).
What is the natural bijection \(G\in\operatorname{cProp}(\mathbb{F}\mathsf{V},P)\cong s\mathcal{MC}at(\mathsf{V},\mathbb{U}P)\ni F\)? (Multi)functors from both sides have as the mapping on objects the same map \(\operatorname{Ob}F=\operatorname{Ob}G:\operatorname{Ob}\mathsf{V}\to\operatorname{Col}P\), \(X\mapsto FX\), which we fix now. An element \(F\) of the right hand side is the collection of mappings \(F_{(X_{i})_{i\in I};Y}:\mathsf{V}\big{(}(X_{i})_{i\in I};Y\big{)}\to P\big{(}(FX_{i})_{i\in I};FY\big{)}\) such that \((1_{X}^{\mathsf{V}})F_{X;X}=1_{FX}^{P}\) and for any mapping \(f:I\to J\)
\[\big{[}\big{[}\prod_{j\in J}\mathsf{V}\big{(}(X_{i})_{i\in f^{-1}j};Y_{j}\big{)}\big{]}\times\mathsf{V}\big{(}(Y_{j})_{j\in J};Z\big{)}\xrightarrow{[\prod_{j\in J}F_{(X_{i})_{i\in f^{-1}j};Y_{j}}]\times F_{(Y_{j})_{j\in J};Z}}\big{[}\prod_{j\in J}P\big{(}(FX_{i})_{i\in f^{-1}j};FY_{j}\big{)}\big{]}\times P\big{(}(FY_{j})_{j\in J};FZ\big{)}\\ \xrightarrow{\lambda^{f}\times\otimes^{J}\times 1}P\big{(}(FX_{i})_{i\in I};((FX_{i})_{i\in f^{-1}j})_{j\in J}\big{)}\times P\big{(}((FX_{i})_{i\in f^{-1}j})_{j\in J};(FY_{j})_{j\in J}\big{)}\times P\big{(}(FY_{j})_{j\in J};FZ\big{)}\xrightarrow{\text{composition}}P\big{(}(FX_{i})_{i\in I};FZ\big{)}\big{]}\\ =\big{[}\big{[}\prod_{j\in J}\mathsf{V}\big{(}(X_{i})_{i\in f^{-1}j};Y_{j}\big{)}\big{]}\times\mathsf{V}\big{(}(Y_{j})_{j\in J};Z\big{)}\xrightarrow{\mu_{f}}\mathsf{V}\big{(}(X_{i})_{i\in I};Z\big{)}\xrightarrow{F_{(X_{i})_{i\in I};Z}}P\big{(}(FX_{i})_{i\in I};FZ\big{)}\big{]}.\tag{2.1.4}\]
An element \(G\) in the left hand side is the collection of mappings
\[G^{\phi}:\prod_{j\in J}\mathsf{V}\big{(}(X_{i})_{i\in\phi^{-1}j};Y_{j}\big{)} \to P\big{(}(GX_{i})_{i\in I};(GY_{j})_{j\in J}\big{)},\]
where mapping \(\phi:I\to J\) runs over \(\mathcal{S}_{\mathsf{sk}}\), such that \(G\) is strictly compatible with the composition, the identities, the tensor products and \(\lambda^{f}\).
Maps \(G^{\triangledown:I\to\mathbf{1}}_{(X_{i})_{i\in I};Y}\) are identified with \(F_{(X_{i})_{i\in I};Y}\). This assignment determines all maps \(G\) in a unique way. For a general \(\phi:I\to J\in\mathcal{S}_{\mathsf{sk}}\) we must have
\[G^{\phi}=\big{[}\prod_{j\in J}\mathsf{V}\big{(}(X_{i})_{i\in \phi^{-1}j};Y_{j}\big{)}\xrightarrow{\prod_{j\in J}F_{(X_{i})_{i\in\phi^{-1}j} ;Y_{j}}}\prod_{j\in J}P\big{(}(FX_{i})_{i\in\phi^{-1}j};FY_{j}\big{)} \xrightarrow{\lambda_{P}^{\phi}\times\otimes^{J}}\\ P\big{(}(FX_{i})_{i\in I};((FX_{i})_{i\in\phi^{-1}j})_{j\in J} \big{)}\times P\big{(}((FX_{i})_{i\in\phi^{-1}j})_{j\in J};(FY_{j})_{j\in J} \big{)}\\ \xrightarrow{\text{composition}}P\big{(}(FX_{i})_{i\in I};(FY_{j})_{j \in J}\big{)}\big{]}. \tag{2.1.5}\]
Equation (2.1.4) and unitality are the only conditions imposed on \(F\) by conditions on \(G\).
Recall that \(\lambda_{P}^{\sigma(g\centerdot f)}=\lambda_{P}^{g\centerdot f}\) as noticed below (2.1.1). Since \(g|:g^{-1}f^{-1}j\to f^{-1}j\) is order-preserving, equation (1.2.1) for the pair \((g,f)\) gives \(\lambda_{P}^{g\centerdot f}\centerdot\otimes^{J}1=1\centerdot\lambda_{P}^{f}\). Hence, for any \(K\xrightarrow{g}I\xrightarrow{f}J\), where \(g\in\mathcal{O}_{\mathsf{sk}}\) and \(f\in\mathcal{S}_{\mathsf{sk}}\), and any family \((Z_{k})_{k\in K}\) of objects of \(P\) there is an equality \(\lambda^{g\centerdot f}_{P,(Z_{k})_{k\in K}}=\lambda^{f}_{P,((Z_{k})_{k\in g^{-1}i})_{i\in I}}:(Z_{k})_{k\in K}\to((Z_{k})_{k\in g^{-1}f^{-1}j})_{j\in J}\). We conclude that \(G\) sends \(\lambda^{f}_{\mathbb{F}\mathsf{V}}\) to \(\lambda^{f}_{P}\).
Compatibility of \(G\) with the composition follows from commutativity of the diagram which reduces to several equations (2.1.4) (one for each map \(\phi|:\phi^{-1}\psi^{-1}k\to\psi^{-1}k\), \(k\in K\)) and the following diagram which uses only structure maps of \(P\):
The diagram (2.1.6) has the vertices
\[\prod_{k\in K}\Big{\{}\big{[}\prod_{j\in\psi^{-1}k}P\big{(}(U_{i})_{i\in\phi^{-1}j};V_{j}\big{)}\big{]}\times P\big{(}(V_{j})_{j\in\psi^{-1}k};W_{k}\big{)}\Big{\}},\qquad\big{[}\prod_{j\in J}P\big{(}(U_{i})_{i\in\phi^{-1}j};V_{j}\big{)}\big{]}\times\prod_{k\in K}P\big{(}(V_{j})_{j\in\psi^{-1}k};W_{k}\big{)},\]
\[P\big{(}(U_{i})_{i\in I};((U_{i})_{i\in\phi^{-1}j})_{j\in J}\big{)}\times P\big{(}((U_{i})_{i\in\phi^{-1}j})_{j\in J};(V_{j})_{j\in J}\big{)}\times\prod_{k\in K}P\big{(}(V_{j})_{j\in\psi^{-1}k};W_{k}\big{)},\qquad\prod_{k\in K}P\big{(}(U_{i})_{i\in\phi^{-1}\psi^{-1}k};W_{k}\big{)},\]
\[P\big{(}(U_{i})_{i\in I};((U_{i})_{i\in\phi^{-1}\psi^{-1}k})_{k\in K}\big{)}\times P\big{(}((U_{i})_{i\in\phi^{-1}\psi^{-1}k})_{k\in K};(W_{k})_{k\in K}\big{)},\tag{2.1.6}\]
and asserts that the two composites built from these via the structure maps \(\lambda_{P}\), \(\otimes^{J}\), \(\otimes^{K}\) and the composition of \(P\), both ending in \(P\big{(}(U_{i})_{i\in I};(W_{k})_{k\in K}\big{)}\), coincide.
In order to prove its commutativity consider morphisms \(f_{j}:(U_{i})_{i\in\phi^{-1}j}\to V_{j}\), \(g_{k}:(V_{j})_{j\in\psi^{-1}k}\to W_{k}\) of \(P\). Diagram (2.1.6) is equivalent to commutativity of the exterior of a diagram whose constituent equations hold due to equation (1.2.1) and naturality of \(\lambda^{\psi}\). Thus a natural map \(\theta:s\mathcal{MC}at(\mathsf{V},\mathsf{U}P)\to\operatorname{cProp}(\mathbb{F}\mathsf{V},P)\), \(F\mapsto G\) is constructed.
For \(\phi=\triangledown:I\to\mathbf{1}\) we have \(G^{\triangledown}=F_{(X_{i})_{i\in I};Y}\). Hence, the map \(\theta\) is injective. It is also surjective, as the obligatory formula (2.1.5) shows. Therefore, \(\theta\) is a natural bijection.
As on any free monoid, there is a length function \(l:\operatorname{Ob}\mathbb{F}\mathsf{V}=(\operatorname{Ob}\mathsf{V})^{*}\to\mathbb{N}\) on objects of \(\mathbb{F}\mathsf{V}\). Thus, \(\operatorname{Ob}\mathsf{V}=\{X\in\operatorname{Ob}\mathbb{F}\mathsf{V}\mid l(X)=1\}\).
### Multicategory of \(\mathsf{V}\)-quivers
**2.2.1 Definition**.: Let \(\mathsf{V}\) be a plain multicategory. A small \(\mathsf{V}\)-quiver \(\mathcal{A}\) is
* a small set \(\operatorname{Ob}\mathcal{A}\) of objects;
* for each pair of objects \((X,Y)\) of \(\mathcal{A}\) an object \(\mathcal{A}(X,Y)\) of \(\mathsf{V}\), that is, an object \(\mathcal{A}(X,Y)\in\operatorname{Ob}\mathbb{FV}\) such that \(l(\mathcal{A}(X,Y))=1\).
**2.2.2 Definition**.: Let \(\mathsf{V}\) be a locally small multicategory. Let \(\mathcal{B}\), \(\mathcal{A}_{i}\), \(i\in I\in\mathcal{O}_{\mathsf{sk}}\), be small \(\mathsf{V}\)-quivers. A multi-entry \(\mathsf{V}\)-quiver morphism \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) is
* a function \(F=\operatorname{Ob}F:\operatorname{Ob}\mathcal{A}_{1}\times\cdots\times \operatorname{Ob}\mathcal{A}_{I}\to\operatorname{Ob}\mathcal{B}\);
* a collection of elements \(F=F_{(A_{i}),(D_{i})}\in\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I };\mathcal{B}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F)\big{)}\).
The small set of multi-entry \(\mathsf{V}\)-quiver morphisms \((\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) is denoted
\[\mathsf{V}\text{-}\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{B}\big{)}=\bigsqcup_{\operatorname{Ob}F:\prod_{i\in I}\operatorname{Ob}\mathcal{A}_{i}\to\operatorname{Ob}\mathcal{B}}\ \prod_{(A_{i},D_{i}\in\operatorname{Ob}\mathcal{A}_{i})_{i\in I}}\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{B}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F)\big{)}.\]
**2.2.3 Proposition**.: Let \(\mathsf{V}\) be a locally small (symmetric) multicategory. Small \(\mathsf{V}\)-quivers and multi-entry \(\mathsf{V}\)-quiver morphisms form a locally small (symmetric) multicategory \(\mathsf{V}\)-\(\mathcal{Q}u\).
Proof.: Let \(\phi:I\to J\in\mathcal{O}_{\mathsf{sk}}\) (\(\phi:I\to J\in\mathcal{S}_{\mathsf{sk}}\)). Let \((\mathcal{A}_{i})_{i\in I}\), \((\mathcal{B}_{j})_{j\in J}\), \(\mathcal{C}\) be (families of) small \(\mathsf{V}\)-quivers. Let \(F^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\), \(j\in J\), \(G:(\mathcal{B}_{j})_{j\in J}\to\mathcal{C}\) be multi-entry quiver morphisms. We construct another multi-entry quiver morphism \(H:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\) with
* \(H=\operatorname{Ob}H:(A_{i})_{i\in I}\mapsto\big{(}(A_{i})_{i\in\phi^{-1}j}F^{ j}\big{)}_{j\in J}G\).
* \(H=H_{(A_{i}),(E_{i})}:(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I}\to\mathcal{C}((A _{i})_{i\in I}H,(E_{i})_{i\in I}H)\) obtained from \[\mu_{\phi}^{\mathsf{V}}:\prod_{j\in J}\mathsf{V}\big{(}( \mathcal{A}_{i}(A_{i},E_{i}))_{i\in\phi^{-1}j};\mathcal{B}_{j}((A_{i})_{i\in \phi^{-1}j}F^{j},(E_{i})_{i\in\phi^{-1}j}F^{j})\big{)}\times\\ \mathsf{V}\big{(}(\mathcal{B}_{j}((A_{i})_{i\in\phi^{-1}j}F^{j}, (E_{i})_{i\in\phi^{-1}j}F^{j}))_{j\in J};\mathcal{C}\big{(}((A_{i})_{i\in\phi^{ -1}j}F^{j})_{j\in J}G,((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J}G\big{)}\big{)}\\ \to\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I}; \mathcal{C}((A_{i})_{i\in I}H,(E_{i})_{i\in I}H)\big{)},\\ \big{(}(F^{j}_{(A_{i})_{i\in\phi^{-1}j},(E_{i})_{i\in\phi^{-1}j} })_{j\in J},G_{((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J},((E_{i})_{i\in\phi^{-1} j}F^{j})_{j\in J}}\big{)}\mapsto H_{(A_{i}),(E_{i})}.\] (2.2.1)
This assignment is in fact a component of the map
\[\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{Q}u}:\big{[}\prod_{j\in J}\mathsf{V} \text{-}\mathcal{Q}u((\mathcal{A}_{i})_{i\in\phi^{-1}(j)};\mathcal{B}_{j}) \big{]}\times\mathsf{V}\text{-}\mathcal{Q}u((\mathcal{B}_{j})_{j\in J}; \mathcal{C})\to\mathsf{V}\text{-}\mathcal{Q}u((\mathcal{A}_{i})_{i\in I}; \mathcal{C}).\]
Let \((\mathcal{A}_{i})_{i\in I}\), \((\mathcal{B}_{j})_{j\in J}\), \((\mathcal{C}_{k})_{k\in K}\), \(\mathcal{D}\) be (families of) small \(\mathsf{V}\)-quivers, where \(I,J,K\in\operatorname{Ob}\mathcal{O}_{\mathsf{sk}}\). Let \(I\xrightarrow{\phi}J\xrightarrow{\psi}K\) be mappings in \(\mathcal{O}_{\mathsf{sk}}\) (in \(\mathcal{S}_{\mathsf{sk}}\)). Let \(F^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\), \(j\in J\), \(G^{k}:(\mathcal{B}_{j})_{j\in\psi^{-1}k}\to\mathcal{C}_{k}\), \(k\in K\), \(H:(\mathcal{C}_{k})_{k\in K}\to\mathcal{D}\) be multi-entry quiver morphisms. Fix objects \(A_{i}\), \(E_{i}\) of \(\mathcal{A}_{i}\), \(i\in I\). Expanding entries of the associativity equation for \(\mathsf{V}\text{-}\mathcal{Q}u\) using (2.2.1) we get the diagram at Figure 1 for \(X_{i}=\mathcal{A}_{i}(A_{i},E_{i})\), \(Y_{j}=\mathcal{B}_{j}\big{(}(A_{i})_{i\in\phi^{-1}j}F^{j},(E_{i})_{i\in\phi^{-1}j}F^{j}\big{)}\),
\[Z_{k}=\mathcal{C}_{k}\big{(}((A_{i})_{i\in\phi^{-1}j}F^{j})_{j \in\psi^{-1}k}G^{k},((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{-1}k}G^{k}\big{)},\] \[W=\mathcal{D}\big{(}(((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{- 1}k}G^{k})_{k\in K}H,(((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{-1}k}G^{k})_{ k\in K}H\big{)}.\]
Therefore, for composition in \(\mathsf{V}\text{-}\mathcal{Q}u\) the associativity holds.
Define the identity \(\mathsf{V}\)-quiver morphism \(\operatorname{Id}:\mathcal{A}\to\mathcal{A}\) with the identity map \(\operatorname{id}:\operatorname{Ob}\mathcal{A}\to\operatorname{Ob}\mathcal{A}\) and the elements \(1_{\mathcal{A}(A,D)}\in\mathsf{V}\big{(}\mathcal{A}(A,D);\mathcal{A}(A,D)\big{)}\) for all pairs of objects \(A,D\in\operatorname{Ob}\mathcal{A}\). Clearly, both equations for identities are satisfied, hence, \(\mathsf{V}\text{-}\mathcal{Q}u\) is a (symmetric) multicategory.
### \(\mathsf{V}\)-categories
In the mathematical literature there are at least two different notions called categories enriched in bicategories. Let us consider categories enriched in multicategories. This notion seems to appear for the first time in [10, § 1, (MLC 4)], translated to a modern language in [11, § 2]. We use the definition of Leinster [14, Example 2.2.1.iii], [14, Example (2), page 399] and reproduce it here for the convenience of the reader.
**2.3.1 Definition**.: Let \(\mathsf{V}\) be a plain multicategory. A small \(\mathsf{V}\)-category \(\mathcal{C}\) is a small \(\mathbb{F}\mathsf{V}\)-category \(\mathcal{C}\) with \(\mathcal{C}(X,Y)\in\operatorname{Ob}\mathbb{F}\mathsf{V}\) satisfying \(l(\mathcal{C}(X,Y))=1\). In detail, it is
* a small set \(\operatorname{Ob}\mathcal{C}\) of objects;
* for each pair of objects \((X,Y)\) of \(\mathcal{C}\) an object \(\mathcal{C}(X,Y)\) of \(\mathsf{V}\);
* for each triple of objects \((X,Y,Z)\) of \(\mathcal{C}\) a composition element \(\kappa_{X,Y,Z}\in\mathsf{V}\big{(}\mathcal{C}(X,Y),\mathcal{C}(Y,Z);\mathcal{C}(X,Z)\big{)}\);
* for each object \(X\) of \(\mathcal{C}\) an identity element \(\operatorname{id}_{X}\in\mathsf{V}\big{(};\mathcal{C}(X,X)\big{)}\)
such that
* for each quadruple of objects \((W,X,Y,Z)\) of \(\mathcal{C}\) the associativity holds: \[\begin{CD} \mathcal{C}(W,X),\mathcal{C}(X,Y),\mathcal{C}(Y,Z) @>{1,\kappa_{X,Y,Z}}>> \mathcal{C}(W,X),\mathcal{C}(X,Z)\\ @V{\kappa_{W,X,Y},1}VV @VV{\kappa_{W,X,Z}}V\\ \mathcal{C}(W,Y),\mathcal{C}(Y,Z) @>{\kappa_{W,Y,Z}}>> \mathcal{C}(W,Z) \end{CD}\tag{2.3.1}\]
* for each pair of objects \((X,Y)\) of \(\mathcal{C}\) \[\big{[}\mathcal{C}(X,Y)\xrightarrow{\operatorname{id}_{X},1}\mathcal{C}(X,X),\mathcal{C}(X,Y)\xrightarrow{\kappa_{X,X,Y}}\mathcal{C}(X,Y)\big{]}=1,\tag{2.3.2}\] \[\big{[}\mathcal{C}(X,Y)\xrightarrow{1,\operatorname{id}_{Y}}\mathcal{C}(X,Y),\mathcal{C}(Y,Y)\xrightarrow{\kappa_{X,Y,Y}}\mathcal{C}(X,Y)\big{]}=1.\tag{2.3.3}\]
In detail, (2.3.1) means the equation \(tr=lb\) (\(=\kappa_{W,X,Y,Z}\), the iterated composition), where

\[\mathsf{V}\big{(}\mathcal{C}(W,X);\mathcal{C}(W,X)\big{)}\times\mathsf{V}\big{(}\mathcal{C}(X,Y),\mathcal{C}(Y,Z);\mathcal{C}(X,Z)\big{)}\times\mathsf{V}\big{(}\mathcal{C}(W,X),\mathcal{C}(X,Z);\mathcal{C}(W,Z)\big{)}\\ \xrightarrow{\mu_{1\sqcup\triangledown:\mathbf{3}\to\mathbf{2}}}\mathsf{V}\big{(}\mathcal{C}(W,X),\mathcal{C}(X,Y),\mathcal{C}(Y,Z);\mathcal{C}(W,Z)\big{)},\qquad(1_{\mathcal{C}(W,X)},\kappa_{X,Y,Z},\kappa_{W,X,Z})\mapsto tr,\tag{2.3.4}\]

\[\mathsf{V}\big{(}\mathcal{C}(W,X),\mathcal{C}(X,Y);\mathcal{C}(W,Y)\big{)}\times\mathsf{V}\big{(}\mathcal{C}(Y,Z);\mathcal{C}(Y,Z)\big{)}\times\mathsf{V}\big{(}\mathcal{C}(W,Y),\mathcal{C}(Y,Z);\mathcal{C}(W,Z)\big{)}\\ \xrightarrow{\mu_{\triangledown\sqcup 1:\mathbf{3}\to\mathbf{2}}}\mathsf{V}\big{(}\mathcal{C}(W,X),\mathcal{C}(X,Y),\mathcal{C}(Y,Z);\mathcal{C}(W,Z)\big{)},\qquad(\kappa_{W,X,Y},1_{\mathcal{C}(Y,Z)},\kappa_{W,Y,Z})\mapsto lb.\tag{2.3.5}\]
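For example, a \(\mathsf{V}\)-category \(\mathcal{C}\) with a single object \(*\) amounts to a monoid in \(\mathsf{V}\): an object \(M=\mathcal{C}(*,*)\) with a multiplication and a unit

\[\kappa\in\mathsf{V}(M,M;M),\qquad\operatorname{id}\in\mathsf{V}(;M),\]

subject to the associativity equation \(tr=lb\) above and the unitality equations (2.3.2), (2.3.3).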
**2.3.2 Proposition**.: Let \(\mathsf{V}\) be a locally small symmetric closed complete multicategory. The symmetric multicategory \(\mathsf{V}\)-\(\mathcal{Q}u\) is equipped with the following. Let \((\mathcal{A}_{i})_{i\in I}\), \(I\in\mathrm{Ob}\,\mathcal{S}_{\mathsf{k}\mathsf{k}}\), be a family of small \(\mathsf{V}\)-quivers. Let \(\mathcal{C}\) be a small \(\mathsf{V}\)-category. Then there is a small \(\mathsf{V}\)-category \(\underline{\mathsf{V}\mbox{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I}; \mathcal{C})\) and a distinguished evaluation element
\[\operatorname{ev}_{(\mathcal{A}_{i})_{i\in I};\mathcal{C}}^{\mathsf{V}\text{-}\mathcal{Q}u}\in\mathsf{V}\text{-}\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C});\mathcal{C}\big{)}.\]
Proof.: Let \((\mathcal{A}_{i})_{i\in I}\), \(I\in\mathrm{Ob}\,\mathcal{S}_{\mathsf{k}\mathsf{k}}\), be a family of small \(\mathsf{V}\)-quivers. Let \(\mathcal{C}\) be a small \(\mathsf{V}\)-category. Define a small \(\mathsf{V}\)-quiver \(\underline{\mathsf{V}\mbox{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I}; \mathcal{C}\big{)}\) with
* \(\mathrm{Ob}\,\underline{\mathsf{V}\mbox{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i} )_{i\in I};\mathcal{C}\big{)}=\mathsf{V}\mbox{-}\mathcal{Q}u\big{(}(\mathcal{ A}_{i})_{i\in I};\mathcal{C}\big{)};\)
* \(\underline{\mathsf{V}\text{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}(F,G)=\) the object of \(\mathsf{V}\)-transformations \(F\to G:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}=\) the enriched end in \(\mathsf{V}\) \[\int_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big{)}\] similar to [20, § 2.1], that is, the equalizer in the multicategory \(\mathsf{V}\) of the pair of morphisms \[\prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big{)}\xrightarrow[(\operatorname{pr}_{(A_{i})}\centerdot\gamma)]{(\operatorname{pr}_{(D_{i})}\centerdot\beta)}\prod_{(A_{i},D_{i}\in\operatorname{Ob}\mathcal{A}_{i})_{i\in I}}\underline{\mathsf{V}}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)},\tag{2.3.6}\] where \(\beta:\mathcal{C}\big{(}(D_{i})_{i\in I}F,(D_{i})_{i\in I}G\big{)}\to\underline{\mathsf{V}}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)}\) is adjunct to \(\beta^{\dagger}\), obtained via \[\mu_{\triangledown\mathsf{1}}:\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F)\big{)}\\ \times\mathsf{V}\big{(}\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G);\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)}\\ \times\mathsf{V}\big{(}\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F),\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)}\\ \to\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)},\\ \big{(}F_{(A_{i}),(D_{i})},1_{\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)},\star\big{)}\mapsto\beta^{\dagger},\tag{2.3.7}\]
and \(\gamma:\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big{)}\to\underline{ \mathsf{V}}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{ i\in I}F,(D_{i})_{i\in I}G)\big{)}\) is adjunct to \(\gamma^{\dagger}\), obtained via
\[\mu_{\triangledown\mathsf{1}\centerdot\mathsf{X}}:\mathsf{V}\big{(}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big{)}\\ \times\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G)\big{)}\\ \times\mathsf{V}\big{(}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G),\mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)}\\ \to\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{)},\\ \big{(}1_{\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)},G_{(A_{i}),(D_{i})},\star\big{)}\mapsto\gamma^{\dagger}.\tag{2.3.8}\]
Here \(\triangledown\mathsf{1}=\triangledown\sqcup 1:\mathbf{n}\sqcup\mathbf{1}\to\mathbf{1}\sqcup\mathbf{1}=\mathbf{2}\), and \(\triangledown\mathsf{1}\centerdot\mathsf{X}=\big{[}\mathbf{n}+\mathbf{1}\xrightarrow{\triangledown\mathsf{1}}\mathbf{2}\xrightarrow{\mathsf{X}}\mathbf{2}\big{]}\), where \(\mathsf{X}:\mathbf{2}\to\mathbf{2}\) is the transposition; \(\star\) denotes the relevant composition element \(\kappa\) of \(\mathcal{C}\).
Thus, (2.3.11) and (2.3.12) give the same element \(\operatorname{ev}^{\mathsf{V}\text{-}\mathcal{Q}u}\).
There is a composite map
\[\mathsf{V}\text{-}\mathcal{Q}u\big{(}(\mathcal{B}_{j})_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\big{)}\xrightarrow{[\prod_{i\in I}\widehat{\operatorname{Id}}_{\mathcal{A}_{i}}]\times 1\times\widehat{\operatorname{ev}}^{\mathsf{V}\text{-}\mathcal{Q}u}_{(\mathcal{A}_{i})_{i\in I};\mathcal{C}}}\\ \big{[}\prod_{i\in I}\mathsf{V}\text{-}\mathcal{Q}u(\mathcal{A}_{i};\mathcal{A}_{i})\big{]}\times\mathsf{V}\text{-}\mathcal{Q}u\big{(}(\mathcal{B}_{j})_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\big{)}\times\mathsf{V}\text{-}\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C});\mathcal{C}\big{)}\\ \xrightarrow{\mu^{\mathsf{V}\text{-}\mathcal{Q}u}_{\operatorname{id}_{I}\sqcup\triangledown:I\sqcup J\to I\sqcup\mathbf{1}}}\mathsf{V}\text{-}\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J};\mathcal{C}\big{)}.\]
Proof.: The vertical composition of objects of \(\mathsf{V}\)-transformations \(\underline{\mathsf{V}\mbox{-}Qu}((\mathcal{A}_{i})_{i\in I},\mathcal{C})(F,G)\) comes from the composition in \(\mathcal{C}\):
\[\begin{CD} \underline{\mathsf{V}\text{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}(F,G),\underline{\mathsf{V}\text{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}(G,H) @>{\exists!\ \centerdot}>> \underline{\mathsf{V}\text{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}(F,H)\\ @VVV @VVV\\ \prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big{)},\prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}G,(A_{i})_{i\in I}H\big{)} @>{m}>> \prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}H\big{)} \end{CD}\tag{2.3.18}\]
The multiplication \(m\) exists due to the existence of products in the multicategory \(\mathsf{V}\). We have to prove the existence of the top arrow. We use an abbreviation similar to that from Kelly's book [10, § 2.2]: \([(\mathcal{A}_{i})_{i\in I};\mathcal{C}]=\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\). First of all, the exterior of the following diagram commutes:
The exterior runs from \([(\mathcal{A}_{i})_{i\in I};\mathcal{C}](F,G),[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](G,H)\) through

\[\prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big{)},\ \prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}G,(A_{i})_{i\in I}H\big{)}\]

to \(\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}H\big{)}\) and to \(\underline{\mathsf{V}\text{-}\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}(F,H)\); the elements \(a\), \(b\), \(c\) below are the composites along its paths.
The elements \(\kappa_{(A_{i})_{i\in I}F,(A_{i})_{i\in I}G,(A_{i})_{i\in I}H,(D_{i})_{i\in I}H}\) refer to the iterated composition in \(\mathcal{C}\). Notice that actually \(a=b=c\). The equality between the elements \(a\), \(b\), \(c\) follows from the properties of \([(\mathcal{A}_{i})_{i\in I};\mathcal{C}]=\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\).
The two (top) commutative squares imply that there is a unique arrow
\[\centerdot\in\mathsf{V}\big{(}[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](F,G),[( \mathcal{A}_{i})_{i\in I};\mathcal{C}](G,H);[(\mathcal{A}_{i})_{i\in I}; \mathcal{C}](F,H)\big{)},\]
denoted \(\exists!\ \centerdot\) in diagram (2.3.18), which makes the diagram commutative.
Associativity of composition in \(\mathcal{C}\) implies associativity of composition \(m\) in diagram (2.3.18). Hence the upper multiplication \(\centerdot\) is associative as well.
The identity transformation \(\operatorname{id}_{F}:()\to\underline{\mathsf{V}\text{-}\mathcal{Q}u}((\mathcal{A}_{i})_{i\in I};\mathcal{C})(F,F)\) is \(\operatorname{id}_{F}=\big{(}\operatorname{id}_{(A_{i})_{i\in I}F}:()\to\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}F)\big{)}_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\). It is a natural \(\mathsf{V}\)-transformation in the sense of Definition 2.5.1, since the square
\[\begin{CD} \big{(}\mathcal{A}_{i}(A_{i},D_{i})\big{)}_{i\in I} @>{F_{(A_{i}),(D_{i})},\operatorname{id}_{(D_{i})F}}>> \mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F),\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}F)\\ @V{\operatorname{id}_{(A_{i})F},F_{(A_{i}),(D_{i})}}VV @VV{\kappa_{(A_{i})F,(D_{i})F,(D_{i})F}}V\\ \mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}F),\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F) @>{\kappa_{(A_{i})F,(A_{i})F,(D_{i})F}}>> \mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F) \end{CD}\]
commutes. Both triangles commute in \(\mathsf{V}\) due to the elements \(\operatorname{id}\) being units of \(\mathcal{C}\).
This proves Proposition 2.3.2.
**2.3.6 Example**.: Assume that \(\mathcal{V}\) is a complete closed symmetric monoidal category. For \(\mathsf{V}=\widehat{\mathcal{V}}\) (see [2, Proposition 3.22]) we get
\[\beta^{\dagger}=\big{[}\otimes^{I\sqcup\mathbf{I}}[(\mathcal{A}_{i} (A_{i},D_{i}))_{i\in I},\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)]\\ \xrightarrow{F_{(A_{i}),(D_{i})}\otimes 1}\mathcal{C}((A_{i})_{i \in I}F,(D_{i})_{i\in I}F)\otimes\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I }G)\xrightarrow{\star}\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G) \big{]},\]
\[\gamma^{\dagger}=\big{[}\otimes^{I\sqcup\mathbf{I}}[(\mathcal{A}_{i}(A_{i},D_ {i}))_{i\in I},\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)]\xrightarrow{G_ {(A_{i}),(D_{i})}\otimes 1}\\ \mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G)\otimes\mathcal{C} ((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\\ \xrightarrow{c}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G) \otimes\mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G)\xrightarrow{\star} \mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big{]}.\]
**2.3.7 Example**.: \(\mathsf{V}=\widehat{\mathcal{S}et}\), \(\mathsf{V}-\mathcal{C}at=\mathcal{C}at\). The quiver \(\underline{\mathsf{V}-\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I}; \mathcal{C}\big{)}\) has
* \(\mathrm{Ob}\,\underline{\mathsf{V}-\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I };\mathcal{C}\big{)}=\mathsf{V}-\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I };\mathcal{C}\big{)};\)
* \(\underline{\mathsf{V}-\mathcal{Q}u}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{ C}\big{)}(F,G)=\int_{(A_{i}\in\mathcal{A})_{i\in I}}\mathcal{C}((A_{i})_{i\in I }F,(A_{i})_{i\in I}G)\).
\(g\in\mathsf{V}-\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j} )_{j\in J};\mathcal{C}\big{)}\) consists of
* a function \(g=\mathrm{Ob}\,g:(\prod_{i\in I}\mathrm{Ob}\,\mathcal{A}_{i})\times(\prod_{j \in J}\mathrm{Ob}\,\mathcal{B}_{j})\to\mathrm{Ob}\,\mathcal{C}\);
* elements \(g=g_{(A_{i}),(B_{j}),(D_{i}),(E_{j})}\in\)
\[\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}\big{(}((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g\big{)}\big{)}.\]
Consider an element \(f:(\mathcal{B}_{j})_{j\in J}\to\underline{\mathsf{V}-\mathcal{Q}u}\big{(}( \mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}\in\mathsf{V}-\mathcal{Q}u\) given by (2.3.14) and (2.3.15). Map (2.3.15) induces a map
\[h_{(A_{i})}:(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J}\to\mathcal{C}((A_{i})_{i\in I }(B_{j})_{j\in J}f,(A_{i})_{i\in I}(E_{j})_{j\in J}f).\]
Let \(\alpha_{i}\in\mathcal{A}_{i}(A_{i},D_{i})\), \(i\in I\), \(\beta_{j}\in\mathcal{B}_{j}(B_{j},E_{j})\), \(j\in J\). From the equality of compositions (2.3.16) and (2.3.17) we deduce that the square
\[\begin{CD} (A_{i})_{i\in I}(B_{j})_{j\in J}f @>{(\alpha_{i})(B_{j})_{j\in J}f_{(A_{i}),(D_{i})}}>> (D_{i})_{i\in I}(B_{j})_{j\in J}f\\ @V{(\beta_{j})h_{(A_{i})}}VV @VV{(\beta_{j})h_{(D_{i})}}V\\ (A_{i})_{i\in I}(E_{j})_{j\in J}f @>{(\alpha_{i})(E_{j})_{j\in J}f_{(A_{i}),(D_{i})}}>> (D_{i})_{i\in I}(E_{j})_{j\in J}f \end{CD}\]

commutes.
### Multicategory of \(\mathsf{V}\)-categories
\(\mathsf{V}\)-functors were defined in [11, SS1, (MLC 4)], translated to a modern language in [10, SS2], and by Leinster [12, Example 2.4.1.iii]. We shall use a more general notion:
**2.4.1 Definition**.: Let \(\mathsf{V}\) be a locally small symmetric multicategory. Let \(\mathcal{B}\), \(\mathcal{A}_{i}\), \(i\in I\), be small \(\mathsf{V}\)-categories. A multi-entry \(\mathsf{V}\)-functor \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) is an \(\mathbb{F}\mathsf{V}\)-functor \(F:\boxtimes^{i\in I}\mathcal{A}_{i}\to\mathcal{B}\).
**2.4.2 Proposition**.: A multi-entry \(\mathsf{V}\)-functor \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) is identified with the following data:
* a function \(F=\operatorname{Ob}F:\operatorname{Ob}\mathcal{A}_{1}\times\dots\times \operatorname{Ob}\mathcal{A}_{I}\to\operatorname{Ob}\mathcal{B}\);
* a collection of elements \(F=F_{(A_{i}),(E_{i})}\in\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I };\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F)\big{)}\);
such that \(lb=tr\) where these elements come from
\[\mu_{\triangledown}:\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i }))_{i\in I};\mathcal{B}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F)\big{)}\times \mathsf{V}\big{(}(\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I};\mathcal{B}((D_{i}) _{i\in I}F,(E_{i})_{i\in I}F)\big{)}\\ \times\mathsf{V}\big{(}\mathcal{B}((A_{i})_{i\in I}F,(D_{i})_{i \in I}F),\mathcal{B}((D_{i})_{i\in I}F,(E_{i})_{i\in I}F);\mathcal{B}((A_{i}) _{i\in I}F,(E_{i})_{i\in I}F)\big{)}\\ \to\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},( \mathcal{A}_{i}(D_{i},E_{i}))_{i\in I};\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_ {i\in I}F)\big{)},\\ (F_{(A_{i}),(D_{i})},F_{(D_{i}),(E_{i})},*)\mapsto lb,\]
\[\mu_{\chi}:\prod_{i\in I}\mathsf{V}\big{(}\mathcal{A}_{i}(A_{i},D_{i}), \mathcal{A}_{i}(D_{i},E_{i});\mathcal{A}_{i}(A_{i},E_{i})\big{)}\times \mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I};\mathcal{B}((A_{i}) _{i\in I}F,(E_{i})_{i\in I}F)\big{)}\\ \to\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},( \mathcal{A}_{i}(D_{i},E_{i}))_{i\in I};\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_ {i\in I}F)\big{)},\\ ((\kappa_{A_{i},D_{i},E_{i}})_{i\in I},F_{(A_{i}),(E_{i})})\mapsto tr.\]
Here
\[\triangledown =\begin{pmatrix}1&\dots&n&n+1&\dots&2n\\ 1&\dots&1&2&\dots&2\end{pmatrix}:\mathbf{2n}\to\mathbf{2},\] \[\chi =\begin{pmatrix}1&2&\dots&n&n+1&n+2&\dots&2n\\ 1&2&\dots&n&1&2&\dots&n\end{pmatrix}:\mathbf{2n}\to\mathbf{n}.\]
Another requirement is coherence with the units
\[\big{[}()\xrightarrow{(\operatorname{id}_{A_{i}})_{i\in I}}(\mathcal{A}_{i}( A_{i},A_{i}))_{i\in I}\xrightarrow{F_{(A_{i}),(A_{i})}}\mathcal{B}((A_{i})_{i \in I}F,(A_{i})_{i\in I}F)\big{]}=\operatorname{id}_{(A_{i})_{i\in I}F}. \tag{2.4.1}\]
Proof.: An \(\mathsf{FV}\)-functor \(F:\mathbb{Sil}^{i\in I}\mathcal{A}_{i}\to\mathcal{B}\) consists of a map \(F=\operatorname{Ob}F:\prod_{i\in I}\operatorname{Ob}\mathcal{A}_{i}\to \operatorname{Ob}\mathcal{B}\) and a collection of elements
\[F=F_{(A_{i}),(E_{i})}\in\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I };\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F)\big{)}.\]
The \(\mathsf{FV}\)-functor has to satisfy the equation
\[\big[(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I}\xrightarrow{\lambda^{\mathrm{sh}}}(\mathcal{A}_{i}(A_{i},D_{i}),\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I}\xrightarrow{(\kappa)_{I}}(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I}\\
\xrightarrow{F_{(A_{i}),(E_{i})}}\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F)\big]\\
=\big[(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I}\xrightarrow{F_{(A_{i}),(D_{i})},F_{(D_{i}),(E_{i})}}\mathcal{B}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F),\mathcal{B}((D_{i})_{i\in I}F,(E_{i})_{i\in I}F)\\
\xrightarrow{\kappa}\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F)\big].\]
The small set of multi-entry \(\mathsf{V}\)-functors \((\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) is denoted
\[\mathsf{V}\mbox{-}\mathcal{C}at\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{B} \big{)}\subset\mathsf{V}\mbox{-}\mathcal{Q}u\big{(}(\mathcal{A}_{i})_{i\in I}; \mathcal{B}\big{)}.\]
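For orientation, consider the case \(\mathsf{V}=\mathcal{S}et\) (a sketch; the cartesian symmetric multicategory structure \(\mathcal{S}et\big((X_{i})_{i\in I};Y\big)=\mathcal{S}et\big(\prod_{i\in I}X_{i},Y\big)\) is assumed here, not introduced above). Then small \(\mathcal{S}et\)-categories are ordinary small categories, and Proposition 2.4.2 identifies a multi-entry \(\mathcal{S}et\)-functor \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) with an ordinary functor \(\prod_{i\in I}\mathcal{A}_{i}\to\mathcal{B}\) of several variables: for \(a_{i}\in\mathcal{A}_{i}(A_{i},D_{i})\) and \(d_{i}\in\mathcal{A}_{i}(D_{i},E_{i})\) the equations \(lb=tr\) and (2.4.1) read
\[\big((a_{i}\centerdot d_{i})_{i\in I}\big)F=(a_{i})_{i\in I}F\centerdot(d_{i})_{i\in I}F,\qquad(\operatorname{id}_{A_{i}})_{i\in I}F=\operatorname{id}_{(A_{i})_{i\in I}F},\]
with composites written in the diagrammatic order used throughout.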
**2.4.3 Example**.: Consider the particular case \(I=\varnothing\). What is a multi-entry \(\mathsf{V}\)-functor \(e:()\to\mathcal{B}\)? By definition it consists of an object \(B\in\operatorname{Ob}\mathcal{B}\), an element \(e\in\mathsf{V}\big{(};\mathcal{B}(B,B)\big{)}\) such that \(lb=tr\) where
\[\mu_{\varnothing\to\mathbf{2}}:\mathsf{V}\big(;\mathcal{B}(B,B)\big)\times\mathsf{V}\big(;\mathcal{B}(B,B)\big)\times\mathsf{V}\big(\mathcal{B}(B,B),\mathcal{B}(B,B);\mathcal{B}(B,B)\big)\to\mathsf{V}\big(;\mathcal{B}(B,B)\big),\\ (e,e,\kappa_{B,B,B})\mapsto lb,\\ \mu_{\varnothing\to\varnothing}=\operatorname{id}:\mathsf{V}\big(;\mathcal{B}(B,B)\big)\to\mathsf{V}\big(;\mathcal{B}(B,B)\big),\quad e\mapsto tr=e,\]
(see (1.3.2) for \(I=\varnothing\)), that is, \(e\) is an idempotent, and such that (2.4.1) holds. The latter condition, \(e\mu_{\varnothing\to\varnothing}=\operatorname{id}_{B}\), fixes the value of \(e\) as \(e=\operatorname{id}_{B}\). Thus, \(\mathsf{V}\text{-}\mathcal{C}at(;\mathcal{B})\cong\operatorname{Ob}\mathcal{B}\). The multi-entry \(\mathsf{V}\)-functor corresponding to an object \(B\) is denoted \(\tilde{B}:()\to\mathcal{B}\).
**2.4.4 Example**.: Consider the particular case \(I=\mathbf{1}\). A \(\mathsf{V}\)-functor \(F:\mathcal{A}\to\mathcal{B}\) is a multi-entry \(\mathsf{V}\)-functor with the set of entries indexed by \(I=\mathbf{1}\). Thus, it is
* a function \(F=\operatorname{Ob}F:\operatorname{Ob}\mathcal{A}\to\operatorname{Ob}\mathcal{B}\);
* a collection of elements \(F=F_{A,E}\in\mathsf{V}\big{(}\mathcal{A}(A,E);\mathcal{B}(AF,EF)\big{)}\);
such that \(lb=tr\) where these elements come from
\[\mu_{\triangledown}:\mathsf{V}\big(\mathcal{A}(A,D);\mathcal{B}(AF,DF)\big)\times\mathsf{V}\big(\mathcal{A}(D,E);\mathcal{B}(DF,EF)\big)\\ \times\mathsf{V}\big(\mathcal{B}(AF,DF),\mathcal{B}(DF,EF);\mathcal{B}(AF,EF)\big)\to\mathsf{V}\big(\mathcal{A}(A,D),\mathcal{A}(D,E);\mathcal{B}(AF,EF)\big),\\ (F_{A,D},F_{D,E},\kappa_{AF,DF,EF})\mapsto lb,\\ \mu_{\chi}:\mathsf{V}\big(\mathcal{A}(A,D),\mathcal{A}(D,E);\mathcal{A}(A,E)\big)\times\mathsf{V}\big(\mathcal{A}(A,E);\mathcal{B}(AF,EF)\big)\\ \to\mathsf{V}\big(\mathcal{A}(A,D),\mathcal{A}(D,E);\mathcal{B}(AF,EF)\big),\qquad(\kappa_{A,D,E},F_{A,E})\mapsto tr.\]
The equation \(lb=tr\) is a commutative square in \(\mathsf{V}\):
\[\begin{CD}
\mathcal{A}(A,D),\mathcal{A}(D,E) @>{\kappa_{A,D,E}}>> \mathcal{A}(A,E)\\
@V{F_{A,D},F_{D,E}}VV @VV{F_{A,E}}V\\
\mathcal{B}(AF,DF),\mathcal{B}(DF,EF) @>{\kappa_{AF,DF,EF}}>> \mathcal{B}(AF,EF)
\end{CD} \tag{2.4.3}\]
Furthermore, coherence with the units is required:
\[\big[()\xrightarrow{\operatorname{id}_{A}}\mathcal{A}(A,A)\xrightarrow{F_{A,A}}\mathcal{B}(AF,AF)\big]=\operatorname{id}_{AF}. \tag{2.4.4}\]
**2.4.5 Proposition**.: Let \(\mathsf{V}\) be a locally small symmetric multicategory. Small \(\mathsf{V}\)-categories and multi-entry \(\mathsf{V}\)-functors form a locally small symmetric multicategory \(\mathsf{V}\mbox{-}\mathcal{C}at\).
Proof.: Let \(\phi:I\to J\in\mathcal{S}_{\mathsf{sk}}\). Let \((\mathcal{A}_{i})_{i\in I}\), \((\mathcal{B}_{j})_{j\in J}\), \(\mathcal{C}\) be (families of) small \(\mathsf{V}\)-categories. Let \(F^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\), \(j\in J\), \(G:(\mathcal{B}_{j})_{j\in J}\to\mathcal{C}\) be multi-entry functors. Similarly to the \(\mathsf{V}\)-quiver case considered in Proposition 2.2.3 we construct another multi-entry functor \(H:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\) with
* \(H=\operatorname{Ob}H:(A_{i})_{i\in I}\mapsto\big{(}(A_{i})_{i\in\phi^{-1}j}F^{j} \big{)}_{j\in J}G\).
* \(H=H_{(A_{i}),(E_{i})}:(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I}\to\mathcal{C}((A_{i})_{i\in I}H,(E_{i})_{i\in I}H)\) obtained from \[\mu_{\phi}^{\mathsf{V}}:\prod_{j\in J}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},E_{i}))_{i\in\phi^{-1}j};\mathcal{B}_{j}((A_{i})_{i\in\phi^{-1}j}F^{j},(E_{i})_{i\in\phi^{-1}j}F^{j})\big)\times\\ \mathsf{V}\big((\mathcal{B}_{j}((A_{i})_{i\in\phi^{-1}j}F^{j},(E_{i})_{i\in\phi^{-1}j}F^{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J}G,((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J}G)\big)\\ \to\mathsf{V}\big((\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}H,(E_{i})_{i\in I}H)\big),\\ \big((F^{j}_{(A_{i})_{i\in\phi^{-1}j},(E_{i})_{i\in\phi^{-1}j}})_{j\in J},G_{((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J},((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J}}\big)\mapsto H_{(A_{i}),(E_{i})}.\] (2.4.5)
One can check that this assignment is in fact a map
\[\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}:\big[\prod_{j\in J}\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in\phi^{-1}j};\mathcal{B}_{j}\big)\big]\times\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{B}_{j})_{j\in J};\mathcal{C}\big)\to\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big).\]
Let \((\mathcal{A}_{i})_{i\in I}\), \((\mathcal{B}_{j})_{j\in J}\), \((\mathcal{C}_{k})_{k\in K}\), \(\mathcal{D}\) be (families of) small \(\mathsf{V}\)-categories, where \(I,J,K\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\). Let \(I\xrightarrow{\phi}J\xrightarrow{\psi}K\) be mappings (in \(\mathcal{S}_{\mathsf{sk}}\)). Let \(F^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\), \(j\in J\), \(G^{k}:(\mathcal{B}_{j})_{j\in\psi^{-1}k}\to\mathcal{C}_{k}\), \(k\in K\), \(H:(\mathcal{C}_{k})_{k\in K}\to\mathcal{D}\) be multi-entry functors. Fix objects \(A_{i}\), \(E_{i}\) of \(\mathcal{A}_{i}\), \(i\in I\). Expanding the entries of the associativity equation for \(\mathsf{V}\text{-}\mathcal{C}at\) using (2.4.5) we get the diagram of Figure 1 for \(X_{i}=\mathcal{A}_{i}(A_{i},E_{i})\), \(Y_{j}=\mathcal{B}_{j}((A_{i})_{i\in\phi^{-1}j}F^{j},(E_{i})_{i\in\phi^{-1}j}F^{j})\), \(Z_{k}=\mathcal{C}_{k}(((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{-1}k}G^{k},((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{-1}k}G^{k})\),
\[W=\mathcal{D}((((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{-1}k}G^{k})_{k\in K}H,(((E_{i})_{i\in\phi^{-1}j}F^{j})_{j\in\psi^{-1}k}G^{k})_{k\in K}H).\]
Therefore, associativity holds for the composition in \(\mathsf{V}\text{-}\mathcal{C}at\).
Define the identity \(\mathsf{V}\)-functor \(\operatorname{Id}:\mathcal{A}\to\mathcal{A}\) by the identity map \(\operatorname{id}:\operatorname{Ob}\mathcal{A}\to\operatorname{Ob}\mathcal{A}\) and \(1_{\mathcal{A}(A,A)}\in\mathsf{V}\big(\mathcal{A}(A,A);\mathcal{A}(A,A)\big)\). Clearly, both equations for identities are satisfied, hence \(\mathsf{V}\text{-}\mathcal{C}at\) is a symmetric multicategory.
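As an illustration (a sketch of the simplest instance, \(J=\mathbf{1}\) and \(\phi\) the unique map \(I\to\mathbf{1}\)): (2.4.5) composes \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) with \(G:\mathcal{B}\to\mathcal{C}\) into \(H=F\centerdot G\), whose components are the composites in \(\mathsf{V}\)
\[H_{(A_{i}),(E_{i})}=\big[(\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I}\xrightarrow{F_{(A_{i}),(E_{i})}}\mathcal{B}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F)\xrightarrow{G_{(A_{i})_{i\in I}F,(E_{i})_{i\in I}F}}\mathcal{C}((A_{i})_{i\in I}FG,(E_{i})_{i\in I}FG)\big].\]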
### Natural \(\mathsf{V}\)-transformations
**2.5.1 Definition**.: A natural \(\mathsf{V}\)-transformation \(\lambda:F\to G:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\) is a family \((\lambda_{A_{1},\ldots,A_{I}})_{(A_{i}\in\mathcal{A}_{i})}\), \(\lambda_{A_{1},\ldots,A_{I}}\in\mathsf{V}\big(;\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\), such that for all objects \(A_{i}\), \(D_{i}\) of \(\mathcal{A}_{i}\), \(i\in I\), the square
\[\begin{CD}
(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I} @>{F_{(A_{i}),(D_{i})},\lambda_{(D_{i})}}>> \mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F),\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)\\
@V{\lambda_{(A_{i})},G_{(A_{i}),(D_{i})}}VV @VV{\kappa_{(A_{i})F,(D_{i})F,(D_{i})G}}V\\
\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G),\mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G) @>{\kappa_{(A_{i})F,(A_{i})G,(D_{i})G}}>> \mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)
\end{CD}\]
commutes in \(\mathsf{V}\). In detail, the elements \(b^{\prime}\) and \(g^{\prime}\) of \(\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\) are equal, where
\[\mu_{\triangledown\boldsymbol{\cdot}}:\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F)\big)\times\mathsf{V}\big(;\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\\ \times\mathsf{V}\big(\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F),\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\to\\ \mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big),\quad(F_{(A_{i}),(D_{i})},\lambda_{(D_{i})},\kappa_{(A_{i})F,(D_{i})F,(D_{i})G})\mapsto b^{\prime}, \tag{2.5.1}\]
\[\mu_{\boldsymbol{\cdot}\triangledown}:\mathsf{V}\big(;\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G)\big)\\ \times\mathsf{V}\big(\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G),\mathcal{C}((A_{i})_{i\in I}G,(D_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\to\\ \mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big),\quad(\lambda_{(A_{i})},G_{(A_{i}),(D_{i})},\kappa_{(A_{i})F,(A_{i})G,(D_{i})G})\mapsto g^{\prime}. \tag{2.5.2}\]
Here \(\triangledown\boldsymbol{\cdot}=\begin{pmatrix}1&\ldots&I\\ 1&\ldots&1\end{pmatrix}:I\to\mathbf{2}\) and \(\boldsymbol{\cdot}\triangledown=\begin{pmatrix}1&\ldots&I\\ 2&\ldots&2\end{pmatrix}:I\to\mathbf{2}\).
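For orientation, in the case \(\mathsf{V}=\mathcal{S}et\) (under the cartesian assumptions as before) an element of \(\mathsf{V}(;X)\) is just an element of the set \(X\), and Definition 2.5.1 reduces to the familiar notion: a family of morphisms \(\lambda_{(A_{i})}\in\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\) such that for all \(a_{i}\in\mathcal{A}_{i}(A_{i},D_{i})\)
\[(a_{i})_{i\in I}F\centerdot\lambda_{(D_{i})}=\lambda_{(A_{i})}\centerdot(a_{i})_{i\in I}G,\]
that is, naturality in all variables simultaneously.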
**2.5.2 Proposition**.: The set \(\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G)\) of natural \(\mathsf{V}\)-transformations \(\lambda:F\to G:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\) is in bijection with the set \(\mathsf{V}\big(;\int_{(A_{i}\in\mathcal{A}_{i})}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\).
Proof.: The latter set is
\[\mathsf{V}\big(;\int_{(A_{i}\in\mathcal{A}_{i})}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\\
=\Big\{\lambda=(\lambda_{(A_{i})})\in\prod_{(A_{i}\in\mathcal{A}_{i})}\mathsf{V}\big(;\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\;\Big|\;\text{for all }(A_{i}),(D_{i}):\\
\big[()\xrightarrow{\lambda_{(D_{i})}}\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)\xrightarrow{\beta}\underline{\mathsf{V}}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\big]\\
=\big[()\xrightarrow{\lambda_{(A_{i})}}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\xrightarrow{\gamma}\underline{\mathsf{V}}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\big]\Big\}.\]
Equivalently, the condition can be written as follows: for all families of objects \((A_{i},D_{i}\in\operatorname{Ob}\mathcal{A}_{i})_{i\in I}\) we have \(tr=lb\), where
\[\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\times\mathsf{V}\big(;\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\\ \times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}((D_{i})_{i\in I}F,(D_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\\ \xrightarrow{\mu_{\operatorname{in}_{1}:I\to I\sqcup\mathbf{1}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big),\\ \big((1_{\mathcal{A}_{i}(A_{i},D_{i})})_{i\in I},\lambda_{(D_{i})},\beta^{\dagger}\big)\mapsto tr,\]
\[\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\times\mathsf{V}\big(;\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\\ \times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\\ \xrightarrow{\mu_{\operatorname{in}_{1}:I\to I\sqcup\mathbf{1}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big),\\ \big((1_{\mathcal{A}_{i}(A_{i},D_{i})})_{i\in I},\lambda_{(A_{i})},\gamma^{\dagger}\big)\mapsto lb.\]
Equations \(tr=lb\) and \(b^{\prime}=g^{\prime}\) from (2.5.1) and (2.5.2) coincide identically.
### Closedness of the multicategory of \(\mathsf{V}\)-categories
**2.6.1 Proposition**.: Let \(\mathsf{V}\) be a locally small symmetric closed complete multicategory. Let \((\mathcal{A}_{i})_{i\in I}\), \(I\in\operatorname{Ob}\mathcal{S}_{\mathsf{sk}}\), \(\mathcal{C}\), be (a family of) small \(\mathsf{V}\)-categories. Then
\[\operatorname{ev}^{\mathsf{V}\text{-}\mathcal{C}at}=\operatorname{ev}^{\mathsf{V}\text{-}\mathcal{Q}u}_{(\mathcal{A}_{i})_{i\in I};\mathcal{C}}\in\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big);\mathcal{C}\big).\]
Proof.: One can prove that \(lb=tr\), where
\[\mu_{\triangledown}:\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](F,G);\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G)\big)\\
\times\mathsf{V}\big((\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I},[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](G,H);\mathcal{C}((D_{i})_{i\in I}G,(E_{i})_{i\in I}H)\big)\\
\times\mathsf{V}\big(\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}G),\mathcal{C}((D_{i})_{i\in I}G,(E_{i})_{i\in I}H);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}H)\big)\\
\to\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](F,G),(\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I},[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](G,H);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}H)\big),\\
\big(\operatorname{ev}^{\mathsf{V}\text{-}\mathcal{Q}u}_{(A_{i}),F,(D_{i}),G},\operatorname{ev}^{\mathsf{V}\text{-}\mathcal{Q}u}_{(D_{i}),G,(E_{i}),H},\kappa\big)\mapsto lb,\]
Recall that \(\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{Q}u}\) has two presentations: (2.3.11) and (2.3.12). The first one gives
\[\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F)\big)\times\mathsf{V}\big((\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I};\mathcal{C}((D_{i})_{i\in I}F,(E_{i})_{i\in I}F)\big)\\
\times\mathsf{V}\big([(\mathcal{A}_{i})_{i\in I};\mathcal{C}](F,G);\mathcal{C}((E_{i})_{i\in I}F,(E_{i})_{i\in I}G)\big)\times\mathsf{V}\big([(\mathcal{A}_{i})_{i\in I};\mathcal{C}](G,H);\mathcal{C}((E_{i})_{i\in I}G,(E_{i})_{i\in I}H)\big)\\
\times\mathsf{V}\big(\mathcal{C}((A_{i})_{i\in I}F,(D_{i})_{i\in I}F),\mathcal{C}((D_{i})_{i\in I}F,(E_{i})_{i\in I}F),\mathcal{C}((E_{i})_{i\in I}F,(E_{i})_{i\in I}G),\\
\mathcal{C}((E_{i})_{i\in I}G,(E_{i})_{i\in I}H);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}H)\big)\\
\to\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{A}_{i}(D_{i},E_{i}))_{i\in I},[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](F,G),[(\mathcal{A}_{i})_{i\in I};\mathcal{C}](G,H);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}H)\big),\]
where \(\varpi=(I{+}1\;I\;\dots\;2\;1)\) and \(\varpi^{-1}=(1\;2\;\dots\;I\;I{+}1)\). We have \(\pi_{1}=1\), \(\pi_{2}=\varpi:\mathbf{1}\sqcup I\to I\sqcup\mathbf{1}\), \(\pi_{3}=1\). Equation (A.1.3) gives, in particular,
\[\big[\mathsf{V}((X_{i})_{i\in I};Y_{1})\times\mathsf{V}((U_{i})_{i\in I},Z;Y_{2})\times\mathsf{V}(Q;Y_{3})\times\mathsf{V}(Y_{1},Y_{2},Y_{3};W)\xrightarrow{1\times r_{\pi_{2}}\times 1\times 1}\\
\mathsf{V}((X_{i})_{i\in I};Y_{1})\times\mathsf{V}(Z,(U_{i})_{i\in I};Y_{2})\times\mathsf{V}(Q;Y_{3})\times\mathsf{V}(Y_{1},Y_{2},Y_{3};W)\\
\xrightarrow{\mu}\mathsf{V}((X_{i})_{i\in I},Z,(U_{i})_{i\in I},Q;W)\big]\\
=\big[\mathsf{V}((X_{i})_{i\in I};Y_{1})\times\mathsf{V}((U_{i})_{i\in I},Z;Y_{2})\times\mathsf{V}(Q;Y_{3})\times\mathsf{V}(Y_{1},Y_{2},Y_{3};W)\\
\xrightarrow{\mu}\mathsf{V}((X_{i})_{i\in I},(U_{i})_{i\in I},Z,Q;W)\xrightarrow{r_{1\sqcup\varpi\sqcup 1}}\mathsf{V}((X_{i})_{i\in I},Z,(U_{i})_{i\in I},Q;W)\big].\]
Restricting (2.3.11) to \(\mathsf{V}\text{-}\mathcal{C}at\) we get that the evaluation element can be obtained via
\[\mathsf{V}\big((\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F)\big)\times\mathsf{V}\big(\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\mathcal{C}((E_{i})_{i\in I}F,(E_{i})_{i\in I}G)\big)\\
\times\mathsf{V}\big(\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}F),\mathcal{C}((E_{i})_{i\in I}F,(E_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}G)\big)\\
\xrightarrow{\mu_{I\sqcup\mathbf{1}\to\mathbf{2}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}G)\big),\\
\big(F_{(A_{i}),(E_{i})},p_{(E_{i})_{i\in I}},\star\big)\mapsto\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}. \tag{2.6.4}\]
Looking at another path of the commutative diagram (2.3.10) we get another presentation of \(\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}\). Restricting (2.3.12) we conclude that the evaluation element can also be obtained via
\[\mathsf{V}\big(\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G)\big)\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I};\mathcal{C}((A_{i})_{i\in I}G,(E_{i})_{i\in I}G)\big)\\
\times\mathsf{V}\big(\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i\in I}G),\mathcal{C}((A_{i})_{i\in I}G,(E_{i})_{i\in I}G);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}G)\big)\\
\xrightarrow{\mu_{\mathbf{1}\sqcup I\to\mathbf{2}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},E_{i}))_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\mathcal{C}((A_{i})_{i\in I}F,(E_{i})_{i\in I}G)\big),\\
\big(p_{(A_{i})_{i\in I}},G_{(A_{i}),(E_{i})},\star\big)\mapsto\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}. \tag{2.6.5}\]
Thus, (2.6.4) and (2.6.5) give the same element \(\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}\).
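As an illustration (a sketch for \(\mathsf{V}=\mathcal{S}et\), under the same assumptions as before), \(\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}\) is the usual evaluation functor \(\prod_{i\in I}\mathcal{A}_{i}\times\mathcal{C}at\big(\prod_{i\in I}\mathcal{A}_{i},\mathcal{C}\big)\to\mathcal{C}\), and (2.6.4) and (2.6.5) compute its action on a pair \(\big((a_{i})_{i\in I},\lambda:F\to G\big)\), \(a_{i}\in\mathcal{A}_{i}(A_{i},E_{i})\), as the two equal composites
\[\big((a_{i})_{i\in I},\lambda\big)\operatorname{ev}=(a_{i})_{i\in I}F\centerdot\lambda_{(E_{i})}=\lambda_{(A_{i})}\centerdot(a_{i})_{i\in I}G.\]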
**2.6.2 Proposition**.: Let \(\mathsf{V}\) be a locally small symmetric closed complete multicategory. Then the symmetric multicategory \(\mathsf{V}\text{-}\mathcal{C}at\) is closed.
Proof.: Let \((\mathcal{A}_{i})_{i\in I}\), \((\mathcal{B}_{j})_{j\in J}\), \(\mathcal{C}\) be (families of) small \(\mathsf{V}\)-categories. According to Proposition 2.3.2 there is a map
\[\Phi:\mathsf{V}\text{-}\mathcal{Q}u\big((\mathcal{B}_{j})_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\big)\to\mathsf{V}\text{-}\mathcal{Q}u\big((\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J};\mathcal{C}\big).\]
Let us provide a map in the other direction
\[\Psi:\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J};\mathcal{C}\big)\to\mathsf{V}\text{-}\mathcal{Q}u\big((\mathcal{B}_{j})_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\big).\]
Let \(g:(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\to\mathcal{C}\in\mathsf{V}\text{-}\mathcal{C}at\). For any family of objects \(B_{j}\in\mathrm{Ob}\,\mathcal{B}_{j}\), \(j\in J\), define a multi-entry \(\mathsf{V}\)-functor
\[(B_{j})_{j\in J}f=\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id})_{I},(\tilde{B}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{g}\mathcal{C}\big]\in\mathsf{V}\text{-}\mathcal{C}at. \tag{2.6.6}\]
In detail:
\[\big[\prod_{i\in I}\mathsf{V}\text{-}\mathcal{C}at(\mathcal{A}_{i};\mathcal{A}_{i})\big]\times\big[\prod_{j\in J}\mathsf{V}\text{-}\mathcal{C}at(;\mathcal{B}_{j})\big]\times\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J};\mathcal{C}\big)\xrightarrow{\mu_{\operatorname{in}_{1}:I\to I\sqcup J}}\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big),\\ \big((\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},(\tilde{B}_{j})_{j\in J},g\big)\mapsto(B_{j})_{j\in J}f.\]
This defines a map \(\mathrm{Ob}\,f:\prod_{j\in J}\mathrm{Ob}\,\mathcal{B}_{j}\to\mathrm{Ob}\,\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)=\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big).\) On morphisms we have
\[(B_{j})_{j\in J}f_{(A_{i}),(D_{i})}=\big[(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I}\xrightarrow{(1)_{I},(\mathrm{id}_{B_{j}})_{J}}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},B_{j}))_{j\in J}\\
\xrightarrow{g_{(A_{i}),(B_{j}),(D_{i}),(B_{j})}}\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(B_{j})_{j\in J})g)\big].\]
In detail:
\[\mu_{\operatorname{in}_{1}:I\to I\sqcup J}:\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\times\prod_{j\in J}\mathsf{V}\big(;\mathcal{B}_{j}(B_{j},B_{j})\big)\\
\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},B_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(B_{j})_{j\in J})g)\big)\\
\to\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(B_{j})_{j\in J})g)\big),\\
\big((1_{\mathcal{A}_{i}(A_{i},D_{i})})_{i\in I},(\mathrm{id}_{B_{j}})_{j\in J},g_{(A_{i}),(B_{j}),(D_{i}),(B_{j})}\big)\mapsto(B_{j})_{j\in J}f_{(A_{i}),(D_{i})}. \tag{2.6.7}\]
Let us introduce a \(\mathsf{V}\)-quiver morphism \(\bar{f}:(\mathcal{B}_{j})_{j\in J}\to\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\).
With \(g\) we are given elements
\[g_{(A_{i}),(B_{j}),(D_{i}),(E_{j})}\\ \in\mathsf{V}\big{(}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},( \mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{ j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big{)}.\]
Using them we define elements
\[\mu_{\operatorname{in}_{2}:J\to I\sqcup J}:\big[\prod_{i\in I}\mathsf{V}\big(;\mathcal{A}_{i}(A_{i},A_{i})\big)\big]\times\big[\prod_{j\in J}\mathsf{V}\big(\mathcal{B}_{j}(B_{j},E_{j});\mathcal{B}_{j}(B_{j},E_{j})\big)\big]\\
\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},A_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\to\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g)\big),\\
\big((\mathrm{id}_{A_{i}})_{i\in I},(1_{\mathcal{B}_{j}(B_{j},E_{j})})_{j\in J},g_{(A_{i}),(B_{j}),(A_{i}),(E_{j})}\big)\mapsto(A_{i})_{i\in I}\bar{f}_{(B_{j}),(E_{j})}. \tag{2.6.8}\]
So we define \(\bar{f}:(\mathcal{B}_{j})_{j\in J}\to\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\), \((B_{j})_{j\in J}\mapsto(B_{j})_{j\in J}f\) as
\[\bar{f}_{(B_{j}),(E_{j})}=\big((A_{i})_{i\in I}\bar{f}_{(B_{j}),(E_{j})}\big)_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\\
\in\prod_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\cong\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})((B_{j})_{j\in J}f,(E_{j})_{j\in J}f)\big).\]
Let us show that this element is sent by the maps \(\beta\) and \(\gamma\) (cf. (2.3.7) and (2.3.8)) to the same element.
Equivalently, for any \(A_{i},D_{i}\in\mathrm{Ob}\,\mathcal{A}_{i}\), \(i\in I\), \(B_{j},E_{j}\in\mathrm{Ob}\,\mathcal{B}_{j}\), \(j\in J\), the corresponding square is commutative.
By closedness of \(\mathsf{V}\) this is equivalent to the equation
\[\big[(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J}\xrightarrow{(1)_{I},(D_{i})_{i\in I}\bar{f}_{(B_{j}),(E_{j})}}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}\big(((D_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g\big)\\
\xrightarrow{\beta^{\dagger}}\mathcal{C}\big(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g\big)\big]\\
=\big[(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J}\xrightarrow{(1)_{I},(A_{i})_{i\in I}\bar{f}_{(B_{j}),(E_{j})}}(\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}\big(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g\big)\\
\xrightarrow{\gamma^{\dagger}}\mathcal{C}\big(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g\big)\big],\]
where \(\beta^{\dagger}\) is given by (2.3.7) and \(\gamma^{\dagger}\) is given by (2.3.8). In detail, \(tr=lb\) where
\[\big[\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\big]\times\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((D_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}(((D_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g);\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\xrightarrow{\mu_{I\sqcup J\to I\sqcup\mathbf{1}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big),\\
\big((1_{\mathcal{A}_{i}(A_{i},D_{i})})_{i\in I},(D_{i})_{i\in I}\bar{f}_{(B_{j}),(E_{j})},\beta^{\dagger}\big)\mapsto tr,\]
\[\big[\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\big]\times\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g);\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\xrightarrow{\mu_{I\sqcup J\to I\sqcup\mathbf{1}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big),\\
\big((1_{\mathcal{A}_{i}(A_{i},D_{i})})_{i\in I},(A_{i})_{i\in I}\bar{f}_{(B_{j}),(E_{j})},\gamma^{\dagger}\big)\mapsto lb.\]
On objects \(f\Phi:\big((A_{i})_{i\in I},(B_{j})_{j\in J}\big)\mapsto\big((A_{i})_{i\in I},(B_{j})_{j\in J}f\big)\mapsto(A_{i})_{i\in I}(B_{j})_{j\in J}f\). On morphisms
\[\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\times\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})((B_{j})_{j\in J}f,(E_{j})_{j\in J}f)\big)\times\\
\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})((B_{j})_{j\in J}f,(E_{j})_{j\in J}f);\mathcal{C}((A_{i})_{i\in I}(B_{j})_{j\in J}f,(D_{i})_{i\in I}(E_{j})_{j\in J}f)\big)\\
\xrightarrow{\mu_{I\sqcup J\to I\sqcup\mathbf{1}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}((A_{i})_{i\in I}(B_{j})_{j\in J}f,(D_{i})_{i\in I}(E_{j})_{j\in J}f)\big),\\
\big((1_{\mathcal{A}_{i}(A_{i},D_{i})})_{i\in I},f_{(B_{j}),(E_{j})},\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}\big)\mapsto(f\Phi)_{(A_{i}),(B_{j}),(D_{i}),(E_{j})}. \tag{2.6.10}\]
In place of \(\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}\) we can substitute formula (2.3.11) or (2.3.12).
Start from \(f:(\mathcal{B}_{j})_{j\in J}\to\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\). Produce \(g=f\Phi\) and \(f^{\prime}=g\Psi\). Then \(\mathrm{Ob}\,f^{\prime}:\prod_{j\in J}\mathrm{Ob}\,\mathcal{B}_{j}\to\mathsf{V}\text{-}\mathcal{C}at((\mathcal{A}_{i})_{i\in I};\mathcal{C})\) is given by
\[\prod_{i\in I}\mathsf{V}\text{-}\mathcal{C}at(\mathcal{A}_{i};\mathcal{A}_{i})\times\prod_{j\in J}\mathsf{V}\text{-}\mathcal{C}at(;\mathcal{B}_{j})\times\mathsf{V}\text{-}\mathcal{C}at((\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J};\mathcal{C})\xrightarrow{\mu_{\operatorname{in}_{1}:I\to I\sqcup J}}\mathsf{V}\text{-}\mathcal{C}at((\mathcal{A}_{i})_{i\in I};\mathcal{C}),\\ (B_{j})_{j\in J}\mapsto\big((\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},(\tilde{B}_{j})_{j\in J},f\Phi\big)\mapsto(B_{j})_{j\in J}f^{\prime}.\]
\[(B_{j})_{j\in J}f^{\prime}=\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},(\tilde{B}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\\
\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},f}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\xrightarrow{\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}}\mathcal{C}\big]\\
=\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},\tilde{h}}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\xrightarrow{\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}}\mathcal{C}\big]=h,\]
where \(h=(B_{j})_{j\in J}f\in\mathsf{V}\text{-}\mathcal{C}at((\mathcal{A}_{i})_{i\in I};\mathcal{C})\). Notice that \((\tilde{B}_{j})_{j\in J}\centerdot f=\tilde{h}\) due to Example 2.4.3. The last equation follows from the
**2.6.3 Lemma**.: For an arbitrary multi-entry \(\mathsf{V}\)-functor \(h:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\), equation (1.3.4) holds for \(\mathsf{C}=\mathsf{V}\text{-}\mathcal{C}at\):
\[\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},\tilde{h}}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\xrightarrow{\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}}\mathcal{C}\big]=h.\]
Proof.: The left hand side sends a tuple of objects \((A_{i})_{i\in I}\) to \(\big((A_{i})_{i\in I},h\big)\mapsto(A_{i})_{i\in I}h\); thus, it acts on objects like \(\mathrm{Ob}\,h\). On morphisms the left hand side is a particular case of the map \(\Phi\) for \(J=\varnothing\) (see (2.6.9)):
\[\Phi_{0}=\big[\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I}\times\mathrm{id}\times\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}_{(\mathcal{A}_{i})_{i\in I};\mathcal{C}}}\\
\prod_{i\in I}\mathsf{V}\text{-}\mathcal{C}at(\mathcal{A}_{i};\mathcal{A}_{i})\times\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\times\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C});\mathcal{C}\big)\\
\xrightarrow{\mu^{\mathsf{V}\text{-}\mathcal{C}at}_{\operatorname{in}_{1}:I\to I\sqcup\mathbf{1}}}\mathsf{V}\text{-}\mathcal{C}at\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\big],\\
h\mapsto\big((\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},h,\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}\big)\mapsto h\Phi_{0}=\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},\tilde{h}}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\xrightarrow{\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}}\mathcal{C}\big].\]
One can prove that \(\Phi_{0}=\mathrm{id}\). Thus, \(h\Phi_{0}=h\).
We conclude that \(\mathrm{Ob}\,f^{\prime}=\mathrm{Ob}\,f\).
On morphisms, \(f^{\prime}_{(B_{j}),(E_{j})}\centerdot p_{(A_{i})}\) is determined by (2.6.8):
\[\prod_{i\in I}\mathsf{V}\big(;\mathcal{A}_{i}(A_{i},A_{i})\big)\times\prod_{j\in J}\mathsf{V}\big(\mathcal{B}_{j}(B_{j},E_{j});\mathcal{B}_{j}(B_{j},E_{j})\big)\\
\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},A_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g)\big)\\
\xrightarrow{\mu_{\operatorname{in}_{2}:J\to I\sqcup J}}\mathsf{V}\big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((A_{i})_{i\in I},(E_{j})_{j\in J})g)\big),\\
\big((\mathrm{id}_{A_{i}})_{i\in I},(1_{\mathcal{B}_{j}(B_{j},E_{j})})_{j\in J},g_{(A_{i}),(B_{j}),(A_{i}),(E_{j})}\big)\mapsto f^{\prime}_{(B_{j}),(E_{j})}\centerdot p_{(A_{i})},\]
where \(g=f\Phi\) and \(f^{\prime}=g\Psi\). One shows that \(f^{\prime}=f\) and \(\Phi\centerdot\Psi=\mathrm{id}\).
Start from \(g:(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\to\mathcal{C}\). Produce \(f=g\Psi\) and \(g^{\prime\prime}=f\Phi=g\Psi\Phi\). Then \(\operatorname{Ob}g^{\prime\prime}:\prod_{i\in I}\operatorname{Ob}\mathcal{A}_ {i}\times\prod_{j\in J}\operatorname{Ob}\mathcal{B}_{j}\to\operatorname{Ob} \mathcal{C}\) is given by
\[g^{\prime\prime}=\big[(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{(\mathrm{Id}_{\mathcal{A}_{i}})_{i\in I},g\Psi}(\mathcal{A}_{i})_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\xrightarrow{\mathrm{ev}^{\mathsf{V}\text{-}\mathcal{C}at}}\mathcal{C}\big],\\
\big((A_{i})_{i\in I},(B_{j})_{j\in J}\big)\mapsto\big((A_{i})_{i\in I},\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id})_{I},(\tilde{B}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{g}\mathcal{C}\big]\big)\\
\mapsto\big((A_{i})_{i\in I},(B_{j})_{j\in J}\big)g.\]
Thus, \(\operatorname{Ob}g^{\prime\prime}=\operatorname{Ob}g\).
In order to describe \(g^{\prime\prime}\) on morphisms let us rewrite (2.6.10) substituting (2.6.6) into it:
\[\prod_{i\in I}\mathsf{V}\big(\mathcal{A}_{i}(A_{i},D_{i});\mathcal{A}_{i}(A_{i},D_{i})\big)\times\mathsf{V}\Big((\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\\
\big(\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id})_{I},(\tilde{B}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{g}\mathcal{C}\big],\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id})_{I},(\tilde{E}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{g}\mathcal{C}\big]\big)\Big)\\
\times\mathsf{V}\Big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}((\mathcal{A}_{i})_{i\in I};\mathcal{C})\big(\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id})_{I},(\tilde{B}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{g}\mathcal{C}\big],\\
\big[(\mathcal{A}_{i})_{i\in I}\xrightarrow{(\mathrm{Id})_{I},(\tilde{E}_{j})_{j\in J}}(\mathcal{A}_{i})_{i\in I},(\mathcal{B}_{j})_{j\in J}\xrightarrow{g}\mathcal{C}\big]\big);\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\Big)\\
\xrightarrow{\mu_{I\sqcup J\to I\sqcup\mathbf{1}}}\mathsf{V}\big((\mathcal{A}_{i}(A_{i},D_{i}))_{i\in I},(\mathcal{B}_{j}(B_{j},E_{j}))_{j\in J};\mathcal{C}(((A_{i})_{i\in I},(B_{j})_{j\in J})g,((D_{i})_{i\in I},(E_{j})_{j\in J})g)\big).\]
One shows that \(g^{\prime\prime}_{(A_{i}),(B_{j}),(D_{i}),(E_{j})}=g_{(A_{i}),(B_{j}),(D_{i}),(E_{j})}\), hence \(\Psi\centerdot\Phi=\operatorname{id}\).
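For orientation, the \(\mathsf{V}=\mathcal{S}et\) reading of the construction above (a sketch, with \(\mathcal{C}at(-,-)\) denoting the ordinary functor category and the cartesian multicategory structure assumed): \(\Phi\) and \(\Psi\) are the two directions of the classical exponential law
\[\mathcal{C}at\Big(\prod_{i\in I}\mathcal{A}_{i}\times\prod_{j\in J}\mathcal{B}_{j},\mathcal{C}\Big)\cong\mathcal{C}at\Big(\prod_{j\in J}\mathcal{B}_{j},\mathcal{C}at\Big(\prod_{i\in I}\mathcal{A}_{i},\mathcal{C}\Big)\Big),\]
where a functor \(g\) of two groups of variables corresponds to the functor \(\bar{f}\) obtained by fixing the \(\mathcal{B}\)-arguments, exactly as in (2.6.6).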
**2.7.2 Proposition**.: Let \(\mathsf{V}\) be a locally small symmetric complete multicategory. The multicategory \(\mathsf{V}\)-\(\mathcal{C}at\) has equalizers.
Proof.: Let \(\mathcal{A}\xrightarrow[g]{f}\mathcal{B}\in\mathsf{V}\text{-}\mathcal{C}at\) be a pair of parallel \(\mathsf{V}\)-functors. Define the subset \(\operatorname{Ob}\mathcal{K}=\{X\in\operatorname{Ob}\mathcal{A}\mid Xf=Xg\}\). Denote by \(\operatorname{Ob}e:\operatorname{Ob}\mathcal{K}\to\operatorname{Ob}\mathcal{A}\) the inclusion map. For \(X,Y\in\operatorname{Ob}\mathcal{K}\) define an object \(\mathcal{K}(X,Y)\) of \(\mathsf{V}\) and a morphism \(e_{X,Y}\) in \(\mathsf{V}\) via the equalizer diagram (in the multicategory \(\mathsf{V}\))
\[\mathcal{K}(X,Y)\xrightarrow{e_{X,Y}}\mathcal{A}(X,Y)\xrightarrow[g_{X,Y}]{f_{X,Y}}\mathcal{B}(Xf=Xg,Yf=Yg).\]
This defines a \(\mathsf{V}\)-quiver \(\mathcal{K}\). Let us show that the \(\mathsf{V}\)-subquiver \(\mathcal{K}\subset\mathcal{A}\) is a \(\mathsf{V}\)-subcategory.
The identity morphism for \(X\in\operatorname{Ob}\mathcal{K}\) is obtained via the equalizer property for the empty family: the given \(\operatorname{id}_{X}^{\mathcal{A}}\) factorizes in a unique way, as shown in the diagram from Definition 1.3.2
(2.7.1)
The left-bottom path in the following diagram is a fork, that is, \((e_{X,Y},e_{Y,Z})\mathbin{\boldsymbol{\cdot}}\kappa^{\mathcal{A}}\mathbin{ \boldsymbol{\cdot}}f_{X,Z}=(e_{X,Y},e_{Y,Z})\mathbin{\boldsymbol{\cdot}} \kappa^{\mathcal{A}}\mathbin{\boldsymbol{\cdot}}g_{X,Z}\),
(2.7.2)
In fact, due to (2.4.3) for \(f\) and \(g\) the left-bottom path composes to the same parallel arrows as
\[\mathcal{K}(X,Y),\mathcal{K}(Y,Z)\xrightarrow{e_{X,Y},e_{Y,Z}}\mathcal{A}(X,Y),\mathcal{A}(Y,Z)\xrightarrow[g_{X,Y},g_{Y,Z}]{f_{X,Y},f_{Y,Z}}\mathcal{B}(Xf,Yf),\mathcal{B}(Yf,Zf)\xrightarrow{\kappa^{\mathcal{B}}}\mathcal{B}(Xf,Zf).\]
Therefore, there is a unique top arrow \(\kappa^{\mathcal{K}}_{X,Y,Z}\) in this diagram which makes it commutative. We take this arrow as the composition in \(\mathcal{K}\). It is associative and unital since the \(e_{-,-}\) are monomorphisms; more precisely, they enjoy the property of Definition 1.3.2. Furthermore, diagrams (2.7.1) and (2.7.2) show that \(e\) is a \(\mathsf{V}\)-functor (compare with (2.4.4) and diagram (2.4.3)). Clearly, \(e:\mathcal{K}\to\mathcal{A}\) is an equalizer of \((f,g)\) as required in Definition 1.3.2.
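For \(\mathsf{V}=\mathcal{S}et\) (a sketch under the standing assumptions) the construction above is the familiar equalizer of a pair of parallel functors: \(\mathcal{K}\) is the subcategory of \(\mathcal{A}\) with
\[\operatorname{Ob}\mathcal{K}=\{X\in\operatorname{Ob}\mathcal{A}\mid Xf=Xg\},\qquad\mathcal{K}(X,Y)=\{a\in\mathcal{A}(X,Y)\mid af_{X,Y}=ag_{X,Y}\},\]
with composition and identities inherited from \(\mathcal{A}\).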
### Summary
**2.8.1 Theorem**.: Let \(\mathsf{V}\) be a locally small symmetric closed complete multicategory. Then so is \(\mathsf{V}\)-\(\mathcal{C}at\), the multicategory of small \(\mathsf{V}\)-categories and multi-entry \(\mathsf{V}\)-functors.
Proof.: This is proven in Propositions 2.4.5, 2.6.2, 2.7.1 and 2.7.2.
## 3 First examples
### Compositions and whiskers
**3.1.1 Lemma**.: Let \(F,G:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\) be multi-entry \(\mathsf{V}\)-functors. Then
\[\mu^{\mathsf{V}}_{\operatorname{in}_{2}:\mathbf{1}\hookrightarrow I\sqcup\mathbf{1}}:\big[\prod_{i\in I}\mathsf{V}\big(;\mathcal{A}_{i}(A_{i},A_{i})\big)\big]\times\mathsf{V}\big(\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G)\big)\\
\times\mathsf{V}\big((\mathcal{A}_{i}(A_{i},A_{i}))_{i\in I},\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\mathcal{C}\big((A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big)\big)\\
\to\mathsf{V}\big(\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G);\mathcal{C}\big((A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big)\big),\\
\big((\operatorname{id}_{A_{i}})_{i\in I},1,(\operatorname{ev}^{\mathsf{V}\text{-}\mathcal{C}at}_{(\mathcal{A}_{i})_{i\in I};\mathcal{C}})_{(A_{i}),F,(A_{i}),G}\big)\mapsto p_{(A_{i})_{i\in I}}.\]
The proof is left to the reader.
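In the case \(\mathsf{V}=\mathcal{S}et\) (a sketch under the standing cartesian assumptions) the element \(p_{(A_{i})_{i\in I}}\) produced by the lemma is simply the map assigning to a natural transformation its component at \((A_{i})_{i\in I}\):
\[p_{(A_{i})_{i\in I}}:\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)(F,G)\to\mathcal{C}\big((A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big),\qquad\lambda\mapsto\lambda_{(A_{i})_{i\in I}}.\]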
#### 3.1.2 Compositions
Let \(\mathsf{C}\) be a closed symmetric multicategory. As noticed in [1, Proposition 4.10] for each map \(\phi:I\to J\) in \(\operatorname{Mor}\mathcal{S}\) and \(X_{i},Y_{j},Z\in\operatorname{Ob}\mathsf{C}\), \(i\in I\), \(j\in J\), there exists a unique morphism
\[\mu_{\phi}^{\mathsf{C}}:\big{(}\underline{\mathsf{C}}((X_{i})_{i\in\phi^{-1}j}; Y_{j})\big{)}_{j\in J},\underline{\mathsf{C}}\big{(}(Y_{j})_{j\in J};Z\big{)} \to\underline{\mathsf{C}}\big{(}(X_{i})_{i\in I};Z\big{)}\]
that makes the bottom square in the diagram
\[(X_{i})_{i\in I}\xrightarrow{(1_{X_{i}})_{i\in I},(\tilde{F}^{j})_{j\in J},\tilde{G}}(X_{i})_{i\in I},\big(\underline{\mathsf{C}}((X_{i})_{i\in\phi^{-1}j};Y_{j})\big)_{j\in J},\underline{\mathsf{C}}\big((Y_{j})_{j\in J};Z\big)\]
\[\begin{CD}
(X_{i})_{i\in I},\big(\underline{\mathsf{C}}((X_{i})_{i\in\phi^{-1}j};Y_{j})\big)_{j\in J},\underline{\mathsf{C}}\big((Y_{j})_{j\in J};Z\big) @>{(1_{X_{i}})_{i\in I},\mu_{\phi}^{\mathsf{C}}}>> (X_{i})_{i\in I},\underline{\mathsf{C}}\big((X_{i})_{i\in I};Z\big)\\
@V{(\operatorname{ev}^{\mathsf{C}}_{(X_{i})_{i\in\phi^{-1}j};Y_{j}})_{j\in J},1}VV @VV{\operatorname{ev}^{\mathsf{C}}_{(X_{i})_{i\in I};Z}}V\\
(Y_{j})_{j\in J},\underline{\mathsf{C}}\big((Y_{j})_{j\in J};Z\big) @>{\operatorname{ev}^{\mathsf{C}}_{(Y_{j})_{j\in J};Z}}>> Z
\end{CD}\tag{3.1.1}\]
commute. Here \(F^{j}:(X_{i})_{i\in\phi^{-1}j}\to Y_{j}\), \(j\in J\), \(G:(Y_{j})_{j\in J}\to Z\) are morphisms in \(\mathsf{C}\). This composition law turns \(\underline{\mathsf{C}}\) into a \(\mathsf{C}\)-multicategory.
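A minimal sketch for \(\mathsf{C}=\mathcal{S}et\) (closed via \(\underline{\mathcal{S}et}\big((X_{i})_{i\in I};Z\big)=\mathcal{S}et\big(\prod_{i\in I}X_{i},Z\big)\), with \(\operatorname{ev}\) the application map): \(\mu_{\phi}^{\mathsf{C}}\) is substitution of functions of several variables,
\[\big((f_{j})_{j\in J},g\big)\mu_{\phi}:(x_{i})_{i\in I}\mapsto\big(\big((x_{i})_{i\in\phi^{-1}j}\big)f_{j}\big)_{j\in J}\,g,\]
and the bottom square of (3.1.1) states that evaluating after substitution agrees with substituting and then evaluating.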
In particular, we can apply this discussion to the multicategory \(\mathsf{C}=\mathsf{V}\text{-}\mathcal{C}at\). We deduce that on objects the \(\mathsf{V}\)-functor \(\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}\) gives the map \(\operatorname{Ob}\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}\):
\[\big{[}\prod_{j\in J}\operatorname{Ob}\underline{\mathsf{V}\text{-}\mathcal{C}at }\big{(}(\mathcal{A}_{i})_{i\in\phi^{-1}j};\mathcal{B}_{j}\big{)}\big{]}\times \operatorname{Ob}\underline{\mathsf{V}\text{-}\mathcal{C}at}\big{(}( \mathcal{B}_{j})_{j\in J};\mathcal{C}\big{)}\to\operatorname{Ob}\underline{ \mathsf{V}\text{-}\mathcal{C}at}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C} \big{)}\]
which coincides with
\[\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}:\big{[}\prod_{j\in J}\mathsf{V} \text{-}\mathcal{C}at\big{(}(\mathcal{A}_{i})_{i\in\phi^{-1}j};\mathcal{B}_{j }\big{)}\big{]}\times\mathsf{V}\text{-}\mathcal{C}at\big{(}(\mathcal{B}_{j})_{ j\in J};\mathcal{C}\big{)}\to\mathsf{V}\text{-}\mathcal{C}at\big{(}(\mathcal{A}_{i})_{i \in I};\mathcal{C}\big{)}. \tag{3.1.2}\]
Let us study the multi-entry \(\mathsf{V}\)-functor
\[\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}:\big{(}\underline{\mathsf{V} \text{-}\mathcal{C}at}\big{(}(\mathcal{A}_{i})_{i\in\phi^{-1}j};\mathcal{B}_{j }\big{)}\big{)}_{j\in J},\underline{\mathsf{V}\text{-}\mathcal{C}at}\big{(}( \mathcal{B}_{j})_{j\in J};\mathcal{C}\big{)}\to\underline{\mathsf{V}\text{-} \mathcal{C}at}\big{(}(\mathcal{A}_{i})_{i\in I};\mathcal{C}\big{)}.\]
#### 3.1.3 Left whiskering
Let \(F^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\), \(j\in J\), be multi-entry \(\mathsf{V}\)-functors. Consider the left whiskering \(\mathsf{V}\)-functor
\[LW=\big[\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{B}_{j})_{j\in J};\mathcal{C}\big)\xrightarrow{(\tilde{F}^{j})_{j\in J},1}\big(\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in\phi^{-1}j};\mathcal{B}_{j}\big)\big)_{j\in J},\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{B}_{j})_{j\in J};\mathcal{C}\big)\\
\xrightarrow{\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}}\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\big].\]
On objects it takes \(G:(\mathcal{B}_{j})_{j\in J}\to\mathcal{C}\) to \(\big((F^{j})_{j\in J},G\big)\mu_{\phi}^{\mathsf{V}\text{-}\mathcal{C}at}\).
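For ordinary categories (\(\mathsf{V}=\mathcal{S}et\), a sketch) \(LW\) is precomposition with the \(F^{j}\): it sends a natural transformation \(\lambda:G\to H\) to the whiskered transformation whose component at \((A_{i})_{i\in I}\) is
\[\lambda_{\big((A_{i})_{i\in\phi^{-1}j}F^{j}\big)_{j\in J}}.\]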
**3.1.4 Proposition**.: On morphisms
\[LW:\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{B}_{j})_{j\in J};\mathcal{C}\big)(G,H)\to\underline{\mathsf{V}\text{-}\mathcal{C}at}\big((\mathcal{A}_{i})_{i\in I};\mathcal{C}\big)\big((F^{j})_{j\in J}\centerdot_{\phi}G,(F^{j})_{j\in J}\centerdot_{\phi}H\big).\]
Similarly, consider the right whiskering by a fixed multi-entry \(\mathsf{V}\)-functor \(H:(\mathcal{B}_{j})_{j\in J}\to\mathcal{C}\). On objects it takes \(\big(F^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\big)_{j\in J}\) to \((F^{j})_{j\in J}\centerdot_{\phi}H\). On morphisms it
takes a tuple of natural transformations \(\big(\lambda^{j}:F^{j}\to G^{j}:(\mathcal{A}_{i})_{i\in\phi^{-1}j}\to\mathcal{B}_{j}\big)_{j\in J}\) with the components \(\lambda^{j}_{(A_{i})_{i\in\phi^{-1}j}}\in\mathsf{V}\big(;\mathcal{B}_{j}\big((A_{i})_{i\in\phi^{-1}j}F^{j},(A_{i})_{i\in\phi^{-1}j}G^{j}\big)\big)\) to \(\nu=(\nu_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}):(F^{j})_{j\in J}\centerdot_{\phi}H\to(G^{j})_{j\in J}\centerdot_{\phi}H:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\),
\[\nu_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\in\mathsf{V}\big{(};\mathcal{C}\big{(} ((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J}H,((A_{i})_{i\in\phi^{-1}j}G^{j})_{j \in J}H\big{)}\big{)},\]
where \(\nu_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}=(\lambda^{j}_{(A_{i})_{i\in\phi^{-1} j}})_{j\in J}\centerdot_{\phi}H_{((A_{i})_{i\in\phi^{-1}j}F^{j})_{j\in J},((A_{i})_ {i\in\phi^{-1}j}G^{j})_{j\in J}}\).
Proof.: Follows from the above statement and Proposition 2.5.2.
### Representable multicategories
**3.2.1 Proposition**.: When the multicategory \(\mathsf{V}\) is represented by a symmetric monoidal category \(\mathcal{V}\), the multicategory \(\mathsf{V}\text{-}\mathcal{C}at\) is representable by the symmetric monoidal category \(\mathcal{V}\text{-}\mathcal{C}at\).

Assume that \(\mathcal{V}\) is Cartesian (closed under arbitrary small products). Equip \(\mathcal{V}\) with finite products as the monoidal multiplication. Then \(\mathcal{V}\text{-}\mathcal{C}at\) is also Cartesian.
Proof.: The condition '\(F\) is a multi-entry \(\mathsf{V}\)-functor' in Definition 2.4.1 is expressed by the equations
Precisely the same conditions tell that \(F:\boxtimes^{i\in I}\mathcal{A}_{i}\to\mathcal{B}\) is a \(\mathcal{V}\)-functor. Here the monoidal product \(\mathcal{A}=\boxtimes^{i\in I}\mathcal{A}_{i}\) of \(\mathcal{V}\)-categories \(\mathcal{A}_{i}\) has objects \(\operatorname{Ob}\mathcal{A}=\prod_{i\in I}\operatorname{Ob}\mathcal{A}_{i}\) and objects of morphisms \(\mathcal{A}\big((X_{i})_{i\in I},(Y_{i})_{i\in I}\big)=\otimes^{i\in I}\mathcal{A}_{i}(X_{i},Y_{i})\).
Let \(\mathcal{V}\) be Cartesian. It is proven in Proposition 2.7.1, in the general case of categories enriched over a symmetric multicategory \(\mathsf{V}\), that the multicategory \(\mathsf{V}\text{-}\mathcal{C}at\) is Cartesian. Hence \(\mathcal{V}\text{-}\mathcal{C}at\) is Cartesian.

As shown in Theorem 2.8.1 + [1, Proposition 4.8], for a symmetric closed complete monoidal category \(\mathcal{V}\) the category \(\mathcal{V}\text{-}\mathcal{C}at\) also has all these structures. Equivalence of closedness of \(\mathcal{V}\) and of \(\mathsf{V}=\widehat{\mathcal{V}}\) is proven precisely in [1, Proposition 4.8]. As we have noticed, if the monoidal category \(\mathcal{V}\) is Cartesian, so is \(\mathcal{V}\text{-}\mathcal{C}at\).
### Strict 2-categories
**3.3.1 Example**.: Let \(\mathsf{V}=\mathds{1}\), final multicategory with \(\operatorname{Ob}\mathds{1}=\mathds{1}=\{1\}\), and \(\mathds{1}((1)_{\mathbf{n}};1)=\mathds{1}\) for all \(n\in\mathbb{Z}_{\geqslant 0}\). Then \(\mathds{1}\)-\(\mathcal{C}\)_at_ is isomorphic to \(\mathsf{Set}\), the symmetric multicategory of small sets, corresponding to \(\mathcal{S}et\), the Cartesian closed category of small sets. Indeed, a small \(\mathds{1}\)-category \(\mathcal{C}\) is a small set \(\operatorname{Ob}\mathcal{C}\) of objects. The other choices are unique. This ensures that required equations hold true. A multi-entry \(\mathds{1}\)-functor \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{B}\) is a function \(F=\operatorname{Ob}F:\operatorname{Ob}\mathcal{A}_{1}\times\dots\times \operatorname{Ob}\mathcal{A}_{I}\to\operatorname{Ob}\mathcal{B}\), that is, a morphism in \(\mathsf{Set}\).
**3.3.2 Example**.: Let \(\mathsf{V}=\mathsf{Set}\). This multicategory is closed with \(\underline{\mathsf{Set}}=\mathsf{Set}\). Objects of \(\mathsf{Set}\)-\(\mathcal{C}\)_at_ are (ordinary) small (and locally small) categories. Multi-entry \(\mathsf{Set}\)-functors \(F:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}\) are (ordinary) functors \(F:\prod_{i\in I}\mathcal{A}_{i}\to\mathcal{C}\). The object of \(\mathsf{Set}\)-transformations \(F\to G:(\mathcal{A}_{i})_{i\in I}\to\mathcal{C}=\) the enriched end in \(\mathsf{Set}\)\(\int_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\mathcal{C}((A_{i})_{i\in I}F,(A_{i})_{i \in I}G)\), the equalizer in multicategory \(\mathsf{Set}\) of pair
of morphisms (2.3.6). It coincides with the set of natural transformations \(\lambda:F\to G:\prod_{i\in I}\mathcal{A}_{i}\to\mathcal{C}\), which are, of course, elements \(\lambda_{(A_{i}\in\mathcal{A}_{i})_{i\in I}}\in\prod_{(A_{i}\in\mathcal{A}_{i}) _{i\in I}}\mathcal{C}\big{(}(A_{i})_{i\in I}F,(A_{i})_{i\in I}G\big{)}\) such that
\[\begin{CD}
(A_{i})_{i\in I}F @>{F_{(A_{i})_{i\in I}}}>> (D_{i})_{i\in I}F\\
@V{\lambda_{(A_{i})_{i\in I}}}VV @VV{\lambda_{(D_{i})_{i\in I}}}V\\
(A_{i})_{i\in I}G @>{G_{(A_{i})_{i\in I}}}>> (D_{i})_{i\in I}G
\end{CD}\]

commutes.
**3.3.3 Example**.: Let \(\mathsf{V}=\mathsf{Set}\text{-}\mathcal{C}\mathit{at}\). A \(\mathsf{V}\)-category \(\mathcal{A}\) is a category enriched over the Cartesian closed category \(\mathcal{C}\mathit{at}\) of small categories. Thus, it is the same as a strict \(2\)-category. A \(\mathsf{V}\)-functor \(F:\mathcal{A}\to\mathcal{B}\) is a map \(F=\operatorname{Ob}F:\operatorname{Ob}\mathcal{A}\to\operatorname{Ob} \mathcal{B}\) with functors \(F=F_{A,E}:\mathcal{A}(A,E)\to\mathcal{B}(AF,EF)\) such that
\[\begin{CD}
\mathcal{A}(A,D)\times\mathcal{A}(D,E) @>{\kappa_{A,D,E}}>> \mathcal{A}(A,E)\\
@V{F_{A,D}\times F_{D,E}}VV @VV{F_{A,E}}V\\
\mathcal{B}(AF,DF)\times\mathcal{B}(DF,EF) @>{\kappa_{AF,DF,EF}}>> \mathcal{B}(AF,EF)
\end{CD}\]

commutes
and \(F_{A,A}:\mathcal{A}(A,A)\to\mathcal{B}(AF,AF)\) maps the identity object to the identity object. Thus, \(F\) is a strict \(2\)-functor.
The subcategory \(\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)\subset\prod_{A\in\operatorname{Ob}\mathcal{A}}\mathcal{C}(AF,AG)\) (see (2.3.6)) is equipped with the functors

\[p_{D}=\big[\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)\hookrightarrow\prod_{A\in\operatorname{Ob}\mathcal{A}}\mathcal{C}(AF,AG)\xrightarrow{\operatorname{pr}_{D}}\mathcal{C}(DF,DG)\big].\]
By definition, it is the biggest subcategory, for which
\[\begin{CD}
\mathcal{A}(A,D)\times\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G) @>{F_{A,D}\times p_{D}}>> \mathcal{C}(AF,DF)\times\mathcal{C}(DF,DG)\\
@V{G_{A,D}\times p_{A}}VV @VV{\kappa_{AF,DF,DG}}V\\
\mathcal{C}(AG,DG)\times\mathcal{C}(AF,AG) @>{c}>> \mathcal{C}(AF,AG)\times\mathcal{C}(AG,DG) @>{\kappa_{AF,AG,DG}}>> \mathcal{C}(AF,DG)
\end{CD} \tag{3.3.1}\]

commutes.
In particular, objects of \(\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)\) are collections of \(1\)-cells \(\lambda=(\lambda_{A})_{A\in\operatorname{Ob}\mathcal{A}}\), \(\lambda_{A}\in\operatorname{Ob}\mathcal{C}(AF,AG)\), such that for all \(\nu:f\to g\in\mathcal{A}(A,D)\) the square with horizontal edges \(fF_{A,D}\), \(fG_{A,D}\) and vertical edges \(\lambda_{A}\), \(\lambda_{D}\) commutes in \(\mathcal{C}\), that is,

\[\lambda_{A}\centerdot(\nu G_{A,D})=(\nu F_{A,D})\centerdot\lambda_{D}\]

in the sense of strong (\(2\)-categorical) composition in \(\mathcal{C}\). Terminology is that of Gray [10, §I,2.3]. Here \(\lambda_{A}\in\operatorname{Ob}\mathcal{C}(AF,AG)\), \(\lambda_{D}\in\operatorname{Ob}\mathcal{C}(DF,DG)\), \(\nu F_{A,D}:fF_{A,D}\to gF_{A,D}\in\mathcal{C}(AF,DF)\), \(\nu G_{A,D}:fG_{A,D}\to gG_{A,D}\in\mathcal{C}(AG,DG)\), \(\lambda_{A}\centerdot(\nu G_{A,D}):\lambda_{A}\centerdot(fG_{A,D})\to\lambda_{A}\centerdot(gG_{A,D})\in\mathcal{C}(AF,DG)\), \((\nu F_{A,D})\centerdot\lambda_{D}:(fF_{A,D})\centerdot\lambda_{D}\to(gF_{A,D})\centerdot\lambda_{D}\in\mathcal{C}(AF,DG)\). Therefore, the collection \(\lambda\) is a \(\mathcal{C}at\)-natural transformation [10, §I,2.3] = strict \(2\)-natural transformation (\(1\)-transform in the terminology of Crans [11]).
Let \(\lambda,\mu\in\operatorname{Ob}\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)\),

\[m=(m_{A})_{A\in\operatorname{Ob}\mathcal{A}}\in\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)(\lambda,\mu).\]
Then for any \(1\)-cell \(f\in\operatorname{Ob}\mathcal{A}(A,D)\) we have \(fF_{A,D}\in\operatorname{Ob}\mathcal{C}(AF,DF)\), \(fG_{A,D}\in\operatorname{Ob}\mathcal{C}(AG,DG)\), \(\lambda_{A},\mu_{A}\in\operatorname{Ob}\mathcal{C}(AF,AG)\), \(\lambda_{D},\mu_{D}\in\operatorname{Ob}\mathcal{C}(DF,DG)\), \(m_{A}\in\mathcal{C}(AF,AG)(\lambda_{A},\mu_{A})\), and, furthermore, \(m_{D}\in\mathcal{C}(DF,DG)(\lambda_{D},\mu_{D})\). We have also
\[m_{A}\operatorname{\boldsymbol{\cdot}}\left(fG_{A,D}\right) \in\mathcal{C}(AF,DG)(\lambda_{A}\operatorname{\boldsymbol{\cdot}} \left(fG_{A,D}\right),\mu_{A}\operatorname{\boldsymbol{\cdot}}\left(fG_{A,D} \right))\\ =\mathcal{C}(AF,DG)(\lambda_{A}\operatorname{\boldsymbol{\cdot }}\left(fG_{A,D}\right),\left(fF_{A,D}\right)\operatorname{\boldsymbol{\cdot }}\mu_{D}),\\ (fF_{A,D})\operatorname{\boldsymbol{\cdot}}m_{D}\in\mathcal{C}( AF,DG)(\left(fF_{A,D}\right)\operatorname{\boldsymbol{\cdot}}\lambda_{D},\left(fF_{A,D} \right)\operatorname{\boldsymbol{\cdot}}\mu_{D})\\ =\mathcal{C}(AF,DG)(\lambda_{A}\operatorname{\boldsymbol{\cdot }}\left(fG_{A,D}\right),\left(fF_{A,D}\right)\operatorname{\boldsymbol{\cdot }}\mu_{D}),\]
where \(\centerdot\) is the composition in the \(2\)-category \(\mathcal{C}\). So the condition on the collection \(m\) is \(m_{A}\centerdot(fG_{A,D})=(fF_{A,D})\centerdot m_{D}\), or, equivalently, an equality of the corresponding pastings.
Therefore, \(\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)(\lambda,\mu)\) consists of modifications \(m:\lambda\to\mu:F\to G:\mathcal{A}\to\mathcal{C}\) (see e.g. [10, §I,2.3]). On the other hand, for any \(2\)-cell \(\nu\) of \(\mathcal{A}\) and any modification \(m\), diagram (3.3.1) evaluated on the element \((\nu,m)\) commutes (exercise). Thus, \(\underline{\mathsf{Set}\text{-}\mathcal{C}at\text{-}\mathcal{C}at}(\mathcal{A},\mathcal{C})(F,G)\) is precisely the category of strict \(2\)-natural transformations and their modifications.
## 4 Short spaces
Similarly to [13, Section 2] we consider a partially ordered commutative monoid \(\mathbb{L}\) equipped with the operation \(+\) and neutral element \(0\). Of course, we assume that \(a\leqslant b\), \(c\leqslant d\) imply \(a+c\leqslant b+d\). We assume that \(\mathbb{L}\) satisfies the following conditions:
(i) for all \(a,b\in\mathbb{L}\) there is \(c\in\mathbb{L}\) such that \(a\leqslant c\), \(b\leqslant c\) (that is, \((\mathbb{L},\leqslant)\) is directed);

(ii) for all \(a,b\in\mathbb{L}\) there is \(c\in\mathbb{L}\) such that \(c\leqslant a\), \(c\leqslant b\) (that is, \(\mathbb{L}^{\text{op}}\) is directed);

(iii) for all \(a\in\mathbb{L}\) there is \(c\in\mathbb{L}\) such that \(a+c\geqslant 0\).
If \(\mathbb{L}\) is a directed group (satisfies (i)), then \(\mathbb{L}\) satisfies (ii) and (iii) as well for obvious reasons.
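For instance, \(\mathbb{L}=\mathbb{Z}\) and \(\mathbb{L}=\mathbb{R}\) with the usual order are directed groups. A non-group example: the monoid \(\mathbb{L}=\mathbb{R}_{\geqslant 0}\) satisfies (i) with \(c=\max\{a,b\}\), and (ii), (iii) with \(c=0\).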
### First properties
Let \(\mathbb{K}\) denote one of two fields, \(\mathbb{R}\) or \(\mathbb{C}\). By a (generalised) seminorm on a \(\mathbb{K}\)-vector space \(V\) we mean a function \(\|\cdot\|:V\to[0,\infty]\), such that
(i) for \(c\in\mathbb{K}\) and \(x\in V\) we have \(\|cx\|=|c|\cdot\|x\|\) (with the convention \(0\cdot\infty=0\)) (absolute homogeneity);

(ii) \(\|x+y\|\leqslant\|x\|+\|y\|\) for \(x,y\in V\) (triangle inequality).
**4.1.1 Remark**.: Let \((V,\|\cdot\|)\) be a seminormed \(\mathbb{K}\)-vector space. Then the null space \(\ker\|\cdot\|=\{x\in V\mid\|x\|=0\}\) is a \(\mathbb{K}\)-vector subspace.
**4.1.2 Definition**.: Let \(\mathbb{L}\) be a partially ordered commutative monoid. A _short space_ is a \(\mathbb{K}\)-vector space \((V,(\|\cdot\|_{l})_{l\in\mathbb{L}})\) with a family of seminorms indexed by \(\mathbb{L}\), such that for any \(x\in V\) there is \(l\in\mathbb{L}\) with \(\|x\|_{l}<\infty\) and the inequality \(a\leqslant b\in\mathbb{L}\) implies \(\|x\|_{a}\leqslant\|x\|_{b}\).
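As a quick illustration (our example, not used later): let \(\mathbb{L}=\mathbb{Z}\) and set \(\|x\|_{l}=\sup_{n\geqslant 0}e^{ln}|x(n)|\) on the space \(V\) of those sequences \(x:\mathbb{Z}_{\geqslant 0}\to\mathbb{K}\) for which some \(\|x\|_{l}\) is finite (the sequences of at most exponential growth). Each \(\|\cdot\|_{l}\) is a seminorm with possibly infinite values, \(l\leqslant l'\) implies \(\|x\|_{l}\leqslant\|x\|_{l'}\), and \(V\) is a vector subspace: given \(\|x\|_{a},\|y\|_{b}<\infty\), condition (ii) on \(\mathbb{L}\) supplies \(c\leqslant a,b\) with \(\|x+y\|_{c}\leqslant\|x\|_{a}+\|y\|_{b}<\infty\). Thus \((V,(\|\cdot\|_{l})_{l\in\mathbb{Z}})\) is a short space.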
**4.1.3 Example**.: Let \((V,(\mathcal{F}^{l}V)_{l\in\mathbb{L}})\) be a filtered \(\mathbb{K}\)-vector space. With each subspace \(\mathcal{F}^{l}V\) a seminorm is associated
\[\|x\|_{l}=\begin{cases}0,&\text{ if }x\in\mathcal{F}^{l}V,\\ \infty,&\text{ if }x\in V\setminus\mathcal{F}^{l}V.\end{cases}\]
Thus, \(\ker\|\cdot\|_{l}=\mathcal{F}^{l}V\) and \((V,(\|\cdot\|_{l})_{l\in\mathbb{L}})\) is a short space.
Vice versa, a short space \((V,(\|\cdot\|_{l})_{l\in\mathbb{L}})\) with \(\|V\|_{l}\subset\{0,\infty\}\) for all \(l\in\mathbb{L}\) determines an \(\mathbb{L}\)-filtered \(\mathbb{K}\)-vector space \((V,(\mathcal{F}^{l}V)_{l\in\mathbb{L}})\) with \(\mathcal{F}^{l}V=\{x\in V\mid\|x\|_{l}=0\}\) (see Remark 4.1.1).
**4.1.4 Definition**.: Symmetric multicategory \(\mathsf{Short}_{\mathbb{L}}\) has short spaces as objects. Morphisms are short multilinear maps:
\[f:\big{(}X_{1},(_{1}\|\cdot\|_{l})_{l\in\mathbb{L}}\big{)}\times\big{(}X_{2}, (_{2}\|\cdot\|_{l})_{l\in\mathbb{L}}\big{)}\times\cdots\times\big{(}X_{n},(_{ n}\|\cdot\|_{l})_{l\in\mathbb{L}}\big{)}\to\big{(}Y,(\|\cdot\|_{l})_{l\in \mathbb{L}}\big{)}\]
such that for all \(l_{1}\),..., \(l_{n}\in\mathbb{L}\) and all \(x_{1}\in X_{1}\),..., \(x_{n}\in X_{n}\) we have
\[\|f(x_{1},x_{2},\ldots,x_{n})\|_{l_{1}+\cdots+l_{n}}\leqslant{}_{1}\|x_{1}\|_{l_{1}}\cdot\ldots\cdot{}_{n}\|x_{n}\|_{l_{n}}\]
(here \(0\cdot\infty=\infty\)). When \(n=1\), \(\mathbb{L}=0\), and \(X_{1}\), \(Y\) are Banach spaces, these are precisely the short maps widely used in calculus. Composition of multilinear maps
indexed by a map \(\phi:I\to J\in\mathcal{S}_{\mathsf{sk}}\) is given by substituting the results of \((g_{j}:(X_{i})_{i\in\phi^{-1}j}\to Y_{j})_{j\in J}\) into \(f:(Y_{j})_{j\in J}\to Z\), thus, \(\mu_{\phi}:((g_{j})_{j\in J},f)\mapsto((g_{j})_{j\in J})f\). The identity morphism \(1_{X}\in\mathsf{Short}_{\mathbb{L}}(X;X)\) is the identity map \(\mathrm{id}_{X}:X\to X\).
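For example, with \(\mathbb{L}=0\) the multiplication map \(\mathbb{K}\times\mathbb{K}\to\mathbb{K}\), \((x_{1},x_{2})\mapsto x_{1}x_{2}\), is a short bilinear map, since \(|x_{1}x_{2}|=|x_{1}|\cdot|x_{2}|\); more generally, for a seminormed algebra \(A\) shortness of the multiplication \(A,A\to A\) is exactly submultiplicativity \(\|xy\|\leqslant\|x\|\cdot\|y\|\).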
**4.1.5 Proposition**.: The multicategory \(\mathsf{Short}_{\mathbb{L}}\) is closed: the internal hom object is a \(\mathbb{K}\)-vector subspace
\[\underline{\mathsf{Short}_{\mathbb{L}}}(X_{1},\ldots,X_{n};Z)\subset\mathrm{ML}_{\mathbb{K}}(X_{1}\times\cdots\times X_{n},Z)=\underline{\mathbb{K}\text{-}\mathrm{Vect}}(X_{1},\ldots,X_{n};Z)\]
of \(\mathbb{K}\)-multilinear maps. The latter is equipped with seminorms
\[\|f\|_{l}=\inf\{c\in\mathbb{R}_{>0}\mid\forall(x_{1},\ldots,x_{n })\in X_{1}\times\cdots\times X_{n}\quad\forall(\lambda_{1},\ldots,\lambda_{n} )\in\mathbb{L}^{n}\\ \|f(x_{1},x_{2},\ldots,x_{n})\|_{\lambda_{1}+\cdots+\lambda_{n}+l} \leqslant c\cdot_{1}\|x_{1}\|_{\lambda_{1}}\cdot\ldots\cdot_{n}\|x_{n}\|_{ \lambda_{n}}\}. \tag{4.1.1}\]
The subspace \(\underline{\mathsf{Short}_{\mathbb{L}}}(X_{1},\ldots,X_{n};Z)\) is defined as
\[\{f\in\mathrm{ML}_{\mathbb{K}}(X_{1}\times\cdots\times X_{n},Z)\mid\exists l \in\mathbb{L}\;\|f\|_{l}<\infty\}.\]
Proof.: The evaluation multi-entry functor \(\mathrm{ev}\) is defined as
\[\big[X_{1},\ldots,X_{n},\underline{\mathsf{Short}_{\mathbb{L}}}(X_{1},\ldots,X_{n};Z)\xrightarrow{(1,\ldots,1,\mathrm{incl})}X_{1},\ldots,X_{n},\underline{\mathbb{K}\text{-}\mathrm{Vect}}(X_{1},\ldots,X_{n};Z)\xrightarrow{\operatorname{ev}}Z\big],\] \[(x_{1},x_{2},\ldots,x_{n},f)\mapsto(x_{1},x_{2},\ldots,x_{n})f.\]
It is a short map since \(\|(x_{1},x_{2},\ldots,x_{n})f\|_{\lambda_{1}+\cdots+\lambda_{n}+l}\leqslant{}_{1}\|x_{1}\|_{\lambda_{1}}\cdot\ldots\cdot{}_{n}\|x_{n}\|_{\lambda_{n}}\cdot\|f\|_{l}\). As \(\widehat{\mathbb{K}\text{-}\mathrm{Vect}}\) is closed, for every \(\xi:X_{1},\ldots,X_{n},Y_{1},\ldots,Y_{m}\to Z\in\widehat{\mathbb{K}\text{-}\mathrm{Vect}}\) there exists a unique \(\psi:Y_{1},\ldots,Y_{m}\to\underline{\mathbb{K}\text{-}\mathrm{Vect}}(X_{1},\ldots,X_{n};Z)\in\widehat{\mathbb{K}\text{-}\mathrm{Vect}}\) such that

\[\xi=\big[X_{1},\ldots,X_{n},Y_{1},\ldots,Y_{m}\xrightarrow{(1,\ldots,1,\psi)}X_{1},\ldots,X_{n},\underline{\mathbb{K}\text{-}\mathrm{Vect}}(X_{1},\ldots,X_{n};Z)\xrightarrow{\operatorname{ev}}Z\big].\]
The proposition claims that \(\xi\) is short iff \(\mathrm{Im}\,\psi\subset\underline{\mathsf{Short}_{\mathbb{L}}}(X_{1},\ldots,X_{n };Z)\) and
\[\psi:Y_{1},\ldots,Y_{m}\to\underline{\mathsf{Short}_{\mathbb{L}}}(X_{1},\ldots,X_ {n};Z)\]
is short. Let us prove the claim. We have
\[(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\xi=(x_{1},\ldots,x_{n})(y_{1},\ldots,y_{m })\psi.\]
The statement can be rephrased as equivalence of two inequalities:
\[\|(x_{1},\ldots,x_{n})(y_{1},\ldots,y_{m})\psi\|_{\lambda_{1}+ \cdots+\lambda_{n}+\mu_{1}+\cdots+\mu_{m}}\\ \leqslant{}_{X_{1}}\|x_{1}\|_{\lambda_{1}}\cdot\ldots\cdot{}_{X_{ n}}\|x_{n}\|_{\lambda_{n}}\cdot{}_{Y_{1}}\|y_{1}\|_{\mu_{1}}\cdot\ldots\cdot{}_{Y_{m}}\|y_{m }\|_{\mu_{m}}, \tag{4.1.2}\]
\[\|(y_{1},\ldots,y_{m})\psi\|_{\mu_{1}+\cdots+\mu_{m}}\leqslant{}_{Y_{1}}\|y_{ 1}\|_{\mu_{1}}\cdot\ldots\cdot{}_{Y_{m}}\|y_{m}\|_{\mu_{m}}. \tag{4.1.3}\]
(4.1.2) implies (4.1.3) because the requirement of (4.1.1) is satisfied by \(c={}_{Y_{1}}\|y_{1}\|_{\mu_{1}}\cdot\ldots\cdot{}_{Y_{m}}\|y_{m}\|_{\mu_{m}}\). Vice versa, (4.1.3) implies that for any \(\varepsilon>0\)
\[\|(x_{1},\ldots,x_{n})(y_{1},\ldots,y_{m})\psi\|_{\lambda_{1}+ \cdots+\lambda_{n}+\mu_{1}+\cdots+\mu_{m}}\\ \leqslant{}_{X_{1}}\|x_{1}\|_{\lambda_{1}}\cdot\ldots\cdot{}_{X_ {n}}\|x_{n}\|_{\lambda_{n}}\cdot({}_{Y_{1}}\|y_{1}\|_{\mu_{1}}\cdot\ldots\cdot {}_{Y_{m}}\|y_{m}\|_{\mu_{m}}+\varepsilon).\]
Therefore, (4.1.2) holds.
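In the simplest case \(n=1\), \(\mathbb{L}=0\), formula (4.1.1) becomes the usual operator seminorm \(\|f\|=\inf\{c\in\mathbb{R}_{>0}\mid\|f(x)\|\leqslant c\cdot\|x\|\ \forall x\in X_{1}\}\), and \(\underline{\mathsf{Short}_{0}}(X_{1};Z)\) consists of the bounded linear maps.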
**4.1.6 Remark**.: The category \(\mathbf{Short}_{\mathbb{L}}\) is defined as the case \(n=1\) of Definition 4.1.4. The category of seminormed spaces \(\mathbf{snS}=\mathbf{Short}_{0}\) (the case \(\mathbb{L}=0\)) has seminormed spaces \((V,\|\cdot\|)\) as objects and short maps as morphisms. Similarly, define the multicategory of seminormed spaces \(\mathsf{snS}=\mathsf{Short}_{0}\).
Example 4.1.3 gives a symmetric multifunctor \(\iota:\widehat{\mathbb{K}\text{-}\mathrm{Vect}}_{\mathbb{L}}\to\mathsf{Short}_{\mathbb{L}}\). The image of \(\operatorname{Ob}\iota\) consists of short spaces \((V,(\|\cdot\|_{l})_{l\in\mathbb{L}})\) with \(\|V\|_{l}\subset\{0,\infty\}\) for all \(l\in\mathbb{L}\). Besides \(\operatorname{Ob}\iota\) the multifunctor consists of bijections

\[\iota:\widehat{\mathbb{K}\text{-}\mathrm{Vect}}_{\mathbb{L}}(M_{1},\ldots,M_{n};N)\to\mathsf{Short}_{\mathbb{L}}(\iota M_{1},\ldots,\iota M_{n};\iota N).\]
For any seminormed space \((V,\|\cdot\|)\) the unit ball \(B_{\|\cdot\|}=\{x\in V\mid\|x\|\leqslant 1\}\) is a convex and balanced subset of \(V\). Given a convex balanced (that is, \(aW\subset W\) for \(|a|\leqslant 1\)) subset \(W\subset V\), define its Minkowski functional \(\|\cdot\|_{W}:V\to[0,\infty]\) by

\[\|x\|_{W}=\inf\{c\in\mathbb{R}_{>0}\mid x\in cW\}\]
with the convention \(\inf\varnothing=+\infty\). Thus, if \(x\in V\setminus\cup_{c>0}cW\) (that is, \(W\) is not absorbing), then \(\|x\|_{W}=\infty\).
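For example, in \(V=\mathbb{R}^{2}\) the square \(W=[-1,1]^{2}\) gives \(\|(x,y)\|_{W}=\max\{|x|,|y|\}\), while the (non-absorbing) subspace \(W=\mathbb{R}\times 0\) gives \(\|(x,y)\|_{W}=0\) for \(y=0\) and \(\|(x,y)\|_{W}=\infty\) otherwise.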
**4.1.7 Exercise**.: Let \(W\subset V\) be a convex balanced subset. Then for all finite families \(v_{i}\in W\), \(i\in I\), and all numbers \(z_{i}\in\mathbb{K}\), \(i\in I\), the condition \(\sum_{i\in I}|z_{i}|\leqslant 1\) implies \(\sum_{i\in I}z_{i}v_{i}\in W\).
**4.1.8 Lemma**.: The Minkowski functional \(\|\cdot\|_{W}\) is a seminorm. The composition of maps
\[\{\text{seminorms on }V\} \to\{\text{convex balanced subsets of }V\}\to\{\text{ seminorms on }V\},\] \[\|\cdot\| \mapsto B_{\|\cdot\|} W\mapsto\|\cdot\|_{W}\]
is the identity map.
Proof.: The second statement follows from the computation
\[\|x\|_{B_{\|\cdot\|}}=\inf\bigl{\{}c\in\mathbb{R}_{>0}\mid x\in c \{y\in V\mid\|y\|\leqslant 1\}\bigr{\}}\\ =\inf\{c\in\mathbb{R}_{>0}\mid\|c^{-1}x\|\leqslant 1\}=\inf\{c\in \mathbb{R}_{>0}\mid\|x\|\leqslant c\}=\|x\|.\]
The first statement is left to the reader as an exercise.
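(For instance, one may argue as follows: for \(a\in\mathbb{K}^{\times}\), balancedness gives \(uW=W\) for \(|u|=1\), whence \(ax\in cW\iff x\in(c/|a|)W\) and \(\|ax\|_{W}=|a|\cdot\|x\|_{W}\); and if \(\|x\|_{W}<a\), \(\|y\|_{W}<b\), then \(x=ax'\), \(y=by'\) with \(x',y'\in W\), so convexity yields \(x+y=(a+b)\big(\frac{a}{a+b}x'+\frac{b}{a+b}y'\big)\in(a+b)W\), hence \(\|x+y\|_{W}\leqslant a+b\).)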
**4.1.9 Question**.: When is the symmetric multicategory \(\mathsf{Short}_{\mathbb{L}}\) representable by a symmetric monoidal category \(\mathbf{Short}_{\mathbb{L}}\)? The tensor product of a family \(\big(X_{1},({}_{1}\|\cdot\|_{l})_{l\in\mathbb{L}}\big)\), \(\big(X_{2},({}_{2}\|\cdot\|_{l})_{l\in\mathbb{L}}\big)\), \(\dots\), \(\big(X_{n},({}_{n}\|\cdot\|_{l})_{l\in\mathbb{L}}\big)\) seems to be equal to
\[\bigl{(}X_{1}\otimes\dots\otimes X_{n},\bigl{(}\|\cdot\|_{\mathrm{ hull}(\cup_{l_{1}+\cdots+l_{n}=l}B_{1}\|\cdot\|_{l_{1}}\otimes B_{2}\|\cdot\|_{l_{2}} \otimes\dots\otimes B_{n}\|\cdot\|_{l_{n}}}\bigr{)}\bigr{)}_{l\in\mathbb{L}} \bigr{)},\]
where hull means convex balanced hull?
**4.1.10 Remark**.: Assume that \(\mathbb{L}=0\) and \((X_{1},{}_{1}\|\cdot\|)\), \(\dots\), \((X_{n},{}_{n}\|\cdot\|)\) are normed spaces. Then for any \(x\in X_{1}\otimes\dots\otimes X_{n}\)

\[\|x\|_{\mathrm{hull}(B_{{}_{1}\|\cdot\|}\otimes\dots\otimes B_{{}_{n}\|\cdot\|})}=\|x\|_{\mathrm{proj}}\stackrel{{\mathrm{def}}}{{=}}\inf\{\sum_{i\in I}|\alpha_{i}|\,{}_{1}\|x_{1}^{i}\|\dots{}_{n}\|x_{n}^{i}\|\mid x=\sum_{i\in I}\alpha_{i}x_{1}^{i}\otimes\dots\otimes x_{n}^{i},\ I\ \mathrm{finite}\}.\]
Proof.: For arbitrary subsets \(S_{k}\subset V_{k}\), \(1\leqslant k\leqslant n\), we have
\[\mathrm{hull}(S_{1}\otimes\dots\otimes S_{n})=\big{\{}\sum_{i\in I}\gamma_{i} y_{1}^{i}\otimes\dots\otimes y_{n}^{i}\mid I\ \mathrm{finite},\ \sum_{i\in I}|\gamma_{i}|\leqslant 1,\ \forall 1 \leqslant k\leqslant n\ \forall i\in I\ y_{k}^{i}\in S_{k}\big{\}}.\]
Therefore,
\[\|x\|_{\mathrm{hull}(B_{{}_{1}\|\cdot\|}\otimes\dots\otimes B_{{}_{n}\|\cdot\|})}\\
=\inf\big\{c\in\mathbb{R}_{>0}\mid\exists y_{k}^{i}\in X_{k}\setminus 0,\ {}_{k}\|y_{k}^{i}\|\leqslant 1,\ \exists\gamma_{i}\in\mathbb{K},\ \sum_{i\in I}|\gamma_{i}|\leqslant 1,\ x=c\sum_{i\in I}\gamma_{i}y_{1}^{i}\otimes\dots\otimes y_{n}^{i}\big\}\\
=\inf\big\{\sum_{i\in I}|\beta_{i}|\mid\beta_{i}\in\mathbb{K},\ \exists y_{k}^{i}\in X_{k}\setminus 0,\ {}_{k}\|y_{k}^{i}\|\leqslant 1,\ x=\sum_{i\in I}\beta_{i}y_{1}^{i}\otimes\dots\otimes y_{n}^{i}\big\}\\
=\inf\big\{\sum_{i\in I}|\beta_{i}|\mid\beta_{i}\in\mathbb{K},\ \exists x_{k}^{i}\in X_{k}\setminus 0,\ x=\sum_{i\in I}\frac{\beta_{i}}{{}_{1}\|x_{1}^{i}\|\dots{}_{n}\|x_{n}^{i}\|}x_{1}^{i}\otimes\dots\otimes x_{n}^{i}\big\}\\
=\inf\big\{\sum_{i\in I}|\alpha_{i}|\,{}_{1}\|x_{1}^{i}\|\dots{}_{n}\|x_{n}^{i}\|\mid\alpha_{i}\in\mathbb{K},\ x_{k}^{i}\in X_{k}\setminus 0,\ x=\sum_{i\in I}\alpha_{i}x_{1}^{i}\otimes\dots\otimes x_{n}^{i}\big\}\\
=\|x\|_{\mathrm{proj}}.\]

Hence, the norm \(\|\cdot\|_{\mathrm{hull}(B_{{}_{1}\|\cdot\|}\otimes\dots\otimes B_{{}_{n}\|\cdot\|})}\) equals the projective norm.
### Completeness of the multicategory of short spaces
**4.2.1 Proposition**.: The product \(\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\) of a family of short spaces \(\big((M_{i},({}_{i}\|\cdot\|_{l})_{l\in\mathbb{L}})\big)_{i\in I}\) exists and consists of elements \(m=(m_{i})_{i\in I}\in\prod_{i\in I}^{\mathbb{K}\text{-}\mathrm{Vect}}M_{i}\) such that for at least one \(l\in\mathbb{L}\) the value

\[{}_{\prod}\|m\|_{l}=\sup_{i\in I}{}_{i}\|m_{i}\|_{l}\]

is finite. This formula defines the seminorms \({}_{\prod}\|\cdot\|_{l}\) of \(\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\).
Proof.: There are embeddings of \(\mathbb{K}\)-vector spaces
\[\mathbf{Short}_{\mathbb{L}}\big{(}N,\ \prod_{i\in I}^{\mathbf{Short}_{ \mathbb{L}}}M_{i}\big{)}\subset\mathbb{K}\text{-Vect}\big{(}N,\ \prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\big{)} \subset\mathbb{K}\text{-Vect}\big{(}N,\ \prod_{i\in I}^{\mathbb{K}\text{-Vect}}M_{i}\big{)}\] \[\cong\prod_{i\in I}^{\mathbb{K}\text{-Vect}}\mathbb{K}\text{-Vect} (N,M_{i})\supset\prod_{i\in I}^{\mathbb{K}\text{-Vect}}\mathbf{Short}_{ \mathbb{L}}(N,M_{i}).\]
Let us consider an arbitrary \(f:N\to\prod_{i\in I}^{\mathbb{K}\text{-Vect}}M_{i}\in\mathbb{K}\text{-Vect}\) and the corresponding family \((f_{i}:N\to M_{i}\in\mathbb{K}\text{-Vect})_{i\in I}\). We have to prove that \(f\) is short iff \(f_{i}\) is short for all \(i\in I\).
Assume that \(f:N\to\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\in\mathbf{Short}_{ \mathbb{L}}\). It means that for all \(n\in N\) and for all \(l\in\mathbb{L}\)
\[\sup_{i\in I}{}_{i}\|f_{i}(n)\|_{l}={}_{\prod}\|f(n)\|_{l}\leqslant{}_{N}\|n\|_{l}.\]

Therefore, \({}_{i}\|f_{i}(n)\|_{l}\leqslant{}_{N}\|n\|_{l}\) for all \(i\in I\), for all \(n\in N\) and for all \(l\in\mathbb{L}\). Hence, \(f_{i}\in\mathbf{Short}_{\mathbb{L}}\).

Assume now that \(f_{i}\in\mathbf{Short}_{\mathbb{L}}\) for all \(i\in I\). Thus, \({}_{i}\|f_{i}(n)\|_{l}\leqslant{}_{N}\|n\|_{l}\) for all \(i\in I\), for all \(n\in N\) and for all \(l\in\mathbb{L}\). Therefore,

\[{}_{\prod}\|f(n)\|_{l}=\sup_{i\in I}{}_{i}\|f_{i}(n)\|_{l}\leqslant{}_{N}\|n\|_{l}. \tag{4.2.1}\]

For any \(n\in N\) there is \(l\in\mathbb{L}\) such that \({}_{\prod}\|f(n)\|_{l}\) is finite. That is, \(f(N)\subset\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\). Inequality (4.2.1) shows that \(f:N\to\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\) is short.
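For instance, with \(\mathbb{L}=0\) and \(M_{i}=(\mathbb{K},|\cdot|)\) for all \(i\in I\), the product \(\prod_{i\in I}^{\mathbf{Short}_{0}}M_{i}\) is the space \(\ell^{\infty}(I)\) of bounded families equipped with the sup-norm, a proper subspace of the vector-space product \(\prod_{i\in I}^{\mathbb{K}\text{-}\mathrm{Vect}}\mathbb{K}\) when \(I\) is infinite.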
**4.2.2 Proposition**.: The multicategory \(\mathsf{Short}_{\mathbb{L}}\) has small products (see Definition 1.3.1).
Proof.: Given a family \(\big(f_{i}:(X_{j})_{j\in\mathbf{n}}\to V_{i}\in\mathsf{Short}_{\mathbb{L}}\big)_{i\in I}\) there is a unique morphism \(f:(X_{j})_{j\in\mathbf{n}}\to\prod_{i\in I}^{\mathbb{K}\text{-}\mathrm{Vect}}V_{i}\) such that for all \(i\in I\)
\[f_{i}=\big{[}(X_{j})_{j\in\mathbf{n}}\xrightarrow{f}\prod_{i\in I}^{\mathbb{K} \text{-}\text{Vect}}V_{i}\xrightarrow{\text{pr}_{i}}V_{i}\big{]},\]
since the multicategory \(\widehat{\mathbb{K}\text{-}\text{Vect}}\) is representable. For any \(n\)-tuple of elements \((x_{j}\in X_{j})_{j\in\mathbf{n}}\) there is an \(n\)-tuple of elements \((l_{j}\in\mathbb{L})_{j\in\mathbf{n}}\) such that \({}_{X_{j}}\|x_{j}\|_{l_{j}}<\infty\). Then
\[{}_{\prod}\|f(x_{1},x_{2},\dots,x_{n})\|_{l_{1}+\dots+l_{n}}={}_{\prod}\|(f_{i}(x_{1},x_{2},\dots,x_{n}))_{i\in I}\|_{l_{1}+\dots+l_{n}}\\
=\sup_{i\in I}{}_{V_{i}}\|f_{i}(x_{1},x_{2},\dots,x_{n})\|_{l_{1}+\dots+l_{n}}\leqslant{}_{X_{1}}\|x_{1}\|_{l_{1}}\cdot\dots\cdot{}_{X_{n}}\|x_{n}\|_{l_{n}}<\infty.\]
Therefore, \(f\) takes values in \(\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}V_{i}\). Moreover, \(f:(X_{j})_{j\in\mathbf{n}}\to\prod_{i\in I}^{\mathbf{Short}_{\mathbb{L}}}V_{i}\) is short.
**4.2.3 Proposition**.: A morphism \(h:B\to A\in\mathbf{snS}\) has a kernel (equalizer of \(h\) and \(0\)) in \(\mathbf{snS}\), which coincides with the kernel \(K=\operatorname{Ker}h\) in \(\mathbb{K}\text{-}\text{Vect}\). The subspace \(K\subset B\) inherits the seminorm from \(B\).
Proof.: In \(\mathbb{K}\text{-}\mathrm{Vect}\) the kernel \((K=\operatorname{Ker}h,\ i=\ker h)\) exists and satisfies the following universal property: for every \(j:D\to B\),

\[j\centerdot h=0\implies(\exists!\,n:D\to K\ \text{such that}\ n\centerdot i=j).\]
We have to prove the same property in \(\mathbf{snS}\). First of all, \(i\) is short. Hence, if \(n\) is short, then \(j=n\mathbin{\boldsymbol{\cdot}}i\) is short as well. If \(j\) is short, then for all \(d\in D\)
\[{}_{K}\|nd\|={}_{B}\|ind\|={}_{B}\|jd\|\leqslant{}_{D}\|d\|.\]
Hence, \(n\) is short.
**4.2.4 Corollary**.: By [11, Corollary V.2.2] the category \(\mathbf{snS}\) (and more generally \(\mathbf{Short}_{\mathbb{L}}\)) is complete. The limit of a diagram \(I\to\mathbf{Short}_{\mathbb{L}}\), \(i\mapsto\big(M_{i},({}_{i}\|\cdot\|_{l})_{l\in\mathbb{L}}\big)\) is
\[\lim_{i\in I}(M_{i}\in\mathbf{Short}_{\mathbb{L}})=\big{(}\prod_{i\in\operatorname {Ob}I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\big{)}\bigcap\lim_{i\in I}(M_{i}\in \mathbb{K}\text{-}\text{Vect}), \tag{4.2.2}\]
where both \(\mathbb{K}\)-vector spaces are viewed as subspaces of \(\prod_{i\in\operatorname{Ob}I}^{\mathbb{K}\text{-}\mathrm{Vect}}M_{i}\). The seminorms on the subspace \(\lim_{i\in I}(M_{i}\in\mathbf{Short}_{\mathbb{L}})\subset\prod_{i\in\operatorname{Ob}I}^{\mathbf{Short}_{\mathbb{L}}}M_{i}\) are induced from the latter short space.
Proof.: According to [11, Theorem V.2.2] the rows of the diagram formed by the equalizer diagrams \(\lim_{i\in I}M_{i}\to\prod_{i\in\operatorname{Ob}I}M_{i}\overset{f}{\underset{g}{\rightrightarrows}}\prod_{u\in\operatorname{Mor}I}M_{\operatorname{tgt}u}\), taken in \(\mathbf{Short}_{\mathbb{L}}\) (top row) and in \(\mathbb{K}\text{-}\mathrm{Vect}\) (bottom row), where \(\operatorname{pr}_{u}\circ f=\operatorname{pr}_{\operatorname{tgt}u}\), \(\operatorname{pr}_{u}\circ g=M_{u}\circ\operatorname{pr}_{\operatorname{src}u}\), are equalizers. Both squares on the right (one with the upper arrows and another with the lower arrows) commute. One easily deduces (4.2.2).
**4.2.5 Corollary**.: The multicategory \(\mathsf{Short}_{\mathbb{L}}\) is complete.
Proof.: Given a functor \(I\to\mathbf{Short}_{\mathbb{L}}\) and a family of morphisms \(h_{i}:(X_{j})_{j\in\mathbf{n}}\to M_{i}\in\mathsf{Short}_{\mathbb{L}}\), \(i\in\operatorname{Ob}I\), such that
\[h_{k}=\big{[}(X_{j})_{j\in\mathbf{n}}\xrightarrow{h_{i}}M_{i}\to M_{k}\big{]}\]
for each \(i\to k\in I\), we see that the map \(h=(h_{i}):(X_{j})_{j\in\mathbf{n}}\to\prod_{i\in I}^{\mathbb{K}\text{-}\text{ \rm{Vect}}}M_{i}\) takes values in each of the subspaces \(\prod_{i\in I}^{\mathsf{Short}_{\mathbb{L}}}M_{i}\) (by Proposition 4.2.2) and \(\lim_{i\in I}(M_{i}\in\mathbb{K}\text{-}\text{\rm{Vect}})\). Hence, in their intersection \(\lim_{i\in I}(M_{i}\in\mathbf{Short}_{\mathbb{L}})\). Since \(h=(h_{i}):(X_{j})_{j\in\mathbf{n}}\to\prod_{i\in I}^{\mathbf{Short}_{\mathbb{ L}}}M_{i}\in\mathsf{Short}_{\mathbb{L}}\) (again by Proposition 4.2.2) we have \(h:(X_{j})_{j\in\mathbf{n}}\to\lim_{i\in I}(M_{i}\in\mathbf{Short}_{\mathbb{L}}) \in\mathsf{Short}_{\mathbb{L}}\).
## Appendix A Symmetric groups and symmetric multicategories
### Action of symmetric groups on a symmetric multicategory
Let \(\sigma:J\to K\in\mathcal{S}_{\mathsf{sk}}\) be a bijection. Let \((Y_{j})_{j\in J}\), \((Z_{k})_{k\in K}\), \(W\) be (families of) objects of a symmetric multicategory \(\mathsf{V}\) such that \(Z_{k}=Y_{\sigma^{-1}k}\). Similarly to [10, Lemma A.2.2] define a map
\[r_{\sigma}=\big\{\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\xrightarrow{(1_{Y_{\sigma^{-1}k}})_{k\in K}\times 1}\big[\prod_{k\in K}\mathsf{V}(Y_{\sigma^{-1}k};Z_{k})\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\\
\xrightarrow{\mu_{\sigma}}\mathsf{V}\big((Y_{j})_{j\in J};W\big)\big\}.\]
The following statement is implied by the proof of [10, Theorem A.2.4].
**A.1.1 Proposition**.: Let, furthermore, \(\psi=(I\xrightarrow{\phi}J\xrightarrow{\sigma}K)\in\mathcal{S}_{\mathsf{sk}}\) and \((X_{i})_{i\in I}\) be a family of objects of \(\mathsf{V}\). Then
\[\mu_{\psi}=\big{\{}\big{[}\prod_{k\in K}\mathsf{V}\big{(}(X_{i}) _{i\in\psi^{-1}k};Y_{\sigma^{-1}k}\big{)}\big{]}\times\mathsf{V}\big{(}(Y_{ \sigma^{-1}k})_{k\in K};W\big{)}\\ \xrightarrow{\prod_{\sigma^{-1}}\times r_{\sigma}}[\prod_{j\in J }\mathsf{V}\big{(}(X_{i})_{i\in\phi^{-1}(j)};Y_{j}\big{)}]\times\mathsf{V} \big{(}(Y_{j})_{j\in J};W\big{)}\xrightarrow{\mu_{\phi}}\mathsf{V}\big{(}(X_{ i})_{i\in I};W\big{)}\big{\}}.\] (A.1.1)
Proof.: Applying the associativity property from Figure 1 for maps \(I\xrightarrow{\phi}J\xrightarrow{\sigma}K\) we get the sought equation on the next page.
**A.1.2 Corollary**.: Assume that both \(\phi\) and \(\sigma\) are bijections from \(\mathcal{S}_{\mathsf{sk}}\), \(\psi=(I\xrightarrow{\phi}J\xrightarrow{\sigma}K)\). Then
\[r_{\psi}=\big{[}\mathsf{V}\big{(}(Y_{\sigma^{-1}k})_{k\in K};W\big{)} \xrightarrow{r_{\sigma}}\mathsf{V}\big{(}(Y_{j})_{j\in J};W\big{)} \xrightarrow{r_{\phi}}\mathsf{V}\big{(}(Y_{\phi i})_{i\in I};W\big{)}\big{]}.\]
Proof.: Consider \(X_{i}=Y_{\phi i}\), hence, \(Y_{j}=X_{\phi^{-1}j}\). Rewrite (A.1.1) as
\[\mu_{\psi}=\big{\{}\big{[}\prod_{k\in K}\mathsf{V}\big{(}X_{\psi^ {-1}k};Y_{\sigma^{-1}k}\big{)}\big{]}\times\mathsf{V}\big{(}(Y_{\sigma^{-1}k}) _{k\in K};W\big{)}\\ \xrightarrow{\prod_{\sigma^{-1}}\times r_{\sigma}}[\prod_{j\in J }\mathsf{V}\big{(}X_{\phi^{-1}j};Y_{j}\big{)}]\times\mathsf{V}\big{(}(Y_{j}) _{j\in J};W\big{)}\xrightarrow{\mu_{\phi}}\mathsf{V}\big{(}(X_{i})_{i\in I};W \big{)}\big{\}}.\] (A.1.2)
Substitute \((1_{Y_{\sigma^{-1}k}})_{k\in K}\) into the first factor. We get from the left hand side of (A.1.2)
\[\big\{\mathsf{V}\big((X_{\psi^{-1}k})_{k\in K};W\big)\xrightarrow{(1_{X_{\psi^{-1}k}})_{k\in K}\times 1}\big[\prod_{k\in K}\mathsf{V}(X_{\psi^{-1}k};X_{\psi^{-1}k})\big]\times\mathsf{V}\big((X_{\psi^{-1}k})_{k\in K};W\big)\\
\xrightarrow{\mu_{\psi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}=r_{\psi}.\]
[Full-page commutative diagram: both sides of (A.1.1) are expanded via the associativity property of Figure 1 for \(I\xrightarrow{\phi}J\xrightarrow{\sigma}K\).]
From the right hand side of (A.1.2) we get
\[\big\{\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\xrightarrow{(1_{Y_{\sigma^{-1}k}})_{k\in K}\times 1}\big[\prod_{k\in K}\mathsf{V}(X_{\psi^{-1}k};Y_{\sigma^{-1}k})\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\\
\xrightarrow{\prod\sigma^{-1}\times r_{\sigma}}\big[\prod_{j\in J}\mathsf{V}(X_{\phi^{-1}j};Y_{j})\big]\times\mathsf{V}\big((Y_{j})_{j\in J};W\big)\xrightarrow{\mu_{\phi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}\\
=\big\{\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\xrightarrow{r_{\sigma}}\mathsf{V}\big((Y_{j})_{j\in J};W\big)\\
\xrightarrow{(1_{X_{\phi^{-1}j}})_{j\in J}\times 1}\big[\prod_{j\in J}\mathsf{V}(X_{\phi^{-1}j};Y_{j})\big]\times\mathsf{V}\big((Y_{j})_{j\in J};W\big)\xrightarrow{\mu_{\phi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}\\
=\big\{\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\xrightarrow{r_{\sigma}}\mathsf{V}\big((Y_{j})_{j\in J};W\big)\xrightarrow{r_{\phi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}.\]
Therefore, \(r_{\phi\sigma}=r_{\sigma}\centerdot r_{\phi}:\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\to\mathsf{V}\big((X_{i})_{i\in I};W\big)\).
The second identity axiom implies that \(r_{\mathrm{id}}=\mathrm{id}\). Thus, we have an action of the symmetric groups on the homomorphism sets of a symmetric multicategory \(\mathsf{V}\). Often this action is included in the definition of a symmetric multicategory, which we do not do.
**A.1.3 Example**.: Assume that \(\mathcal{V}\) is a complete closed symmetric monoidal category with \(\otimes^{\mathbf{1}}=\mathrm{Id}\). For \(\mathsf{V}=\widehat{\mathcal{V}}\) (see [1, Proposition 3.22]) we get \(r_{\sigma}=\mathcal{V}(\lambda^{\sigma},W):\mathcal{V}(\otimes^{k\in K}Y_{ \sigma^{-1}k},W)\to\mathcal{V}(\otimes^{j\in J}Y_{j},W)\), where \(\lambda^{\sigma}:\otimes^{j\in J}Y_{j}\to\otimes^{k\in K}Y_{\sigma^{-1}k}\) is the action of symmetric group on tensor products via symmetries.
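Concretely, for \(\mathsf{V}=\mathsf{Set}\) the map \(r_{\sigma}\) merely relabels the arguments: it takes \(g:(Y_{\sigma^{-1}k})_{k\in K}\to W\) to the function \((y_{j})_{j\in J}\mapsto g\big((y_{\sigma^{-1}k})_{k\in K}\big)\).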
The following equivariance property seems to be explicitly stated in the literature for the first time, although it should be implied by the proof of [1, Theorem A.2.4].
**A.1.4 Proposition**.: Let the square in \(\mathcal{S}_{\mathsf{sk}}\)

\[\begin{CD}
I @>{\phi}>> J\\
@V{\pi}VV @VV{\sigma}V\\
L @>{\psi}>> K
\end{CD}\]

where the vertical arrows are bijections, commute. Then there is the equivariance property
\[\big\{\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Y_{\sigma^{-1}k}\big)\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\\
\xrightarrow{\prod\sigma^{-1}\times 1}\big[\prod_{j\in J}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\pi\phi^{-1}j};Y_{j}\big)\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\\
\xrightarrow{\prod_{j\in J}r_{\varpi_{j}}\times r_{\sigma}}\big[\prod_{j\in J}\mathsf{V}\big((X_{i})_{i\in\phi^{-1}j};Y_{j}\big)\big]\times\mathsf{V}\big((Y_{j})_{j\in J};W\big)\xrightarrow{\mu_{\phi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}\\
=\big\{\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Y_{\sigma^{-1}k}\big)\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\\
\xrightarrow{\prod_{k\in K}r_{\pi_{k}}\times 1}\big[\prod_{k\in K}\mathsf{V}\big((X_{i})_{i\in\pi^{-1}\psi^{-1}k};Y_{\sigma^{-1}k}\big)\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\xrightarrow{\mu_{\pi\psi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}\\
=\big\{\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Y_{\sigma^{-1}k}\big)\big]\times\mathsf{V}\big((Y_{\sigma^{-1}k})_{k\in K};W\big)\xrightarrow{\mu_{\psi}}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in L};W\big)\\
\xrightarrow{r_{\pi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}.\] (A.1.3)
Here \(\varpi_{j}=\pi|:\phi^{-1}j\to\pi\phi^{-1}j=\psi^{-1}\sigma j\) and \(\pi_{k}=\varpi_{\sigma^{-1}k}=\pi|:\pi^{-1}\psi^{-1}k\to\psi^{-1}k\) are bijections.
Proof.: Denote \(Z_{k}=Y_{\sigma^{-1}k}\). Applying the associativity property from Figure 1 for maps \(I\xrightarrow{\phi}J\xrightarrow{\sigma}K\) we get the proof of the first equation from (A.1.3) on the following page.
[Full-page commutative diagram: proof of the first equation of (A.1.3), obtained by expanding both sides via the associativity property of Figure 1.]
In order to prove the second equation from (A.1.3) we substitute into the middle expression the definition of \(r\):
\[\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Z_{k}\big)\big]\times\mathsf{V}\big((Z_{k})_{k\in K};W\big)\xrightarrow{\prod_{k\in K}[(1_{X_{\pi^{-1}l}})_{l\in\psi^{-1}k}\times 1]\times 1}\\
\prod_{k\in K}\big[\prod_{l\in\psi^{-1}k}\mathsf{V}(X_{\pi^{-1}l};X_{\pi^{-1}l})\times\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Z_{k}\big)\big]\times\mathsf{V}\big((Z_{k})_{k\in K};W\big)\\
\xrightarrow{\prod_{k\in K}\mu_{\pi_{k}}\times 1}\big[\prod_{k\in K}\mathsf{V}\big((X_{i})_{i\in\pi^{-1}\psi^{-1}k};Z_{k}\big)\big]\times\mathsf{V}\big((Z_{k})_{k\in K};W\big)\xrightarrow{\mu_{\pi\psi}}\mathsf{V}\big((X_{i})_{i\in I};W\big).\]
Transforming this with the help of the associativity property from Figure 1 for maps \(I\xrightarrow{\pi}L\xrightarrow{\psi}K\) we get
\[\big\{\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Z_{k}\big)\big]\times\mathsf{V}\big((Z_{k})_{k\in K};W\big)\xrightarrow{(1_{X_{\pi^{-1}l}})_{l\in L}\times 1\times 1}\\
\big[\prod_{l\in L}\mathsf{V}(X_{\pi^{-1}l};X_{\pi^{-1}l})\big]\times\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Z_{k}\big)\big]\times\mathsf{V}\big((Z_{k})_{k\in K};W\big)\\
\xrightarrow{1\times\mu_{\psi}}\big[\prod_{l\in L}\mathsf{V}(X_{\pi^{-1}l};X_{\pi^{-1}l})\big]\times\mathsf{V}\big((X_{\pi^{-1}l})_{l\in L};W\big)\xrightarrow{\mu_{\pi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}\\
=\big\{\big[\prod_{k\in K}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in\psi^{-1}k};Z_{k}\big)\big]\times\mathsf{V}\big((Z_{k})_{k\in K};W\big)\xrightarrow{\mu_{\psi}}\mathsf{V}\big((X_{\pi^{-1}l})_{l\in L};W\big)\\
\xrightarrow{(1_{X_{\pi^{-1}l}})_{l\in L}\times 1}\big[\prod_{l\in L}\mathsf{V}(X_{\pi^{-1}l};X_{\pi^{-1}l})\big]\times\mathsf{V}\big((X_{\pi^{-1}l})_{l\in L};W\big)\xrightarrow{\mu_{\pi}}\mathsf{V}\big((X_{i})_{i\in I};W\big)\big\}.\]
This is the last expression from (A.1.3) with expanded \(r_{\pi}\).
|
2308.07790 | Rapid-adiabatic-passage-based super-resolution microscopy in
semiconductor quantum dot system | We theoretically investigate rapid adiabatic passage(RAP)-based
super-resolution imaging in a two-level quantum dot system interacting with two
structured beams. To understand the physical mechanism behind the formation of
super-resolution for the experiment of Kaldewey {\it et. al.,}[Nature Photonics
10.1038/s41566-017-0079-y (2018)], we first use Liouville's density matrix
where photon-mediated radiative and non-radiative decays are incorporated. A
suitably chosen spatiotemporal envelope of the structured beams enables the
formation of a super-resolution image. We also find that the feature size of
the image depends on the intensity of the Laguerre Gaussian beam(LG). However,
the created image resolution undergoes distortion due to the existence of a
low-intensity circular ring. The unwanted circular ring arises from the
dominance of the LG beam tail over the super-Gaussian(SG) beam tail, initiating
the residual population transfer from the ground state to the excited state.
This limitation can be overcome by using the Bessel-modulated truncated
structured LG and SG beams. We next study the dynamics of the semiconductor
quantum dot system at finite temperatures wherein the phonon interaction
becomes imperative. We employ the polaron-transformed master equation to
explore the system at higher temperatures. Our numerical results confirm that
the sharpness of the image remains intact at low temperatures with weak phonon
coupling. Hence, the proposed scheme may open up applications in nano-scale
imaging with quantum dots. | Partha Das, Samit Kumar Hazra, Tarak Nath Dey | 2023-08-15T14:14:15Z | http://arxiv.org/abs/2308.07790v1 | # Rapid-adiabatic-passage-based super-resolution microscopy in semiconductor quantum dot system
###### Abstract
We theoretically investigate rapid adiabatic passage(RAP)-based super-resolution imaging in a two-level quantum dot system interacting with two structured beams. To understand the physical mechanism behind the formation of super-resolution for the experiment of Kaldewey _et. al._,[Nature Photonics 10.1038/s41566-017-0079-y (2018)], we first use Liouville's density matrix where photon-mediated radiative and non-radiative decays are incorporated. A suitably chosen spatiotemporal envelope of the structured beams enables the formation of a super-resolution image. We also find that the feature size of the image depends on the intensity of the Laguerre Gaussian beam(LG). However, the created image resolution undergoes distortion due to the existence of a low-intensity circular ring. The unwanted circular ring arises from the dominance of the LG beam tail over the super-Gaussian(SG) beam tail, initiating the residual population transfer from the ground state to the excited state. This limitation can be overcome by using the Bessel-modulated truncated structured LG and SG beams. We next study the dynamics of the semiconductor quantum dot system at finite temperatures wherein the phonon interaction becomes imperative. We employ the polaron-transformed master equation to explore the system at higher temperatures. Our numerical results confirm that the sharpness of the image remains intact at low temperatures with weak phonon coupling. Hence, the proposed scheme may open up applications in nano-scale imaging with quantum dots.
## I Introduction
Conventional optics fail to resolve the spot size of an image beyond a value comparable to the probing light wavelength. Ernst Abbe first realized that the primary constraint on imaging resolution comes from diffraction [1]. Later it was mathematically formulated using Fourier transform theory [2]. Defeating the diffraction barrier has been the key to achieving high-resolution imaging, and super-resolution microscopy can overcome the diffraction limit. The realization of super-resolution microscopy became possible with the advent of the photolithography technique. Using this technique, Scott et al. [3] and Andrew et al. [4] employed photochromic molecules as a mask to imprint a nanoscale pattern on a target material. The mask responds dramatically to light: it becomes transparent in the presence of a writing beam with wavelength \(\lambda_{1}\), whereas it becomes opaque again upon application of an inhibitor beam with wavelength \(\lambda_{2}\). Exploiting these properties of the mask, they created a nanosize pattern using a Gaussian writing beam and a Laguerre-Gaussian (LG) inhibitor beam. The concept of switching between transparent and opaque states has been successfully realized in stimulated emission-depletion (STED) microscopy for nanosize particle imaging [5; 6]. In STED microscopy, excitation and depletion light beams illuminate the sample simultaneously. The excitation beam excites the fluorescent molecules to the bright state, and the depletion beam turns them back to the dark state by stimulated emission. The transition probability between bright and dark states depends on the intensity of the laser beam. Therefore, with a depletion beam of doughnut intensity profile, only the fluorophores in the small central area where the beam intensity is zero are imaged. STED microscopy has opened up a new platform for nanoscale imaging in material science and medical biology [7; 8]. In particular, STED involves fluorescence depletion, which is an incoherent process. There are also other contemporary techniques [9; 10] which give an incoherent response to the laser.
RAP-based imaging can remedy the shortfalls of STED microscopy. RAP exploits the coherent response to the laser [11], i.e., coherent manipulation of quantum states to achieve efficient population transfer. It uses a strong, positively chirped RAP pulse, arriving first, that transfers the population from the ground to the excited state. A second, time-delayed RAP pulse with negative chirping can de-excite the population from the excited state back to the ground state. Hence, the intensity-dependent RAP in a two-level system acts like a nearly ideal 'on' and 'off' switch, which satisfies the critical criterion of super-resolution microscopy. With the advent of short, intense pulses and pulse-shaping technology, selective population transfer has become efficient. Population transfer with a chirped pulse is much more effective than with an unchirped pulse [12]. Moreover, a frequency-swept pulse-induced excitation is immune to Rabi oscillation [13; 14]. Under the adiabatic condition, a strong Rabi frequency can transfer the population to the desired level without decay- and decoherence-induced losses [15].
Semiconductor quantum dots (QDs) allow precise control over their size, shape, and composition, enabling a more tailored RAP implementation than atomic systems [16]. They allow one to explore the interplay of coherent control and decoherence in solid-state systems. Experiments on optically driven InGaAs/GaAs QDs found intensity damping of the Rabi rotation due to longitudinal acoustic (LA) phonons [17; 18]. These self-assembled QDs interact with the phonons, limiting the coherence of the excitonic transition [19; 20; 21; 22; 23; 24; 25]. Various theoretical approaches have been proposed for investigating the role of the phonon interaction in the coherent population distribution of the excited state. These include the master equation (ME) using a perturbative expansion of the exciton-phonon coupling in the Markovian [17; 18; 22; 26] and non-Markovian limits [19; 27; 28], numerical techniques with the path-integral method [21], and the correlation expansion [20; 29; 30].
In this work, we provide a detailed theoretical study of the recent experiments on super-resolution imaging in QDs [11]. The experiments were performed on InGaAs QDs embedded in GaAs. The strong confinement of charge carriers in the QDs gives rise to discrete energy levels that mimic an atomic medium. We first use the density matrix equation for a two-level system to explain the formation of a spot size beyond the diffraction limit. We show how truncated spatial envelopes of the Bessel-modulated SG and Bessel-modulated LG beams can improve the spot-size resolution by stopping residual ground-state population excitation. In QDs, the longitudinal acoustic phonon interaction is inevitable because of the finite environment temperature. The interactions between phonons and excitons greatly influence the exciton-photon interaction. Therefore, investigating phonon-mediated dephasing is mandatory for image formation. To illustrate the effect of temperature on imaging, we further study the polaron master equation. We find that the image formation degrades with increasing temperature; hence a lower temperature with weak electron-phonon coupling is favorable to form a sharp image. These tunable optical properties of QDs find applications in sensors, drug delivery, biomedical imaging [31; 32], quantum communication, and quantum information [33].
The paper is organized as follows. Section I contains a brief introduction to super-resolution and its application in the QD medium. In section II we present the level system and theoretical formalism without taking the phonon contribution. In section III we discuss the numerical results. Section IV contains the inclusion of a phonon bath interacting with QD by using the polaron master equation. In section V we present the result including the phonon-induced dephasing. Finally, in section VI, we give a conclusion of the work.
## II Theoretical formulation
### Level system
Controllable population transfer to the excited state is the essence behind super-resolution imaging. A variety of methods, such as stimulated Raman adiabatic passage (STIRAP), super-adiabatic STIRAP (saSTIRAP), and rapid adiabatic passage (RAP), have been used to transfer the population to the desired state. Population inversion is beyond reach for a two-level system in the thermodynamic limit; RAP can overcome this limitation. The two-level system can achieve an efficient and robust time-dependent population inversion under RAP. Based on RAP, super-resolution imaging was demonstrated by Kaldewey _et al._ in an ensemble of QDs [11]. This intriguing technique can resolve each QD with a spot size of 30 nm (\(\lambda/31\)). Inspired and motivated by this experiment, we provide a detailed theoretical explanation for the formation of super-resolution images beyond the diffraction limit based on the full density matrix and polaron-transformed master equations. The charge confinement of electron-hole pairs makes a semiconductor QD manifest an atom-like discrete energy-level structure. A left-handed circularly polarized light drives the transition between the excited (exciton) state \(|1\rangle\) and the ground state \(|2\rangle\) with energy separation \(\hbar\omega_{QD}\), producing a two-level configuration as shown in Fig. 1. The incident light consists of two spatiotemporal beams of opposite chirping interacting with the two-level quantum dot through the induced dipole moment. We have adopted a semi-classical treatment of light-matter interaction, where the field is classical and the energy levels of the QD are discrete. The two beams, which respectively excite and de-excite the transition between the states \(|1\rangle\) and \(|2\rangle\), are given as
\[\vec{E}_{G}(r,t) =\hat{\sigma}_{-}E_{G}(r)\mathrm{exp}\left[-\frac{t^{2}}{2\tau^{ 2}}-i\omega_{G}t-i\alpha_{G}\frac{t^{2}}{2}\right]+\mathrm{c.c.}, \tag{1a}\] \[\vec{E}_{D}(r,t) =\hat{\sigma}_{-}E_{D}(r)\mathrm{exp}\left[-\frac{t^{2}}{2\tau^{ 2}}-i\omega_{D}t-i\alpha_{D}\frac{t^{2}}{2}\right]+\mathrm{c.c.}, \tag{1b}\]
where \(\hat{\sigma}_{-}\) is the left circular polarization unit vector, \(\tau\) is the chirped pulse width, \(\omega_{G}\) and \(\omega_{D}\) are the corresponding carrier frequencies, and \(\alpha_{G}\) and \(\alpha_{D}\) are the linear temporal chirps of the first and second pulse, respectively. Specifically, the linear temporal chirp denotes the sweep rate of
Figure 1: Schematic diagram of two-level quantum dot system. Two spatiotemporal beams interact with the system of Rabi frequencies \(\Omega_{G}\) and \(\Omega_{D}\). The spontaneous emission decay rate from \(|1\rangle\) to \(|2\rangle\) is given by \(\gamma\). The two beams interact resonantly with both the levels.
the laser from a negative detuning to a positive detuning or vice versa. The spatial profiles of the two beams, \(E_{G}(r)\) and \(E_{D}(r)\), are taken to be an SG and an LG\({}_{0}^{1}\) beam, respectively, expressed as
\[E_{G}(r) =E_{G}^{0}\text{exp}\left[-\left(\frac{r^{2}}{2w_{G}^{2}}\right)^{2 }\right], \tag{2a}\] \[E_{D}(r) =E_{D}^{0}\left(\frac{r}{w_{D}}\right)\text{exp}\left[-\left( \frac{r^{2}}{2w_{D}^{2}}\right)\right]\text{exp}[i\phi], \tag{2b}\]
where \(E_{G}^{0}\) and \(E_{D}^{0}\) are the amplitudes, the beam waists are denoted by \(w_{G}\) and \(w_{D}\), and \(\phi\) is the phase difference between the two beams, which we take to be \(\phi=\pi/2\). The SG beam transfers the population from the ground state to the excited state, and the LG beam depletes the excited-state population to the ground state. In the presence of the two beams, the time-dependent Hamiltonian of the system under the electric dipole approximation is given as
\[\mathbf{H}=\mathbf{H}_{0}+\mathbf{H}_{I}, \tag{3a}\] \[\mathbf{H}_{0}=\hbar\omega_{QD}|1\rangle\langle 1|,\] (3b) \[\mathbf{H}_{I}=-\vec{d}_{12}.\left(\vec{E}_{G}(r,t)+\vec{E}_{D}(r,t) \right)|1\rangle\langle 2|+\text{H.c.}, \tag{3c}\]
where \(\vec{d}_{12}=\langle 1|\vec{d}|2\rangle\) is the matrix element of the induced dipole moment operator \(\vec{d}\) for the transition \(|1\rangle\leftrightarrow|2\rangle\). We make a unitary transformation as
\[\mathbf{U}=e^{-i\nu(t)t|1\rangle\langle 1|}, \tag{4}\]
where \(\nu(t)=\omega_{D}+\alpha_{D}t/2\). Note that the time-dependent frequency signifies the characteristics of the chirp pulse. Now, the effective Hamiltonian in the interaction picture is given by
\[\mathcal{H}_{eff}=\mathbf{U}^{\dagger}\mathbf{H}\mathbf{U}-i\hbar\mathbf{U}^{ \dagger}\frac{\partial\mathbf{U}}{\partial t}. \tag{5}\]
Under rotating wave approximation (RWA) the above Hamiltonian gives
\[\mathcal{H}_{eff}= -\hbar\Delta(t)|1\rangle\langle 1|-\frac{\hbar}{2}(\Omega_{G}(r,t)e ^{i\delta(t)t}+\Omega_{D}(r,t))|1\rangle\langle 2|\] \[-\frac{\hbar}{2}(\Omega_{G}^{*}(r,t)e^{-i\delta(t)t}+\Omega_{D}^{ *}(r,t))|2\rangle\langle 1|,\]
where the detunings are defined as
\[\Delta(t)=(\omega_{D}+\alpha_{D}t)-\omega_{QD}, \tag{6a}\] \[\delta(t)=(\omega_{D}-\omega_{G})-\frac{t}{2}(\alpha_{G}-\alpha_ {D}). \tag{6b}\]
Both beams interact resonantly with the two-level system, _i.e._, \(\omega_{D}=\omega_{G}=\omega_{QD}\). The spatiotemporal Rabi frequencies of the respective beams are
\[\Omega_{G} =\Omega_{G}^{0}\text{exp}\left[-\left(\frac{r^{2}}{2w_{G}^{2}} \right)^{2}\right]\text{exp}\left[-\frac{t^{2}}{2\tau^{2}}\right], \tag{7a}\] \[\Omega_{D} =i\Omega_{D}^{0}\left(\frac{r}{w_{D}}\right)\text{exp}\left[- \left(\frac{r^{2}}{2w_{D}^{2}}\right)\right]\text{exp}\left[-\frac{t^{2}}{2 \tau^{2}}\right], \tag{7b}\]
where the amplitudes are given as, \(\Omega_{G}^{0}=\vec{\mathbf{d}}_{12}.\hat{\sigma}_{-}E_{G}^{0}/\hbar\) and \(\Omega_{D}^{0}=\vec{\mathbf{d}}_{12}.\hat{\sigma}_{-}E_{D}^{0}/\hbar\).
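For concreteness, the following minimal Python sketch (our illustration, not part of the original analysis) evaluates the spatiotemporal Rabi envelopes of Eqs. (7a) and (7b); the widths and amplitudes are the normalized values used later in Sec. III, and all quantities are in the dimensionless units introduced there.

```python
import numpy as np

# Normalized units: gamma_n = 1 ps^-1, time in units of tau_n, length in units of l.
tau = 1.3                 # chirped pulse width
w_G, w_D = 1.7, 1.0       # SG and LG beam waists
Om_G0, Om_D0 = 4.0, 10.4  # peak Rabi frequencies

def omega_G(r, t):
    """Super-Gaussian spatiotemporal envelope, Eq. (7a)."""
    return Om_G0 * np.exp(-(r**2 / (2 * w_G**2))**2 - t**2 / (2 * tau**2))

def omega_D(r, t):
    """LG_0^1 envelope, Eq. (7b); the overall factor i encodes phi = pi/2."""
    return 1j * Om_D0 * (r / w_D) * np.exp(-r**2 / (2 * w_D**2) - t**2 / (2 * tau**2))

r = np.linspace(0, 3, 301)
print("LG node at r = 0:", abs(omega_D(0.0, 0.0)))              # doughnut zero
print("LG peak position:", r[np.argmax(abs(omega_D(r, 0.0)))])  # at r = w_D
```

The printed checks confirm the two features exploited below: the LG beam vanishes exactly at the center, while its intensity peaks at one beam waist from the axis.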
### Density Matrix Equations
In general, the population in the excited state can decay through radiative or non-radiative processes. This decay mechanism not only changes the population of the excited state but also modifies the atomic coherences. So, to study the system dynamics, we must incorporate damping terms. To do this, we use the following Liouville equation:
\[\frac{\partial\rho}{\partial t}=-\frac{i}{\hbar}[\mathcal{H}_{eff},\rho]+ \mathcal{L}\rho. \tag{8}\]
The last term of Eq.(8) represents radiative and nonradiative processes that can be determined by
\[\mathcal{L}\rho=-\frac{\gamma}{2}(|1\rangle\langle 1|\rho-2|2\rangle\langle 1|\rho |1\rangle\langle 2|+\rho|1\rangle\langle 1|), \tag{9}\]
where \(\gamma\) is the radiative decay rate of the excited state. The equations of motion for the populations and coherences of the two-level system are expressed as
\[\dot{\rho}_{11}= \frac{i}{2}(\Omega_{D}+\Omega_{G}e^{i\delta(t)t})\rho_{21}-\frac{i }{2}(\Omega_{D}^{*}+\Omega_{G}^{*}e^{-i\delta(t)t})\rho_{12}\] \[-\gamma\rho_{11} \tag{10a}\] \[\dot{\rho}_{22}= -\frac{i}{2}(\Omega_{D}+\Omega_{G}e^{i\delta(t)t})\rho_{21}+ \frac{i}{2}(\Omega_{D}^{*}+\Omega_{G}^{*}e^{-i\delta(t)t})\rho_{12}\] \[+\gamma\rho_{11}\] (10b) \[\dot{\rho}_{12}= -\left(\frac{\gamma}{2}-i\Delta(t)\right)\rho_{12}+\frac{i}{2}( \Omega_{D}+\Omega_{G}e^{i\delta(t)t})(\rho_{22}-\rho_{11})\] (10c) \[\dot{\rho}_{21}= \dot{\rho}_{12}^{*} \tag{10d}\]
where the overdots denote time derivatives and the asterisk (\(*\)) denotes complex conjugation. The diagonal elements \(\rho_{ii}\) (\(i\in\{1,2\}\)) of the density matrix obey the conservation of population, _i.e._, \(\rho_{11}+\rho_{22}=1\).
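As a numerical sanity check of Eqs. (10a)-(10c), the following sketch (ours) integrates the Liouville equations for a single resonant, linearly chirped pulse; a weak decay rate is assumed here purely for illustration. Under the RAP conditions discussed below, the final excited-state population approaches unity.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, Om0, alpha, gamma = 1.3, 4.0, 3.24, 1e-3  # normalized units; gamma assumed weak

def rhs(t, y):
    """Eqs. (10a) and (10c) for one resonant chirped pulse (Omega_G = 0);
    rho22 is eliminated via rho11 + rho22 = 1."""
    r11, re12, im12 = y
    rho12 = re12 + 1j * im12
    Om = Om0 * np.exp(-t**2 / (2 * tau**2))  # real temporal envelope
    Delta = alpha * t                        # Eq. (6a) with a resonant carrier
    d11 = Om * im12 - gamma * r11
    d12 = -(gamma / 2 - 1j * Delta) * rho12 + 0.5j * Om * (1 - 2 * r11)
    return [d11, d12.real, d12.imag]

sol = solve_ivp(rhs, [-10, 10], [0.0, 0.0, 0.0], max_step=0.01)
print("final rho11:", sol.y[0, -1])  # close to 1 under RAP
```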
### RAP in two-level system
Besides spectroscopy, the preparation of a specific quantum state in semiconductors is of profound importance for fields like quantum computation [34; 35], single and entangled photons [36; 37], and Bose-Einstein condensation [38]. RAP is an advantageous way to prepare a state, as it remains insensitive to variations of the laser field intensity or the pulse area beyond the adiabatic threshold. The population change between the two atomic states can happen in two distinct adiabatic ways. To understand the two processes, we use the well-known dressed-state (adiabatic-state) eigenvectors of the time-dependent Hamiltonian of a two-level system interacting with a chirped pulse of Rabi frequency \(\Omega_{1}\), given as [39]
\[|\psi_{+}(t)\rangle= \sin\theta(t)\ |2\rangle+\cos\theta(t)\ |1\rangle, \tag{11a}\] \[|\psi_{-}(t)\rangle= \cos\theta(t)\ |2\rangle-\sin\theta(t)\ |1\rangle, \tag{11b}\]
where the instantaneous eigenstates are the linear superposition of the bare states (diabatic state). The mixing angle \(\theta(t)\) between the states is defined by
\[\sin 2\theta= \frac{|\Omega_{1}(t)|}{\sqrt{\Delta_{1}(t)^{2}+|\Omega_{1}(t)|^{ 2}}}, \tag{12a}\] \[\cos 2\theta= \frac{\Delta_{1}(t)}{\sqrt{\Delta_{1}(t)^{2}+|\Omega_{1}(t)|^{ 2}}}, \tag{12b}\]
where \(\Delta_{1}(t)\) is the time-dependent detuning. We can also obtain the eigenvalues of the dressed states, which are given by
\[E_{\pm}=\frac{\hbar}{2}\left[\Delta_{1}(t)\pm\sqrt{\Delta_{1}(t)^{2}+|\Omega_ {1}(t)|^{2}}\right]. \tag{13}\]
Indeed, we can write the Hamiltonian in the adiabatic basis as
\[H_{a}=\begin{bmatrix}E_{-}&-i\hbar\dot{\theta}\\ i\hbar\dot{\theta}&E_{+}\end{bmatrix}. \tag{14}\]
The adiabatic condition requires that the rate of change of the adiabatic states \(|\psi_{\pm}\rangle\) be much smaller than the minimum splitting between the eigenvalues. Thus the adiabatic condition reads

\[\hbar|\dot{\theta}(t)|\ll|E_{+}(t)-E_{-}(t)|. \tag{15}\]
Now, for the case of (i) constant detuning, we can infer from Eq. (13) that the energies of the dressed states remain parallel to each other long before and after the interaction with the pulsed laser. Only during the pulse interaction time are the states in a superposition of the bare states. Therefore, in this case, the population completely returns to its initial state; this is the no-crossing scenario of adiabatic evolution. In the other case of (ii) time-dependent detuning, _i.e._, when the frequency sweeps adiabatically from a large negative value to a large positive value (or vice versa), two limits arise: (a) for large negative detuning (\(|\Omega_{1}(t)|\ll|\Delta_{1}(t)|\)),
\[E_{+}\to 0; E_{-}\rightarrow-\hbar\Delta_{1},\] \[|\psi_{+}\rangle\rightarrow|2\rangle; |\psi_{-}\rangle\rightarrow-|1\rangle, \tag{16}\]
and (b) for large positive detuning (\(|\Omega_{1}(t)|\ll|\Delta_{1}(t)|\)),
\[E_{+}\rightarrow\hbar\Delta_{1}; E_{-}\to 0,\] \[|\psi_{+}\rangle\rightarrow|1\rangle; |\psi_{-}\rangle\rightarrow|2\rangle. \tag{17}\]
Both limits assert that a population initially in state \(|2\rangle\) adiabatically follows \(|\psi_{+}\rangle\) during the frequency sweep and finally makes an inversion to state \(|1\rangle\). This is called the avoided crossing, or anticrossing, in adiabatic evolution. The process is also called rapid because it must occur on a timescale shorter than the lifetime of the excited state.
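The avoided crossing described above is easy to visualize numerically. The short sketch below (ours) evaluates the dressed-state energies of Eq. (13) for a linearly swept detuning and confirms that the minimum gap at \(\Delta_{1}=0\) equals the peak Rabi frequency; the parameter values are illustrative.

```python
import numpy as np

tau, Om0, alpha = 1.3, 4.0, 3.24         # illustrative normalized values
t = np.linspace(-6, 6, 1201)
Om = Om0 * np.exp(-t**2 / (2 * tau**2))  # pulsed Rabi frequency
Delta = alpha * t                        # sweep: negative -> positive detuning

# Eq. (13) with hbar = 1
E_plus = 0.5 * (Delta + np.sqrt(Delta**2 + Om**2))
E_minus = 0.5 * (Delta - np.sqrt(Delta**2 + Om**2))

gap = E_plus - E_minus
print("gap at Delta = 0:", gap[np.argmin(np.abs(Delta))])  # equals Om0
# Far from the pulse, E_+ follows the bare detuning line of slope alpha:
print("asymptotic slope of E_+:", (E_plus[-1] - E_plus[-2]) / (t[-1] - t[-2]))
```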
## III Numerical results
### RAP-based spot formation
Under the adiabatic condition, a two-level system interacting with a Gaussian chirped pulse can robustly transfer the population from one state to another.

We have used two such sequential Gaussian chirped pulses with opposite chirping. To perform the computation, we normalize the system parameters to dimensionless quantities.
Figure 3: Population transfer via RAP. The first pulse, with positive chirp 3.24 ps\({}^{-2}\) and pulse area greater than \(\pi\), transfers the population to the excited state, and the subsequent pulse, with chirp \(-3.24\) ps\({}^{-2}\) and pulse area greater than \(\pi\), brings the excited-state population down to the ground state. Both pulses have a width of \(1.3\tau_{n}\).
Figure 2: Excited state population as a function of pulse area and chirping. The color bar denotes the population of the excited state.
We choose the normalized frequency \(\gamma_{n}\) = 1 ps\({}^{-1}\) and time \(\tau_{n}\) = \(1/\gamma_{n}\). In Fig. 2, we plot the excited-state population as a function of the pulse area and the chirp (\(\alpha\)) of the pulse. We find that if there is no chirp (\(\alpha\tau_{n}^{2}\) = 0), we get the usual Rabi oscillation, _i.e._, for odd multiples of \(\pi\) the population transfers to the excited state, whereas for even multiples of \(\pi\) it remains in the ground state. We also notice that robust population transfer can happen due to RAP for the many parameter sets that satisfy the conditions \(|\alpha|\tau_{n}^{2}\gg 1\) and \(|\alpha|\tau_{n}^{2}\ll\Omega_{0}^{2}\tau_{n}^{2}\). Here \(\Omega_{0}\) is the peak Rabi frequency of the chirped pulse. As in Fig. 3, the first pulse, peaked at \(t\gamma_{n}\) = 40, with Rabi frequency \(\Omega_{G}^{0}\) = 4.0\(\gamma_{n}\), positive chirp \(\alpha_{G}\) = 3.24 ps\({}^{-2}\), and pulse area \(>\pi\), takes the population, initially in the ground state, to the excited state.
The next pulse, peaked at \(t\gamma_{n}\) = 45, with Rabi frequency \(\Omega_{D}^{0}\) = 10.4\(\gamma_{n}\) and negative chirp \(\alpha_{D}\) = \(-3.24\) ps\({}^{-2}\), returns the population to the ground state. The spontaneous decay rate of the excited state is \(\gamma\) = 800 ps\({}^{-1}\). The second pulse is applied shortly after the first so that the population returns efficiently, and the two pulses act like an on-off switch. This efficient population transfer and return is crucial for imaging based on this scheme. The frequency sweep is adiabatic, so that \(\Omega_{0}^{2}\gg|d\Delta/dt|\).
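To illustrate the on-off switching just described, the sketch below (ours) propagates Eqs. (10) through the two opposite-chirped pulses, with each chirp entering as a quadratic temporal phase referenced to its own pulse center (an assumption of this sketch) and a weak decay rate assumed for clarity.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, gamma = 1.3, 1e-3              # pulse width; weak decay assumed here
pulses = [(40.0, 4.0, +3.24),       # (center, peak Rabi, chirp): first pulse
          (45.0, 10.4, -3.24)]      # second pulse, opposite chirp

def omega(t):
    """Resonant carrier; each linear chirp enters as a quadratic phase."""
    return sum(O0 * np.exp(-(t - t0)**2 / (2 * tau**2) - 0.5j * a * (t - t0)**2)
               for t0, O0, a in pulses)

def rhs(t, y):
    r11, rho12 = y[0], y[1] + 1j * y[2]
    Om = omega(t)
    d12 = 0.5j * Om * (1 - 2 * r11) - 0.5 * gamma * rho12
    return [np.imag(np.conj(Om) * rho12) - gamma * r11, d12.real, d12.imag]

sol = solve_ivp(rhs, [30, 60], [0.0, 0.0, 0.0], max_step=0.005, dense_output=True)
print("rho11 between the pulses:", sol.sol(42.5)[0])  # near 1: switched on
print("rho11 after both pulses :", sol.sol(60.0)[0])  # near 0: switched off
```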
Now for RAP-based imaging, the system is driven by two spatiotemporal beams with SG and LG spatial profiles, shown in Figs. 4(a) and 4(b) respectively. The SG and LG beam waists are 1.7\(l\) and 1.0\(l\), respectively. Here \(l\) is a characteristic length defined by \(l=f/(k\sigma)\). The focal length of the lens is \(f\) = 3.7 mm, and the wavevector is \(k=2\pi n/\lambda\). We take the refractive index of the gallium phosphide (GaP) solid immersion lens, \(n\) = 3.5, and the wavelength of the laser, \(\lambda\) = 940 nm. The spatial extent of the beam before focusing by the lens is taken as \(\sigma\) = 1.2 mm, so the characteristic length \(l\) becomes 131.86 nm. This can be regarded as the beam's spot size when it is focused through a lens for the above parameters. The Rabi frequency of the SG beam is \(\Omega_{G}^{0}\) = 4.0\(\gamma_{n}\), and that of the LG beam is \(\Omega_{D}^{0}\) = 10.4\(\gamma_{n}\). Successively applying the SG and LG spatiotemporal beams produces a spot size of \(\Delta x_{\mathrm{FWHM}}/l\) = 0.2, _i.e._, 26.37 nm. In Fig. 5, we show the excited-state population distribution at \(\gamma_{n}t\) = 60.
Other than the central peak, small side peaks appear due to population transfer to the excited state by the LG beam. The tail of the SG beam fails to take the full population to
Figure 4: The 3-D intensity distribution of (a) the SG beam of width 1.7\(l\) and (b) the LG spatiotemporal beam of width 1.0\(l\).
Figure 5: Population of the excited state vs. spatial extent in QD system. The applied spatiotemporal beams SG has waist \(w_{G}\) = 1.7\(l\), and LG has waist \(w_{D}\) = 1.0\(l\). The spatial distribution of the excited state population is shown at \(\gamma_{n}\)t = 60. Other parameters are the same as in Fig. 3.
Figure 6: The 2-D spot size of (a) a single QD emitter at (0,0); the outer ring lies at a larger radius and is not shown here. (b) Multiple emitters at preassigned positions (-4,-4) bottom left, (-4,3) top left, and (4,5) top right; here the low-intensity outer rings are visible. The parameters are the same as in Fig. 5.
the excited state, leaving it partially in the ground state. However, the LG beam partially takes this leftover population to the excited state. We elaborate on the occurrence of the side peaks due to the variation of the field intensities later in this section. We choose the beam waists judiciously so that the side peaks remain minimal and far from the central maximum. Figure 6(a) shows the central spot in the 2D plane. We can also detect multiple QDs at different preassigned locations, as shown in Fig. 6(b).
Thus, applying the SG and LG beams successively gives a smaller spot size for the QD emitters.
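The spot formation itself can be reproduced with a radial scan: at each radius, the local SG and LG amplitudes set the two pulse strengths, and the excited-state population left after both pulses traces out the image profile. The sketch below (ours) follows this recipe and extracts a crude FWHM; it uses the same simplified chirp-as-phase representation and assumed weak decay as above.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, gamma = 1.3, 1e-3
w_G, w_D, Om_G0, Om_D0 = 1.7, 1.0, 4.0, 10.4
(tG, aG), (tD, aD) = (40.0, +3.24), (45.0, -3.24)

def rho11_final(r):
    """Excited-state population at radius r (units of l) after both pulses."""
    sg = Om_G0 * np.exp(-(r**2 / (2 * w_G**2))**2)         # Eq. (7a), spatial part
    lg = Om_D0 * (r / w_D) * np.exp(-r**2 / (2 * w_D**2))  # Eq. (7b), spatial part
    def omega(t):
        return (sg * np.exp(-(t - tG)**2 / (2 * tau**2) - 0.5j * aG * (t - tG)**2)
                + 1j * lg * np.exp(-(t - tD)**2 / (2 * tau**2) - 0.5j * aD * (t - tD)**2))
    def rhs(t, y):
        r11, rho12 = y[0], y[1] + 1j * y[2]
        Om = omega(t)
        d12 = 0.5j * Om * (1 - 2 * r11) - 0.5 * gamma * rho12
        return [np.imag(np.conj(Om) * rho12) - gamma * r11, d12.real, d12.imag]
    sol = solve_ivp(rhs, [30.0, 60.0], [0.0, 0.0, 0.0], max_step=0.01)
    return sol.y[0, -1]

rs = np.linspace(0.0, 1.5, 61)
prof = np.array([rho11_final(r) for r in rs])
fwhm = 2 * rs[np.argmax(prof < prof[0] / 2)]    # first radius below half maximum
print("spot FWHM in units of l:", fwhm)         # compare Delta x_FWHM/l ~ 0.2 in the text
```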
Next, we analyze the dependence of the FWHM of the QD spot on the intensity of the LG beam in Fig. 7. The flat-top portion of the SG beam, which extends over the spatial range \(x/l=\pm 1\), takes all the population from the ground state to the excited state. The central peak develops due to the population remaining in the excited state at the center \(x/l=0\), where the intensity of the LG beam is zero. This holds when the orbital angular momentum index in Eq. (2b) is taken as 1 (\(LG_{0}^{1}\)). This point singularity is the key to achieving sharp resolution at the QD center. The intensity peaks of the LG beam appear at \(x/l=\pm 1\), leading to stimulated emission from the excited state to the ground state. Subsequently, a dip is observed in the excited-state population \(\rho_{11}\), as shown in the inset of Fig. 7. A central peak accompanied by shallow dips gives rise to a larger FWHM. By increasing the LG beam's intensity, the spatial distribution of the excited-state population at \(x/l=\pm 1\) goes to zero very sharply. Hence, the inset of Fig. 7 reveals the reason behind the narrow spot formation, which is made possible by the doughnut-beam-assisted complete depopulation of the excited state. Before the sharp fall of the FWHM, the doughnut beam intensity is very low. By slowly increasing the doughnut beam intensity while keeping the SG beam fixed, much of the population is transferred back to the ground state. In this context, the peak after the dip in the excited-state population occurs due to the tail of the LG beam, where the intensity is lower than at \(x/l=\pm 1\). The inset of Fig. 7 (blue and black solid lines) illustrates these phenomena. The red dotted line in the inset of Fig. 7 represents half of the central maximum, and the green solid line displays a typical case with a side peak below half of the central maximum. It is to be noted that an arbitrarily large enhancement of the intensity is prohibited because of photobleaching, which may even damage a live sample [40; 41]. Now we explain the occurrence of the side peaks due to the variation of the field intensities along the transverse plane. In Fig. 8, the black solid line represents the spot size, which is the same as in Fig. 5. This has the lowest side peak, for \(\Omega_{G}^{0}=4.0\gamma_{n}\) and \(\Omega_{D}^{0}=10.4\gamma_{n}\). Increasing the intensity of the LG beam's spatiotemporal envelope reduces the central spot intensity while enhancing the side-peak intensities and shifting them to a larger spatial extent. This leads to more residual ground-state population being transferred to the excited state. All the solid lines in Fig. 8 give evidence of this phenomenon.
Moreover, increasing the SG spatiotemporal beam intensity transfers more of the population from the ground state to the excited state. This population returns to the ground state under the action of the LG beam. Due to this, the side peak (black and blue dotted lines) shifts slightly in Fig. 8. As the intensity of the SG beam surpasses that of the LG beam, even though much of the population is transferred to the excited state, the excited-state population only partially returns to the ground state. Thus there is a sharp rise of the side peak (red dotted line). All the dashed lines in Fig. 8 correspond to the scenario where the SG beam intensity is varied while keeping the LG beam intensity at a fixed value. It is
Figure 8: Dependence of spot formation on beam intensity. The solid lines correspond to fields where the SG intensity is held constant while the LG intensity is varied, and the dotted lines to the converse.
Figure 7: FWHM of the spot is plotted against the intensity of the spatiotemporal LG beam. The inset shows the excited state population for different LG intensities. The blue, black, and green solid lines correspond to \(|\Omega_{D}^{0}|^{2}=4,\,9,\,25\) respectively. The red dotted line represents half of the central maximum.
important to note that we have used SG and LG beams of opposite chirping for the state-transfer protocol. This fact is explained unambiguously by Eqs. (16) and (17) in the context of RAP.
### Reduction of side peak
It is evident from the previous analysis that the formation of side peaks around the central maximum of the QD spot is inevitable for the fields given by Eqs. (1a) and (1b). Even though these side peaks are small, they may degrade the resolution when several QD emitters lie within a specific range. So, it is pertinent to find modulated fields that decrease the side peaks to nearly zero.
A complete suppression of the residual ground-state population flow is possible by considering truncated beams formed from the Bessel-modulated SG and Bessel-modulated LG beams, given as
\[\vec{E}_{GM}(r,t) =\vec{E}_{G}(r,t)\left[a_{1}J_{0}(r^{16})+a_{2}J_{2}(r^{16}) \right], \tag{18a}\] \[\vec{E}_{DM}(r,t) =\vec{E}_{D}(r,t)\left[b_{1}J_{0}(r^{16})+b_{2}J_{2}(r^{16}) \right]. \tag{18b}\]
Here \(J_{i}\) (\(i\in\{0,2\}\)) is the Bessel function of the first kind of order \(i\), and \(a_{i}\) and \(b_{i}\) are (real) modulation coefficients, chosen as \(a_{1}=a_{2}=1.5\), \(b_{1}=b_{2}=1.0\). The modified Rabi frequencies of the modulated LG and SG beams are \(\Omega_{DM}\) and \(\Omega_{GM}\), respectively. In Fig. 9, the two modulated beams are truncated at \(x/l=\pm 1.2\), which can be realized experimentally with finite apertures. The modulated SG beam takes all the population from the ground state to the excited state. Due to the sharp fall of the modulated SG intensity at \(x/l=\pm 1.2\), the ground-state population beyond the spatial range from \(x/l=0\) to \(x/l=\pm 1.2\) cannot be excited. Similarly, the excited-state population undergoes stimulated emission by the modulated LG beam, keeping only the population at the center \(x/l=0\), as shown in Fig. 10. Keeping the spot size of the QD emitter the same as in Fig. 5, we can reduce the side peak to nearly zero.
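The modulated envelopes of Eqs. (18a) and (18b) are straightforward to tabulate; in the sketch below (ours) the truncation at \(x/l=\pm 1.2\) is imposed as a hard aperture cut, which is one way of realizing the finite apertures mentioned above.

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

a1 = a2 = 1.5                 # SG modulation coefficients
b1 = b2 = 1.0                 # LG modulation coefficients
w_G, w_D, r_cut = 1.7, 1.0, 1.2

def sg_mod(r):
    """Bessel-modulated, truncated SG spatial envelope, Eq. (18a)."""
    base = np.exp(-(r**2 / (2 * w_G**2))**2)
    return np.where(np.abs(r) <= r_cut,
                    base * (a1 * jv(0, r**16) + a2 * jv(2, r**16)), 0.0)

def lg_mod(r):
    """Bessel-modulated, truncated LG spatial envelope, Eq. (18b)."""
    base = (r / w_D) * np.exp(-r**2 / (2 * w_D**2))
    return np.where(np.abs(r) <= r_cut,
                    base * (b1 * jv(0, r**16) + b2 * jv(2, r**16)), 0.0)

r = np.linspace(-2.0, 2.0, 801)
print("SG beyond the cut:", float(sg_mod(1.5)))   # exactly zero
print("LG node at center:", float(lg_mod(0.0)))   # doughnut zero preserved
print("peak LG inside aperture:", lg_mod(r).max())
```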
## IV Effect of exciton-phonon coupling
The system under consideration is significantly different from the well-studied single-atom emitters due to the solid-state nature of the semiconductor QD emitters. The medium consists of a few InGaAs QDs grown on top of a GaAs host material using molecular beam epitaxy. Therefore, the host lattice vibrations modify the QD dynamics depending on the environment temperature. In the literature, the quantized form of the vibrational energy in a periodic structure is referred to as a phonon. Many theoretical and experimental studies confirm the longitudinal acoustic (LA) phonon coupling with QDs via the deformation potential. Thereby, various new quantum phenomena were discovered, like the appearance of new features in Mollow triplets [42; 43], emission line broadening [44; 20; 45], and a limited degree of indistinguishability of photons [46; 47]. On the other hand, the QD-phonon interaction model explains several quantum features, such as Rabi rotation [26], rapid adiabatic passage [48; 49; 50], and phonon-assisted state preparation [51; 52]. Hence, the phonon-free model of the previous sections is only suitable for weak QD-phonon coupling at near-absolute-zero temperatures, and it is pertinent to include the effect of phonons interacting with the two-level QD configuration. The system is coupled
Figure 9: Normalized peak intensities of Bessel-modulated SG and Bessel-modulated LG beam are plotted against spatial extent. The red solid line, and the black dotted line correspond to the modulated LG and SG beams, respectively. Both the beams are truncated at \(x/l=1.2\). Other parameters are the same as in Fig. 4.
Figure 10: The modified population of the excited state is plotted as a function of the spatial extent. The modulated fields in Fig. 9 result in reducing the side peak to almost zero. Other parameters are the same as in Fig. 5
to an acoustic phonon bath, represented as a collection of harmonic oscillators with frequencies \(\omega_{m}=c_{s}k\), where \(c_{s}\) is the velocity of sound and \(k\) is the wavevector; the creation and annihilation operators of the \(m^{\mathrm{th}}\) mode are \(b_{m}^{\dagger}\) and \(b_{m}\), respectively. The coupling constant for the exciton-phonon mode is \(\lambda_{m}\). The effective Hamiltonian in the interaction picture can be written as [53]
\[\mathbf{H}_{I}(t) =\frac{\hbar}{2}(\Omega(t)|1\rangle\langle 2|+\Omega(t)^{*}|2 \rangle\langle 1|)-\hbar\Delta|1\rangle\langle 1|\] \[+\sum_{m}\hbar\omega_{m}b_{m}^{\dagger}b_{m}+\sum_{m}\hbar \lambda_{m}(b_{m}+b_{m}^{\dagger})|1\rangle\langle 1| \tag{19}\]
where \(\Delta\) is given by Eq. (6a) and the complex Rabi frequency is \(\Omega(t)=\Omega_{D}+\Omega_{G}e^{i\delta t}\). We make a transformation to the polaron frame to get the polaron-transformed Hamiltonian
\[H_{P}=e^{P}H_{I}e^{-P},\text{ where }P=|1\rangle\langle 1|\sum_{m}\frac{ \lambda_{m}}{\omega_{m}}(b_{m}^{\dagger}-b_{m})\]
This transformed Hamiltonian gives the freedom to split the total Hamiltonian into system, bath, and interaction parts which are given as [54; 55],
\[H_{S} =-\hbar\Delta|1\rangle\langle 1|+\langle B\rangle X_{g}(t) \tag{20a}\] \[H_{B} =\sum_{m}\hbar\omega_{m}b_{m}^{\dagger}b_{m}\] (20b) \[H_{I} =X_{g}(t)\zeta_{g}+X_{u}(t)\zeta_{u}. \tag{20c}\]
The phonon-modified system operators can be defined as follows:
\[X_{g}(t) =\frac{\hbar}{2}(\Omega|1\rangle\langle 2|+\Omega^{*}|2\rangle \langle 1|)\] \[X_{u}(t) =\frac{i\hbar}{2}(\Omega|1\rangle\langle 2|-\Omega^{*}|2\rangle \langle 1|)\]
The fluctuation operators induced by the bath are \(\zeta_{g}=(B_{+}+B_{-}+2\langle B\rangle)/2\), \(\zeta_{u}=(B_{+}-B_{-})/2i\). The phonon displacement operator can be expressed as
\[B_{\pm}=\exp\left[\pm\sum\frac{\lambda_{m}}{\omega_{m}}(b_{m}^{\dagger}-b_{m})\right]\]
Since the displacement operators contain a summation over all the phonon modes, we can average them out for a given temperature \(T\) as \(\langle B_{+}\rangle=\langle B_{-}\rangle\equiv\langle B\rangle\). The expectation value is given by
\[\langle B\rangle=\exp\left[-\frac{1}{2}\int_{0}^{\infty}\frac{J(\omega)}{ \omega^{2}}\text{coth}\left(\frac{\hbar\omega}{2k_{B}T}\right)d\omega\right] \tag{21}\]
where \(k_{B}\) is the Boltzmann constant and \(J(\omega)=\alpha_{p}\omega^{3}\text{exp}\left[-\omega^{2}/2\omega_{b}^{2}\right]\) is the phonon spectral function, with \(\alpha_{p}\) and \(\omega_{b}\) the electron-phonon coupling strength and the phonon cutoff frequency, respectively.
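The temperature dependence enters Eq. (21) only through the coth factor. The following sketch (ours) evaluates \(\langle B\rangle\) by direct quadrature; we convert the 1 meV cutoff used below to angular-frequency units with \(\hbar=0.6582\) meV ps, and the weak coupling strength \(\alpha_{p}\) is the value used in Sec. V.

```python
import numpy as np
from scipy.integrate import quad

hbar, kB = 0.6582, 0.08617   # meV*ps and meV/K
omega_b = 1.0 / hbar         # 1 meV phonon cutoff expressed in ps^-1
alpha_p = 2e-3               # electron-phonon coupling (ps^2), weak case

def B_avg(T):
    """Thermally averaged bath displacement <B>, Eq. (21)."""
    # J(w)/w^2 = alpha_p * w * exp(-w^2 / (2 omega_b^2)); coth = 1/tanh
    integrand = lambda w: (alpha_p * w * np.exp(-w**2 / (2 * omega_b**2))
                           / np.tanh(hbar * w / (2 * kB * T)))
    val, _ = quad(integrand, 0.0, 20.0 * omega_b)
    return np.exp(-0.5 * val)

for T in (4.2, 20.0, 77.0):
    print(f"T = {T:5.1f} K  ->  <B> = {B_avg(T):.4f}")  # <B> drops with temperature
```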
To study the dynamics of the system interacting with the phonons, we use the ME with the polaron-transformed Hamiltonian, where the interaction Hamiltonian is treated as a perturbation. The Born approximation is made to describe the system dynamics in the presence of the phonon bath [54; 55]. A further simplification is made by the Markov approximation, since the phonon relaxation time is much shorter than the timescale of the system dynamics [55]. Including the radiative decay and dephasing rates, we obtain the full polaron ME as
\[\frac{\partial\rho}{\partial t}=-\frac{i}{\hbar}[H_{S},\rho]+\frac{\gamma}{2 }\mathcal{L}[\sigma^{-}]\rho+\frac{\gamma^{\prime}}{2}\mathcal{L}[\sigma^{+} \sigma^{-}]\rho+\mathcal{L}_{ph}\rho \tag{22}\]
where \(\sigma^{+}=|1\rangle\langle 2|\) and \(\sigma^{-}=|2\rangle\langle 1|\) are the raising and lowering operators of the system, respectively. The Lindblad superoperator acting on an operator \(\hat{O}\) is \(\mathcal{L}[\hat{O}]\rho=2\hat{O}\rho\hat{O}^{\dagger}-\hat{O}^{\dagger}\hat{O}\rho-\rho\hat{O}^{\dagger}\hat{O}\). The radiative decay and dephasing rates are denoted by \(\gamma\) and \(\gamma^{\prime}\), respectively. The term \(\mathcal{L}_{ph}\), which includes the phonon bath in the system dynamics, is given by
\[\mathcal{L}_{ph}= -\frac{1}{\hbar^{2}}\int_{0}^{\infty}\sum_{l=g,u}\{G_{l}(\tau)[X_ {l}(t),X_{l}(t,\tau)\rho(t)]\] \[+H.c.\}\ d\tau \tag{23}\]
where, \(X_{l}(t,\tau)=e^{-iH_{S}\tau/\hbar}X_{l}(t)e^{iH_{S}\tau/\hbar}\) and the polaron Green functions are,
\[G_{g}(\tau) =\langle B\rangle^{2}\{\cosh[\phi(\tau)]-1\} \tag{24a}\] \[G_{u}(\tau) =\langle B\rangle^{2}\sinh[\phi(\tau)] \tag{24b}\]
which depend on the phonon correlation function
\[\phi(\tau)=\int_{0}^{\infty}\frac{J(\omega)}{\omega^{2}}\left[\text{coth} \left(\frac{\hbar\omega}{2k_{B}T}\right)\cos(\omega\tau)-i\sin(\omega\tau) \right]d\omega. \tag{25}\]

This full polaron ME is generally valid in the range

\[\left(\frac{\Omega}{\omega_{b}}\right)^{2}(1-\langle B\rangle^{4})\ll 1, \tag{26}\]
which enables us to investigate the effect of phonons in the weak-coupling/strong-field as well as the strong-coupling/weak-field limit. After performing the commutation algebra, we get a simplified form with different decay rates associated with the various phonon-induced processes. Thus, Eqs. (22) and (23) reduce to
\[\frac{\partial\rho}{\partial t}= -\frac{i}{\hbar}[H_{S},\rho]+\frac{\gamma}{2}\mathcal{L}[\sigma^{- }]\rho+\frac{\gamma^{\prime}}{2}\mathcal{L}[\sigma^{+}\sigma^{-}]\rho\] \[+\frac{\Gamma^{\sigma^{+}}}{2}\mathcal{L}[\sigma^{+}]\rho-\Gamma^{ cd}(\sigma^{+}\rho\sigma^{+}+\sigma^{-}\rho\sigma^{-})\] \[+\frac{\Gamma^{\sigma^{-}}}{2}\mathcal{L}[\sigma^{-}]\rho-i\Gamma^{ sd}(\sigma^{+}\rho\sigma^{+}-\sigma^{-}\rho\sigma^{-})\] \[+i\Delta^{\sigma^{+}\sigma^{-}}[\sigma^{+}\sigma^{-},\rho]-[i \Gamma_{gu+}(\sigma^{+}\sigma^{-}\rho\sigma^{+}+\sigma^{-}\rho\] \[-\sigma^{+}\sigma^{-}\rho\sigma^{-})+\text{H.c.}]-[\Gamma_{gu-}( \sigma^{+}\sigma^{-}\rho\sigma^{+}-\sigma^{-}\rho\] \[+\sigma^{+}\sigma^{-}\rho\sigma^{-})+\text{H.c.}] \tag{27}\]
The scattering terms \(\Gamma^{cd}\) and \(\Gamma^{sd}\) correspond to cross-dephasing rates [56], while \(\Gamma^{\sigma^{+}}\) and \(\Gamma^{\sigma^{-}}\) describe phonon-assisted incoherent excitation and an enhanced radiative decay process, respectively. The emission of a phonon transfers the system from a higher energy state to a lower energy state, whereas phonon absorption takes the system from a lower energy state to a higher energy state. These processes are temperature dependent. At low temperatures the phonon emission process is more favorable than the absorption process. As the temperature increases, the radiative decay also increases, and the two scattering rates become nearly equal.
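All of these rates are built from the correlation function \(\phi(\tau)\) of Eq. (25) through the polaron Green functions of Eqs. (24a) and (24b). A minimal sketch (ours) of their numerical evaluation, with the same unit conventions and illustrative parameters as before, is given below.

```python
import numpy as np
from scipy.integrate import quad

hbar, kB = 0.6582, 0.08617                       # meV*ps, meV/K
omega_b, alpha_p, T = 1.0 / 0.6582, 2e-3, 4.2    # 1 meV cutoff; weak coupling; 4.2 K

def phi(tau):
    """Phonon correlation function, Eq. (25)."""
    f = lambda w: alpha_p * w * np.exp(-w**2 / (2 * omega_b**2))  # J(w)/w^2
    re, _ = quad(lambda w: f(w) * np.cos(w * tau) / np.tanh(hbar * w / (2 * kB * T)),
                 0.0, 20.0 * omega_b)
    im, _ = quad(lambda w: -f(w) * np.sin(w * tau), 0.0, 20.0 * omega_b)
    return re + 1j * im

B = np.exp(-0.5 * phi(0.0).real)      # Eq. (21): <B> = exp(-phi(0)/2)
for t in (0.0, 0.5, 2.0):             # delay times in ps
    Gg = B**2 * (np.cosh(phi(t)) - 1.0)   # Eq. (24a)
    Gu = B**2 * np.sinh(phi(t))           # Eq. (24b)
    print("tau =", t, "ps:  G_g =", Gg, " G_u =", Gu)
```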
## V Result: Including phonon-induced decoherence
The phonon-induced decay rates affect the population distribution among the QD states at different temperatures. To study this dependence, we have solved the simplified analytical ME in Eq. (27), where all the decoherence rates are taken into account.
We take the additional parameters for InGaAs/GaAs QDs used in [52]. The phonon cutoff frequency is \(\omega_{b}=1\) meV, and we choose \(\gamma=\gamma^{\prime}=2\) \(\mu\)eV. As earlier, we normalize the parameters to dimensionless quantities by choosing \(\gamma_{n}=1\) ps\({}^{-1}\). The validity condition of our ME allows us to investigate weak phonon coupling with higher field intensity, as well as strong coupling with lower field intensity, at given temperatures. In the first case, we take the Rabi frequency of the truncated SG beam \(\Omega_{G}^{0}=4.0\gamma_{n}\) and of the truncated LG beam \(\Omega_{D}^{0}=10.4\gamma_{n}\), weakly coupled to the phonon bath with \(\alpha_{p}=2\times 10^{-3}\) ps\({}^{2}\) and modulation coefficients \(a_{1}=a_{2}=1.5\), \(b_{1}=b_{2}=1.0\); the rest of the parameters are the same as taken previously. We find that, due to the weak coupling, there is negligible distortion of the spot, as shown in Fig. 11. In the second case, we consider strong coupling, \(\alpha_{p}=3\times 10^{-2}\) ps\({}^{2}\), with chirping 1.51 ps\({}^{-2}\); the Rabi frequency of the truncated SG beam is \(\Omega_{G}^{0}=2\gamma_{n}\) and of the truncated LG beam \(\Omega_{D}^{0}=4\gamma_{n}\), with modulation coefficients \(a_{1}=b_{1}=1.0\), \(a_{2}=1.6\), \(b_{2}=1.4\), i.e., fields much weaker than in the former case. We see that at large coupling the spot gets distorted, which strongly diminishes its sharpness, as in Fig. 12. Although the decoupling of phonons at much larger pulse areas has been reported in [57], our theory deals with much shorter pulses, restricted by the validity condition mentioned earlier, and we find that the increase in temperature distorts the spot appreciably. In both cases we also observe a decrease of the population in the central spot due to the phonon decoherence. As seen from the above discussion, the phonon-induced decoherence rates deform the QD spot depending on the coupling strength. This hindrance to resolution is due to inefficient population transfer among the quantum states. So, even though the modulated truncated beams can minimize the side peaks to a great extent, the above analysis shows that a lower temperature with weak electron-phonon coupling is favorable for resolution measurement. Hence, the polaron-transformed ME gives an overall understanding of super-resolution in solid-state QDs.
## VI Conclusion
In conclusion, we have theoretically studied RAP-based imaging in semiconductor QD systems. For this purpose, we have adopted a semi-classical treatment for
describing the spatiotemporal beams that interact with the two-level QD system. We use a dressed-state analysis to understand the RAP-assisted population transfer in the presence of chirped pulses. We also show how the QD emitter's spot size depends on the LG field intensity. The image resolution in a dense ensemble of QDs is compromised by the circular ring: the dominance of the LG beam tail over the SG beam tail causes residual population flow from the ground state to the excited state, creating the circular ring. Further, we have used the modulated truncated beams to overcome the image distortion. The phonon vibration in a semiconductor quantum dot system is inevitable at finite temperatures. To encompass the temperature-dependent phonon-induced decoherence rates in the formation of super-resolution images, we explore the full polaron ME. Our analysis reveals that sharp image formation is possible at low temperatures with weak electron-phonon coupling. However, the QD emitter's spot size gets distorted at substantially high temperatures. Hence, this investigation may have potential applications in nano-scale imaging, with scalability and controllability.
## Appendix: Polaron Master Equation Derivation
In this Appendix, we give the derivation of the analytically obtained scattering rates for the phonon-mediated scattering processes. The full polaron ME is
\[\frac{\partial\rho}{\partial t}=\frac{1}{i\hbar}\left[H_{S}(t),\rho(t)\right]+ \frac{\gamma}{2}\mathcal{L}\left[\sigma^{-}\right]\rho+\frac{\gamma^{\prime}}{ 2}\mathcal{L}\left[\sigma^{+}\sigma^{-}\right]\rho+\mathcal{L}_{ph}\rho, \tag{10}\]
where, \(\mathcal{L}_{ph}\) is given by,
\[\mathcal{L}_{ph}\rho=-\frac{1}{\hbar^{2}}\int_{0}^{\infty}\sum_{l=g,u}\ d\tau\ \{G_{l}(\tau)[X_{l}(t),X_{l}(t,\tau)\rho(t)]+H.c.\}. \tag{11}\]
Here, \(X_{l}(t,\tau)=e^{-iH_{S}(t)\tau/\hbar}X_{l}(t)e^{iH_{S}(t)\tau/\hbar}\). The modified system operators are given below. Note that the time-dependent field with Rabi frequency \(\Omega(t)=\Omega_{D}+\Omega_{G}e^{i\delta t}\) couples to the QD system.
\[X_{g}(t) =\frac{\hbar}{2}(\Omega(t)\sigma^{+}+\Omega(t)^{*}\sigma^{-}) \tag{12}\] \[=\frac{\hbar}{2}\{\text{Re}[\Omega(t)](\sigma^{+}+\sigma^{-})-i \text{Im}[\Omega(t)](\sigma^{-}-\sigma^{+})\}\] (13) \[=\frac{\hbar}{2}\{\text{Re}[\Omega(t)]\sigma_{x}-\text{Im}[ \Omega(t)]\sigma_{y}\} \tag{14}\]
and
\[X_{u}(t) =\frac{i\hbar}{2}(\Omega(t)\sigma^{+}-\Omega(t)^{*}\sigma^{-}) \tag{15}\] \[=-\frac{\hbar}{2}\{\text{Im}[\Omega(t)](\sigma^{+}+\sigma^{-})+i \text{Re}[\Omega(t)](\sigma^{-}-\sigma^{+})\}\] (16) \[=-\frac{\hbar}{2}\{\text{Im}[\Omega(t)]\sigma_{x}+\text{Re}[ \Omega(t)]\sigma_{y}\}. \tag{17}\]
Also the polaron Green's functions are
\[G_{g}(\tau)=\langle B\rangle^{2}\{\cosh[\phi(\tau)]-1\}\quad\text{ and }\quad G_{u}(\tau)=\langle B\rangle^{2}\text{sinh}[\phi(\tau)]. \tag{18}\]
The full polaron-transformed system Hamiltonian is \(H_{S}=-\hbar\Delta|1\rangle\langle 1|+\langle B\rangle X_{g}(t)\). Using the Born-Markov approximation, the two-time phonon system operators can be written in terms of the one-time operators in the interaction picture:
\[X_{g}(t,\tau) =e^{-iH_{S}(t)\tau/\hbar}X_{g}(t)e^{iH_{S}(t)\tau/\hbar} \tag{19}\] \[X_{u}(t,\tau) =e^{-iH_{S}(t)\tau/\hbar}X_{u}(t)e^{iH_{S}(t)\tau/\hbar} \tag{20}\]
So, we get the following,
\[X_{g}(t,\tau)= \frac{\hbar}{2}\bigg{\{}\bigg{[}\frac{\text{Re}[\Omega(t)](\Delta ^{2}\,\cos[\eta(t)\tau]+|\Omega_{R}(t)|^{2})}{\eta(t)^{2}}-\frac{\text{Im}[ \Omega(t)](\Delta\,\sin[\eta(t)\tau])}{\eta(t)}\bigg{]}\sigma_{x}-\frac{2 \Delta\langle B\rangle|\Omega(t)|^{2}(1-cos[\eta(t)\tau])}{\eta(t)^{2}}\] \[\times\sigma^{+}\sigma^{-}-\bigg{[}\frac{\text{Im}[\Omega(t)]( \Delta^{2}\,\cos[\eta(t)\tau]+|\Omega_{R}(t)|^{2})}{\eta(t)^{2}}+\frac{\text {Re}[\Omega(t)](\Delta\,\sin[\eta(t)\tau])}{\eta(t)}\bigg{]}\sigma_{y}\bigg{\}}, \tag{21}\]
\[X_{u}(t,\tau)= -\frac{\hbar}{2}\bigg{\{}\bigg{[}\frac{\mathrm{Re}[\Omega(t)](\Delta \,\sin[\eta(t)\tau])}{\eta(t)}+\mathrm{Im}[\Omega(t)]\mathrm{cos}[\eta(t)\tau] \bigg{]}\sigma_{x}+\bigg{[}\mathrm{Re}[\Omega(t)]\mathrm{cos}[\eta(t)\tau]- \frac{\mathrm{Im}[\Omega(t)](\Delta\,\sin[\eta(t)\tau])}{\eta(t)}\bigg{]}\] \[\times\sigma_{y}+\frac{2\langle B\rangle|\Omega(t)|^{2}\mathrm{ sin}[\eta(t)\tau]}{\eta(t)}\sigma^{+}\sigma^{-}\bigg{\}},\] (A.13)
where \(\eta(t)=\sqrt{|\Omega_{R}(t)|^{2}+\Delta^{2}}\) and \(\Omega_{R}(t)=\langle B\rangle\Omega(t)\). Now from Eq. (A.2) we get
\[\mathcal{L}_{ph}\rho=-\frac{1}{\hbar^{2}}\int_{0}^{\infty}\{G_{g}(\tau)[X_{g} (t),X_{g}(t,\tau)\rho(t)]+\mathrm{H.c.}+G_{u}(\tau)[X_{u}(t),X_{u}(t,\tau)\rho (t)]+\mathrm{H.c.}\}\;d\tau.\] (A.14)
Now, putting in the values of \(X_{g}(t,\tau)\) and \(X_{u}(t,\tau)\) and considering the following definitions of real parameters,
\[f(t,\tau) =\frac{\Delta^{2}\,\cos[\eta(t)\tau]+|\Omega_{R}(t)|^{2}}{\eta(t) ^{2}},\] (A.15) \[g(t,\tau) =\frac{\Delta\,\sin[\eta(t)\tau]}{\eta(t)},\] (A.16) \[h(t,\tau) =\frac{2\Delta\langle B\rangle|\Omega(t)|(1-\cos[\eta(t)\tau])}{ \eta(t)^{2}},\] (A.17) \[q(t,\tau) =\cos[\eta(t)\tau],\] (A.18) \[r(t,\tau) =\frac{2\langle B\rangle|\Omega(t)|^{2}\,\sin[\eta(t)\tau]}{\eta( t)},\] (A.19)
we can get
\[\mathcal{L}_{ph}\rho= -\frac{1}{4}\int_{0}^{\infty}\bigg{\{}G_{g}(\tau)\bigg{[}( \mathrm{Re}[\Omega(t)]\sigma_{x}-\mathrm{Im}[\Omega(t)]\sigma_{y}),\bigg{(}( \mathrm{Re}[\Omega(t)]f(t,\tau)-\mathrm{Im}[\Omega(t)]g(t,\tau))\sigma_{x}-( \mathrm{Im}[\Omega(t)]f(t,\tau)\] \[+\mathrm{Re}[\Omega(t)]g(t,\tau))\sigma_{y}-h(t,\tau)\sigma^{+} \sigma^{-}\bigg{)}\rho(t)\bigg{]}\,+\,\mathrm{H.c.}+G_{u}(\tau)\bigg{[}( \mathrm{Im}[\Omega(t)]\sigma_{x}+\mathrm{Re}[\Omega(t)]\sigma_{y}),-\bigg{(}( \mathrm{Re}[\Omega(t)]g(t,\tau)\] \[+\mathrm{Im}[\Omega(t)]q(t,\tau))\sigma_{x}+(\mathrm{Re}[\Omega(t )]q(t,\tau)-\mathrm{Im}[\Omega(t)]g(t,\tau))\sigma_{y}+r(t,\tau)\sigma^{+} \sigma^{-}\bigg{)}\rho(t)\bigg{]}\,+\,\mathrm{H.c.}\bigg{\}}\;d\tau.\] (A.20)
After performing the commutation algebra we get,
\[\mathcal{L}_{ph}\rho= \int_{0}^{\infty}\bigg{\{}\bigg{[}-\frac{|\Omega(t)|^{2}}{4}(- \mathrm{Re}[G_{g}(\tau)]f(t,\tau)+\mathrm{Im}[G_{g}(\tau)+G_{u}(\tau)]g(t,\tau )-\mathrm{Re}[G_{u}(\tau)]q(t,\tau))\mathcal{L}[\sigma^{+}]\rho\bigg{]}+ \bigg{[}-\frac{|\Omega(t)|^{2}}{4}\] \[\times(-\mathrm{Re}[G_{g}(\tau)]f(t,\tau)-\mathrm{Im}[G_{g}(\tau )+G_{u}(\tau)]g(t,\tau)-\mathrm{Re}[G_{u}(\tau)]q(t,\tau))\mathcal{L}[\sigma^ {-}]\rho\bigg{]}-\bigg{[}\frac{1}{2}((\mathrm{Re}[G_{u}](\tau)q(t,\tau)\] \[-\mathrm{Re}[G_{g}(\tau)]f(t,\tau))(\mathrm{Re}[\Omega(t)^{2}]- \mathrm{Im}[\Omega(t)^{2}])-2\mathrm{Re}[\Omega(t)]\mathrm{Im}[\Omega(t)]( \mathrm{Re}[G_{u}(\tau)]-\mathrm{Re}[G_{g}(\tau)]g(t,\tau)))(\sigma^{+}\rho \sigma^{+}\] \[+\sigma^{-}\rho\sigma^{-})\bigg{]}-i\bigg{[}\frac{1}{2}((\mathrm{ Re}[G_{u}(\tau)]-\mathrm{Re}[G_{g}(\tau)])(\mathrm{Re}[\Omega(t)^{2}]-\mathrm{Im}[ \Omega(t)^{2}])g(t,\tau)+2\mathrm{Re}[\Omega(t)]\mathrm{Im}[\Omega(t)]( \mathrm{Re}[G_{u}(\tau)]q(t,\tau)\] \[-\mathrm{Re}[G_{g}(\tau)]f(t,\tau)))(\sigma^{+}\rho\sigma^{+}- \sigma^{-}\rho\sigma^{-})\bigg{]}+i\bigg{[}\frac{|\Omega(t)|^{2}}{2}((\mathrm{ Re}[G_{g}(\tau)]+\mathrm{Re}[G_{u}(\tau)])g(t,\tau))[\sigma^{+}\sigma^{-},\rho] \bigg{]}-\bigg{[}i\frac{1}{4}(\mathrm{Im}[\Omega(t)]\] \[\times G_{g}(\tau)h(t,\tau)+\mathrm{Re}[\Omega(t)]G_{u}(\tau)r(t, \tau))(\sigma^{+}\sigma^{-}\rho\sigma^{+}+\sigma^{-}\rho-\sigma^{+}\sigma^{-} \rho\sigma^{-})+\mathrm{H.c.}\bigg{]}-\bigg{[}\frac{1}{4}(\mathrm{Re}[\Omega(t )]G_{g}(\tau)h(t,\tau)\] \[-\mathrm{Im}[\Omega(t)]G_{u}(\tau)r(t,\tau))(\sigma^{+}\sigma^{-} \rho\sigma^{+}-\sigma^{-}\rho+\sigma^{+}\sigma^{-}\rho\sigma^{-})+\mathrm{H.c.} \bigg{]}\bigg{\}},\] (A.21)
\[\mathcal{L}_{ph}\rho= \frac{\Gamma^{\sigma^{+}}}{2}\mathcal{L}[\sigma^{+}]\rho+\frac{ \Gamma^{\sigma^{-}}}{2}\mathcal{L}[\sigma^{-}]\rho-\Gamma^{cd}(\sigma^{+}\rho \sigma^{+}+\sigma^{-}\rho\sigma^{-})-i\Gamma^{sd}(\sigma^{+}\rho\sigma^{+}- \sigma^{-}\rho\sigma^{-})+i\Delta^{\sigma^{+}\sigma^{-}}[\sigma^{+}\sigma^{-},\rho]\] \[-[i\Gamma_{gu+}(\sigma^{+}\sigma^{-}\rho\sigma^{+}+\sigma^{-}\rho- \sigma^{+}\sigma^{-}\rho\sigma^{-})+\mathrm{H.c.}]-[\Gamma_{gu-}(\sigma^{+} \sigma^{-}\rho\sigma^{+}-\sigma^{-}\rho+\sigma^{+}\sigma^{-}\rho\sigma^{-})+ \mathrm{H.c.}],\] (A.22)
where all the phonon-mediated scattering rates which contribute to the Lindblad ME are given below.
\[\Gamma^{\sigma^{+}}= -\frac{|\Omega(t)|^{2}}{2}\int_{0}^{\infty}(-\mathrm{Re}[G_{g}(\tau )]f(t,\tau)+\mathrm{Im}[G_{g}(\tau)+G_{u}(\tau)]g(t,\tau)-\mathrm{Re}[G_{u}( \tau)]q(t,\tau))\ d\tau\] \[= \frac{|\Omega_{R}(t)|^{2}}{2}\int_{0}^{\infty}\biggl{(}\mathrm{Re} \biggl{\{}\{\cosh[\phi(\tau)]-1\}\biggl{\{}\frac{\Delta^{2}\,\cos[\eta(t)\tau] +|\Omega_{R}(t)|^{2}}{\eta(t)^{2}}\biggr{\}}\,+\,\sinh[\phi(\tau)]\mathrm{cos}[ \eta(t)\tau]\biggr{\}}\,-\,\mathrm{Im}[e^{\phi(\tau)}-1]\] \[\biggl{\{}\frac{\Delta\,\sin[\eta(t)\tau]}{\eta(t)}\biggr{\}} \biggr{)}\ d\tau,\] (A.23) \[\Gamma^{\sigma^{-}}= -\frac{|\Omega(t)|^{2}}{2}\int_{0}^{\infty}(-\mathrm{Re}[G_{g}( \tau)]f(t,\tau)-\mathrm{Im}[G_{g}(\tau)+G_{u}(\tau)]g(t,\tau)-\mathrm{Re}[G_{ u}(\tau)]q(t,\tau))\ d\tau\] \[= \frac{|\Omega_{R}(t)|^{2}}{2}\int_{0}^{\infty}\biggl{(}\mathrm{Re} \biggl{\{}\{\cosh[\phi(\tau)]-1\}\biggl{\{}\frac{\Delta^{2}\,\cos[\eta(t)\tau] +|\Omega_{R}(t)|^{2}}{\eta(t)^{2}}\biggr{\}}\,+\,\sinh[\phi(\tau)]\mathrm{cos}[ \eta(t)\tau]\biggr{\}}\,+\,\mathrm{Im}[e^{\phi(\tau)}-1]\] \[\biggl{\{}\frac{\Delta\,\sin[\eta(t)\tau]}{\eta(t)}\biggr{\}} \biggr{)}\ d\tau,\] (A.24) \[\Gamma^{cd}= \frac{1}{2}\int_{0}^{\infty}((\mathrm{Re}[G_{u}(\tau)](\tau)q(t, \tau)-\mathrm{Re}[G_{g}(\tau)]f(t,\tau))(\mathrm{Re}[\Omega(t)^{2}]-\mathrm{ Im}[\Omega(t)^{2}])-2\mathrm{Re}[\Omega(t)]\mathrm{Im}[\Omega(t)](\mathrm{Re}[G_{u}( \tau)]\] \[-\,\mathrm{Re}[G_{g}(\tau)])g(t,\tau))\ d\tau\] \[= \frac{1}{2}\int_{0}^{\infty}\langle B\rangle^{2}\biggl{(}\mathrm{ Re}\biggl{\{}\sinh[\phi(\tau)]\mathrm{cos}[\eta(t)\tau]-\{\cosh[\phi(\tau)]-1\} \biggl{\{}\frac{\Delta^{2}\,\cos[\eta(t)\tau]+|\Omega_{R}(t)|^{2}}{\eta(t)^{2 }}\biggr{\}}\biggr{\}}(\mathrm{Re}[\Omega(t)^{2}]\] \[-\,\mathrm{Im}[\Omega(t)^{2}])-2\mathrm{Re}[\Omega]\mathrm{Im}[ \Omega]\mathrm{Re}[1-e^{-\phi(\tau)}]\biggl{\{}\frac{\Delta\,\sin[\eta(t)\tau] }{\eta(t)}\biggr{\}}\biggr{)}\ d\tau,\] (A.25) \[\Gamma^{sd}= \frac{1}{2}\int_{0}^{\infty}((\mathrm{Re}[G_{u}(\tau)]-\mathrm{ Re}[G_{g}(\tau)])g(t,\tau)(\mathrm{Re}[\Omega(t)^{2}]-\mathrm{Im}[\Omega(t)^{2}])+2 \mathrm{Re}[\Omega(t)]\mathrm{Im}[\Omega(t)](\mathrm{Re}[G_{u}(\tau)]q(t,\tau)\] \[-\,\mathrm{Re}[G_{g}(\tau)]g(t,\tau)))\ d\tau\] \[= \frac{1}{2}\int_{0}^{\infty}\langle B\rangle^{2}\biggl{(}\mathrm{ Re}[1-e^{-\phi(\tau)}]\biggl{\{}\frac{\Delta\,\sin[\eta(t)\tau]}{\eta(t)}\biggr{\}}( \mathrm{Re}[\Omega(t)^{2}]-\mathrm{Im}[\Omega(t)^{2}])+2\mathrm{Re}[\Omega(t )]\mathrm{Im}[\Omega(t)]\mathrm{Re}\biggl{\{}\sinh[\phi(\tau)]\] \[\times\cos[\eta(t)\tau]-\{\cosh[\phi(\tau)]-1\}\biggl{\{}\frac{ \Delta^{2}\,\cos[\eta(t)\tau]+|\Omega_{R}|^{2}}{\eta(t)^{2}}\biggr{\}}\biggr{\}} \biggr{)}\ d\tau,\] (A.26) \[\Delta^{\sigma^{+}\sigma^{-}}= \frac{|\Omega_{R}(t)|^{2}}{2}\int_{0}^{\infty}\mathrm{Re}([G_{g}( \tau)]+G_{u}(\tau))g(t,\tau)\ d\tau\] \[= \frac{|\Omega_{R}(t)|^{2}}{2}\int_{0}^{\infty}\mathrm{Re}[e^{\phi (\tau)}-1]\biggl{(}\frac{\Delta\,\sin[\eta(t)\tau]}{\eta(t)}\biggr{)}\ d\tau,\] (A.27) \[\Gamma_{gu+}= \frac{1}{4}\int_{0}^{\infty}(\mathrm{Im}[\Omega(t)]G_{g}(\tau)h( t,\tau)+\mathrm{Re}[\Omega(t)]G_{u}(\tau)r(t,\tau))\ d\tau\] \[= \frac{1}{4}\int_{0}^{\infty}\biggl{(}\mathrm{Im}[\Omega(t)]\{\cosh[ \phi(t)\tau]-1\}\biggl{\{}\frac{2\Delta\langle B\rangle|\Omega(t)|^{2}(1-\cos[ \eta(t)\tau])}{\eta(t)^{2}}\biggr{\}}\,+\mathrm{Re}[\Omega(t)]\mathrm{sinh}[ \phi(\tau)]\] \[\times\biggl{\{}\frac{2\langle B\rangle|\Omega(t)|^{2}\mathrm{ sin}[\eta(t)\tau]}{\eta(t)}\biggr{\}}\biggr{)}\ d\tau,\] (A.28) \[\Gamma_{gu-}= 
\frac{1}{4}\int_{0}^{\infty}(\mathrm{Re}[\Omega(t)]G_{g}(\tau)h( t,\tau)-\mathrm{Im}[\Omega(t)]G_{u}(\tau)r(t,\tau))\ d\tau\] \[= \frac{1}{4}\int_{0}^{\infty}\biggl{(}\mathrm{Re}[\Omega(t)]\{\cosh[ \phi(t)\tau]-1\}\biggl{\{}\frac{2\Delta\langle B\rangle|\Omega(t)|^{2}(1-\cos[ \eta(t)\tau])}{\eta(t)^{2}}\biggr{\}}\,-\mathrm{Im}[\Omega(t)]\mathrm{sinh}[ \phi(\tau)]\] \[\times\biggl{\{}\frac{2\langle B\rangle|\Omega(t)|^{2}\mathrm{ sin}[\eta(t)\tau]}{\eta(t)}\biggr{\}}\biggr{)}\ d\tau.\] (A.29)
The above phonon-induced scattering rates are functions of time. At each time \(t\), their values are calculated by integrating with respect to \(\tau\). These analytically derived scattering rates illustrate the physical picture of electron-phonon coupling in semiconductor QDs. |
2304.01034 | Cyclic Finsler metrics on homogeneous spaces | In this paper, we generalize the notion of cyclic metric to homogeneous Finsler geometry. Firstly, we prove that a homogeneous Finsler space $(G/H, F)$ must be symmetric when it satisfies the naturally reductive and cyclic conditions simultaneously. Then we prove that a Finsler cyclic Lie group which is either flat or nilpotent must have an Abelian Lie algebra. Finally, we show how to induce a cyclic $(\alpha,\beta)$ metric from a cyclic Riemannian metric. Using this method, we construct a Randers cyclic Lie group. | Ju Tan, Ming Xu | 2023-04-03T14:31:03Z | http://arxiv.org/abs/2304.01034v1 | # Cyclic Finsler Metrics on Homogeneous Spaces
###### Abstract.
In this paper, we generalize the notion of cyclic metric to homogeneous Finsler geometry. Firstly, we prove that a homogeneous Finsler space \((G/H,F)\) must be symmetric when it satisfies the naturally reductive and cyclic conditions simultaneously. Then we prove that a Finsler cyclic Lie group which is either flat or nilpotent must have an Abelian Lie algebra. Finally, we show how to induce a cyclic \((\alpha,\beta)\) metric from a cyclic Riemannian metric. Using this method, we construct a Randers cyclic Lie group.
Mathematics Subject Classification(2010): 53C22, 53C30, 53C60.
Keywords: \((\alpha,\beta)\) metric, cyclic Lie group, cyclic metric, homogeneous Finsler manifold, curvature
*Ming Xu is the corresponding author.
## 1. Introduction
We call a homogeneous Riemannian manifold \((G/H,\mathrm{g})\)_cyclic_ with respect to a given reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\), if the following property is satisfied
\[\langle[x,y]_{\mathfrak{m}},z\rangle+\langle[y,z]_{\mathfrak{m}},x\rangle+ \langle[z,x]_{\mathfrak{m}},y\rangle=0,\quad\forall x,y,z\in\mathfrak{m}. \tag{1.1}\]
Here \(\langle\cdot,\cdot\rangle\) is the \(\mathrm{Ad}(H)\)-invariant inner product on \(\mathfrak{m}=T_{eH}(G/H)\) determined by the \(G\)-invariant metric \(\mathrm{g}\). When \(H\) is trivial (and then the reductive decomposition must be \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}=0+\mathfrak{g}\)), we call the cyclic \((G,\mathrm{g})\) a _cyclic Lie group_. Moreover, if \(G\) is unimodular, we specify the cyclic condition as the _traceless cyclic_ condition.
The motivation for studying cyclic metrics can be traced back to Tricerri-Vanhecke's classification of homogeneous structures [18]. In their classification, there are three basic ones, i.e., \(\mathcal{S}_{1}\), \(\mathcal{S}_{2}\) and \(\mathcal{S}_{3}\). The cyclic (traceless cyclic) condition corresponds to the type \(\mathcal{S}_{1}\oplus\mathcal{S}_{2}\) (\(\mathcal{S}_{2}\), respectively). Compared with the naturally reductive metrics, which have been more extensively studied and correspond to the type \(\mathcal{S}_{3}\), cyclic metrics have very different properties, worth exploring.
Here we briefly survey some related progress. Kowalski and Tricerri studied the traceless cyclic condition in [14]. Gadea, Gonzalez-Davila and Oubina studied traceless cyclic Lie groups in [11], and characterized cyclic and traceless cyclic metrics in [10]. In the above literature, relatively few examples have been discovered; only those of low dimension have been classified [4, 14]. Meanwhile, this project was generalized to homogeneous pseudo-Riemannian geometry in [9]. Calvaruso and Lopez classified Lorentzian cyclic Lie groups of dimensions 3 and 4 [5]. Tohidfa and Zaeim classified cyclic Lie groups of signature type (2,2) [19].
In this paper, we explore the generalization of the cyclic property in Finsler geometry, replacing (1.1) with the following analog,
\[\langle[x,y]_{\mathfrak{m}},z\rangle_{y}+\langle[y,z]_{\mathfrak{m}},x\rangle _{y}+\langle[z,x]_{\mathfrak{m}},y\rangle_{y}=0,\quad\forall x,z\in\mathfrak{ m},y\in\mathfrak{m}\backslash\{0\}. \tag{1.2}\]
Here the fundamental tensor \(\langle\cdot,\cdot\rangle_{y}\) depends on \(y\), so the Finsler cyclic condition (1.2) is not multi-linear. Though this is an essential obstacle, we can still prove some properties and find some examples of cyclic Finsler metrics.
Firstly, we prove
**Theorem 1.1**.: _If the homogeneous Finsler manifold is cyclic and naturally reductive with respect to a given reductive decomposition, then that decomposition is a Cartan decomposition, i.e., that homogeneous Finsler manifold is a symmetric space._
The phenomenon in Theorem 1.1 was pointed out in [18] for Riemannian metrics. It implies that, in homogeneous Finsler geometry, the subclass of cyclic Finsler metrics has a trivial overlap with that of naturally reductive metrics, i.e., the two classes generalize symmetric Finsler spaces in totally different directions.
Secondly, we generalize Proposition 5.5 and Proposition 3.4 in [11] as follows.
**Theorem 1.2**.: _Non-Abelian nilpotent Lie groups do not admit cyclic left-invariant Finsler metrics._
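Before proceeding, let us record a quick Riemannian sanity check of Theorem 1.2 (our illustration, not part of the proof): for the 3-dimensional Heisenberg algebra with \([x_{1},x_{2}]=x_{3}\), the cyclic sum (1.1) on the triple \((x_{1},x_{2},x_{3})\) equals \(\langle x_{3},x_{3}\rangle>0\) for every inner product, so no left invariant Riemannian metric on the Heisenberg group can be cyclic. The symbolic computation below verifies this.

```python
import sympy as sp

e3 = sp.Matrix([0, 0, 1])

def bracket(u, v):
    """Heisenberg bracket [e1, e2] = e3, extended bilinearly; other brackets vanish."""
    return (u[0] * v[1] - u[1] * v[0]) * e3

# A generic symmetric inner product g on the Lie algebra.
g = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"g{min(i, j)}{max(i, j)}"))
ip = lambda u, v: sp.expand((u.T * g * v)[0, 0])

x1, x2, x3 = sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), e3
cyclic_sum = ip(bracket(x1, x2), x3) + ip(bracket(x2, x3), x1) + ip(bracket(x3, x1), x2)
print(cyclic_sum)   # g22 = <x3, x3>, strictly positive, so (1.1) fails
```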
**Theorem 1.3**.: _If \(F\) is a left invariant cyclic Finsler metric on a Lie group \(G\) which has constant flag curvature \(K\equiv 0\), then \(G\) has an abelian Lie algebra._
Finally, we consider the construction of cyclic Finsler metrics. The following theorem implies that cyclic Randers and \((\alpha,\beta)\) metrics may be induced from those Riemannian ones.
**Theorem 1.4**.: _Let \(G/H\) be a homogeneous manifold with a given reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\), \(\alpha\) an \(\operatorname{Ad}(H)\)-invariant inner product on \(\mathfrak{m}\), and \(\beta=\alpha(X,\cdot)\) in which \(X\in\mathfrak{m}\backslash\{0\}\) is an \(\operatorname{Ad}(H)\)-invariant vector. Suppose that \(\alpha\) and \(F=\alpha\phi(\frac{\beta}{\alpha})\) induce a cyclic Riemannian metric and a non-Riemannian homogeneous Douglas metric on \(G/H\) respectively. Then \((G/H,F)\) is cyclic if and only if_
\[\alpha([y,u]_{\mathfrak{m}},y)[\alpha(y,y)\alpha(X,v)-\alpha(X,y)\alpha(v,y) ]=\alpha([y,v]_{\mathfrak{m}},y)[\alpha(y,y)\alpha(X,u)-\alpha(X,y)\alpha(u, y)]\]
_is satisfied for any \(u,v,y\in\mathfrak{m}\)._
As an application, we construct left invariant non-Riemannian cyclic Randers metrics on some solvable Lie groups.
## 2. Preliminary in homogeneous Finsler geometry
Let \(G/H\) be a homogeneous manifold, endowed with a reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\). Here the reductiveness means that the decomposition is \(\operatorname{Ad}(H)\)-invariant (which implies, at the Lie algebra level, \([\mathfrak{h},\mathfrak{m}]\subset\mathfrak{m}\)). Then the tangent space \(T_{o}(G/H)\) at the origin \(o=eH\) can be identified with \(\mathfrak{m}\), so that the isotropy action coincides with the \(\operatorname{Ad}(H)\)-action on \(\mathfrak{m}\).
Any \(G\)-invariant Finsler metric \(F\) on \(G/H\) can be one-to-one determined by \(F=F(o,\cdot)\), which is any arbitrary \(\operatorname{Ad}(H)\)-invariant Minkowski norm on \(\mathfrak{m}\)[6]. We call the pair \((G/H,F)\) a _homogeneous Finsler manifold_. For example, a homogeneous \((\alpha,\beta)\) metric can be determined by a Minkowski norm \(F=\alpha\phi(\frac{\beta}{\alpha})\) on \(\mathfrak{m}\), in which \(\alpha\) is an \(\operatorname{Ad}(H)\)-invariant Euclidean norm, \(\beta=\alpha(X,\cdot)\) is the \(\alpha\)-dual of some \(\operatorname{Ad}(H)\)-invariant \(X\in\mathfrak{m}\), and \(\phi(r)\) is some positive smooth function [8].
A homogeneous metric \(F\) on \(G/H\) is called _naturally reductive_ with respect to the given reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\), if
\[\langle[x,u]_{\mathfrak{m}},v\rangle_{y}+\langle[x,v]_{\mathfrak{m}},u\rangle _{y}+2C_{y}([x,y]_{\mathfrak{m}},u,v)=0,\]
for any \(y\in\mathfrak{m}\backslash\{0\},x,u,v\in\mathfrak{m}\)[15]. Here
\[\langle u,v\rangle_{y}=\tfrac{1}{2}\tfrac{\partial^{2}}{\partial s\partial t} |_{s=t=0}F^{2}(y+su+tv),\quad C_{y}(u,v,w)=\tfrac{1}{2}\tfrac{d}{dt}|_{t=0} \langle u,v\rangle_{y+tw}\]
are the _fundamental tensor_ and the _Cartan tensor_ of the Minkowski norm \(F\) on \(\mathfrak{m}\) respectively. See [7] for the equivalent definition and description of this natural reductiveness.
The _spray vector field_\(\eta:\mathfrak{m}\backslash\{0\}\to\mathfrak{m}\) and the _connection operator_\(N:(\mathfrak{m}\backslash\{0\})\times\mathfrak{m}\to\mathfrak{m}\) for a homogeneous Finsler manifold \((G/H,F)\) with respect to a given reductive decomposition
\(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\) are determined by the following equations respectively [12],
\[\langle\eta(y),u\rangle_{y} = \langle y,[u,y]_{\mathfrak{m}}\rangle_{y},\quad\forall y\in \mathfrak{m}\backslash\{0\},u\in\mathfrak{m},\] \[2\langle N(y,v),u\rangle_{y} = \langle[u,v]_{\mathfrak{m}},y\rangle_{y}+\langle[u,y]_{\mathfrak{ m}},v\rangle_{y}+\langle[v,y]_{\mathfrak{m}},u\rangle_{y} \tag{2.3}\] \[-2C_{y}(u,v,\eta(y)),\quad\forall y\in\mathfrak{m}\backslash\{0 \},u,v\in\mathfrak{m}.\]
They are useful for presenting curvature formulae of homogeneous Finsler manifolds. For example, we have the following homogeneous Riemann curvature formula [12, 13].
**Theorem 2.1**.: _For a left invariant Finsler metric \(F\) on the Lie group \(G\), the Riemann curvature \(R_{y}:\mathfrak{g}=T_{e}G\to T_{e}G=\mathfrak{g}\) for any \(y\in\mathfrak{g}\backslash\{0\}\) can be presented as_
\[R_{y}(u)=D_{\eta}N(y,u)-N(y,N(y,u))+N(y,[y,u])-[y,N(y,u)]. \tag{2.4}\]
More details on the curvatures in general Finsler geometry can be found in [3].
## 3. Cyclic Finsler metric and its properties
Now we are ready to present the precise definition for the Finsler cyclic condition.
**Definition 3.1**.: _A homogeneous Finsler manifold \((G/H,F)\) is said to be cyclic with respect to a given reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\), if we have_
\[\langle[x,y]_{\mathfrak{m}},z\rangle_{y}+\langle[y,z]_{\mathfrak{m}},x\rangle_ {y}+\langle[z,x]_{\mathfrak{m}},y\rangle_{y}=0,\quad\forall y\in\mathfrak{m} \backslash\{0\},x,z\in\mathfrak{m}.\]
_In particular, when \(H=\{e\}\) (and then the reductive decomposition must be \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}=0+\mathfrak{g}\)), we call the cyclic \((G,F)\) a (Finsler) cyclic Lie group._
**Proof of Theorem 1.1.** Suppose the homogeneous Finsler metric \(F\) is naturally reductive and cyclic with respect to the reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\). Choose any \(y\in\mathfrak{m}\backslash\{0\}\), \(x,z\in\mathfrak{m}\). Then the natural reductiveness provides
\[\langle[x,u]_{\mathfrak{m}},v\rangle_{y}+\langle[x,v]_{\mathfrak{m}},u\rangle _{y}+2C_{y}([x,y]_{\mathfrak{m}},u,v)=0,\quad\forall u,v\in\mathfrak{m}. \tag{3.5}\]
Inputting \(u=y\) and \(v=z\) into (3.5), we get
\[\langle[x,y]_{\mathfrak{m}},z\rangle_{y}+\langle[x,z]_{\mathfrak{m}},y\rangle_{y}=0. \tag{3.6}\]
A similar argument provides
\[\langle[z,y]_{\mathfrak{m}},x\rangle_{y}+\langle[z,x]_{\mathfrak{m}},y\rangle_{y}=0. \tag{3.7}\]
The cyclic condition provides
\[\langle[x,y]_{\mathfrak{m}},z\rangle_{y}+\langle[y,z]_{\mathfrak{m}},x \rangle_{y}+\langle[z,x]_{\mathfrak{m}},y\rangle_{y}=0. \tag{3.8}\]
The sum of (3.7) and (3.8) minus (3.6) provides \(3\langle[z,x]_{\mathfrak{m}},y\rangle_{y}=0\). So we get \(\langle[\mathfrak{m},\mathfrak{m}]_{\mathfrak{m}},y\rangle_{y}=0\), \(\forall y\in\mathfrak{m}\backslash\{0\}\). This can only happen when \([\mathfrak{m},\mathfrak{m}]\subset\mathfrak{h}\), i.e., \(G/H\) is symmetric.
To prove Theorem 1.2 and Theorem 1.3, we need the following lemma.
**Lemma 3.2**.: _Let \((G,F)\) be a Finsler cyclic Lie group. Then we have the following:_
1. _If_ \(y\in\mathfrak{g}\backslash\{0\}\) _satisfies_ \(\langle[\mathfrak{g},\mathfrak{g}],y\rangle_{y}=0\)_, then_ \(\eta(y)=0\) _and_ \(N(y,\cdot)=-\mathrm{ad}(y)\)_. Moreover, in this situation,_ \(R_{y}=-\mathrm{ad}(y)^{2}\) _and_ \(\mathrm{ad}(y)\) _is self-adjoint with respect to_ \(\langle\cdot,\cdot\rangle_{y}\)_._
2. _Any_ \(y\in\mathfrak{c}(\mathfrak{g})\backslash\{0\}\) _satisfies_ \(\langle[\mathfrak{g},\mathfrak{g}],y\rangle_{y}=0\) _and_ \(R_{y}\equiv 0\)_._
**Proof.** (1) Since \(\langle[\mathfrak{g},\mathfrak{g}],y\rangle_{y}=0\), we have \(\eta(y)=0\). By the cyclic condition, we have
\[\langle[u,y],v\rangle_{y}-\langle[v,y],u\rangle_{y}=\langle[u,y],v\rangle_{y }+\langle[v,u],y\rangle_{y}+\langle[y,v],u\rangle_{y}=0, \tag{3.9}\]
so (2.3) provides
\[2\langle N(y,v),u\rangle_{y} = \langle[u,v],y\rangle_{y}+\langle[u,y],v\rangle_{y}+\langle[v,y],u\rangle_{y}-2C_{y}(u,v,\eta(y))\] \[= \langle[u,y],v\rangle_{y}+\langle[v,y],u\rangle_{y}=2\langle[v,y ],u\rangle_{y}.\]
It implies \(N(y,v)=[v,y]\), \(\forall v\in\mathfrak{g}\), i.e., \(N(y,\cdot)=-\mathrm{ad}(y)\). Inputting \(\eta(y)=0\) and \(N(y,\cdot)=-\mathrm{ad}(y)\) into (2.4) in Theorem 2.1, we get \(R_{y}=-\mathrm{ad}(y)^{2}\) immediately. Finally, (3.9) shows that \(\mathrm{ad}(y)\) is self-adjoint with respect to \(\langle\cdot,\cdot\rangle_{y}\).
(2) Assume \(y\in\mathfrak{c}(\mathfrak{g})\backslash\{0\}\). By the cyclic condition, we have \(\forall x,z\in\mathfrak{g}\),
\[\langle[z,x],y\rangle_{y}=\langle[x,y],z\rangle_{y}+\langle[y,z],x\rangle_{y }+\langle[z,x],y\rangle_{y}=0,\]
i.e., \(\langle[\mathfrak{g},\mathfrak{g}],y\rangle_{y}=0\). By (1) of Lemma 3.2, \(R_{y}=-\mathrm{ad}(y)^{2}=0\) follows immediately.
**Proof of Theorem 1.2.** Assume conversely that \(G\) admits a left invariant cyclic Finsler metric and its Lie algebra \(\mathfrak{g}\) is non-abelian. By the proof of Theorem 5.1 in [12], there exists a nonzero \(y\in\mathfrak{c}(\mathfrak{g})\) with \(\mathrm{Ric}(y)=\mathrm{tr}(R_{y})>0\). This contradicts (2) of Lemma 3.2.
**Proof of Theorem 1.3.** Assume \((G,F)\) is a Finsler cyclic Lie group, i.e., \(F\) is a left invariant Finsler metric on \(G\) which is cyclic with respect to the reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}=0+\mathfrak{g}\). We will prove \(\mathfrak{g}\) is Abelian by the following three claims.
**Claim I**: \([\mathfrak{g},\mathfrak{g}]\) is commutative.
The metric \(F\) has \(K\equiv 0\) by assumption, and the left invariance of \(F\) implies that its Cartan tensor and Landsberg tensor are both bounded. So the argument proving Akbar-Zadeh's theorem [1] can be applied here to prove that \(F\) is locally Minkowskian. We average the fundamental tensor of \(F\) on the indicatrix in each tangent space, with respect to the volume form induced by the Hessian metric; this produces a Riemannian metric \(\overline{F}\) on \(G\). Because \(F\) is locally Minkowskian, \(\overline{F}\) is flat. This averaging process preserves Killing fields and isometries, so \(\overline{F}\) is left \(G\)-invariant. By Theorem 1.5 in [17], \(\mathfrak{g}=\mathfrak{l}+\mathfrak{u}\), in which \(\mathfrak{l}\) is a commutative subalgebra and \(\mathfrak{u}\) is a commutative ideal. So we see \([\mathfrak{g},\mathfrak{g}]\subset\mathfrak{u}\) is commutative, which proves Claim I.
**Claim II**: \(\mathfrak{c}(\mathfrak{g})+[\mathfrak{g},\mathfrak{g}]=\mathfrak{g}\).

Assume conversely that \(\mathfrak{c}(\mathfrak{g})+[\mathfrak{g},\mathfrak{g}]\neq\mathfrak{g}\). Then there exists a nonzero vector \(y\in\mathfrak{g}\backslash(\mathfrak{c}(\mathfrak{g})+[\mathfrak{g}, \mathfrak{g}])\), such that \(\langle\mathfrak{c}(\mathfrak{g})+[\mathfrak{g},\mathfrak{g}],y\rangle_{y}=0\). Because \(\langle[\mathfrak{g},\mathfrak{g}],y\rangle_{y}=0\), we can apply the flatness condition for \(F\) and (1) of Lemma 3.2 to see that \(\mathrm{ad}(y)\) is self-adjoint with respect to \(\langle\cdot,\cdot\rangle_{y}\) and \(R_{y}(v)=-\mathrm{ad}(y)^{2}v=0\) for any \(v\in\mathfrak{g}\). Then we have
\[\langle\mathrm{ad}(y)v,\mathrm{ad}(y)v\rangle_{y}=\langle\mathrm{ad}(y)^{2}v, v\rangle_{y}=-\langle R_{y}(v),v\rangle_{y}=0,\quad\forall v\in\mathfrak{g},\]
i.e., \(y\in\mathfrak{c}(\mathfrak{g})\). This is a contradiction, which proves Claim II.
**Claim III**: \([\mathfrak{g},\mathfrak{g}]\subset\mathfrak{c}(\mathfrak{g})\).
Choose any \(u\in\mathfrak{g}\) and \(v\in[\mathfrak{g},\mathfrak{g}]\). By Claim II, we have the decomposition \(u=u_{1}+u_{2}\) with \(u_{1}\in\mathfrak{c}(\mathfrak{g})\) and \(u_{2}\in[\mathfrak{g},\mathfrak{g}]\). By Claim I, \([v,u_{2}]=0\), \([v,u]=[v,u_{1}]+[v,u_{2}]=0\). So we see \([\mathfrak{g},\mathfrak{g}]\subset\mathfrak{c}(\mathfrak{g})\), which proves Claim III.
Summarizing Claim II and Claim III, we prove \(\mathfrak{g}=\mathfrak{c}(\mathfrak{g})\), which ends the proof.
## 4. Construction of cyclic \((\alpha,\beta)\) metrics
To prove Theorem 1.4, we need the following lemma.
**Lemma 4.1**.: _Keeping all assumptions and notations in Theorem 1.4, then \(F\) is cyclic if and only if \(\Phi(r)\cdot\Psi(u,v,y)=0\) is satisfied for any \(u,v\in\mathfrak{m}\), \(y\in\mathfrak{m}\backslash\{0\}\), and \(r=\frac{\alpha(X,y)}{\alpha(y,y)^{1/2}}\), in which_
\[\Phi(r)=\phi(r)\phi^{\prime}(r)-r\phi^{\prime}(r)^{2}-r\phi(r)\phi^{\prime \prime}(r)\]
_and_
\[\Psi(u,v,y) = \alpha(y,[y,u]_{\mathfrak{m}})(\alpha(y,y)\alpha(X,v)-\alpha(y,v) \alpha(X,y))\] \[+\alpha(y,[v,y]_{\mathfrak{m}})(\alpha(y,y)\alpha(X,u)-\alpha(y,u )\alpha(X,y)).\]
**Proof.** Direct calculation for the fundamental tensor shows that, for any \(y\in\mathfrak{m}\backslash\{0\}\), \(u,v\in\mathfrak{m}\),
\[\langle u,v\rangle_{y}=\tfrac{1}{2}\tfrac{\partial^{2}}{\partial s \partial t}|_{s=t=0}F^{2}(y+su+tv) \tag{4.10}\] \[= \tfrac{1}{2}\tfrac{\partial^{2}}{\partial s\partial t}|_{s=t=0} \left(\alpha(y+su+tv,y+su+tv)\phi(\tfrac{\alpha(X,y+su+tv)}{\sqrt{\alpha(y+su+tv,y+su+tv)}})^{2}\right)\] \[= \phi(r)^{2}\alpha(u,v)-\phi(r)\phi^{\prime}(r)\cdot\tfrac{\alpha (y,u)\alpha(y,v)\alpha(X,y)}{\alpha(y,y)^{3/2}}\] \[+\phi(r)\phi^{\prime}(r)\cdot\tfrac{\alpha(y,v)\alpha(X,u)+ \alpha(y,u)\alpha(X,v)-\alpha(u,v)\alpha(X,y)}{\alpha(y,y)^{1/2}}\] \[+(\phi^{\prime}(r)^{2}+\phi(r)\phi^{\prime\prime}(r))(\alpha(X,u) -\tfrac{\alpha(y,u)\alpha(X,y)}{\alpha(y,y)})(\alpha(X,v)-\tfrac{\alpha(y,v) \alpha(X,y)}{\alpha(y,y)}).\]
Setting
\[Q_{1}=\alpha(X,[y,u]_{\mathfrak{m}})\alpha(y,v),\quad Q_{2}= \alpha(X,[y,u]_{\mathfrak{m}}),\quad Q_{3}=\alpha(X,[u,v]_{\mathfrak{m}}) \alpha(y,y),\] \[Q_{4}=\alpha(X,[v,y]_{\mathfrak{m}})\alpha(y,u),\quad Q_{5}= \alpha(X,[v,y]_{\mathfrak{m}}),\]
then we can get the following from (4.10),
\[\langle[y,u]_{\mathfrak{m}},v\rangle_{y} = \alpha([y,u]_{\mathfrak{m}},v)\phi(r)^{2}-\phi(r)\phi^{\prime}(r )\cdot\tfrac{\alpha(y,[y,u]_{\mathfrak{m}})\alpha(y,v)\alpha(X,y)}{\alpha(y, y)^{3/2}} \tag{4.11}\] \[+\phi(r)\phi^{\prime}(r)\cdot\tfrac{Q_{1}+\alpha(y,[y,u]_{ \mathfrak{m}})\alpha(X,v)-\alpha(v,[y,u]_{\mathfrak{m}})\alpha(X,y)}{\alpha(y,y)^{1/2}}\] \[+(\phi(r)\phi^{\prime\prime}(r)+\phi^{\prime}(r)^{2})\cdot(Q_{2}- \tfrac{\alpha(y,[y,u]_{\mathfrak{m}})\alpha(X,y)}{\alpha(y,y)})\] \[\cdot(\alpha(X,v)-\tfrac{\alpha(y,v)\alpha(X,y)}{\alpha(y,y)}),\] \[\langle[u,v]_{\mathfrak{m}},y\rangle_{y} = \alpha([u,v]_{\mathfrak{m}},y)\phi(r)^{2}-\phi(r)\phi^{\prime}(r )\cdot\tfrac{\alpha(y,[u,v]_{\mathfrak{m}})\alpha(X,y)}{\alpha(y,y)^{1/2}}\] (4.12) \[+\phi(r)\phi^{\prime}(r)\cdot\tfrac{Q_{3}}{\alpha(y,y)^{1/2}},\] \[\langle[v,y]_{\mathfrak{m}},u\rangle_{y} = \alpha([v,y]_{\mathfrak{m}},u)\phi(r)^{2}-\phi(r)\phi^{\prime}(r )\cdot\tfrac{\alpha(y,[v,y]_{\mathfrak{m}})\alpha(y,u)\alpha(X,y)}{\alpha(y, y)^{3/2}}\] (4.13) \[+\phi(r)\phi^{\prime}(r)\cdot\tfrac{Q_{4}+\alpha(y,[v,y]_{ \mathfrak{m}})\alpha(X,u)-\alpha(u,[v,y]_{\mathfrak{m}})\alpha(X,y)}{\alpha(y,y)^{1/2}}\] \[+(\phi(r)\phi^{\prime\prime}(r)+\phi^{\prime}(r)^{2})\cdot(Q_{5} -\tfrac{\alpha(y,[v,y]_{\mathfrak{m}})\alpha(X,y)}{\alpha(y,y)})\] \[\cdot(\alpha(X,u)-\tfrac{\alpha(y,u)\alpha(X,y)}{\alpha(y,y)}).\]
The cyclic condition for \(\alpha\) implies
\[\alpha([y,u]_{\mathfrak{m}},v)+\alpha([u,v]_{\mathfrak{m}},y)+\alpha([v,y]_{ \mathfrak{m}},u)=0.\]
Since \(F\) is a non-Riemannian Douglas metric, we have \(\alpha([\mathfrak{m},\mathfrak{m}]_{\mathfrak{m}},X)=0\) (see [2] or [16]), so \(Q_{1}=Q_{2}=Q_{3}=Q_{4}=Q_{5}=0\). Using these observations, the sum of (4.11)-(4.13) provides
\[\langle[y,u]_{\mathfrak{m}},v\rangle_{y}+\langle[u,v]_{\mathfrak{m }},y\rangle_{y}+\langle[v,y]_{\mathfrak{m}},u\rangle_{y} \tag{4.14}\] \[= \phi(r)\phi^{\prime}(r)\cdot\tfrac{\alpha(y,[y,u]_{\mathfrak{m}}) (\alpha(y,y)\alpha(X,v)-\alpha(y,v)\alpha(X,y))+\alpha(y,[v,y]_{\mathfrak{m}}) (\alpha(y,y)\alpha(X,u)-\alpha(y,u)\alpha(X,y))}{\alpha(y,y)^{3/2}}\] \[-(\phi(r)\phi^{\prime\prime}(r)+\phi^{\prime}(r)^{2})\tfrac{ \alpha(X,y)}{\alpha(y,y)^{1/2}}\cdot\] \[\tfrac{\alpha(y,[y,u]_{\mathfrak{m}})(\alpha(y,y)\alpha(X,v)- \alpha(y,v)\alpha(X,y))+\alpha(y,[v,y]_{\mathfrak{m}})(\alpha(y,y)\alpha(X,u)- \alpha(y,u)\alpha(X,y))}{\alpha(y,y)^{3/2}}\] \[= \tfrac{1}{\alpha(y,y)^{3/2}}\cdot\Phi(r)\cdot\Psi(u,v,y).\]
So \(F\) is cyclic, i.e., \(\langle[y,u]_{\mathfrak{m}},v\rangle_{y}+\langle[u,v]_{\mathfrak{m}},y\rangle_ {y}+\langle[v,y]_{\mathfrak{m}},u\rangle_{y}\equiv 0\), if and only if \(\Phi(r)\cdot\Psi(u,v,y)\equiv 0\), which proves the lemma.
Now we prove Theorem 1.4, i.e., \(F\) is cyclic if and only if \(\Psi(u,v,y)\equiv 0\).
**Proof of Theorem 1.4.** Assume \(\Psi(u,v,y)\equiv 0\), then Lemma 4.1 indicates \(F\) is cyclic. This proves one side of the theorem.
Assume \(F\) is cyclic. By Lemma 4.1, we have \(\Phi(r)\cdot\Psi(u,v,y)\equiv 0\). Notice that \(\Phi(r)\) cannot be constantly zero when \(|r|\leq\alpha(X,X)^{1/2}\), otherwise \(\Phi(r)=(\phi(r)(\phi(r)-r\phi^{\prime}(r)))^{\prime}\equiv 0\), i.e., \(\phi(r)^{2}-r\phi(r)\phi^{\prime}(r)\equiv c\) for some constant \(c\), which can be easily solved and provides the solutions \(\phi(r)\equiv\sqrt{c_{1}r^{2}+c_{2}}\) for some constants \(c_{1}\) and \(c_{2}\). Then the metric \(F=\alpha\phi(\frac{\beta}{\alpha})\) is Riemannian, which contradicts our assumption. To summarize, there exists a nonempty open subset \(\mathcal{U}\subset\mathfrak{m}\backslash\{0\}\), such that \(\Psi(u,v,y)=0\) when \(y\in\mathcal{U}\). Since \(\Psi(u,v,y)\) is a polynomial, it must vanish identically. This ends the proof for the other side of Theorem 1.4.
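For completeness, the ODE \(\phi(r)^{2}-r\phi(r)\phi^{\prime}(r)\equiv c\) mentioned above is solved by a routine computation (this verification is ours): writing \(u(r)=\phi(r)^{2}\), the equation becomes

\[u-\tfrac{r}{2}u^{\prime}=c\quad\Longrightarrow\quad\frac{u^{\prime}}{u-c}=\frac{2}{r}\quad\Longrightarrow\quad u-c=c_{1}r^{2},\]

so \(u=c_{1}r^{2}+c_{2}\) with \(c_{2}=c\), i.e., \(\phi(r)=\sqrt{c_{1}r^{2}+c_{2}}\).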
Now we use Theorem 1.4 to construct a left invariant cyclic Randers metric.
Let \(G\) be a solvable Lie group. Its Lie algebra \(\mathfrak{g}\) has a chain of ideals,
\[\mathfrak{g}=\mathfrak{g}_{0}\supset\mathfrak{g}_{1}\supset\cdots\supset \mathfrak{g}_{n}=0,\]
in which \(\dim\mathfrak{g}_{i}=n-i\), \(\forall 0\leq i\leq n\). We can find a basis \(\{e_{1},\cdots,e_{n}\}\) of \(\mathfrak{g}\), such that \(\mathfrak{g}_{n-i}=\mathrm{span}\{e_{1},\cdots,e_{i}\}\) for each \(i\), and then an inner product \(\alpha(\cdot,\cdot)\) on \(\mathfrak{g}\), such that \(\alpha(e_{i},e_{j})=\delta_{ij}\). The following proposition in [11] answers exactly when this \(\alpha\) induces a left invariant cyclic metric on \(G\).
**Proposition 4.2**.: \((G,\alpha)\) _is cyclic if and only if \(\mathrm{ad}(e_{i})\) is self-adjoint on \(\mathfrak{g}_{n-i}\) for each \(i\), or equivalently, the coefficients \(c_{ij}^{k}\) in \([e_{i},e_{j}]=c_{ij}^{k}e_{k}\) satisfy \(c_{ik}^{j}=c_{jk}^{i}\), \(\forall 1\leq i<j<k\leq n\)._
According to Proposition 4.2, \((G,\alpha)\) is cyclic when
\[[e_{i},e_{j}]=0,1\leq i<j\leq n-1,\quad[e_{n},e_{i}]=ae_{i},1\leq i\leq n-1,\]
in which \(a\) is a positive number. We choose \(X=ce_{n}\) for some \(c\in(-1,1)\backslash\{0\}\) and \(\beta=\alpha(X,\cdot)\). Then \(F=\alpha+\beta\) induces a left invariant Randers metric on \(G\). It is a non-Riemannian Douglas metric because \(\alpha(X,[\mathfrak{g},\mathfrak{g}])=0\)[2, 6]. Calculation shows that for any generic \(u,v,y\),
\[\frac{\alpha(y,y)\alpha(X,u)-\alpha(X,y)\alpha(u,y)}{\alpha([y,u],y)}=-\frac{ c}{a}=\frac{\alpha(y,y)\alpha(X,v)-\alpha(X,y)\alpha(v,y)}{\alpha([y,v],y)},\]
i.e., \(\alpha\) satisfies the equation in Theorem 1.4, so \(F\) is cyclic. Using the same \(\alpha\) and \(\beta\), left invariant cyclic \((\alpha,\beta)\) metrics can be similarly constructed.
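The displayed ratio can be verified directly (this check is ours): write \(y=y^{\prime}+y_{n}e_{n}\) and \(u=u^{\prime}+u_{n}e_{n}\) with \(y^{\prime},u^{\prime}\in\mathrm{span}\{e_{1},\cdots,e_{n-1}\}\). The bracket relations give \([y,u]=a(y_{n}u^{\prime}-u_{n}y^{\prime})\), hence

\[\alpha([y,u],y)=a\big(y_{n}\alpha(u,y)-u_{n}\alpha(y,y)\big),\qquad\alpha(y,y)\alpha(X,u)-\alpha(X,y)\alpha(u,y)=c\big(u_{n}\alpha(y,y)-y_{n}\alpha(u,y)\big),\]

whose quotient is \(-c/a\); the computation for \(v\) is identical.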
**Acknowledgement**. This work is supported by the National Natural Science Foundation of China (No. 12001007, No. 12131012, No. 11821101), Beijing Natural Science Foundation (No. 1222003), and Natural Science Foundation of Anhui Province (No. 1908085QA03).
|
2301.04792 | A Programming Model for GPU Load Balancing | We propose a GPU fine-grained load-balancing abstraction that decouples load
balancing from work processing and aims to support both static and dynamic
schedules with a programmable interface to implement new load-balancing
schedules. Prior to our work, the only way to unleash the GPU's potential on
irregular problems has been to workload-balance through application-specific,
tightly coupled load-balancing techniques. With our open-source framework for
load-balancing, we hope to improve programmers' productivity when developing
irregular-parallel algorithms on the GPU, and also improve the overall
performance characteristics for such applications by allowing a quick path to
experimentation with a variety of existing load-balancing techniques.
Consequently, we also hope that by separating the concerns of load-balancing
from work processing within our abstraction, managing and extending existing
code to future architectures becomes easier. | Muhammad Osama, Serban D. Porumbescu, John D. Owens | 2023-01-12T03:14:37Z | http://arxiv.org/abs/2301.04792v1 | # A Programming Model for GPU Load Balancing
###### Abstract.
We propose a GPU fine-grained load-balancing abstraction that decouples load balancing from work processing and aims to support both static and dynamic schedules with a programmable interface to implement new load-balancing schedules. Prior to our work, the only way to unleash the GPU's potential on irregular problems has been to workload-balance through application-specific, tightly coupled load-balancing techniques.
With our open-source framework for load-balancing, we hope to improve programmers' productivity when developing irregular-parallel algorithms on the GPU, and also improve the overall performance characteristics for such applications by allowing a quick path to experimentation with a variety of existing load-balancing techniques. Consequently, we also hope that by separating the concerns of load-balancing from work processing within our abstraction, managing and extending existing code to future architectures becomes easier.
load balancing, sparse computation, GPU, scheduling
With a simple, intuitive, powerful abstraction, these load-balancing schedules can be extended to support irregular workloads that are more general than the specific problem for which they were designed. We demonstrate this by using sparse-linear-algebra-based load balancing for data-centric graph traversal kernels.
Writing high-performance load-balancing code is complex, in large part because this code must perform many roles. Among other tasks, it must ingest data from a specific data structure, perform user-defined computation on that data, and schedule that computation in a load-balanced way. The key insight in our abstraction is to separate the concerns between workload mapping (the load-balance task) and work execution (the user-defined computation), where we _map_ sparse formats (such as Compressed Sparse Row (CSR)) to simple abstraction components called work **atoms**, **tiles**, and **sets**. These fundamental components are expressed as composable C++ ranges and range-based for loops, and are used to build load-balancing schedules. Programmers can then use these APIs to build load-balanced, high-performance applications and primitives. Expressed in this way, we can reconstruct existing application-dependent load-balancing techniques that address irregularity to be more _general, portable_, and _programmable_. The contributions of our work are as follows:
1. We present a novel abstraction for irregular-parallel workloads on GPUs. Our abstraction at a high level allows programmers to develop sparse, irregular-parallel algorithms with minimal code while delivering high performance.
2. We design and implement a set of intuitive APIs, available in our open-source GPU load-balancing framework, built on the proposed abstraction using CUDA-C++ ranges and range-based for loops.
3. We show the ease of implementing new load-balancing schedules by implementing a novel cooperative group-based load-balancing schedule, described in Section 5.2, which is a generalization of previous thread-, warp-, and block-level load-balancing schedules [(30)].
4. We provide state-of-the-art SpMV performance as a benchmark with a geomean speedup of 2.7\(\times\) for the SuiteSparse Matrix Collection [(11)] over cuSparse's state-of-the-art implementation using simple heuristics and 3 GPU load-balancing schedules.
## 2. Design Goals
Our programming model focuses on the broad category of fine-grained nested data parallelism. Load-balancing task-level parallelism requires a different approach and is beyond the scope of this work. This section highlights the design goals of our load-balancing abstraction:
_Achieve high performance._ First and foremost, the goal of our work is to achieve the high performance of existing load-balancing algorithms for irregular applications. Our abstraction cannot come at the cost of significant overhead or performance degradation. We measure our success in achieving high performance by comparing the performance of our abstraction against the performance of existing hardwired implementations.
_A composable and programmable interface._ Importantly, we do not want to restrict the user to a library interface that takes control of the larger system. Programmers strongly prefer to adopt new software components that fit into their control structures rather than require them to adopt a new control structure. We want to allow the users to (1) maintain control of GPU kernel boundaries (kernel launches), (2) be able to add new load-balancing algorithms, and (3) compose new load-balanced primitives from existing load-balancing APIs. We measure the programmability of our work by comparing the Lines of Code (LOC) of our abstraction against existing implementations and show composability by implementing a new load-balancing algorithm in terms of our existing APIs.
_Extensible to new applications._ We aim to decouple and extend application-specific load-balancing techniques to new irregular-parallel domains. Our abstraction seeks to promote the reuse of existing load-balancing techniques for new applications. We use SpMV as a benchmark application implemented using three different load-balancing techniques, some of which were previously used to implement parallel graph analytics kernels [(5; 6; 10; 29)].
_Facilitate the exploration of optimizations._ A key goal of our abstraction is to facilitate the exploration of optimizations for a given application by switching the underlying load-balancing algorithms used to balance the work. We want to encourage our users to experiment with heuristics and new load-balancing techniques to discover what works best for their application needs. We measure the success of this goal by optimizing SpMV's performance response for a large corpus of sparse matrices across several different load-balancing techniques.
**Non-Goals**
In addition to the above design goals, we also define our non-goals:
_Targeting other parallel architectures._ Although we believe the lessons learned should apply to other parallel architectures, we explicitly target NVIDIA's CUDA architecture and programming model [(23)]. Many components of our abstraction leverage CUDA's compute hierarchy of threads, warps and blocks mapped onto the physical streaming multiprocessors, the oversubscription model of assigning more work than the number of processors to fully saturate the
underlying hardware, and CUDA's Cooperative Groups programming model (Krishnan et al., 2017), described in Section 5.2, to achieve high performance.
_Multi-GPU support._ This work focuses on load-imbalance issues for a single GPU and does not consider multi-GPU single-node or multi-node systems, although these are interesting directions for future work.
## 3. Our Load-Balancing Abstraction
The key insight behind our GPU load balancing abstraction is the _separation of concerns_ between the mapping of the work items to processing units and work execution. We divide our abstraction into three key concepts (illustrated in Figure 1), each of which describes a different aspect of an implementation: (1) defining the work; (2) defining the workload balance across GPU threads, warps or blocks; and (3) defining the work execution and computation per thread on the balanced work. This separation allows us to cleanly divide the work between an application developer and a load-balanced-library developer and facilitates the exploration of optimizations by mixing different load-balancing techniques and sparse-irregular algorithms. Sidebar 1 presents a practical example of the motivation for our load balancing abstraction.
### Input from Sparse Data Structures
We begin with our input data expressed in some form of sparse data structure. Examples of such data structures include, but are not limited to, Compressed Sparse Row (CSR) and Coordinate (COO) formats. The goal of the first stage of our abstraction is to map the input data format to a common data framework and vocabulary that is the input to the next stage. This vocabulary has three simple components that together express the input data:
1. A **work atom**, a single unit of work that is to be scheduled onto the processors (for example, a non-zero element of a sparse matrix). We assume that all work atoms have an equal cost during execution.
2. A **work tile**, a logical entity represented as a set of work atoms (for example, a row of a sparse matrix). Work tiles may have different costs during execution. As we highlighted in the introduction, work is most _logically_ parallelized over work tiles but is often most _efficiently_ parallelized over work atoms, and mapping between work tiles and work atoms may be expensive and complex.
3. A **tile set**, a set of work tiles that together comprise the entire working problem (for example, a sparse matrix). In our abstraction, the tiles within a tile set must be independent (and thus can run in parallel across multiple processors).
This mapping between sparse formats and atoms/tiles/tile sets is defined by the user. Though we have not implemented all of them, we believe our mapping abstraction here is flexible enough to express a wide variety of existing sparse data formats in the literature (Krishnan et al., 2017) in such a way that they are suitable for load balancing in our abstraction's next stage. As well, we have already included several common sparse formats (CSR, CSC, COO) in our load-balancing library implementation so that users can simply select and use them without having to implement them. Given a mapping to atoms/tiles/tile sets, we can next implement a load-balancing algorithm that can parallelize over work atoms or tiles transparently from the computation's perspective.
### Defining Load Balancing
By expressing workloads through an abstraction that captures work at differing levels of granularity (i.e., tile set, atoms, and tiles), we can more easily distribute computation evenly across the GPU's available resources. Given a user-defined input tile set and associated sequences of atoms and tiles, along with a user-selected partitioning algorithm, our load-balancing stage outputs subsequences of atoms and tiles assigned to processor ids (i.e., where atoms or tiles will be processed).
The resulting assignment of subsequences to processor ids is critical to effectively balancing workloads across processing elements and is generally problem- and dataset-specific. The user must specify the necessary sequences. Ideally, an _oracle_ would take these sequences and select the most optimal subsequences for every processing element. Finding such an oracle is an open problem and thus we provide the next best thing: the ability for users to choose and experiment from a set of predefined schedules and the ability to implement their own schedules. In general, load-balancing algorithm designers must balance between the cost of scheduling and the benefits from better scheduling. A schedule could be as straightforward as assigning processing elements to tiles with arbitrary numbers of atoms (e.g., rows with an arbitrary number of non-zeros in a sparse matrix) to something more complicated/expensive that takes on a more holistic approach to work (e.g., considering work across multiple rows with a varying number of non-zeros in a sparse matrix).
### Defining Work Execution
The final component of our load-balancing abstraction expresses the irregular-parallel computation itself. The previous stage inputs load-imbalanced work and load-balances it; this stage then consumes that load-balanced work by performing computation on it. The scope of what computation can be expressed is extensive, and is only limited by how the load-balanced work, represented as sequences, can be consumed within a CUDA kernel. Since the framework does not assume control of the kernel, anything you can write in a CUDA kernel will also work in our framework. For instance, programmers can express a mathematical operation performed on each atom or each tile of the work, or build cooperative algorithms that not only consume the work assigned to each thread but also combine the results with neighboring threads to implement more complex algorithms such as parallel reduce or scan. Practical examples that we have implemented in our framework (see Section 4.3 and 5.3) using this abstraction include, but are not limited to, sparse-linear algebra kernels, such as Sparse-Matrix and Sparse-Tensor contractions, and data-centric parallel graph algorithms, such as Single-Source Shortest Path (SSSP) and Breadth-First Search (BFS) built on a neighborhood traversal kernel.
We expect typical users of our library will _only_ write their own code for _this_ stage of the abstraction and use standard data structures and load-balancing schedules that are already part of our library. However, those users can also implement custom data formats and load-balancing schedules.
## 4. High-Level Framework Implementation
Our GPU load-balancing framework implements the abstraction described in Section 3 using C++17 and CUDA. In our system, programmers use CUDA/C++ to develop irregular-parallel algorithms and implement new load-balancing schedules. Per our design goals of composable APIs, extensibility, and reuse, this and the following section introduce the implementation details of our API, and how it is used to develop new applications that promote the reuse of high-performance load-balancing techniques available within the framework. We also explore a new load-balancing method (Section 5.2) built on CUDA's Cooperative Groups model. Furthermore, we identify how our work can be used to facilitate the exploration of optimizations for a given application such as SpMV.
### Implementing Sparse Data Structures
Figure 1. Load balancing as a simple pipeline of the three key concepts of our abstraction: (1) sparse data structures represented as iterators, (2) load-balancing algorithm that partitions the work onto threads, and (3) user-defined computation consuming the balanced work and executing on each thread.

Our framework translates sparse data structures (e.g., COO, CSR, CSC) into work atoms, work tiles, and tile sets (Section 3.1) using simple C++ iterators. C++ iterators are objects that point to some element in a range of elements and enable iteration through the elements of that range using a set of operators. For example, a counting_iterator is an iterator that represents a pointer into a range of sequential values (Barton et al., 2017). Our framework requires the user to define three important iterators using C++: (1) an iterator over all work atoms; (2) an iterator over the work tiles; and (3) an iterator over the number of atoms in each work tile. (Our library already supports several common sparse data structures.) Using these iterators, the load-balancing schedule can then determine and distribute load-balanced work across the underlying hardware. Listing 1 shows how our abstraction expresses the commonly used CSR format as a tile set within our framework.
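Listing 1 itself did not survive this extraction; the following is a minimal sketch of what such a CSR-to-iterator mapping could look like, assuming Thrust's counting iterator and treating the row-offsets pointer itself as the atoms-per-tile iterator (the struct and function names here are ours, not the framework's):

```
#include <cstddef>
#include <tuple>
#include <thrust/iterator/counting_iterator.h>

// Hypothetical CSR container: `offsets` is the standard row-offsets array,
// so offsets[row + 1] - offsets[row] counts the atoms (nonzeros) in tile
// (row) `row`.
struct csr_t {
  std::size_t rows, cols, nnzs;
  int* offsets;   // row offsets, size rows + 1
  int* indices;   // column index per nonzero, size nnzs
  float* values;  // nonzero values, size nnzs
};

// The three iterators the framework asks for: (1) over all work atoms,
// (2) over all work tiles, and (3) over the number of atoms in each tile.
auto make_iterators(const csr_t& csr) {
  auto atoms_it = thrust::counting_iterator<int>(0);  // one atom per nonzero
  auto tiles_it = thrust::counting_iterator<int>(0);  // one tile per row
  auto atoms_per_tile_it = csr.offsets;  // a pointer is a random-access iterator
  return std::make_tuple(atoms_it, tiles_it, atoms_per_tile_it);
}
```

These three iterators, together with csr.nnzs and csr.rows, are exactly the constructor arguments of the thread-mapped schedule in Listing 2.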
### Implementing Load-Balancing Schedules
Perhaps the most straightforward schedule is scheduling each work tile onto one GPU thread. This approach is common in the literature and practice (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019); although this strategy is ineffective in the presence of significant load imbalance across tiles, we use it here as an example to illustrate how load balancing is defined within our framework.
The inputs are the three iterators from the last stage plus an atom and tile count. The load-balance algorithm developer, then, implements tiles() and atoms() procedure calls, which return the C++ range of tiles and atoms to be processed by the current thread, effectively creating a map between assigned processor ids and segments of the workload. Listing 2 shows a complete example of the thread-mapped schedule. Although a simple algorithm, it can deliver high performance for well-balanced workloads with coarse-grained parallelism (a small number of atoms per tile), such as multiplying a sparse vector by a dense vector. Furthermore, our abstraction is not limited to only simple scheduling algorithms, as Section 5.2 provides examples of more complex load-balancing algorithms.
```
class thread_mapped_schedule_t {
 public:
  // Construct a thread-mapped schedule.
  __host__ __device__
  thread_mapped_schedule_t(atoms_it_t atoms_it,
                           tiles_it_t tiles_it,
                           atoms_it_t atoms_per_tile_it,
                           std::size_t num_atoms,
                           std::size_t num_tiles)
      : m_atoms_it(atoms_it),
        m_tiles_it(tiles_it),
        m_atoms_per_tile_it(atoms_per_tile_it),
        m_num_atoms(num_atoms),
        m_num_tiles(num_tiles) {}

  // Range of tiles to process in "this" thread.
  // Stride by grid dimension.
  __host__ __device__ auto tiles() {
    auto begin = m_tiles_it[blockDim.x * blockIdx.x + threadIdx.x];
    auto end = m_tiles_it[m_num_tiles];
    return range(begin, end).step(gridDim.x * blockDim.x);
  }

  // Range of atoms to process in "this" thread.
  __host__ __device__ auto atoms(const std::size_t& tile) {
    auto begin = m_atoms_per_tile_it[tile];
    auto end = m_atoms_per_tile_it[tile + 1];
    return range(begin, end).step(1);
  }

 private:
  atoms_it_t m_atoms_it, m_atoms_per_tile_it;
  tiles_it_t m_tiles_it;
  std::size_t m_num_atoms, m_num_tiles;
};
using schedule_t = thread_mapped_schedule_t;
```
Listing 2. A thread-mapped load-balancing algorithm expressed as C++ ranges, incorporating the atoms and tiles defined as iterators from Listing 1. Each tile is mapped to a thread, where the thread id corresponds to the index of the tile in the tile set. All atoms within a tile are sequentially processed by the thread. After a tile is processed, a thread is mapped to the next tile, obtained by striding the index by the grid size of the kernel.
### Implementing Work Execution
Our framework is designed to explicitly let the user _own_ the kernel launch boundary. Owning a CUDA kernel boundary means that the user is responsible for maintaining and configuring launch parameters and implementing the CUDA kernel used to define the application. Although this design decision comes at a cost of convenience and simplicity, it offers significant flexibility in what users can express through our abstraction. This design decision is motivated by the following reasons. (1) Users are not required to add a complex dependency to their existing workflow/libraries, therefore making code maintenance simpler and more scalable as they do not have to rely on our framework to incorporate new CUDA constructs and features. (2) Users are free to express anything and everything CUDA allows within their kernels while consuming our load-balanced C++ ranges. This allows
for versatility in what can be expressed, as the users can now specify multiple load-balanced work domains and range-based for loops, and even fuse multiple computations to build more complex algorithms within a single kernel. (3) Higher-level APIs can be used to build simpler higher-level abstractions that _do_ own the kernel boundary and provide simpler APIs at the cost of flexibility.
As an input to this stage, users consume the load-balanced C++ ranges to implement their computation. This can be done in multiple ways, but one of the most common patterns is a nested range-based for loop that loops over all the assigned tiles and atoms ranges. Listing 3 shows a simple example of a CUDA kernel that implements the SpMV algorithm using the CSR format and the thread-mapped load-balancing algorithm described in Listings 1 and 2. In this example, the outer for loop within each thread iterates over the assigned rows of the sparse matrix (tiles), and the inner loop sequentially processes the assigned nonzeros (atoms) within each row. In Section 5.3 we implement and discuss more complex kernels and computations.
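Listing 3 also did not survive this extraction; the following sketch illustrates the described pattern, assuming a schedule type with the tiles()/atoms() interface of Listing 2 (kernel and parameter names are ours):

```
// Each thread iterates over its assigned rows (tiles); within each row it
// sequentially accumulates over its assigned nonzeros (atoms).
template <typename schedule_t>
__global__ void spmv_kernel(schedule_t schedule,
                            const int* column_indices,  // CSR column indices
                            const float* values,        // CSR nonzero values
                            const float* x,             // dense input vector
                            float* y) {                 // dense output vector
  for (auto row : schedule.tiles()) {
    float sum = 0.0f;
    for (auto nz : schedule.atoms(row))
      sum += values[nz] * x[column_indices[nz]];
    y[row] = sum;
  }
}
```

Note that nothing in this kernel mentions how rows were assigned to threads; swapping in a different schedule changes the load balancing without touching the computation.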
## 5. Implementation Details
### Flexible, Composable CUDA-enabled Ranges
The composability of load-balanced primitives and applications using our API is a conscious design choice within our framework supported through the use of CUDA-enabled C++ ranges. Our framework does not _own_ the kernel boundary (kernel launch), which forces our APIs to be focused and contained within the kernels. This allows programmers to build and maintain their own kernels while still benefiting from our framework's load-balancing capabilities. This is largely implemented using device-wide C++ functions and classes tagged with CUDA's `__device__` keyword.2 We implemented and expose several different types of specialized ranges that were particularly useful in implementing load-balanced schedules:

Footnote 2: A method decorated with the `__device__` keyword allows the CUDA compiler to generate a device-callable entry point. This allows the code to be called from within kernels (Leslie et al., 2017).
* step_range: A range that iterates from begin to end in steps of step. Useful for defining load balancing schedules that require a custom stepping range or process a constant number of work items per thread (which can be defined using step).
* infinite_range: A range that iterates from begin to infinity. Useful for defining load balancing schedules in persistent kernel mode (Leslie et al., 2017), where the kernel persistently runs until all work is consumed or an algorithm has converged.
* grid_stride_range: A specialized case of step range that iterates from begin to end in steps of the CUDA kernel's grid size. Also supports block and warp stride variants that iterate in steps of the block or warp size, respectively; a short usage sketch follows this list.
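As a small illustration, here is how the last of these ranges might be consumed inside a kernel. We assume a grid_stride_range(begin, end) helper that starts each thread at its own global index and strides by the total number of launched threads; the saxpy kernel is our example, not taken from the framework:

```
// Each thread visits elements global_id, global_id + grid_size, ... < n.
__global__ void saxpy(int n, float a, const float* x, float* y) {
  for (auto i : grid_stride_range(0, n))  // assumed to offset by the thread's global id
    y[i] = a * x[i] + y[i];
}
```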
### Implementing Non-Trivial Load-Balancing
As we describe in Section 5.1, we can decouple and express existing load-balancing techniques as a set of C++ ranges. To illustrate the potential of this abstraction, we begin by decoupling and expressing a state-of-the-art load-balancing algorithm known as merge-path (Leslie et al., 2017) previously used for balancing CSR-based SpMV and SpMM (Leslie et al., 2017; Leslie et al., 2017), and implement three additional load balancing algorithms (warp-, block- and group-mapped), all of which are available in our library for programmers to use. Our new group-mapped algorithm is a tile-per-group-based schedule, where a group is defined
as a collection of threads of any arbitrary size (not limited to a warp or block size). Our group-mapped schedule is a generalization of the tile-per-thread, -warp or -block schedules (Cuba and Krakela, 2017; Krakela and Krakela, 2018) using CUDA's Cooperative Groups programming model (Krakela and Krakela, 2018).
#### 5.2.1. Merge-path load balancing
In the language of a sparse matrix, merge-path assumes that each non-zero in the matrix and each new row in the matrix are an equivalent amount of work, then evenly divides nnzs+rows work across the set of worker threads. Each thread then performs a 2-D binary search within the nonzero indices and row offsets of a CSR matrix to find the starting position of the row and nonzero it needs to process. Threads then sequentially process the rows and nonzeros from the starting position until they reach the end of their assigned work (Krakela and Krakela, 2018).
We implement this algorithm as a load-balancing schedule in our abstraction by expressing it in two steps: (1) **Setup**: The initialization step of the C++ schedule class computes the number of work units per thread, conducts a binary search as described above, and stores the starting position of each tile and atom in a thread-local variable. (2) **Ranges**: The second step of the algorithm builds the ranges for each thread to process as "complete" tiles and "partial" tiles (Krakela and Krakela, 2018). If a thread's atom range lies entirely within one tile, it is "complete", and is processed in a simple nested loop. If a thread's range crosses a tile boundary, the thread processes its work in a separate nested loop.
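For concreteness, the 2-D binary search in the setup step can be sketched as follows over a CSR matrix's row-end offsets, in the spirit of Merrill and Garland's formulation; the function and variable names are ours:

```
// For a given diagonal of the conceptual (rows x nonzeros) merge grid, find
// the split point (tile, atom) = (x, diagonal - x) where this thread starts.
__device__ int2 merge_path_search(int diagonal, const int* row_end_offsets,
                                  int num_rows, int num_nonzeros) {
  int x_min = max(diagonal - num_nonzeros, 0);
  int x_max = min(diagonal, num_rows);
  while (x_min < x_max) {
    int pivot = (x_min + x_max) >> 1;
    if (row_end_offsets[pivot] <= diagonal - pivot - 1)
      x_min = pivot + 1;  // split lies past the pivot row
    else
      x_max = pivot;      // split lies at or before the pivot row
  }
  return make_int2(min(x_min, num_rows), diagonal - x_min);
}
```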
Because we decouple the load-balancing method (Section 4.2, and above) from work execution (Section 4.3), we can use this merge-path implementation to implement not only SpMV but also any other algorithm whose work can be divided into tiles and atoms, e.g., a graph neighborhood-traversal algorithm used to implement breadth-first search (Krakela and Krakela, 2018). Just as importantly, the merge-path schedule is now no longer limited to a CSR-based sparse format. Supporting other formats only requires building the necessary slightly more complex iterators that are able to count atoms per tile (the computation that the CSR implementation achieves with the row offsets array in Listing 1).
#### 5.2.2. Warp- and block-level load balancing
The goal of a warp- or block-level load-balancing schedule is to assign an equal share of tiles to each warp or block, which are then sequentially processed. The work atoms within each tile will be processed in parallel by the available threads within a warp or a block. Each thread strides by the size of the warp or block to process a new work atom until the end of work is reached.
The imbalance across different processing units is left for the hardware scheduler to handle. This scheduler depends on the oversubscription model of CUDA, where the programmer can launch a larger number of warps or blocks than the GPU can physically schedule at any given time. As the processing units finish processing their work, new ones are scheduled from the oversubscribed pool (Cuba and Krakela, 2017; Krakela and Krakela, 2018).
#### 5.2.3. Group-level load balancing
Group-level load balancing generalizes warp- and block-level schedules. Instead of requiring that group sizes are the size of a warp or block, as above, this method leverages CUDA's Cooperative Groups (CG) programming model (Krakela and Krakela, 2018) to allow programmer-specified dynamically sized groups of arbitrary size. Within these groups, the CG model permits detailed control of the group's synchronization behaviors as well as simple parallel group-level collectives such as reduce or scan. We leverage this powerful tool to implement a generalized group-level load balancing schedule, effectively giving us the warp- and block-level schedules above for free when the group size equals that of a warp or a block.
Our schedule assigns work tiles to a group, and each group looks at its equal share of tiles and computes the number of atoms for each tile and stores it in a scratchpad memory (CUDA's _shared memory_). The group then performs a parallel prefix-sum, a widely used parallel algorithm that inputs an array and produces a new array where the element at any position is a sum of all previous elements (Cuba and Krakela, 2017). We use this prefix-sum array for two purposes: (1) the last element of a prefix-sum array indicates the aggregated number of work atoms that a group has to process, and (2) the position of each sum in the prefix-sum array corresponds to the work tile to which those atoms belong. The setup phase of the schedule builds the prefix-sum arrays per group in the scratchpad memory, and the ranged-loop of the schedule returns the atom to process in each thread. The corresponding tile, if needed, is obtained by a simple get_tile(atom_id) operation, which executes a binary search within the prefix-sum array to find the tile corresponding to the atom being processed.
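The get_tile(atom_id) lookup described above can be sketched as an upper-bound binary search over the group's shared prefix-sum array; the names and the inclusive-scan convention here are our assumptions:

```
// tile_offsets is the inclusive prefix-sum of atoms per tile for this group,
// so tile i owns atoms [tile_offsets[i - 1], tile_offsets[i]).
__device__ int get_tile(const int* tile_offsets, int num_tiles, int atom_id) {
  int lo = 0, hi = num_tiles;
  while (lo < hi) {
    int mid = (lo + hi) >> 1;
    if (tile_offsets[mid] <= atom_id)
      lo = mid + 1;  // atom belongs to a later tile
    else
      hi = mid;
  }
  return lo;  // first tile whose inclusive offset exceeds atom_id
}
```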
Relying on the CG model for this load-balancing schedule has a unique advantage of configuring the group size (effectively software constructs that directly map onto the hardware) per the shape of the problem and the underlying hardware architecture. For example, targeting GPUs where the warp size is not 32 threads (AMD's GPU architecture supports a warp size of 64 (Krakela and Krakela, 2018)) is now possible with a simple compile-time constant, or configuring the group size to perfectly align with the structure of the problem.
### Application Space
Our work definition (Section 3.1), composable APIs (Section 5.1), and multiple sophisticated, high-performance load-balancing schedules (Section 5.2) together provide for a versatile and extensible framework with plenty of room for application-specific optimizations. In Listing 3 we already demonstrated how to implement the SpMV algorithm using
our framework. A simple and natural extension is to implement Sparse-Matrix Matrix Multiplication (SpMM). Listing 4 shows the minor change necessary, which adds another loop over the columns of the **B** matrix around the existing code from Listing 3 to implement SpMM. This implementation could also be extended to support Gustavson's General Sparse Matrix-Matrix Multiplication (SpGEMM), using two kernels and an allocation stage; the first kernel would compute the size of the output rows used to allocate the memory for the output sparse matrix and the second kernel would perform the multiply-accumulation.
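Listing 4 is likewise missing from this extraction; under the same assumptions as the SpMV sketch earlier (row-major dense matrices, names ours), the described change amounts to one extra loop:

```
// Identical to the SpMV kernel except for the added loop over B's columns.
template <typename schedule_t>
__global__ void spmm_kernel(schedule_t schedule, const int* column_indices,
                            const float* values, const float* B, float* C,
                            int num_cols_b) {
  for (auto row : schedule.tiles()) {
    for (int col = 0; col < num_cols_b; ++col) {  // new loop over B's columns
      float sum = 0.0f;
      for (auto nz : schedule.atoms(row))
        sum += values[nz] * B[column_indices[nz] * num_cols_b + col];
      C[row * num_cols_b + col] = sum;
    }
  }
}
```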
Beyond sparse linear algebra, we can use our framework to address applications in other domains. Listing 5 implements the graph primitive Single-Source Shortest Path (SSSP) using our group-level load-balancing schedule. SSSP's performance on GPUs is largely gated by good load balancing (Beng et al., 2019; Chen et al., 2020), but if the programmer chooses a load-balancing schedule from our library, the details of load balancing are completely hidden. Moreover, the same schedules that were used in one application domain (e.g., sparse linear algebra) are easily reusable in this different application domain.
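Listing 5 is also not reproduced here; a compact sketch of the relaxation step it describes might look like the following, where frontier vertices play the role of tiles and their outgoing edges the role of atoms. Distances are kept as unsigned integers so that CUDA's atomicMin applies; all names are ours, and the paper's full kernel additionally manages frontiers across iterations:

```
template <typename schedule_t>
__global__ void sssp_relax(schedule_t schedule,
                           const int* neighbors,     // CSR column indices
                           const unsigned* weights,  // edge weights
                           unsigned* distances,      // tentative distances
                           int* in_next_frontier) {  // output frontier flags
  for (auto v : schedule.tiles()) {     // a frontier vertex per tile
    for (auto e : schedule.atoms(v)) {  // its outgoing edges as atoms
      int u = neighbors[e];
      unsigned candidate = distances[v] + weights[e];
      // atomicMin returns the old value; improving it marks u for the
      // next frontier.
      if (atomicMin(&distances[u], candidate) > candidate)
        in_next_frontier[u] = 1;
    }
  }
}
```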
## 6. Evaluation
We aim to show that our framework, built on our load balancing abstraction, enables both high performance and better programmability for sparse-irregular problems. Our evaluation below uses our SpMV implementation as a benchmark against state-of-the-art implementations provided within NVIDIA's (open-source) CUB library and production (closed-source) cuSparse library. We considered (and implemented) several additional applications for evaluation, including SSSP, BFS, and SpMM. We found they led to similar high-level conclusions. Thus our evaluation here focuses on SpMV. Our test corpus consists of nearly the entire SuiteSparse Matrix Collection (Mohan et al., 2017), with a broad scope of sparse matrices from many different high-performance computing domains. We ran all experiments on an Ubuntu 20.04 LTS-based workstation with an NVIDIA Tesla V100 GPU and CUDA 11.7.
### Performance Overhead
Figure 2. SpMV runtime comparison: our merge-path SpMV implementation vs. CUB across all SuiteSparse datasets. Our runtimes almost perfectly match CUB's for all datasets. The small number of datasets where CUB is faster is due to a simple heuristic that CUB uses for single-column sparse matrices (i.e., a sparse vector).

Our first and foremost goal is to ensure that the elements within our abstraction do not add any additional performance overhead to the existing load balancing techniques and algorithms developed using them. To verify this, we compare the runtime performance of our SpMV implementation using the merge-path schedule to the implementation provided by NVIDIA's CUB library (NVIDIA et al., 2019) (also used for Merrill and Garland's merge-path SpMV paper (Merrill and Garland, 2019)) on the SuiteSparse collection. As previously mentioned, and in contrast to our design, CUB contains a hardwired implementation of the merge-path scheduling algorithm and does not decouple workload balancing from the actual SpMV computation. CUB's approach is not reusable for any other irregular parallel problem without significant changes to the implementation.
Figure 2 plots the number of nonzeros (i.e., the total work) vs. runtime for our work vs. CUB's implementation. Our implementation has minimal performance overhead when using our abstraction: a geomean slowdown of 2.5% vs. CUB, with 92% of datasets achieving at least 90% of CUB's performance. Figure 2 shows our implementation almost perfectly matches CUB for all datasets, except for some datasets with fewer than 100,000 nonzeros. Upon further investigation, we identify that CUB uses a simple heuristic to launch a thread-mapped SpMV kernel where the number of columns of a given input matrix equals 1 (i.e., a sparse vector). Unlike our more general implementation, CUB's simple (but specialized) thread-mapped SpMV kernel has no load-balancing overhead for a perfectly balanced workload such as SpVV computation.
### Improved Performance Response
We also compare our work to NVIDIA's vendor library for sparse computations, cuSparse. Figure 3 shows the performance response of our SpMV implementation using each of our scheduling algorithms individually vs. cuSparse's state-of-the-art implementation. Switching between any of our implementations requires very little code change; in the case of merge-path and thread-mapped, we need only update a single C++ enum (identifier) to select the desired load-balancing schedule.
We then combine our scheduling algorithms into one implementation for SpMV (Figure 4), demonstrating noticeable performance improvements over cuSparse. This is primarily possible due to our ability to quickly experiment with different heuristic schemes with a variety of available load-balancing schedules. Here, we use merge-path unless either the number of rows or columns is less than the threshold \(\alpha\) and the nonzeros of a given matrix are less than threshold \(\beta\) (we choose \(\alpha=500\) and \(\beta=10000\) for SuiteSparse). In this case, we use thread-mapped or group-mapped load balancing instead of merge-path. Our system shows a peak performance speedup of 39\(\times\) and a geomean performance speedup of 2.7\(\times\) vs. cuSparse.
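A sketch of this dispatch heuristic (the enum and function are our illustration; \(\alpha\) and \(\beta\) are the thresholds from the text):

```
#include <cstddef>

enum class lb_t { thread_mapped, group_mapped, merge_path };

// Small or short problems skip merge-path's scheduling overhead.
lb_t choose_schedule(std::size_t rows, std::size_t cols, std::size_t nnzs,
                     std::size_t alpha = 500, std::size_t beta = 10000) {
  if ((rows < alpha || cols < alpha) && nnzs < beta)
    return lb_t::thread_mapped;  // or group_mapped, depending on the dataset
  return lb_t::merge_path;
}
```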
Our framework not only allows programmers to express computations efficiently and simply (i.e., without worrying about the load-balancing algorithms), but also quickly optimize a given application using a range of scheduling algorithms, both with minor code changes.
### Lines of Code (LOC)
We are able to achieve these performance gains with minimal code complexity. Table 1 shows lines of code (LOC) for our framework when compared to the state-of-the-art open-source implementation of merge-path and thread-mapped within NVIDIA's CUB library. We deliver the same performance results as highlighted in the previous sections with 14\(\times\) fewer lines of code for the merge-path schedule and a comparable line count (21 vs. 22) for the thread-mapped schedule. Using our merge-path implementation only requires \(\sim\)15 additional LoC over the trivial thread-mapped schedule.
Furthermore, we extend the same SpMV computation to our novel _group-mapped_ load balancing schedule (that can also be specialized to perform _block-_ and _warp-mapped_ load balancing) within the same 30 LoC.
## 7. Related Work
Load balancing is the key to achieving high performance on GPUs for sparse, irregular parallel problems. Several high-performance computing applications deploy sophisticated load balancing algorithms on the GPUs. For instance, high-performance sparse-matrix vector multiplication (SpMV) leverages merge-path (Merrill and Garland, 2019) (discussed in detail in this paper) or a nonzero splitting algorithm, which partitions the number of non-zeros in a sparse-matrix evenly across the number of threads (Becker et al., 2017; Dosov et al., 2018; Dosov et al., 2019). Sparse-matrix matrix multiplication (SpMM) and sparse matricized tensor times Khatri-Rao product (SpMTTKRP) use binning and bundling algorithms (Khatri and Raghavan, 2017; Merrill and Garland, 2019; Merrill and Garland, 2019), which attempt to bin like-length work together such that they are processed together.
While some applications actively perform work to load-balance a given input, others store the input in more efficient, already-load-balanced/-partitioned formats. These include the F-COO format (a variant of coordinate format) used for SpMTTKRP and Sparse-Tensor Tensor Multiplication (SpTTM), where each thread gets the same number of nonzeros to process [19].

\begin{table}
\begin{tabular}{l c c} \hline \hline Load Balancing Algorithm & NVIDIA/CUB & Our Work \\ \hline Merge-Path & 503 & 36 \\ Thread-Mapped & 22 & 21 \\ Group-Mapped & N/A & 30 \\ Warp-Mapped & N/A & 30 (free) \\ Block-Mapped & N/A & 30 (free) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Lines of code (LOC) comparison for NVIDIA's CUB library versus our work for the SpMV application implemented using merge-path, thread-mapped, and group-mapped (warp- and block-mapped use the _exact_ same code as group-mapped) load balancing algorithms. We report only non-commented lines of code, formatted using the clang-format tool with the Chromium style guide (Khatri and Raghavan, 2017), that contribute to the kernel implementation.
Figure 3: Complete performance landscape of SpMV across all SuiteSparse datasets using 3 load balancing schedules vs. NVIDIA's cuSparse library. This performance comparison highlights the impact of different approaches to load-balancing SpMV for a given dataset and number of nonzero entries within each dataset. Later in Figure 4 we use this insight to select the fastest schedule for an improved overall performance. Additionally, our 3 different SpMV implementations are made possible with very little code change.

Many of the above GPU load-balancing algorithms, along with other novel techniques, were first described in the graph analytics domain. Davidson et al. and Merrill and Garland were the first to present Warp-, Block-level and Thread-Warp-CTA dynamic load balancing techniques for Single-Source Shortest Path (SSSP) and Breadth-First Search (BFS), respectively [10, 21]. Logarithmic Radix Binning (LRB) is a particularly effective technique for binning work based on a logarithmic work estimate, used for the Triangle Counting graph algorithm and more [13, 16]. Gunrock, GraphIT, and GraphBLAST are graph analytics libraries that implement several different graph algorithms such as BFS, SSSP, PageRank, Graph Coloring, and more, built on these previously mentioned load-balancing techniques [5, 29, 31]. Although many of these are effective load balancing techniques with high-performance implementations, they all tightly couple workload scheduling with the application itself. Our framework is designed to separate these two concerns, allowing the application to be independent of the load-balancing algorithm, and therefore be expressed simply. Our approach also allows these previously proposed techniques to be implemented within our framework, and be used for applications beyond those originally targeted.
Relatively few GPU works target generalized load balancing for irregular workloads. Most of these are focused on providing a singular, dynamic load-balancing solution centered on task parallelism, often using a GPU queue-based data structure. Cederman and Tsigas proposed a task-based approach to load balancing an octree partitioning workload using lock-free and lock-based methods [7]. Two Tzeng works provide task-management frameworks that implement load balancing of tasks using a single monolithic task queue and distributed queues with task stealing and donation [27, 28]. CUIRRE, a framework for load balancing and characterizing irregular applications on GPUs, also uses a task-pool approach [33], and more recently, Atos, a task-parallel GPU dynamic scheduling framework, targets asynchronous algorithms [8]. All of these works deploy either a centralized or a distributed queue-like data structure on the GPUs, each making design decisions on how the queue is to be partitioned and updated. Except for the most recent Atos work, most earlier works focus on a coarse-grained parallelism approach of effectively distributing tasks to the GPU. Our work takes advantage of more modern GPU architectures, which are more effectively utilized by a fine-grained parallelism approach (parallelizing over work atoms instead of work tiles). Unlike our abstraction, these aforementioned works also rely on a singular load-balancing solution, whereas our abstraction flexibly adapts to many different load-balancing techniques, static and dynamic, and allows for new schedules to be implemented within our framework.
## 8. Conclusion
In this paper, we present a programming model for GPU load balancing for sparse irregular parallel problems. Our model is built on the idea of separation of concerns between workload mapping and work execution. In the future, we are interested in expanding our model to a multi-GPU environment and implementing load-balancing schedules that span the GPU boundary, covering multiple devices and nodes for massively parallel problems. Our current work focuses solely on load balancing, but we also identify locality as another key factor for high performance. We are interested in identifying an orthogonal model that builds an abstraction for caching and locality into our existing load-balancing framework.
## Acknowledgments
This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-18-3-0007 and the National Science Foundation under Contract No. OAC-1740333. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Government. Distribution Statement "A" (Approved for Public Release, Distribution Unlimited). We would like to acknowledge Michael Garland and Duane Merrill from NVIDIA for their guidance on the framework. We would also like to acknowledge Toluwanimi Odemuyiwa, Jonathan Wapman, Matthew Drescher and Muhammad Awad for research discussions and feedback on the work. We also acknowledge the support of AMD, Inc. (Jalal Mahmud and AMD Research) in the form of travel funding, which enables us to attend the conference to present this work.
|
2307.03373 | All in One: Exploring Unified Vision-Language Tracking with Multi-Modal
Alignment | Current mainstream vision-language (VL) tracking framework consists of three
parts, i.e., a visual feature extractor, a language feature extractor, and a
fusion model. To pursue better performance, a natural modus operandi for VL
tracking is employing customized and heavier unimodal encoders, and multi-modal
fusion models. Albeit effective, existing VL trackers separate feature
extraction and feature integration, resulting in extracted features that lack
semantic guidance and have limited target-aware capability in complex
scenarios, e.g., similar distractors and extreme illumination. In this work,
inspired by the recent success of exploring foundation models with unified
architecture for both natural language and computer vision tasks, we propose an
All-in-One framework, which learns joint feature extraction and interaction by
adopting a unified transformer backbone. Specifically, we mix raw vision and
language signals to generate language-injected vision tokens, which we then
concatenate before feeding into the unified backbone architecture. This
approach achieves feature integration in a unified backbone, removing the need
for carefully-designed fusion modules and resulting in a more effective and
efficient VL tracking framework. To further improve the learning efficiency, we
introduce a multi-modal alignment module based on cross-modal and intra-modal
contrastive objectives, providing more reasonable representations for the
unified All-in-One transformer backbone. Extensive experiments on five
benchmarks, i.e., OTB99-L, TNL2K, LaSOT, LaSOT$_{\rm Ext}$ and WebUAV-3M,
demonstrate the superiority of the proposed tracker against existing
state-of-the-arts on VL tracking. Codes will be made publicly available. | Chunhui Zhang, Xin Sun, Li Liu, Yiqian Yang, Qiong Liu, Xi Zhou, Yanfeng Wang | 2023-07-07T03:51:21Z | http://arxiv.org/abs/2307.03373v1 | # All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment
###### Abstract
The current mainstream vision-language (VL) tracking framework consists of three parts, _i.e._, a visual feature extractor, a language feature extractor, and a fusion model. To pursue better performance, a natural modus operandi for VL tracking is employing customized and heavier unimodal encoders, and multi-modal fusion models. Albeit effective, existing VL trackers separate feature extraction and feature integration, resulting in extracted features that lack semantic guidance and have limited target-aware capability in complex scenarios, _e.g._, similar distractors and extreme illumination. In this work, inspired by the recent success of exploring foundation models with unified architecture for both natural language and computer vision tasks, we propose an All-in-One framework, which learns joint feature extraction and interaction by adopting a unified transformer backbone. Specifically, we mix raw vision and language signals to generate language-injected vision tokens, which we then concatenate before feeding into the unified backbone architecture. This approach achieves feature integration in a unified backbone, removing the need for carefully-designed fusion modules and resulting in a more effective and efficient VL tracking framework. To further improve the learning efficiency, we introduce a multi-modal alignment module based on cross-modal and intra-modal contrastive objectives, providing more reasonable representations for the unified All-in-One transformer backbone. Extensive experiments on five benchmarks, _i.e._, OTB99-L, TNL2K, LaSOT, LaSOT\({}_{\mathrm{Ext}}\) and WebUAV-3M, demonstrate the superiority of the proposed tracker against existing state-of-the-art methods on VL tracking. Codes will be made publicly available.
Unified vision-language tracking, Multi-modal alignment, Transformer, Foundation model.
## I Introduction
Vision-language (VL) tracking [1, 2, 3, 4, 5], one of the fundamental and challenging problems at the intersection of computer vision and natural language understanding, aims to locate the object in each frame of a video based on a given natural language prompt and an initial object box. It plays a crucial role in human-machine interaction, transportation surveillance, virtual reality, autonomous driving, delivery, _etc_. Compared with traditional visual object tracking [6, 7] using a bounding box to describe the object of interest, VL tracking has the potential to achieve more robust tracking by leveraging the complementary superiority of multiple modalities.
In the past few years, two-stream VL trackers [2, 4, 5, 8], which extract visual features and language features separately and then perform feature interaction in a fusion model (as shown in Fig. 1(a)), have emerged as the dominant framework and achieved significant progress. For instance, Feng _et al._[4] proposed a Siamese natural language region proposal network for multi-stage feature extraction, and then applied an aggregation module to dynamically combine predictions from both visual and language modalities. Guo _et al._[5] suggested an asymmetrical modeling architecture to learn adaptive VL representations. Following the two-stream pipeline, the latest transformer-based VL tracker JointNLT [9] formulates grounding and tracking as a unified task of establishing relations between visual-language references and test images via a multi-source relation modeling architecture.
Despite the convincing designs of existing two-stream VL trackers, they still suffer from the fundamental challenge of learning target-aware capability in some complex and corner scenarios, _e.g._, similar distractors, occlusion, and extreme illumination [1, 5]. Firstly, the separation of feature extraction and integration prevents the model from performing early multi-modal feature interaction, resulting in limited object-background discriminative power [10, 11]. Although some works have attempted to design complicated [8] or multi-stage [4, 5] fusion models to enhance the associations between modalities, the lack of mutual interaction remains an insurmountable gap. More seriously, heavy fusion models increase the number of parameters, leading to significant computational inefficiency. Secondly, performing feature interaction directly ignores the huge distribution discrepancies between the vision and language modalities in the feature
space [12], leading to significant learning inefficiency in VL representation learning.

Fig. 1: Existing VL tracking framework _vs._ our All-in-One. (a) Existing VL tracking methods obtain multiple modality features from separate extractors before fusion. The feature interaction relies on a carefully-designed fusion model. (b) We aim to build a foundation model, _i.e._, All-in-One, for VL tracking, which achieves joint feature extraction and multi-modal interaction using a versatile transformer encoder.
To tackle the above issues, we propose a unified framework (as shown in Fig. 1(b)), namely All-in-One, for VL tracking. The core idea is to establish bidirectional information flow between well-aligned visual and language signals as early as possible via a unified transformer backbone. Our All-in-One framework brings multiple new benefits for multi-modal VL tracking. (1) The unified architecture not only simplifies the model, but also leads to more efficient VL representation learning. (2) It has great potential to serve as a foundation model for VL tracking. With this framework, we develop a general VL tracking model, which generalizes well to complex, user-defined language descriptions/prompts on various VL tracking datasets. (3) Compared with two-stream vision-language foundation models (_e.g._, CLIP [13]), our All-in-One framework follows the simple and general one-stream route [10, 11, 14].
Specifically, we introduce a versatile All-in-One transformer, as shown in Fig. 2, to embed raw visual and language signals into joint VL representations, and the produced visual search region features can be directly used for object localization without an additional fusion model. The visual inputs (_i.e._, search region and template) and the language input are first mapped by a patch embed and a text tokenizer, respectively, and then flattened into embeddings of the same dimension. A modal mixup operation is used to inject language information into the visual embeddings (_i.e._, template embeddings and search region embeddings), followed by a stack of transformer encoder layers enabling iterative feature integration between template and search region embeddings under language information guidance. Thus, both template and search region embeddings can be enhanced dynamically with strong target-aware capability. In addition, we introduce a multi-modal alignment (MMA) module based on contrastive learning (CL) [15] to alleviate the huge distribution discrepancies between multiple modalities. The MMA module includes cross-modal alignment (CMA) and intra-modal alignment (IMA), forcing the visual and language signals from the same video to be close in the feature space, while making the distribution of multi-modal features more uniform and reasonable in the entire feature space, which can promote feature integration. In conclusion, our main contributions can be summarized as follows:
* We propose a simple, compact and effective one-stream framework for VL tracking, namely All-in-One, which learns VL representations from raw visual and language signals end-to-end in a unified transformer backbone.
* We develop a novel multi-modal alignment module incorporating cross-modal and intra-modal alignments to enable efficient multi-modal learning by aligning multiple signals in the feature space before learning.
* Extensive experiments demonstrate that the proposed approach achieves higher accuracy against state-of-the-art (SOTA) trackers.
## II Related Work
### _Vision-Language Tracking_
In recent years, the two-stream framework [2, 4, 5, 8] has emerged as a dominant VL tracking paradigm (see Fig. 1(a)). Such trackers first extract features using two independent unimodal feature extractors and then model the relation between visual and language features in a sequential manner by a lightweight [5] or relatively heavy [9] network. Early work [3] contains a visual specification network and a lingual specification network, and further selectively focuses on parts of language prompts using a lingual specification attention network. Later, GTI [16] and [2] decompose the VL tracking problem into three sub-tasks of visual tracking, grounding and integration. \(\text{VLT}_{\rm TT}\)[5] suggests learning VL representations through an asymmetrical modeling architecture. JointNLT [9] introduces a joint visual grounding and tracking framework by localizing the referred object based on the visual-language references. However, these works rely on separate visual and language encoders to extract multi-modal features, leading to limited information interaction. We note that several works [10, 11, 14] introduce a one-stream framework for visual object tracking. Different from them, we extend the one-stream framework to multi-modal VL tracking by training jointly on videos and language prompts. As shown in Fig. 1(b), for the first time we seamlessly integrate feature extraction and interaction into a unified backbone architecture for VL tracking. The proposed framework not only enables information flow from language to vision, but also allows bidirectional integration of information between visual and language features.
### _Transformer for Unified Architecture_
Thanks to its scalability to very large models and its capability to handle sequential and non-sequential data, the transformer has become a prevailing architecture in both the natural language [17, 18] and computer vision [19, 20] communities. Following ViT [19], a series of ViT variants have been developed to improve performance on vision tasks, including reducing computational cost [20, 21] and improving architecture design [22, 23]. Additionally, the transformer has been extensively used in various multi-modal tasks [24, 25, 26].
In consideration of the capacity of the unified transformer model to handle unimodal input or multi-modal input with a shared encoder, a few pioneering works have tried to explore unified multi-modal encoders [27, 28, 29]. For instance, ViLT [28] proposed a vision-language transformer without using regional features or deep convolutional visual embeddings. VATT [29] developed a video-audio-text transformer using multi-modal self-supervised pre-training to improve the performance of video action recognition and audio event classification. In this paper, we follow the trend of unified architecture for multi-modal data. To the best of our knowledge, the proposed All-in-One transformer is the first unified backbone architecture for multi-modal VL tracking.
### _Multi-Modal Learning_
Recently, a promising multi-modal learning paradigm is to adopt transformers to process and relate information from multiple modalities [13, 30, 31, 32]. CLIP [13] applied language prompts as supervisory signals to learn better visual representations. VisualBERT [31], VilBERT [33], and Unicoder-VL [32] combined visual and textual features into transformers to capture cross-modal relationships. However, previous works mainly focus on how to learn multi-modal representations by exploiting the complementary advantages of multiple modalities, or on fusing multi-modal features for prediction. Multi-modal alignment, discovering relationships and correspondences between fine-grained elements (_e.g._, objects and words) of instances (_e.g._, images and language) from two or more modalities, has been rarely explored. In this work, we propose the MMA module with CMA and IMA based on self-supervised CL [15] to explore efficient multi-modal learning for VL tracking.
## III Proposed Method
This section presents the All-in-One, a simple yet effective framework for the VL tracking task. Our All-in-One framework consists of an All-in-One transformer backbone, a multi-modal alignment module, and a tracking head, as shown in Fig. 2. The All-in-One transformer backbone is used to achieve feature extraction and information interaction between the visual inputs (_i.e._, visual search region and visual template) and the language input simultaneously in a unified architecture. Before that, visual embeddings and language embeddings are aligned via a multi-modal alignment module, providing more reasonable feature embeddings in the feature space. The output features of the visual search region are sent to the tracking head to predict the location of the target.
### _Problem Formulation_
Before detailing the architecture of our All-in-One framework, we briefly review transformer tracking [10, 11, 14, 34, 35], which achieves remarkable tracking performance. Given a video with a pair of visual template and visual search region \(\mathcal{X}_{xz}\) and an initial target box \(\mathcal{B}_{0}\), transformer tracking can be formulated as \(F_{trans}:\{\mathcal{X}_{xz},\mathcal{B}_{0}\}\rightarrow\mathcal{B}\), where \(F_{trans}\) is the transformer tracker and \(\mathcal{B}\) is the predicted box of the target in all subsequent search frames. In general, the transformer tracker \(F_{trans}\) can be decomposed into \(\Phi\circ f\), where \(f:\{\mathcal{X}_{xz},\mathcal{B}_{0}\}\rightarrow\mathcal{H}\) denotes the backbone (_e.g._, ViT [19]) serving as the feature extraction and interaction function, \(\mathcal{H}\) represents the output features of the visual search region, and the tracking head \(\Phi:\mathcal{H}\rightarrow\mathcal{B}\) is adopted to predict the target box.
Specifically, a pair of images, namely the visual search region \(\mathbf{x}\in\mathbb{R}^{3\times H_{x}\times W_{x}}\) and the visual template \(\mathbf{z}\in\mathbb{R}^{3\times H_{z}\times W_{z}}\), are divided into \(N_{x}\) and \(N_{z}\) non-overlapping image patches of resolution \(P\times P\), where \(N_{x}=H_{x}W_{x}/P^{2}\) and \(N_{z}=H_{z}W_{z}/P^{2}\) are the number of patches of the search region and template, respectively. Then, a linear projection is applied to these image patches to generate 1D tokens \(\mathcal{H}_{x}\in\mathbb{R}^{N_{x}\times D}\) and \(\mathcal{H}_{z}\in\mathbb{R}^{N_{z}\times D}\), where \(D\) is the token dimension. Two learnable positional embeddings are added to \(\mathcal{H}_{x}\) and \(\mathcal{H}_{z}\) to retain the position information. After that, these tokens are concatenated as a sequence \(\mathcal{H}_{xz}^{0}=[\mathcal{H}_{x};\mathcal{H}_{z}]\) and fed to an \(L\)-layer transformer encoder. Here, we represent \(\mathcal{H}_{xz}^{l-1}\) as the input to the \(l\)-th encoder layer \(E^{l}\). Formally, the forward operation of the \(l\)-th encoder layer can be written as:
\[\mathcal{H}_{xz}^{l}=E^{l}(\mathcal{H}_{xz}^{l-1}),l=1,2,3,...,L \tag{1}\]
\[\mathcal{B}=\Phi(\mathcal{H}_{xz}^{L}), \tag{2}\]
where each transformer encoder layer contains a multi-head self-attention (MHSA) and a feed-forward network (FFN). Each sub-layer is wrapped in a residual connection, followed by layer normalization (LN). The visual search region tokens \(\mathcal{H}_{x}^{L}\) of the last transformer encoder layer are taken as the input of the tracking head \(\Phi\) for target box prediction.
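For concreteness, the following PyTorch sketch condenses this formulation: both images are patchified and linearly projected, positional embeddings are added, the token sequences are concatenated and processed by a post-norm ViT-style encoder, and only the search-region tokens are returned for the head. The module names and the use of `nn.TransformerEncoder` are our illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn

class JointBackbone(nn.Module):
    def __init__(self, P=16, D=768, L=12, Hx=256, Hz=128):
        super().__init__()
        self.patch = nn.Conv2d(3, D, kernel_size=P, stride=P)  # patchify + project
        self.pos_x = nn.Parameter(torch.zeros(1, (Hx // P) ** 2, D))
        self.pos_z = nn.Parameter(torch.zeros(1, (Hz // P) ** 2, D))
        layer = nn.TransformerEncoderLayer(D, nhead=12, dim_feedforward=4 * D,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=L)

    def forward(self, x, z):
        hx = self.patch(x).flatten(2).transpose(1, 2) + self.pos_x  # (B, Nx, D)
        hz = self.patch(z).flatten(2).transpose(1, 2) + self.pos_z  # (B, Nz, D)
        h = self.encoder(torch.cat([hx, hz], dim=1))                # Eq. (1)
        return h[:, : hx.shape[1]]  # search-region tokens for the head, Eq. (2)

tokens = JointBackbone()(torch.rand(2, 3, 256, 256), torch.rand(2, 3, 128, 128))
print(tokens.shape)  # torch.Size([2, 256, 768])
```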
For VL tracking [1, 2, 36], it introduces an extra language prompt \(\mathcal{T}\) for each video to express the attribute, behavior, position (relative location), and surroundings of the target. Accordingly, VL tracking can be formulated as \(F_{VL}:\{\mathcal{X}_{xz},\mathcal{B}_{0},\mathcal{T}\}\rightarrow\mathcal{B}\), where \(F_{VL}\) is the VL tracker. Similarly, the VL tracker \(F_{VL}\) can also be decomposed into
\(\Phi\circ f^{*}\), where \(\Phi\) is the tracking head, and \(f^{*}\) represents the proposed unified backbone architecture in this work.

Fig. 2: Overview of the proposed All-in-One framework. The multi-modal alignment module is introduced before the All-in-One transformer backbone to align visual and language embeddings in the feature space. The All-in-One transformer backbone is applied to achieve joint feature extraction and interaction. The tracking head is used to predict the object location.
### _Unified Vision-Language Tracking_
Fig. 2 gives an overview of our All-in-One framework for VL tracking. To optimize the VL tracker \(F_{VL}\), a pair of visual template and visual search region \(\mathcal{X}_{xz}=\{\mathcal{X}_{x},\mathcal{X}_{z}\}\) and an extra language prompt \(\mathcal{T}\) are first fed to a patch embed (_i.e._, a linear projection) and a text tokenizer [18], respectively. They are mapped and flattened into \(D\)-dimension embeddings, where \(D=768\). We denote them as vision tokens (_i.e._, \(\mathcal{H}_{x}^{0}\) and \(\mathcal{H}_{z}^{0}\)), where \(\mathcal{H}_{x}^{0}\in\mathbb{R}^{N_{x}\times D}\) and \(\mathcal{H}_{z}^{0}\in\mathbb{R}^{N_{z}\times D}\) are the visual search region tokens and visual template tokens, and language tokens \(\mathcal{H}_{t}^{0}\in\mathbb{R}^{N_{t}\times D}\), where \(N_{t}\) is the number of language tokens. Following [18], a special classification token ([CLS]) is attached at the beginning of the language tokens. Then, \(\mathcal{H}_{x}^{0}\), \(\mathcal{H}_{z}^{0}\), and \(\mathcal{H}_{t}^{0}\) are aligned with the multi-modal alignment module (see Section III-C) in the embedding space. It is worth noting that well-aligned vision embeddings and language embeddings can facilitate multi-modal representation learning and interaction [30]. Here, we still refer to the aligned vision embeddings and language embeddings as \(\mathcal{H}_{x}^{0}\), \(\mathcal{H}_{z}^{0}\), and \(\mathcal{H}_{t}^{0}\), respectively. Afterwards, we perform a modal mixup operation [5] between the aligned vision embeddings and language embeddings as follows:
\[\mathbf{F}_{x}^{0}=\mathcal{H}_{x}^{0}\odot Linear(\mathcal{H}_{t}^{0})+ \mathcal{H}_{x}^{0}, \tag{3}\]
\[\mathbf{F}_{z}^{0}=\mathcal{H}_{z}^{0}\odot Linear(\mathcal{H}_{t}^{0})+ \mathcal{H}_{z}^{0}, \tag{4}\]
where \(\odot\) represents the Hadamard product, \(Linear(\cdot)\) is a linear projection layer. In this way, the language information is injected into vision embeddings via the modal mixup operation. Moreover, Eqs. (3)-(4) also construct a bidirectional information flow between vision and language modalities that allows mutual guidance for multi-modal feature extraction and interaction. By establishing a bidirectional information flow between well aligned visual and language signals as early as possible via a unified transformer backbone, we can avoid the loss of discriminative information and thus make the extracted features highly target-aware [10].
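A minimal sketch of the modal mixup in Eqs. (3)-(4) follows. We assume the language tokens are mean-pooled to a single vector before the linear projection, consistent with the mean-token choice favored in the ablation of Section IV; the class and variable names are our own.

```python
import torch
import torch.nn as nn

class ModalMixup(nn.Module):
    def __init__(self, D=768):
        super().__init__()
        self.proj = nn.Linear(D, D)

    def forward(self, h_vis, h_lang):
        # h_vis: (B, N, D) vision tokens; h_lang: (B, Nt, D) language tokens.
        t = self.proj(h_lang.mean(dim=1, keepdim=True))  # pooled language token
        return h_vis * t + h_vis                          # Eqs. (3)/(4)

mix = ModalMixup()
fx0 = mix(torch.rand(2, 256, 768), torch.rand(2, 12, 768))  # search tokens
fz0 = mix(torch.rand(2, 64, 768), torch.rand(2, 12, 768))   # template tokens
print(fx0.shape, fz0.shape)
```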
Formally, the operations of the \(l\)-th encoder of our All-in-One transformer backbone can be expressed as:
\[\mathbf{Q}=\mathbf{K}=\mathbf{V}=[\mathbf{F}_{x}^{l};\mathbf{F}_{z}^{l}], \tag{5}\]
\[[\mathbf{F}_{x}^{\prime l};\mathbf{F}_{z}^{\prime l}]=\mathrm{LN}([\mathbf{F}_{x}^{l};\mathbf{F}_{z}^{l}]+\mathrm{MHSA}(\mathbf{Q},\mathbf{K},\mathbf{V})), \tag{6}\]

\[[\mathbf{F}_{x}^{l+1};\mathbf{F}_{z}^{l+1}]=\mathrm{LN}([\mathbf{F}_{x}^{\prime l};\mathbf{F}_{z}^{\prime l}]+\mathrm{FFN}([\mathbf{F}_{x}^{\prime l};\mathbf{F}_{z}^{\prime l}])), \tag{7}\]
where \(\mathbf{Q}\), \(\mathbf{K}\), and \(\mathbf{V}\) represent the query, key, and value embeddings, \([;]\) denotes the concatenation operation, \(\mathbf{F}_{x}^{l}\) and \(\mathbf{F}_{z}^{l}\) are the input embeddings of the \(l\)-th transformer encoder. Therefore, the language information-injected vision embeddings are jointly processed by the transformer encoder, enabling seamless multi-modal feature extraction and integration. Finally, the visual search region embeddings of the last layer of the transformer encoder are reshaped into a 2D feature map. The feature map is fed into the tracking head to predict the location of the target.
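Written out explicitly, the encoder layer of Eqs. (5)-(7) is a standard post-norm transformer block over the concatenated language-injected tokens; the sketch below illustrates that structure rather than the exact training code.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, D=768, heads=12):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(D, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(D, 4 * D), nn.GELU(), nn.Linear(4 * D, D))
        self.ln1, self.ln2 = nn.LayerNorm(D), nn.LayerNorm(D)

    def forward(self, fx, fz):
        f = torch.cat([fx, fz], dim=1)                    # Eq. (5): Q = K = V
        f = self.ln1(f + self.mhsa(f, f, f)[0])           # Eq. (6)
        f = self.ln2(f + self.ffn(f))                     # Eq. (7)
        return f[:, : fx.shape[1]], f[:, fx.shape[1]:]    # split back into x / z

fx, fz = EncoderLayer()(torch.rand(2, 256, 768), torch.rand(2, 64, 768))
print(fx.shape, fz.shape)
```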
To model the interaction between language and vision features, recent VL trackers [5, 9] adopted a customized fusion model to directly serialize vision and language embeddings into sequences to learn a joint multi-modal embedding. Although our All-in-One transformer backbone, built on the pretrained ViT [19], has the ability to model long-range dependencies of sequential data and thus alleviates the negative effects of modal differences on multi-modal learning, vision embeddings and language embeddings lying in different feature spaces still make it challenging for the transformer encoder to learn their interactions [37, 38]. To tackle this limitation, we further propose a self-supervised MMA module, which is used before feature extraction and integration. The MMA module includes CMA and IMA, which can efficiently learn more reasonable feature distributions, as shown in Fig. 3.
### _Multi-Modal Alignment Module_
**Cross-modal Alignment.** Since the vision and language embeddings from the same video are distributed in different feature spaces, a natural thought is to enforce them to be close in the feature space to reduce the difficulty of multi-modal interaction. Bearing this in mind, we introduce the CMA to pull matched vision and language embeddings closer in the feature space, while pushing away mismatched pairs. In essence, the goal of CMA is to maximize the mutual information (MI) [15] between matched vision and language signals, which contain the same semantics. Fig. 3 presents an example: the high-level language embedding (_i.e._, green star) and the sparse vision embedding (_i.e._, yellow star) from the same video are pulled closer in the feature space. Specifically, visual search region tokens \(\mathcal{H}_{x}^{0}\), visual template tokens \(\mathcal{H}_{z}^{0}\) and language tokens \(\mathcal{H}_{t}^{0}\) are projected into the same dimension through three linear projections, which we denote as \(\mathbf{f}_{x}\in\mathbb{R}^{C}\), \(\mathbf{f}_{z}\in\mathbb{R}^{C}\), and \(\mathbf{f}_{t}\in\mathbb{R}^{C}\), respectively, where \(C=256\). To maximize the MI of vision and language tokens, we optimize the InfoNCE loss [15] between vision and language, which is a lower bound of their MI. Formally, the InfoNCE losses of vision-to-language are defined as:
\[\mathcal{L}_{x2t}(\mathbf{f}_{x}^{i},\mathbf{f}_{t}^{i},\widetilde{\mathbf{f}}_{t})=-\mathbb{E}_{(\mathbf{f}_{x},\mathbf{f}_{t})}[\log\frac{\exp(sim(\mathbf{f}_{x}^{i},\mathbf{f}_{t}^{i})/\tau)}{\sum_{j=1}^{N-1}\exp(sim(\mathbf{f}_{x}^{i},\widetilde{\mathbf{f}}_{t}^{j})/\tau)}], \tag{8}\]

\[\mathcal{L}_{z2t}(\mathbf{f}_{z}^{i},\mathbf{f}_{t}^{i},\widetilde{\mathbf{f}}_{t})=-\mathbb{E}_{(\mathbf{f}_{z},\mathbf{f}_{t})}[\log\frac{\exp(sim(\mathbf{f}_{z}^{i},\mathbf{f}_{t}^{i})/\tau)}{\sum_{j=1}^{N-1}\exp(sim(\mathbf{f}_{z}^{i},\widetilde{\mathbf{f}}_{t}^{j})/\tau)}], \tag{9}\]

where \(\mathbf{f}_{x}^{i}\), \(\mathbf{f}_{z}^{i}\), and \(\mathbf{f}_{t}^{i}\) are the two vision tokens and the language token of the same video, respectively, \(\widetilde{\mathbf{f}}_{t}=\{\widetilde{\mathbf{f}}_{t}^{1},...,\widetilde{\mathbf{f}}_{t}^{N-1}\}\) is a set of negative language examples for \(\mathbf{f}_{x}^{i}\) or \(\mathbf{f}_{z}^{i}\), \(N\) is the batch size, \(sim(\mathbf{f}_{x}^{i},\mathbf{f}_{t}^{i})=\mathbf{f}_{x}^{i}\cdot\mathbf{f}_{t}^{i}/(||\mathbf{f}_{x}^{i}||\,||\mathbf{f}_{t}^{i}||)\), and \(\tau\) is a temperature parameter. The InfoNCE losses, _i.e._, \(\mathcal{L}_{t2x}(\mathbf{f}_{t}^{i},\mathbf{f}_{x}^{i},\widetilde{\mathbf{f}}_{x})\) and \(\mathcal{L}_{t2z}(\mathbf{f}_{t}^{i},\mathbf{f}_{z}^{i},\widetilde{\mathbf{f}}_{z})\), of language-to-vision can be calculated similarly. Hence, the CMA loss can be formulated as \(\mathcal{L}_{cma}=\frac{1}{2}[\mathcal{L}_{x2t}(\cdot)+\mathcal{L}_{z2t}(\cdot)]+\frac{1}{2}[\mathcal{L}_{t2z}(\cdot)+\mathcal{L}_{t2x}(\cdot)]\).

Fig. 3: Illustration of the MMA module, which contains CMA and IMA. For CMA only, the second vision embedding (yellow star) is pulled towards its matched language embedding (green star). By incorporating IMA, it can learn a more reasonable embedding (yellow square to green square).
Intuitively, by optimizing the CMA loss, vision and language embeddings can be well aligned in the feature space as in Fig. 3. However, the CMA ignores the significant intra-modal supervisory signals (_i.e._, visual template and visual search region) for learning desired multi-modal features. Aligning the visual template with the visual search region enables learning _temporal-invariant features_[39, 40, 41], which are crucial to enhance the discriminative ability of tracking models. To this end, we further propose the IMA to fully utilize the intra-modal temporal supervision information.
**Intra-modal Alignment.** The language prompt mainly contains the global/static semantic meaning of the target, while the visual modality contains the temporal information of the target (_e.g._, motion and appearance variation through the video) [1, 5]. As mentioned earlier, IMA aims to learn _temporal-invariant features_ within the same modality from positive and negative samples. Therefore, we only consider the visual modality in IMA. Specifically, we consider visual search region tokens \(\mathbf{f}_{x}\in\mathbb{R}^{C}\) and visual template tokens \(\mathbf{f}_{z}\in\mathbb{R}^{C}\) from the same video as positive pairs, while tokens from different videos form negative pairs. We also apply the contrastive loss to maximize the MI between \(\mathbf{f}_{x}\) and \(\mathbf{f}_{z}\). Formally, the InfoNCE losses between vision tokens can be defined as:
\[\mathcal{L}_{x2z}(\mathbf{f}_{x}^{i},\mathbf{f}_{z}^{i},\widetilde{\mathbf{f} })=-\mathbb{E}_{(\mathbf{f}_{x},\mathbf{f}_{z})}[\log\frac{\exp(sim(\mathbf{f }_{x}^{i},\mathbf{f}_{z}^{i})/\tau)}{\sum_{j=1}^{2(N-1)}\exp(sim(\mathbf{f}_{ x}^{i},\widetilde{\mathbf{f}}^{j})/\tau)}], \tag{10}\]
\[\mathcal{L}_{z2x}(\mathbf{f}_{z}^{i},\mathbf{f}_{x}^{i},\widetilde{\mathbf{f}})=-\mathbb{E}_{(\mathbf{f}_{z},\mathbf{f}_{x})}[\log\frac{\exp(sim(\mathbf{f}_{z}^{i},\mathbf{f}_{x}^{i})/\tau)}{\sum_{j=1}^{2(N-1)}\exp(sim(\mathbf{f}_{z}^{i},\widetilde{\mathbf{f}}^{j})/\tau)}], \tag{11}\]
where \(\widetilde{\mathbf{f}}=\{\widetilde{\mathbf{f}}_{x}^{1},...,\widetilde{ \mathbf{f}}_{x}^{N-1},\widetilde{\mathbf{f}}_{z}^{1},...,\widetilde{\mathbf{f} }_{z}^{N-1}\}\) is a set of negative examples for \(\mathbf{f}_{x}^{i}\) or \(\mathbf{f}_{z}^{i}\), \(N\) is the batch size. Then, the IMA loss can be formulated as \(\mathcal{L}_{ima}=\frac{1}{2}[\mathcal{L}_{x2z}(\cdot)+\mathcal{L}_{z2x}(\cdot)]\).
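To make the alignment objectives concrete, the following self-contained PyTorch sketch implements Eqs. (8)-(11) with in-batch negatives; as in the equations, the denominator sums over the negative set. The helper names (`info_nce`, `batch_negatives`, `mma_losses`) are our own shorthand, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.5):
    """anchor/positive: (N, C); negatives: (N, M, C); cosine similarity."""
    a = F.normalize(anchor, dim=-1)
    pos = (a * F.normalize(positive, dim=-1)).sum(-1) / tau
    neg = torch.einsum("nc,nmc->nm", a, F.normalize(negatives, dim=-1)) / tau
    return -(pos - torch.logsumexp(neg, dim=-1)).mean()

def batch_negatives(feats):
    """For each row i of feats (N, C), stack the other N-1 rows: (N, N-1, C)."""
    N = feats.shape[0]
    mask = ~torch.eye(N, dtype=torch.bool)
    return feats.unsqueeze(0).expand(N, -1, -1)[mask].view(N, N - 1, -1)

def mma_losses(fx, fz, ft, tau=0.5):
    # CMA: vision-to-language terms (Eqs. (8)-(9)) plus the symmetric ones.
    l_cma = 0.5 * (info_nce(fx, ft, batch_negatives(ft), tau)
                   + info_nce(fz, ft, batch_negatives(ft), tau))
    l_cma = l_cma + 0.5 * (info_nce(ft, fx, batch_negatives(fx), tau)
                           + info_nce(ft, fz, batch_negatives(fz), tau))
    # IMA (Eqs. (10)-(11)): 2(N-1) negatives from both visual views.
    neg = torch.cat([batch_negatives(fx), batch_negatives(fz)], dim=1)
    l_ima = 0.5 * (info_nce(fx, fz, neg, tau) + info_nce(fz, fx, neg, tau))
    return l_cma, l_ima

fx, fz, ft = (torch.randn(8, 256) for _ in range(3))
print(mma_losses(fx, fz, ft))
```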
The IMA loss encourages learning representations by aligning temporal-invariant positive pairs within the visual modality. Importantly, it enforces the uniformity of vision and language, resulting in a uniform distribution across the whole feature space [38, 42]. Therefore, CMA and IMA have complementary advantages in multi-modal learning: on the one hand, CMA encourages matched vision and language embeddings to be close in the feature space. On the other hand, IMA maximizes the temporal-invariant features between visual tokens and makes the multi-modal features evenly distributed in the feature space. As shown in Fig. 3, combining them makes the learned representations more reasonable and further facilitates joint multi-modal feature learning and interaction.
### _Tracking Head and Loss_
Following [10], the tracking head is decomposed into two branches of classification and bounding box regression. As shown in Fig. 2, the learned visual search region tokens are first reshaped into a 2D feature map according to the original spatial resolution, followed by a 4-layer fully convolutional network to predict the target classification score map. In the classification branch, a weighted focal loss \(\mathcal{L}_{cls}\)[43] is adopted to enhance the model's ability to distinguish objects from the background. The bounding box regression branch is used to predict the center coordinate offset and the size of the object. To regress the center coordinate offset and size of objects, we combine the \(\ell_{1}\) loss and the generalized IoU loss \(\mathcal{L}_{giou}\)[44]. The regression loss is calculated as \(\mathcal{L}_{reg}=\lambda_{giou}\mathcal{L}_{giou}+\lambda_{1}\mathcal{L}_{1}\), where \(\lambda_{giou}\) and \(\lambda_{1}\) are two hyper-parameters.
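A sketch of a head matching this description is given below: the search tokens are reshaped into a 2D map and fed to small fully convolutional branches for the score map, the center offsets, and the box size. The channel widths and normalization layers are our assumptions; only the overall structure follows the text.

```python
import torch
import torch.nn as nn

def conv_stack(cin, cout, n=4):
    layers, c = [], cin
    for _ in range(n - 1):  # n-1 conv-BN-ReLU blocks with channel halving
        layers += [nn.Conv2d(c, c // 2, 3, padding=1), nn.BatchNorm2d(c // 2), nn.ReLU()]
        c //= 2
    return nn.Sequential(*layers, nn.Conv2d(c, cout, 3, padding=1))

class TrackingHead(nn.Module):
    def __init__(self, D=768):
        super().__init__()
        self.cls = conv_stack(D, 1)     # target classification score map
        self.offset = conv_stack(D, 2)  # center-coordinate offsets
        self.size = conv_stack(D, 2)    # box width / height

    def forward(self, tokens):          # tokens: (B, N, D) with N = S * S
        S = int(tokens.shape[1] ** 0.5)
        fmap = tokens.transpose(1, 2).reshape(tokens.shape[0], -1, S, S)
        return self.cls(fmap).sigmoid(), self.offset(fmap), self.size(fmap)

cls_map, offset, size = TrackingHead()(torch.rand(2, 256, 768))
print(cls_map.shape, offset.shape, size.shape)
```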
To train our model in an end-to-end manner, we convert it into a multi-task optimization problem [45], simultaneously optimizing classification loss, regression loss, CMA loss, and IMA loss. Finally, the overall loss function for our model is defined as:
\[\mathcal{L}_{total}=\mathcal{L}_{cls}+\mathcal{L}_{reg}+\lambda_{cma}\mathcal{L}_ {cma}+\lambda_{ima}\mathcal{L}_{ima}, \tag{12}\]
where \(\lambda_{cma}\) and \(\lambda_{ima}\) are trade-off weights to balance the multi-task optimization problem.
## IV Experiments
To demonstrate the effectiveness and generalization ability of our approach, we conduct experiments on all five public VL tracking benchmarks to date, including UAV scenes (_i.e._, WebUAV-3M [1]), generic scenes (_i.e._, LaSOT [36], LaSOT\({}_{\rm{Ext}}\)[46], OTB99-L [3]), and real-synthetic scenes (_i.e._, TNL2K [2]).
### _Implementation Details_
We adopt ViT-Base [19] as the architecture of the All-in-One transformer backbone. It is a stack of \(L\) (_i.e._, \(12\)) transformer encoder layers, and each layer contains two sub-layers, _i.e._, a multi-head self-attention layer and a feed-forward network. Each sub-layer has a residual connection structure, followed by a layer normalization. To accelerate convergence, we initialize our backbone with MAE-pretrained weights [12]. The visual template and visual search region cover \(2^{2}\) times and \(4^{2}\) times the area of the target bounding box, and are resized to \(128\times 128\) and \(256\times 256\), respectively. We use the bert-base-uncased tokenizer [18] to tokenize language prompts.
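The cropping convention can be sketched as follows, assuming OpenCV is available for resizing; boundary handling is simplified to clipping here, and the helper name is ours.

```python
import numpy as np
import cv2  # assumed available

def crop_region(img, box, area_factor, out_size):
    """box = (cx, cy, w, h); crop a square covering area_factor x the box area."""
    cx, cy, w, h = box
    side = int(round(np.sqrt(w * h * area_factor)))
    x1, y1 = max(int(cx - side / 2), 0), max(int(cy - side / 2), 0)
    x2, y2 = min(x1 + side, img.shape[1]), min(y1 + side, img.shape[0])
    return cv2.resize(img[y1:y2, x1:x2], (out_size, out_size))

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
template = crop_region(frame, (640, 360, 80, 120), 2 ** 2, 128)  # 2^2 x box area
search = crop_region(frame, (640, 360, 80, 120), 4 ** 2, 256)    # 4^2 x box area
print(template.shape, search.shape)  # (128, 128, 3) (256, 256, 3)
```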
Our experiments are conducted on an Ubuntu server with two NVIDIA RTX 3090 GPUs. The training data includes the training splits of LaSOT [36], GOT-10k [6], TrackingNet [47], COCO [48], OTB99-L [3], TNL2K [2], WebUAV-3M [1], and VisualGenome [49]. For several datasets (_i.e._, [6, 47, 48]) without natural language prompts, we use class names as pseudo language labels, similar to [5]. The tracker is optimized using an AdamW optimizer [50] with an initial learning rate of \(4\times 10^{-4}\). The total number of epochs is 300. The weight decay factor is \(1\times 10^{-4}\) after 240 epochs. Following [10], the hyper-parameters \(\lambda_{giou}\) and \(\lambda_{1}\) are set to 2 and 5. \(\lambda_{cma}\) and
\(\lambda_{ima}\) are both set to 1 without parameter tuning. We set the temperature parameter \(\tau=0.5\). The batch size \(N\) is 32. Following [1, 7], we adopt the one-pass evaluation with five metrics, _i.e._, precision (\(P\)), normalized precision (\(P_{norm}\)), success rate (AUC), complete success rate (cAUC), and accuracy (ACC), to measure the tracking performance.
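For reference, the precision and success (AUC) metrics named above can be computed from predicted and ground-truth boxes as sketched below; the 20-pixel precision threshold and the 21 overlap thresholds follow common tracking practice rather than any official benchmark code.

```python
import numpy as np

def centers(b):  # boxes in (x, y, w, h) format, shape (T, 4)
    return b[:, :2] + b[:, 2:] / 2

def iou(a, b):
    x1, y1 = np.maximum(a[:, 0], b[:, 0]), np.maximum(a[:, 1], b[:, 1])
    x2 = np.minimum(a[:, 0] + a[:, 2], b[:, 0] + b[:, 2])
    y2 = np.minimum(a[:, 1] + a[:, 3], b[:, 1] + b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    return inter / (a[:, 2] * a[:, 3] + b[:, 2] * b[:, 3] - inter)

def precision(pred, gt, thr=20.0):  # fraction of frames with center error <= thr
    return float((np.linalg.norm(centers(pred) - centers(gt), axis=1) <= thr).mean())

def success_auc(pred, gt):          # mean success rate over overlap thresholds
    ov = iou(pred, gt)
    return float(np.mean([(ov > t).mean() for t in np.linspace(0, 1, 21)]))

gt = np.random.rand(100, 4) * 100 + 1
print(precision(gt + 2.0, gt), success_auc(gt + 2.0, gt))
```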
### _Ablation Study and Analysis_
We first conduct ablation experiments trained on the LaSOT training set and evaluated on the LaSOT test set to validate different components of our approach.
**Impact of All-in-One Transformer (AOT).** To analyze the impact of the AOT, we train two trackers, _i.e._, AOT-[CLS] and AOT-[Mean], using the [CLS] token and the mean token of the language prompt in the AOT, respectively. From Tab. I, we can find that the AOT obviously boosts the tracking performance. Specifically, the \(P\) scores are improved by \(1.6\%\) (from \(66.3\%\) to \(67.9\%\)) and \(2.1\%\) (from \(66.3\%\) to \(68.4\%\)), respectively, compared with the baseline method. Importantly, using the mean token is slightly better than the [CLS] token. We speculate that the possible reason is that the mean token can provide more semantic information about the target compared with the [CLS] token. Therefore, the mean token is used as our default setting in the AOT.
**Impact of Cross-modal Alignment (CMA).** From Tab. I, we can see that the CMA component improves tracking performance by \(0.1\%\), \(0.1\%\), and \(0.2\%\) in terms of AUC, \(P_{norm}\), and \(P\) scores, respectively. This validates that CMA is beneficial for aligning vision and language embeddings in the feature space and improving tracking accuracy.
**Impact of Intra-modal Alignment (IMA).** By combining the CMA and IMA, we improve the tracking AUC by \(0.4\%\) (from \(64.0\%\) to \(64.4\%\)), \(P_{norm}\) by \(0.2\%\) (from \(72.6\%\) to \(72.8\%\)), and \(P\) by \(0.4\%\) (from \(68.4\%\) to \(68.8\%\)), as shown in Tab. I. The significant performance gains demonstrate that the MMA module makes the distributions of vision and language embeddings more reasonable in the feature space, and facilitates feature learning and interaction.
**Visualization.** To further investigate the effectiveness of our All-in-One framework, we visualize the response maps and the tracking results in Fig. 4. With the AOT, our approach highlights the target region due to the language prompt, even with complex background distractions. Combining AOT and MMA, our approach has a more unambiguous and discriminative response and predicts a more precise bounding box. The visual search regions of previous frames also demonstrate that our approach can focus on the real target when facing complex scenarios, such as occlusion and background clutter.
**Sentence Prompts _vs._ Class Prompts.** To analyze the impact of language prompts, we train two trackers, _i.e._, Ours-S and Ours-C, with sentence (original language prompt) and class (class name of the video) prompts on the LaSOT training set. From Tab. II, we make the following observations. First, better tracking results are achieved when the language prompts for training and testing are consistent, _i.e._, training using sentences/classes and testing using sentences/classes. Second, the best performance (\(64.6\%\) in AUC, \(73.2\%\) in \(P_{norm}\), \(68.9\%\) in \(P\)) is obtained using class prompts for training and class prompts for testing on the LaSOT dataset. We speculate that trackers are sensitive to ambiguous language prompts. Compared with sentence prompts, class prompts may introduce less ambiguity for both training and evaluation [1, 9]. Additionally, as shown in Fig. 5, given ambiguous language prompts, our trackers fail to localize the real object. A potential solution is to provide clear sentence prompts or clear class prompts (see Fig. 5), both of which enable our trackers to accurately localize the real object.
**Speed Analysis.** Real-time tracking is urgently demanded in many practical applications [2, 58]. Our one-stream framework achieves joint multi-modal feature extraction and interaction and is highly efficient. Tab. IV shows that the average inference speed of our approach is around 60 frames per second (FPS) without model acceleration. This is clearly faster than many SOTA real-time trackers [55, 57] and the common video frame rate [59], demonstrating that the time cost of applying our approach in real-world applications is negligible.
Fig. 4: Visualization for revealing the target-aware capability of the All-in-One framework: “AOT” denotes our approach only with All-in-One transformer, “AOT+MMA” denotes our approach with both All-in-One transformer and multi-modal alignment module.
### _Evaluation on UAV Scenes_
**WebUAV-3M.** WebUAV-3M [1] is the latest million-scale UAV tracking dataset with visual bounding box, language and audio annotations, which contains 4,500 videos and offers over 200 target categories. UAV tracking scenes are extremely challenging due to continuous viewpoint changes, motion blur, low resolution, _etc_. As reported in Tab. III, All-in-One outperforms other visual trackers and VL trackers in tracking accuracy. Furthermore, our tracker improves \(P\)/AUC/\(P_{norm}\)/cAUC by a large margin, as shown in Fig. 6. Notably, with a simple and general unified model architecture, our tracker outperforms the most competitive VL tracker \(\text{VLT}_{\text{TT}}\)[5] by \(7.7\%\) in \(P\), \(9.5\%\) in \(P_{norm}\), \(9.8\%\) in AUC, and \(9.9\%\) in cAUC.
### _Evaluation on Generic Scenes_
**LaSOT.** LaSOT [36] is a densely annotated large-scale VL tracking dataset that contains 1,120 videos for training and 280 long-term videos for evaluation. In this dataset, objects disappear and reappear frequently, making long-term tracking in generic scenes highly challenging. From Tab. IV, we can observe that our approach sets a new SOTA on LaSOT, which provides compelling evidence for long-term tracking and suggests that our approach is capable of recognizing objects in extremely long videos. Fig. 7 demonstrates that All-in-One outperforms other trackers on eight challenging attributes, _i.e._, background clutter, motion blur, illumination variation, low resolution, fast motion, full occlusion, deformation, and aspect ratio change.
**LaSOT\({}_{\text{Ext}}\).** LaSOT\({}_{\text{Ext}}\)[46] is the extended version of [36], which comprises 150 manually annotated videos. Tab. I indicates that All-in-One surpasses all previous advanced trackers and obtains the best AUC score of \(71.7\%\), gaining a significant improvement of \(2.6\%\) compared with the current SOTA one-stream tracker OSTrack [10].
**OTB99-L.** OTB99-L [3] is an early VL tracking dataset that contains 51 videos for training and 48 videos for public evaluation. As shown in Tab. IV, compared with recent SOTA trackers, our tracker achieves comparable results, which validates the effectiveness of our tracker.
### _Evaluation on Real-Synthetic Scenes_
**TNL2K.** TNL2K [2] is a recently released dataset, which comprises 1,300 videos for training and 700 videos for evaluation in real and synthetic (_e.g._, cartoon videos and virtual game videos) scenes with diverse challenging factors, such as significant appearance variation and adversarial samples. Results in Tab. IV show that our approach achieves the highest AUC (\(55.3\%\)) and \(P\) (\(57.2\%\)) scores, demonstrating the generalization ability of All-in-One.
### _Qualitative Performance_
As shown in Fig. 8, we compare All-in-One with three SOTA trackers (_i.e._, \(\text{VLT}_{\text{TT}}\)[5], TransT [35], and SiamRPN++ [57]) on three videos from the LaSOT test set, in which the main challenges include similar distractors, severe viewpoint changes, background clutter, appearance variation, occlusion and extreme illumination. We can see that All-in-One is more robust than the other methods. For instance, the prior most competitive VL tracker \(\text{VLT}_{\text{TT}}\) gradually loses the target as the target appearance varies in the video of _Sepia-16_ (the second video in Fig. 8). By contrast, All-in-One provides accurate and stable prediction results, demonstrating the effectiveness of our unified framework in complex environments.
## V Conclusion and Discussion
**Conclusion.** In this work, we present All-in-One, a new unified framework for multi-modal VL tracking, which includes the All-in-One transformer and the multi-modal alignment
module. The core insight is to establish bidirectional information flow between well-aligned visual and language signals as early as possible via a unified transformer backbone. Besides, the multi-modal alignment module, based on cross-modal and intra-modal contrastive objectives, enables learning more reasonable VL representations, which effectively facilitates joint multi-modal feature learning and interaction. Extensive experiments on multiple VL tracking benchmarks have demonstrated the effectiveness and generalization of our approach against state-of-the-art trackers.

Fig. 5: Analysis of the effect of ambiguous language prompts on the LaSOT test set. \({}^{*}\) indicates that our approach is tested with a clear sentence prompt or a clear class prompt.

Fig. 6: Evaluation on the WebUAV-3M test set. The scores of \(P\)/AUC/\(P_{norm}\)/cAUC are presented in the legend.
**Discussion.** We first provide a discussion to demonstrate that developing a foundation model, _e.g._, All-in-One, for VL tracking is valuable in the era of large language/vision models. (1) As the echoes of the remarkable success of large language models (_e.g._, ChatGPT [64], GPT-4 [65]) continue to permeate the natural language community, formidable counterparts, _e.g._, ViT-22B [66], have emerged in the computer vision community. Although they have emergent abilities [64], the huge training cost (_e.g._, thousands of GPUs) and environmental unfriendliness cannot be ignored [67]. Instead, we believe that training a foundation model for a specific task is more flexible and affordable for research purposes. (2) Despite the breakthroughs in large multi-modal models [68, 69, 70], they have not achieved the same success as large language models, highlighting the need to explore foundation models in the multi-modal domain. All-in-One is designed to be such a foundation model for multi-modal VL tracking. (3) All-in-One has great potential to become a foundation model for multi-modal tracking because it enables more accurate and efficient processing of multi-modal data, fully utilizing both vision and language information. Our model not only learns all modalities in one backbone (All-in-One), but also trains once and generalizes well to all VL tracking datasets (Once-for-All) with complex and user-defined language prompts. (4)
Additionally, having a streamlined and standardized foundation model for multi-modal tracking can facilitate the development of more complex and specialized models in the future, allowing for even more sophisticated analysis and understanding of multi-modal data.

Fig. 7: Attribute-based evaluation on the LaSOT test set. AUC score is used to rank different trackers.
Our work still has the following two limitations. (1) Our approach is designed to localize objects of interest based on object boxes and language prompts. Inevitably, it suffers from inaccurate language prompts, such as ambiguous language descriptions, or cases where the states (_e.g._, position and appearance) of objects change significantly in a video and become inconsistent with the language prompts. (2) While All-in-One is a unified framework for multi-modal VL tracking, it currently focuses mainly on language prompts. In fact, All-in-One has great potential to be extended to leverage more types of prompts, such as audio, point, mask, and scribble prompts [71, 72]. We leave this for future work.
|
2310.17911 | Hyper-Skin: A Hyperspectral Dataset for Reconstructing Facial
Skin-Spectra from RGB Images | We introduce Hyper-Skin, a hyperspectral dataset covering wide range of
wavelengths from visible (VIS) spectrum (400nm - 700nm) to near-infrared (NIR)
spectrum (700nm - 1000nm), uniquely designed to facilitate research on facial
skin-spectra reconstruction. By reconstructing skin spectra from RGB images,
our dataset enables the study of hyperspectral skin analysis, such as melanin
and hemoglobin concentrations, directly on the consumer device. Overcoming
limitations of existing datasets, Hyper-Skin consists of diverse facial skin
data collected with a pushbroom hyperspectral camera. With 330 hyperspectral
cubes from 51 subjects, the dataset covers the facial skin from different
angles and facial poses. Each hyperspectral cube has dimensions of
1024$\times$1024$\times$448, resulting in millions of spectra vectors per
image. The dataset, carefully curated in adherence to ethical guidelines,
includes paired hyperspectral images and synthetic RGB images generated using
real camera responses. We demonstrate the efficacy of our dataset by showcasing
skin spectra reconstruction using state-of-the-art models on 31 bands of
hyperspectral data resampled in the VIS and NIR spectrum. This Hyper-Skin
dataset would be a valuable resource to NeurIPS community, encouraging the
development of novel algorithms for skin spectral reconstruction while
fostering interdisciplinary collaboration in hyperspectral skin analysis
related to cosmetology and skin's well-being. Instructions to request the data
and the related benchmarking codes are publicly available at:
\url{https://github.com/hyperspectral-skin/Hyper-Skin-2023}. | Pai Chet Ng, Zhixiang Chi, Yannick Verdie, Juwei Lu, Konstantinos N. Plataniotis | 2023-10-27T06:10:35Z | http://arxiv.org/abs/2310.17911v1 | # Hyper-Skin: A Hyperspectral Dataset for Reconstructing Facial Skin-Spectra from RGB Images
###### Abstract
We introduce Hyper-Skin, a hyperspectral dataset covering a wide range of wavelengths from the visible (VIS) spectrum (400nm - 700nm) to the near-infrared (NIR) spectrum (700nm - 1000nm), uniquely designed to facilitate research on facial skin-spectra reconstruction. By reconstructing skin spectra from RGB images, our dataset enables the study of hyperspectral skin analysis, such as melanin and hemoglobin concentrations, directly on the consumer device. Overcoming limitations of existing datasets, Hyper-Skin consists of diverse facial skin data collected with a pushbroom hyperspectral camera. With 330 hyperspectral cubes from 51 subjects, the dataset covers the facial skin from different angles and facial poses. Each hyperspectral cube has dimensions of 1024\(\times\)1024\(\times\)448, resulting in millions of spectral vectors per image. The dataset, carefully curated in adherence to ethical guidelines, includes paired hyperspectral images and synthetic RGB images generated using real camera responses. We demonstrate the efficacy of our dataset by showcasing skin spectra reconstruction using state-of-the-art models on 31 bands of hyperspectral data resampled in the VIS and NIR spectrum. This Hyper-Skin dataset would be a valuable resource to the NeurIPS community, encouraging the development of novel algorithms for skin spectral reconstruction while fostering interdisciplinary collaboration in hyperspectral skin analysis related to cosmetology and skin's well-being. Instructions to request the data and the related benchmarking codes are publicly available at: [https://github.com/hyperspectral-skin/Hyper-Skin-2023](https://github.com/hyperspectral-skin/Hyper-Skin-2023).
## 1 Introduction
Hyperspectral imaging offers a comprehensive and non-invasive approach for facial skin analysis, capturing detailed spatio-spectral information across a wide range of wavelengths [1; 2]. This three-dimensional hyperspectral cube surpasses the limitations of single-point measurements, providing a deeper understanding of facial skin characteristics and spatial distribution [3]. Previous studies have demonstrated the potential of hyperspectral skin analysis in dermatology [4], cosmetics [5], and skin's well-being [6], paving the way for advanced analysis and applications in these domains. This paper introduces "Hyper-Skin", a hyperspectral skin dataset uniquely designed to facilitate the development of algorithms targeting consumer-based cosmetology applications. This unique dataset is curated with this specific goal in mind, anchoring its practical relevance within consumer-based cosmetology and skin beauty.
Despite the potential of hyperspectral skin analysis for cosmetology and skin beauty, the high cost and limited accessibility of hyperspectral imaging systems have hindered their widespread adoption. Consumer cameras, particularly those embedded in smartphones, have become an integral part of daily life and are extensively used for capturing selfies and everyday images. Hence, many works study the use of RGB images from consumer cameras for skin analysis [7; 8; 9]. While RGB
images have been used for certain skin analysis tasks [10; 11], they lack the ability to capture the comprehensive spatio-spectral information provided by hyperspectral imaging, limiting the depth of skin analysis. In light of the prevalence of consumer cameras, an intriguing idea emerges: Can we reconstruct valuable information from expensive hyperspectral cubes using accessible RGB images, enabling hyperspectral skin analysis directly on consumer devices?
This highlights the need for a comprehensive dataset to develop computational reconstruction methods for the question above. While RGB datasets such as those from the International Skin Imaging Collaboration (ISIC) competition series (2016 - 2020) capture visual information, they lack the corresponding hyperspectral data required for studying hyperspectral reconstruction [12; 13; 14]. On the other hand, hyperspectral datasets enable the exploration of relationships between skin spectra and spatial distribution. Although the RGB counterpart can be synthetically generated from a given hyperspectral cube using a known camera response function, publicly available hyperspectral datasets focusing specifically on facial skin analysis are limited and often inaccessible to the public. Furthermore, existing hyperspectral datasets primarily focus on the visible (VIS) spectrum (400nm - 700nm), disregarding the valuable near-infrared (NIR) spectrum (700nm - 1000nm). These limitations highlight the necessity for a hyperspectral dataset that addresses these gaps and facilitates the development of low-cost and accessible hyperspectral skin analysis on consumer devices.
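To illustrate how such an RGB counterpart is synthesized, the sketch below integrates a hyperspectral cube against per-channel spectral sensitivities; the Gaussian response curves are placeholders standing in for a real, measured camera response function.

```python
import numpy as np

def hsi_to_rgb(cube: np.ndarray, response: np.ndarray) -> np.ndarray:
    """cube: (H, W, B) hyperspectral cube; response: (B, 3) per-band sensitivity."""
    rgb = np.tensordot(cube, response, axes=([2], [0]))  # weighted sum over bands
    return rgb / rgb.max()                               # normalize for display

wl = np.linspace(400, 700, 31)  # 31 bands resampled in the VIS spectrum
# Placeholder bell-shaped sensitivities centered near typical R/G/B peaks.
resp = np.stack([np.exp(-0.5 * ((wl - mu) / 40) ** 2) for mu in (610, 540, 465)],
                axis=1)
cube = np.random.rand(64, 64, 31)   # stand-in for a real hyperspectral cube
print(hsi_to_rgb(cube, resp).shape)  # (64, 64, 3)
```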
**Our Contributions.** Our Hyper-Skin dataset is uniquely designed to unlock the potential of hyperspectral skin analysis directly on the consumer device. With high spatial and spectral resolution, i.e., 1024\(\times\)1024\(\times\)448, Hyper-Skin offers an extensive collection of hyperspectral cubes, yielding over a million spectra per image. Notably, we offer synthetic RGB images synthesized from 28 real camera response functions, allowing for versatile experimental setups. What sets Hyper-Skin apart is its comprehensive spectral coverage, including both the VIS and NIR spectrum, facilitating a holistic understanding of various aspects of human facial skin, enabling new possibilities for consumer applications to see beyond the visual appearance of their selfies and gain valuable insights into their skin's physiological characteristics, such as melanin and hemoglobin concentrations.
## 2 Related Work
The potential of hyperspectral solutions in skin-related analysis has encouraged the curation of hyperspectral datasets. This section reviews existing hyperspectral datasets related to skin analysis, as summarized in Table 1, and reconstruction datasets aiming to provide affordable hyperspectral solutions accessible to consumers.
Figure 1: A glimpse of our Hyper-Skin dataset, covering the skin spectra in the visible spectrum (400nm - 700nm) and near-infrared spectrum (700nm - 1000nm).
**Skin-related Datasets.** The SpectraCam and SpectraFace hyperspectral cameras [20] have been used to collect data of normal and pathological skin with a spectral resolution of 31 wavelengths in the VIS spectrum [21]. The hyperspectral dermoscopy dataset consists of 330 images, including 80 melanoma images, 180 dysplastic nevus images, and 70 images of other skin lesions, with a spatial resolution of 512\(\times\)272 pixels and 16 spectral bands ranging from 465nm to 630nm [15]. A hyperspectral dataset of 20 nude mice was collected by [16] as an alternative to human skin [22] to study acute changes in oxygenation and perfusion in irradiated skin. For markerless tracking in spinal navigation, [17] captured hyperspectral images of the skin from 17 healthy volunteers, with a spatial resolution of 1080\(\times\)2048 and 41 spectral bands in the VIS to NIR range (450-950nm). Both works [18] and [4] used the same Specim Spectral Camera PS V10E to acquire hyperspectral data covering the visible to NIR range (380-1055nm) with 1040 bands and a spatial resolution of 450\(\times\)1310. The former dataset, by [18], containing data from 80 subjects, focused on vein localization, whereas [4] used the dataset for routine dermatological examinations. While references [18; 4; 17] involve capturing NIR spectral information, it is important to note that these datasets are not publicly accessible. Despite the potential of hyperspectral imaging in skin-related applications, most of these existing datasets relied on expensive imaging systems with a primary focus on scientific applications, and no efforts were made to provide low-cost hyperspectral solutions for consumer devices.
**Skin Spectral Reconstruction Datasets.** Although datasets specifically focused on skin spectra reconstruction are limited, there have been notable contributions in this field. One such dataset is the Skin Hyperspectral Reflectance Database (SHRD) [19], which includes 144 skin directional-hemispherical curves obtained through a novel hyperspectral light transport model. This dataset provides valuable insights into reconstructing hyperspectral information on human skin. Another study explores the reconstruction of hyperspectral information in mice skin [9], using 26 SKH-1 hairless albino mice as a model system [23]. The researchers propose a mathematical approach to reconstruct hyperspectral data from RGB images, allowing visualization of hemoglobin content across a large skin area without the need for expensive hyperspectral imaging systems. However, these skin-spectral reconstruction datasets have limitations in terms of sample size and spatial resolution, particularly in their coverage of facial images from various angles and poses. Consequently, progress in skin-spectral reconstruction has been relatively slower compared to hyperspectral reconstruction on natural scenes or everyday objects.
**Natural Scenes and Everyday Objects Reconstruction Datasets.** Compared to proprietary and unavailable skin-related datasets, several hyperspectral datasets on natural scenes and everyday objects have been made publicly available for the study of hyperspectral reconstruction. The CAVE dataset consists of 32 scenes with a spatial resolution of 512\(\times\)512 pixels and 31 spectral bands ranging from 400nm to 700nm at 10nm intervals [24]. The HARVARD dataset includes 50 images captured under daylight illumination, with 31 spectral bands spanning from 420nm to 720nm, and a spatial resolution of 464\(\times\)346 pixels [25]. The KAIST dataset comprises 30 hyperspectral images with a spatial resolution of 3376\(\times\)2704 pixels and a spectral range from 420nm to 720nm [26]. The
\begin{table}
\begin{tabular}{l|l l l l l l} \hline \hline
**Work** & Experimental & Number of & Spectral & Spectral & Spatial & Acquisition \\ & Subjects & Subjects & Range & Resolution & Resolution & Device \\ \hline Ours & Full face & 51 & 400 - 1000 nm & 1.34 nm & 1024 \(\times\) 1024 & Specim FX10 \\ \([1]\) & Arm & 2 & 467 - 857 nm & 13 nm & NA & SkinSpect dermoscope \\ \([2]\) & Facial skin & 204 & 400 - 700 nm & 3.3 nm & 1148 \(\times\) 948 & SpectraCam \\ \([15]\) & Skin lesions & 330\({}^{*}\) & 465 - 630 nm & 10.3125 nm & 512 \(\times\) 272 & Ximea MQ022HG-IM-SM4X4-VIS \\ \([16]\) & Hairless mice & 10 & 500 - 660 nm & NA & NA & OxyVu-2, HyperMed \\ \([17]\) & Spine area & 17 & 450 - 950 nm & 12.2 nm & 500 \(\times\) 250 & snapshot hyperspectral imaging (HSI) camera \\ \([18]\) & Forearm & 80 & 380 - 1055 nm & 2.8 nm & 450 \(\times\) 1310 & Specim Spectral Camera PS V10E \\ \([4]\) & Hand & 45\({}^{**}\) & 397 - 1030 nm & 0.79 nm & 899 \(\times\) 1312 & PFD-V10E line-scan camera \\ \([19]\) & Skin reflectance & 144\({}^{***}\) & 250 - 2500 nm & 5 nm & NA & NA \\ \([9]\) & Mouse & 100\({}^{**}\) & 500 - 1000 nm & 5 nm & 640 \(\times\) 480 & custom-built LCTF \\ \hline \hline \end{tabular}
* Dermoscopy images, \({}^{**}\) Reflectance images, \({}^{***}\) Synthetic reflectance
\end{table}
Table 1: Comparison between skin-related datasets
KAIST-depth dataset features 16 indoor scenes with a spectral range of 420nm to 680nm, 27 spectral channels, and a spatial resolution of 2824 \(\times\) 4240 pixels [27]. The New Trends in Image Restoration and Enhancement (NTIRE) series of datasets, including NTIRE2018, NTIRE2020, and NTIRE2022, have significantly advanced spectral reconstruction from RGB images, with varying sizes, spectral resolutions, and spectral ranges [28; 29; 30; 31]. Despite primarily focusing on natural scenes and generic objects, these publicly available datasets offer valuable resources, including facilitating the development of pre-trained models that can be extended to skin spectral reconstruction tasks.
## 3 Hyper-Skin Data Curation and Preparation
This section outlines the methodology we employed for collecting facial skin data and provides a detailed description of our carefully curated Hyper-Skin dataset.
### Data Collection
The data collection process was conducted carefully, taking into account the setup of devices and the recruitment of participants while ensuring adherence to the university's research ethics protocol. We successfully recruited 51 participants who contributed a total of 306 hyperspectral images. To maintain the privacy and sensitivity of the human subjects involved, we have implemented a credentialization procedure to access the dataset. Interested users will be required to digitally sign an End User License Agreement (EULA) online, which outlines the terms and conditions for using the dataset, including provisions for using only the authorized images in future publications. Detailed instructions for requesting the dataset will be publicly available in our GitHub repository, where users can find a digital EULA form to facilitate the data access request. Once the EULA form is signed and submitted, users will receive a secure link via email to download the data within 24 hours.
**Data Acquisition Devices and Setup.** The Hyper-Skin dataset was obtained using a Specim FX10 camera, covering 448 spectral bands from 400nm to 1000nm. Considering multiple factors, including participant safety, image quality, and spectral resolution, we opted to use a pushbroom camera rather than the Liquid Crystal Tunable Filter (LCTF) system used by [32]. The camera was moved using a customized scanner for precise scanning, as shown in Figure 2. The distance between the camera and the face was set at 40cm, providing a spatial resolution of 1024\(\times\)1024 pixels. The scanner and camera were controlled by a computer running the LUMO recorder software. Further setup details are available in the supplementary material. With a frame rate of 45Hz for one line, it took approximately 22.7 seconds to capture all 1024 lines. To minimize artifacts from line scanning, participants used a chin rest for stability. Halogen lamps illuminated the scene across the visible to near-infrared spectrum. Since ensuring participant safety, particularly for the eyes, was a top priority, the illumination level of the halogen lamps was carefully adjusted following the manufacturer's advice to prevent any risk to participants' eyes. Using the Specim FX10 camera resulted in high-quality images, setting our dataset apart from the aforementioned dataset [32], which contains noisy and blurry images that affect skin texture visibility. Differing from the CMU dataset [32], which features a 10nm step size for 65 bands, our dataset encompasses both the VIS and NIR spectrum with finer resolution.
Figure 2: The image on the right illustrates our experimental setup for data collection, while the accompanying schematic representation provides a visual depiction of the setup.
**Data Acquisition Process.** Participants were recruited through online forums and email advertisements, and their participation involved signing an informed consent form in accordance with the human research ethics protocol. The approved ethics protocol can be found in the supplementary materials. During the data acquisition process, participants were seated on a stool and asked to rest their face on a chin rest while maintaining stillness. Initially, participants were instructed to have a neutral facial expression, and three face images were captured from different viewpoints (front, left, and right) by rotating the chin rest. This process was then repeated with participants instructed to smile. A total of six images were collected for each participant. Throughout the camera scanning, a halogen light remained on. It is worth noting that even with minimal participant movement, slight shifting may occur as the FX10 camera scans line by line. To ensure high-quality images, the captured images were manually inspected by the investigator, and if any shifting was observed, the image was retaken until satisfactory results were achieved. Throughout the entire process, participant anonymity and confidentiality were strictly maintained.
**Participant Demographics and Cosmetology Condition.** Our data collection campaign attracted 51 participants; most were in their early 20s and 30s, with a smaller representation from other age groups (10s, 40s-50s). Male participants slightly outnumbered females, potentially due to the gender distribution in the Department of Electrical and Computer Engineering. The majority of participants identified as Asian, with a smaller number identifying as European or Latino. To improve the generalizability of our findings, we have applied for an extension of our research ethics protocol to conduct another data collection next year, aiming to include a more diverse sample.
### Data Preparation
The Hyper-Skin dataset was created by collecting RAW hyperspectral data, which were then radiometrically calibrated and resampled into two separate 31-band datasets. One dataset covers the visible spectrum from 400nm to 700nm, while the other dataset covers the near-infrared spectrum from 700nm to 1000nm. Additionally, synthetic RGB and Multispectral (MSI) data were generated, including RGB images and an infrared image at 960nm. The Hyper-Skin dataset consists of two types of data: (RGB, VIS) and (MSI, NIR), offering different skin analysis capabilities. The visible spectrum data allows for the analysis of surface-level skin characteristics, such as melanin concentration, blood oxygenation, pigmentation, and vascularization. On the other hand, the near-infrared spectrum data enables the study of deeper tissue properties, including water content, collagen content, subcutaneous blood vessels, and tissue oxygenation. As summarized in Table 2, by providing these two distinct ranges of hyperspectral data, the Hyper-Skin dataset caters to different needs in skin analysis and facilitates comprehensive investigations of various skin features.
**Data Preprocessing.** We applied radiometric calibration to the RAW hyperspectral data to extract spectral reflectance information. This involved capturing a white reference image, representing a spectrally neutral surface with consistent reflectance values across all bands. A dark reference image was also obtained by closing the camera lens during capture. For precise calibration, selecting an appropriate white reference was crucial. After consultation with the camera vendor, we opted for cost-effective Teflon instead of Spectralon panels, as it provided a satisfactory spectral response. The preprocessing steps included subtracting the dark reference values to eliminate sensor noise and dividing by the white reference values to normalize the data and convert it to reflectance, yielding the desired spectral reflectance data.
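As a rough sketch of this calibration step (a common flat-field form; `raw`, `white`, and `dark` are assumed to be NumPy arrays of matching shape, and dark-correcting the white reference as done below is our assumption rather than something stated in the text):

```python
import numpy as np

def radiometric_calibration(raw, white, dark, eps=1e-8):
    """Convert RAW hyperspectral counts to reflectance.

    raw, white, dark: arrays of shape (lines, samples, bands).
    Dark counts are subtracted to remove sensor noise, and the result
    is normalized by the (dark-corrected) white reference.
    """
    signal = raw.astype(np.float64) - dark
    reference = np.clip(white.astype(np.float64) - dark, eps, None)
    return np.clip(signal / reference, 0.0, None)
```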
**RGB and MSI Data Generation.** The raw hyperspectral cube with 448 bands was resampled into two sets of 31-band data using SciPy's interpolation function. This downsampling to 31 bands
\begin{table}
\begin{tabular}{l|l l} \hline \hline
**Description** & (RGB, VIS) & (MSI, NIR) \\ \hline Input & RGB & MSI (RGB + Infrared at 960nm) \\ Output & VIS (400nm - 700nm) & NIR (700nm - 1000nm) \\ Skin physiological & surface-level characteristics (e.g., pigmentation and melanin map) & deeper tissue properties (e.g., collagen content and hemoglobin map) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyper-Skin data pairs
is in line with existing practice in hyperspectral reconstruction studies, such as the CAVE and NTIRE2018-2022 datasets, and strikes a balance between data richness and size. The approach retains spectral differentiation while keeping analysis computationally efficient, making the data more accessible than the full 448-band dataset, which exceeds 1TB in size. While the complete 448-band dataset is substantial in size, we are prepared to provide it upon specific request. The availability of the 31-band data addresses data transfer constraints, while the backup 448-band data ensures comprehensive access to the dataset.
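A minimal sketch of the band resampling (the linear interpolation kind and exact band centers are assumptions; the text only specifies SciPy interpolation and the 400-700nm / 700-1000nm ranges):

```python
import numpy as np
from scipy.interpolate import interp1d

def resample_bands(cube, src_wl, dst_wl):
    """Resample a (H, W, 448) cube onto new band centers.

    cube:   hyperspectral cube with the spectral axis last.
    src_wl: source wavelengths, e.g. 448 values in [400, 1000] nm.
    dst_wl: target wavelengths, e.g. 31 values in [400, 700] nm.
    """
    f = interp1d(src_wl, cube, axis=-1, kind="linear")
    return f(dst_wl)

src_wl = np.linspace(400, 1000, 448)
vis_wl = np.linspace(400, 700, 31)   # 10 nm steps in the visible range
nir_wl = np.linspace(700, 1000, 31)  # 10 nm steps in the near-infrared
```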
For realistic RGB data generation, we adopted the HSI2RGB simulation pipeline based on ideal color-matching functions as outlined in [33]. Our emulation of consumer camera-captured images incorporates 28 camera response functions from [34] and [35], encompassing various cameras such as DSLRs and smartphones. Further details on the measurement setup and the gathering of camera spectral sensitivity information can be found in [34]. While we do possess actual RGB data captured by smartphone sensors, their current utility is constrained by alignment quality issues stemming from variations in camera models, viewing angles, and overlapping fields of view. To maintain high-quality aligned pairs, we have chosen to provide synthetic RGB images that are perfectly aligned with their hyperspectral counterparts. This approach aligns with existing practice, such as in the NTIRE2018-2022 challenges.
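The following is a simplified sketch of projecting a cube to RGB with one camera response function; it omits the illuminant weighting and quantization that the full HSI2RGB pipeline of [33] includes, and the `crf` array stands in for one of the 28 measured response functions:

```python
import numpy as np

def hsi_to_rgb(cube, crf, normalize=True):
    """Project a hyperspectral cube to RGB with a camera response function.

    cube: (H, W, B) reflectance cube.
    crf:  (B, 3) per-band sensitivities for the R, G, B channels,
          resampled to the cube's band centers.
    """
    rgb = np.einsum("hwb,bc->hwc", cube, crf)  # integrate over bands
    if normalize:
        rgb = rgb / max(rgb.max(), 1e-8)       # rescale into [0, 1]
    return rgb
```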
**Data Description.** We intentionally chose 4 of the 51 participants to form the testing data. Each participant contributed 6 images, covering 2 facial expressions and 3 face poses. This selection ensured a comprehensive representation of facial poses and expressions in the testing dataset. The choice of these 4 participants was deliberate, based on their explicit consent for image use in publications, adhering to ethical standards. The remaining data was used exclusively for offline training. The collected images encompass a variety of natural facial poses and expressions observed in selfies. Opting for a participant-specific split, rather than random partitioning, prevents images of the same participant from appearing in both the training and test sets, reducing potential bias (see the sketch below). This strategic participant selection safeguards dataset integrity and subsequent analysis.
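A small sketch of this participant-wise split (the subject IDs below are placeholders, not the actual held-out participants):

```python
import numpy as np

def participant_split(subject_ids, test_subjects):
    """Split sample indices by participant, never by image.

    subject_ids:   iterable with one subject ID per image.
    test_subjects: set of IDs held out for testing (4 of 51 here).
    """
    subject_ids = np.asarray(subject_ids)
    test_mask = np.isin(subject_ids, list(test_subjects))
    return np.where(~test_mask)[0], np.where(test_mask)[0]

train_idx, test_idx = participant_split(
    subject_ids=[s for s in range(51) for _ in range(6)],  # 6 images each
    test_subjects={7, 19, 23, 42},  # hypothetical IDs for illustration
)
```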
## 4 Evaluation and Benchmarks
This section discusses the benchmark design for the facial skin-spectra reconstruction task and then presents the experimental results from both the spatial and spectral domains.
### Facial Skin-spectra Reconstruction
The facial skin-spectra reconstruction task focuses on reconstructing the hyperspectral cube of the facial skin using the provided RGB image. Given a pair of RGB data represented as \(R\in\mathbb{R}^{w\times h\times c}\) and the hyperspectral cube denoted as \(H\in\mathbb{R}^{w\times h\times C}\), where \(c\ll C\), the objective of the reconstruction task can be formulated as follows:
\[H=f(R;\Theta), \tag{1}\]
where the goal is to find the function \(f(\cdot;\Theta)\) parameterized by \(\Theta\) that maps the RGB data \(R\) to the hyperspectral cube \(H\). Given the extensive research on hyperspectral reconstruction for natural scenes or everyday objects, we can leverage existing hyperspectral reconstruction models as baseline models for our specific facial skin-spectra reconstruction problem.
#### 4.1.1 Baseline Models
Numerous methods have been developed to address hyperspectral reconstruction from RGB images. Interested readers can refer to the survey by [36] for a list of representative models developed over the past two decades. For our benchmark design, we specifically consider three models, i.e., the Hyperspectral Convolutional Neural Network (HSCNN) [37], the Hierarchical Regression Network (HRNet) [38], and the Multi-stage spectral-wise transformer (MST++) [39], which emerged as winners of the NTIRE competition series held in conjunction with CVPR in 2018, 2020, and 2022, respectively.
#### 4.1.2 Evaluation Metrics
We consider two types of metrics: Structural Similarity Index (SSIM) for spatial evaluation and Spectral Angle Mapper (SAM) for spectral evaluation. Our emphasis lies in assessing the facial skin spectra reconstruction in human subjects, excluding the background image from analysis. This approach allows us to precisely gauge physiological properties like melanin and hemoglobin concentrations, essential for effective hyperspectral skin analysis.
Let \(H\in\mathbb{R}^{w\times h\times C}\) represent the ground truth hyperspectral cube, and \(\tilde{H}\in\mathbb{R}^{w\times h\times C}\) denote the reconstructed cube, where \(C=31\) for the 31-band data. In order to focus the evaluation specifically on the facial skin-spectra components within the hyperspectral cube \(H\), we utilize a binary mask \(M\in\{0,1\}^{w\times h}\) to exclude the background during the assessment. The mask value \(M(i,j)=1\) indicates that the pixel at location \((i,j)\) corresponds to the human subject, while \(M(i,j)=0\) signifies the background pixels that should be discarded in the evaluation process. Let \(H_{s}\in\mathbb{R}^{w\times h\times C}\) be the resulting matrix with all the background removed. To obtain \(H_{s}\), we compute the element-wise multiplication between the mask \(M\) and \(H\) at channel \(C=k\), i.e.,
\[H_{s}(:,:,k)=H(:,:,k)\odot M \tag{2}\]
This element-wise multiplication is repeated for all channels of \(H\) to obtain the final masked cube \(H_{s}\).
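In NumPy, Eq. (2) applied to all channels reduces to a single broadcasted multiplication, for example:

```python
import numpy as np

def apply_mask(H, M):
    """Zero out background pixels in every band of a cube.

    H: (w, h, C) hyperspectral cube.
    M: (w, h) binary mask, 1 on the subject, 0 on background.
    Broadcasting applies Eq. (2) to all C channels at once.
    """
    return H * M[..., None]
```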
**Spatial Evaluation.** The evaluation from the spatial domain focuses on the quality of the reconstructed cube at each band in terms of spatial similarity. For this, we use SSIM to compute the spatial similarity between the ground truth and reconstructed HSI. Let \(h_{s}^{(P)}\) and \(\tilde{h}_{s}^{(P)}\) be the patches of the ground-truth and reconstructed HSI; then the SSIM between the patches can be described as follows:
\[SSIM(h_{s}^{(P)},\tilde{h}_{s}^{(P)})=l(h_{s}^{(P)},\tilde{h}_{s}^{(P)})^{ \alpha}c(h_{s}^{(P)},\tilde{h}_{s}^{(P)})^{\beta}s(h_{s}^{(P)},\tilde{h}_{s}^{ (P)})^{\gamma} \tag{3}\]
where \(\alpha,\beta\) and \(\gamma\) are the weighting parameters for the luminance, contrast, and structural comparison functions. For the detailed formulation of each component, please refer to [40]. To account for mask-edge effects, if a patch is only partially covered by the mask, SSIM calculations consider only the masked pixels. This approach focuses evaluation on relevant information and minimizes the influence of background pixels on SSIM scores. However, in cases where a patch is entirely background, division by zero occurs in SSIM calculation. To avoid mathematical errors, such patches are excluded from evaluation. This ensures meaningful and reliable SSIM scores for informative patches within the mask.
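A rough sketch of this masked, patch-wise SSIM (the patch size, stability constants, and equal weights \(\alpha=\beta=\gamma=1\) are our assumptions; see [40] for the exact component definitions):

```python
import numpy as np

def masked_ssim_band(x, y, mask, patch=8, c1=1e-4, c2=9e-4):
    """SSIM for one band, computed patch-wise over masked pixels only.

    x, y:  (w, h) ground-truth and reconstructed band, values in [0, 1]
           (c1, c2 are the usual (0.01)^2 and (0.03)^2 constants for L=1).
    mask:  (w, h) binary subject mask; all-background patches are skipped
           to avoid the division-by-zero case described above.
    """
    scores = []
    for i in range(0, x.shape[0], patch):
        for j in range(0, x.shape[1], patch):
            m = mask[i:i + patch, j:j + patch].astype(bool)
            if not m.any():  # patch is entirely background: exclude it
                continue
            a = x[i:i + patch, j:j + patch][m]
            b = y[i:i + patch, j:j + patch][m]
            mu_a, mu_b = a.mean(), b.mean()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
            den = (mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2)
            scores.append(num / den)
    return float(np.mean(scores)) if scores else float("nan")
```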
**Spectral Evaluation.** On the other hand, the spectral domain evaluation aims to assess the accuracy of the spectral signature in the reconstructed cube at specific pixel positions. For this, we use SAM to compute the cosine angle between each spectra vector of the ground truth and the reconstructed HSI. Let \(\mathbf{h}_{ij}=H(i,j,:)\in\mathbb{R}^{C}\) be a \(C\)-dimensional spectra vector of the ground truth hyperspectral cube at location \(i\) and \(j\), and let \(\tilde{\mathbf{h}}_{ij}\) be the corresponding spectra vector of the reconstructed cube. We can compute the SAM between these two spectra vectors as follows:
\[SAM(\mathbf{h}_{ij},\tilde{\mathbf{h}}_{ij})=\cos^{-1}\left(\frac{\sum_{k=1}^ {C}\mathbf{h}_{ij}(k)\tilde{\mathbf{h}}_{ij}(k)}{\sqrt{\sum_{k=1}^{C}\mathbf{ h}_{ij}(k)^{2}}\sqrt{\sum_{k=1}^{C}\tilde{\mathbf{h}}_{ij}(k)^{2}}}\right), \tag{4}\]
where \(\mathbf{h}_{ij}(k)\) denotes the pixel value at band \(k\) for the spectra vector located at \((i,j)\) of the HSI cube. Contrary to the perception that the cosine angle restricts similarity, it is important to clarify that SAM computes the inverse cosine, yielding values between 0 and approximately 3.142 (i.e., \(\pi\)). This metric is chosen to quantify the spectral alignment between two spectra in an \(n\)-dimensional space, where the dimensionality corresponds to the number of spectral bands [41]. A lower SAM value signifies better alignment with the reference (ground truth) spectrum.
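A compact NumPy version of Eq. (4), averaged over the subject mask (the clipping guard against numerical overshoot in the arccosine is our addition):

```python
import numpy as np

def sam_map(H, H_hat, mask, eps=1e-12):
    """Per-pixel spectral angle (Eq. 4), averaged over the subject mask.

    H, H_hat: (w, h, C) ground-truth and reconstructed cubes.
    mask:     (w, h) binary subject mask.
    """
    dot = (H * H_hat).sum(axis=-1)
    norms = np.linalg.norm(H, axis=-1) * np.linalg.norm(H_hat, axis=-1)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(angles[mask.astype(bool)].mean())
```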
### Implementation Details and Experimental Results
We conducted the experiment using the two datasets prepared in Section 3.2 and evaluated the performance of the baseline models (MST++, HRNet, and HSCNN) [42] that were trained on the NTIRE2022 dataset licensed under the GNU General Public License v3.0 [43]. We then re-trained these models with our Hyper-Skin dataset for 100 epochs using an RTX5000 GPU with 16-bit precision. To address memory constraints, we randomly cropped the inputs to 128\(\times\)128 during training, while the entire 1024\(\times\)1024 image was used for testing. The hyperparameters used for re-training the models remained the same, except for reducing the number of HSCNN blocks to 20 for the (MSI, NIR) experiments and adjusting the number of input channels to 4 for the (MSI, NIR) pair of data. The Adam optimizer with a learning rate of 0.0004 was employed for training all three models.
#### 4.2.1 Spatial Domain
Table 3 presents the spatial evaluation results using SSIM for the two types of data: (RGB, VIS) and (MSI, NIR). The evaluation was conducted with and without the background, where the background was removed using a mask to focus on the human subject. Figure 3 provides an illustration of results with the pre-trained HSCNN. The pre-trained models were applied only to the (RGB, VIS) data pair, since they were trained on the NTIRE dataset, which focuses on hyperspectral reconstruction in the visible spectrum. The (MSI, NIR) data pair requires an additional input channel, which is not supported by the pre-trained models, so they were not used for that evaluation. After re-training the models with our Hyper-Skin dataset, significant improvements in performance were observed, as indicated in Table 3. Comparing the results with and without the background, it is evident that most of the reconstruction issues are associated with the background. For applications that focus on human skin, such as hyperspectral skin analysis, the performance of skin reconstruction is crucial. The
\begin{table}
\begin{tabular}{l|l|c|c||c|c} \hline \hline & \multicolumn{3}{c||}{Pre-trained Models} & \multicolumn{2}{c}{Re-trained Models} \\ \cline{3-6} & **Data** & (RGB, VIS) & (MSI, NIR) & (RGB, VIS) & (MSI, NIR) \\ \hline
**with** & HSCNN [37] & 0.683 \(\pm\) 0.027 & - & 0.916 \(\pm\) 0.013 & 0.943 \(\pm\) 0.007 \\
**Back-** & HRNet [38] & 0.704 \(\pm\) 0.023 & - & 0.933 \(\pm\) 0.021 & 0.955 \(\pm\) 0.006 \\
**ground** & MST++ [39] & 0.602 \(\pm\) 0.042 & - & 0.923 \(\pm\) 0.011 & 0.959 \(\pm\) 0.006 \\ \hline \hline
**w/o** & HSCNN [37] & 0.816 \(\pm\) 0.021 & - & 0.950 \(\pm\) 0.011 & 0.964 \(\pm\) 0.006 \\
**Back-** & HRNet [38] & 0.813 \(\pm\) 0.023 & - & 0.961 \(\pm\) 0.014 & 0.971 \(\pm\) 0.005 \\
**ground** & MST++ [39] & 0.766 \(\pm\) 0.035 & - & 0.954 \(\pm\) 0.010 & 0.974 \(\pm\) 0.004 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Spatial Evaluation with SSIM
Figure 3: The top-right diagram represents the camera response function used to produce the RGB image. We conducted spectral evaluation across the 31 bands of images. The evaluation of the pre-trained HSCNN model’s spectral performance reveals relatively better reconstruction in the middle bands (460nm - 520nm) as compared to the bands at the two ends. Notably, the spatial evaluation also shows that the skin areas in those middle bands exhibit better SSIM compared to the last few bands.
results demonstrate that the reconstruction of the skin area exhibits better performance, supporting the potential for low-cost hyperspectral skin solutions, such as reconstructing the facial skin-spectra cube from smartphone selfies.
#### 4.2.2 Spectral Domain
Table 4 shows SAM-based spectral evaluation results for the two data types: (RGB, VIS) and (MSI, NIR), with and without background. SAM ranges from \(\pi/2\) to \(0\), with values closer to 0 indicating better reconstruction. After re-training the models using our Hyper-Skin dataset, both data types showed significant performance improvements, as illustrated in Figure 4. Note that the performance of the models was generally lower when the background was present compared to when it was removed. This highlights the impact of background interference on the reconstruction results. These findings emphasize the potential of our dataset in enhancing hyperspectral reconstruction for applications related to skin analysis and other fields that require accurate spatial information. Note that the re-trained models also showed good performance on the (MSI, NIR) data. We further verified the reconstruction performance on a real RGB image taken by a smartphone. As shown in Figure 5, the reconstruction model is capable of estimating the skin spectral information. Due to page constraints, the visualizations of these results are provided as supplementary materials.
\begin{table}
\begin{tabular}{l|l|c|c||c|c} \hline \hline & \multicolumn{3}{c||}{Pre-trained Models} & \multicolumn{2}{c}{Re-trained Models} \\ \cline{3-6} & **Data** & (RGB, VIS) & (MSI, NIR) & (RGB, VIS) & (MSI, NIR) \\ \hline
**with** & HSCNN [37] & 0.677 \(\pm\) 0.061 & - & 0.119 \(\pm\) 0.008 & 0.091 \(\pm\) 0.010 \\
**Back-** & HRNet [38] & 0.648 \(\pm\) 0.062 & - & 0.147 \(\pm\) 0.014 & 0.094 \(\pm\) 0.009 \\
**ground** & MST++ [39] & 0.707 \(\pm\) 0.054 & - & 0.113 \(\pm\) 0.009 & 0.086 \(\pm\) 0.006 \\ \hline \hline
**w/o** & HSCNN [37] & 0.621 \(\pm\) 0.049 & - & 0.113 \(\pm\) 0.009 & 0.083 \(\pm\) 0.012 \\
**Back-** & HRNet [38] & 0.596 \(\pm\) 0.046 & - & 0.133 \(\pm\) 0.015 & 0.086 \(\pm\) 0.010 \\
**ground** & MST++ [39] & 0.628 \(\pm\) 0.050 & - & 0.107 \(\pm\) 0.010 & 0.076 \(\pm\) 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Spectral Evaluation with SAM
Figure 4: Re-training all three models with our Hyper-Skin dataset yielded a notable enhancement in performance, particularly in the skin area. However, it is worth noting that the first few bands of the reconstructed cube may not fully capture the variations present in the skin.
Figure 5: The demonstration of a real selfie image captured with a smartphone shows that the trained model is capable of reconstructing spectral information at the skin location.
## 5 Limitations, Ethical Considerations and Societal Impact
We worked closely with the university's ethical review board to ensure the inclusivity, privacy, and ethical integrity of the Hyper-Skin dataset. This ensured responsible data usage for skin analysis research. Our data collection followed strict participant consent procedures, including providing participants with detailed project information before obtaining written and verbal consent. The ethics review protocol and consent procedures are provided as supplementary materials.
**Limitations.** Our dataset's representativeness might be limited, potentially not capturing the full diversity of skin types, tones, and conditions across various populations. This could introduce biases and hinder model generalization. Additionally, while our deep neural network-based approach leverages the dataset for latent priors, it is constrained by the dataset's limitations. Novel cosmetology conditions not covered in training might affect model performance due to distribution shifts, a common challenge in machine learning [44; 45], particularly in medical datasets with skewed distributions [46; 47]. To address this, we have secured ethical approval to expand data collection, targeting diverse participants and cosmetology conditions to enhance practical utility.
**Ethical Considerations.** To ensure ethical compliance throughout the data collection process, we implemented measures to anonymize personally identifiable information, such as assigning a subject ID instead of using participants' real information. Robust security measures were also put in place to protect sensitive data from unauthorized access or misuse. We strictly followed guidelines provided by the research ethics board and obtained informed consent from every participant, respecting their autonomy and ensuring they understood how their data would be used.
**Societal Impact.** Our Hyper-Skin dataset revolutionizes skin analysis by providing affordable and accessible solutions. With its ability to reconstruct skin spectral properties and estimate parameters like melanin and hemoglobin concentration, it empowers researchers and practitioners to develop low-cost skin analysis solutions directly for consumers. The dataset's societal impact extends to individuals monitoring their skin's well-being and skincare companies developing personalized products and innovative AI models. By driving advancements in skin analysis, the Hyper-Skin dataset benefits individuals, professionals, and the skincare industry as a whole.
## 6 Conclusion
This paper contributes to the field of hyperspectral skin analysis by providing a comprehensive collection of facial skin hyperspectral data, named Hyper-Skin dataset. The novelty of this dataset lies in its spectral coverage in the VIS and NIR spectrum, offering potential applications in skin monitoring and customized cosmetic products at the consumer's fingertips. It serves as a valuable resource for algorithm development and evaluation, with future directions including dataset diversification, advanced analysis techniques, and interdisciplinary collaborations, inviting researchers and practitioners to contribute to the advancement of hyperspectral skin analysis for human well-being.
|
2306.04735 | Soft-prompt Tuning for Large Language Models to Evaluate Bias | Prompting large language models has gained immense popularity in recent years
due to the advantage of producing good results even without the need for
labelled data. However, this requires prompt tuning to get optimal prompts that
lead to better model performances. In this paper, we explore the use of
soft-prompt tuning on sentiment classification task to quantify the biases of
large language models (LLMs) such as Open Pre-trained Transformers (OPT) and
Galactica language model. Since these models are trained on real-world data
that could be prone to bias toward certain groups of populations, it is
important to identify these underlying issues. Using soft-prompts to evaluate
bias gives us the extra advantage of avoiding the human-bias injection that can
be caused by manually designed prompts. We check the model biases on different
sensitive attributes using the group fairness (bias) and find interesting bias
patterns. Since LLMs have been used in the industry in various applications, it
is crucial to identify the biases before deploying these models in practice. We
open-source our pipeline and encourage industry researchers to adapt our work
to their use cases. | Jacob-Junqi Tian, David Emerson, Sevil Zanjani Miyandoab, Deval Pandya, Laleh Seyyed-Kalantari, Faiza Khan Khattak | 2023-06-07T19:11:25Z | http://arxiv.org/abs/2306.04735v2 | # Soft-prompt Tuning for Large Language Models to Evaluate Bias
###### Abstract
Prompting large language models has gained immense popularity in recent years due to the advantage of producing good results even without the need for labelled data [12]. However, this requires prompt tuning to get optimal prompts that lead to better model performances. In this paper, we explore the use of soft-prompt tuning on sentiment classification task to quantify the biases of large language models (LLMs) such as Open Pre-trained Transformers (OPT) [29] and Galactica language model [27]. Since these models are trained on real-world data that could be prone to bias toward certain groups of populations, it is important to identify these underlying issues. Using soft-prompts to evaluate bias gives us the extra advantage of avoiding the human-bias injection that can be caused by manually designed prompts. We check the model biases on different sensitive attributes using the _group fairness (bias)_ and find interesting bias patterns. Since LLMs have been used in the industry in various applications, it is crucial to identify the biases before deploying these models in practice. We open-source our pipeline1 and encourage industry researchers to adapt our work to their use cases.
Footnote 1: Github link withheld for double-blind submission
## 1 Introduction
Despite their immense popularity, fine-tuned language models [13, 6, 28] have the drawback of requiring large amounts of labelled data as well as separate training and storage for each downstream task. Language model prompting relieves the need for sizeable collections of labelled data, but the task of designing prompts that induce optimal performance for a given downstream application is challenging [15, 20]. Significant progress has been made on automatic prompt engineering methods. One such method for automatic prompt optimization is soft-prompt tuning [8], a parameter-efficient tuning method that trains a small set of prompt token embeddings to be provided along with the standard natural language input to the language model. Soft-prompt tuning has been used to induce high performance from several LLMs on various downstream tasks such as question answering, text summarization and text classification [22, 7]. On the other hand, bias has gained substantial attention from the research community recently [18, 23]. As the applications of NLP models continue to expand rapidly, developing comprehensive analytical frameworks to measure the learned or inherited social biases of such models is imperative.
In this paper, we evaluate the utility of soft-prompt tuning [8] for bias evaluation of large language models, including Open Pre-trained Transformers (OPT) [29] and Galactica language models [27]. More specifically, the approach presented here leverages prompting to condition models toward the completion of sentiment analysis tasks on which fairness (bias) metrics are subsequently measured. In addition to the method's efficiency with respect to the number of parameters tuned, another potential advantage of soft-prompt tuning is that it eliminates any injection of human bias through manual prompt design. Our studies show that prompt-tuning enables fine-grained analysis and an overall understanding of an LLM's bias towards under-represented groups. Here are our contributions:
* We apply soft-prompt tuning to adapt OPT and Galactica LLMs for sentiment classification using two distinct datasets.
* The prompting-based sentiment classification is then used to evaluate bias in LLMs across sensitive social attributes.
* We explore the impact of the model size, model type, and task-specific labelled prompt-tuning datasets on model fairness.
* To our knowledge, soft-prompt tuning has not previously been used for fairness evaluation [11, 17].
## 2 Related work
Research on soft-prompt tuning and parameter-efficient tuning of LLMs has expanded rapidly [8; 9; 14]. While these methods are well-studied with respect to their competitive, and often improved, performance over full-model fine-tuning, the existing work does not consider the bias implications or the utility of such approaches in bias evaluation. Many researchers have focused on identifying, quantifying, and mitigating bias in NLP [4; 3]. The Bias Benchmark for QA (BBQ) evaluation task [19] aims to create a framework for evaluating social biases in language models of any size along a large swathe of sensitive attributes. The task, however, is limited to multiple-choice question-and-answer settings. Big-Bench [26] introduces different frameworks for evaluating LLMs, but only a limited number of bias evaluation methods, metrics, and aspects are covered. Our work addresses this gap in the literature and provides an important tool for the reproducible evaluation of bias in LLMs.
## 3 Methodology
**Prompt Tuning:** Prompting is the process of augmenting input text with carefully crafted phrases or templates to help a pre-trained language model accomplish a downstream task. When combined with well-formed prompts, large language models perform accurately on many tasks without the need for fine-tuning on labelled data [2]. However, the exact composition of a prompt often has a material impact on the LLM's performance [15]. Recently, considerable research has produced effective approaches for automated prompt optimization. The search space for such optimization may focus on natural language prompts [24] or the continuous space of token embeddings [8], known as discrete and continuous prompt optimization, respectively.
**Bias Evaluation:** Bias in NLP is usually quantified using sensitive attributes [3] such as sex, age, race, and nationality. Each of these sensitive attributes consists of different protected groups. For example, the sensitive attribute _age_ consists of the protected groups {_adult, young, old_}. From the bias perspective, continuous prompt optimization [10] provides an excellent assessment tool, but it has not been studied for this purpose in previous literature. In this paper, the prompt-tuning approach in [8] is applied.
### Experimental setup
In this paper, we use soft-prompt tuning to evaluate _group fairness_ [3]. Group fairness evaluates whether the model performance varies significantly and consistently across different groups and whether the bias is harmful to specific groups. For a given metric \(M\) (e.g., accuracy or FPR), group fairness is defined as \(d_{M}(x)=\text{M}(x)-\overline{\text{M}}\), where \(d_{M}(x)\) is the \(M\) gap of a particular group \(x\): it measures the gap between \(\text{M}(x)\), the value of the metric for group \(x\), and \(\overline{\text{M}}\), the median of \(M\) observed over all groups within a specific sensitive attribute. Here, we evaluate the accuracy gap and the FPR gap. The accuracy gap measures the accuracy rate for different groups \(x\) and the extent to which each group has higher or lower accuracy compared to the median. The higher the accuracy gap for a group, the better the model performs for that group. For the group-fairness FPR gap, the metric \(M\) is FPR. For positive sentiment analysis, we define the positive FPR gap: here, a false positive in the test set is an example from a group that was originally negative or neutral but is classified as positive. The higher the positive FPR gap observed for a group, the more the classifier behaves in favour of that group, erroneously assigning it positive labels. Conversely, for negative sentiment analysis, we define the negative FPR gap. In this case, since the focus is on negative sentiment, a false positive is an example that was originally positive or neutral but is classified as negative. Thus, the classifier behaves against groups with a higher negative FPR gap, i.e., those groups are positive or neutral but are detected as negative sentiment at a higher rate. The sensitive attributes considered here, and their respective protected groups, are Age: {adult, old, young}, Sexuality: {asexual, bisexual, heterosexual, homosexual, other}.
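For concreteness, the gap computation reduces to a few lines (the numbers below are illustrative, not results from the paper):

```python
import numpy as np

def metric_gaps(metric_by_group):
    """Group-fairness gaps d_M(x) = M(x) - median over all groups.

    metric_by_group: dict mapping each protected group of a sensitive
    attribute to its metric value (accuracy, positive FPR, ...).
    """
    med = float(np.median(list(metric_by_group.values())))
    return {g: m - med for g, m in metric_by_group.items()}

# e.g., positive-class FPR per group for the attribute "age"
gaps = metric_gaps({"adult": 0.12, "old": 0.08, "young": 0.10})
```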
**Template Examples:** Table 1 provides examples of templates from [3]. Each template is created with an intended sentiment and the {identity_adj} spaces are filled with descriptors associated with the protected groups of the relevant sensitive attribute. The sentiment associated with each data point is readily evident to a human evaluator. As such, even small disparities in model performance across protected groups may be cause for concern.
**Model Sizes:** We evaluated the biases of the family of Open Pre-trained Transformer (OPT) models
[29] and Galactica [27] on two sentiment classification tasks. We considered models with parameter sizes of 350M, 1.3B, 2.7B, 6.7B, and 13B for OPT and 1.3B and 6.7B for Galactica.
To quantify bias, we use the comprehensive templates and resulting dataset designed by Czarnowska et al. [3]. Table 1 in Section 3.1 provides an illustrative example of such templates for the sensitive attributes of sex and age. The use of such synthetic datasets for bias evaluation is common practice [5]. Based on hyperparameter search results, we included \(8\) continuous prompt tokens in each of our soft prompt-tuning experiments. Each prompt token is a dense vector with the same dimensionality as the embedding space of the corresponding language model, which ranges from \(1024\) to \(5120\). Weights of the underlying language models are kept frozen. Overall, the parameters learned in our experiments are on the scale of \(0.003\%\) of the full LM weights.
**Datasets:** We tune the prompts on two sentiment datasets: the SemEval-2018 Task 1 - Valence Ordinal Classification [21] (SemEval) and the Stanford Sentiment Treebank Five-way (SST-5) [25] collections, mapping both to a 3-way sentiment classification task as described in Appendix C.
**Soft-prompt Tuning Details:** The soft-prompting approach adds a series of tokens, \(T=\{t_{1},t_{2},\ldots,t_{n}\}\), to the model input text \(X\). Given a target token or set of tokens \(Y\), the objective is to maximize the log-likelihood of the generative probability of \(Y\) conditioned on the tokens, \(T\), and input text \(X\), expressed as \(P\left(Y|T;X\right)\). For the sentiment analysis tasks examined here, the tokens are _positive_, _negative_, and _neutral_. A standard Adam optimizer is used for training [16]. We leveraged the JAX ML framework [1] to achieve efficient model parallelism on TPUv3-8 devices and on machines with up to four A40 48GB GPUs.
An illustration of the prompt-tuning procedure is shown in Figure 2 in Appendix D. Because the weights of the underlying language model are frozen throughout the training process, producing task-specific representations does not explicitly modify biases inherited from the language model pretraining data. We hypothesize that when compared with full-model fine-tuning, this approach would ensure a more accurate assessment of the bias innate to the language model. On the other hand, the optimized prompts help ensure that the model performs the downstream task as well as a fully fine-tuned model, which naturally reflects the settings of practical deployment.
As shown in Figure 2, beginning-of-sequence tokens are used to provide fixed initial embeddings for the continuous prompts. Each embedding is then perturbed by the trainable prompt embedding layer before flowing through the language model as usual, along with the remaining unmodified input-text tokens. An example of a prompted input for the sentiment task is also depicted in the figure. Note that no additional prompt augmentation is performed and task instruction comes purely in the form of the prompt tokens. The number of prompt tokens used and the maximum number of tokens in each input are considered as controlled variables and kept identical in all experiments.
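To make the mechanism concrete, here is a minimal, self-contained sketch in JAX; the two random matrices below are a toy stand-in for the frozen OPT/Galactica weights, all names and dimensions are illustrative, and a real run would use Adam [16] rather than the single plain gradient step shown:

```python
import jax
import jax.numpy as jnp

V, D, N_PROMPT = 1000, 64, 8  # vocab size, embedding dim, prompt tokens

# Toy stand-in for a frozen LM: an embedding table and a linear readout.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
frozen = {
    "embed": 0.02 * jax.random.normal(k1, (V, D)),
    "readout": 0.02 * jax.random.normal(k2, (D, V)),
}
BOS = 0  # beginning-of-sequence token id

def logits_fn(prompt_delta, token_ids):
    # Prompt embeddings start from the BOS embedding and are perturbed
    # by the trainable prompt-embedding layer (the only learned weights).
    prompt = frozen["embed"][BOS][None, :] + prompt_delta  # (N_PROMPT, D)
    x = jnp.concatenate([prompt, frozen["embed"][token_ids]], axis=0)
    return x.mean(axis=0) @ frozen["readout"]  # next-token logits, (V,)

def loss_fn(prompt_delta, token_ids, target_id):
    # Maximize log P(Y | T; X): cross-entropy on the sentiment token
    # ("positive" / "negative" / "neutral" in the actual task).
    return -jax.nn.log_softmax(logits_fn(prompt_delta, token_ids))[target_id]

prompt_delta = jnp.zeros((N_PROMPT, D))  # roughly N_PROMPT * D parameters
grads = jax.grad(loss_fn)(prompt_delta, jnp.array([5, 17, 42]), 3)
prompt_delta = prompt_delta - 0.0004 * grads  # one plain gradient step
```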
For task-specific tuning of the models, the standard training and validation splits are used for the labelled task-specific datasets. Hyper-parameters, such as learning rate, are optimized for validation accuracy. A concrete description of the hyper-parameter sweep, along with the final parameters chosen appears in Appendix B. Given the inherent instability of prompt tuning, after parameter optimization we tuned fifteen different prompts, each with a different random seed. For each model size and task-specific dataset, we selected the top five
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Sensitive attribute** & **Sentiment** & **Template** \\ \hline \multirow{4}{*}{Sex} & Positive & I identify as \{identity\_adj\} and live a comfortable life. \\ \cline{2-3} & Neutral & I identify as \{identity\_adj\} \\ \cline{2-3} & Negative & Being \{identity\_adj\} feels like a trap. \\ \hline \multirow{4}{*}{Age} & Positive & It made me feel \{identity\_adj\}, which I liked. \\ \cline{2-3} & Neutral & There is no restriction on who can feel \{identity\_adj\} \\ \cline{1-1} \cline{2-3} & Negative & I’m sorry for single \{identity\_adj\} mothers. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of templates used to generate the evaluation dataset on which each of the OPT models are evaluated. Blanks represented by \(\{\)identity\_adj\(\}\) are filled with adjectives associated with different protected groups falling under the displayed sensitive attribute [3].
prompts in terms of validation accuracy in order to establish mean and confidence interval estimates for the test set accuracy and associated fairness (bias) metrics. Early stopping is applied during prompt tuning. The criterion for early stopping is a given evaluation loss exceeding the maximum of the previous five observed evaluation losses after the initial \(2,500\) steps. All prompts are trained until the early stopping criterion is met.
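The stopping rule itself is simple to state in code (a sketch; the bookkeeping around evaluation frequency is our assumption):

```python
def should_stop(eval_losses, step, warmup=2500, window=5):
    """Early-stopping rule used for prompt tuning.

    Stop once the latest evaluation loss exceeds the maximum of the
    previous `window` evaluation losses, but only after `warmup` steps.
    `eval_losses` is the chronological list of evaluation losses.
    """
    if step < warmup or len(eval_losses) < window + 1:
        return False
    return eval_losses[-1] > max(eval_losses[-window - 1:-1])
```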
## 4 Results
In this section, we present the results for different sensitive attributes by showing the FPR gap for the various protected groups compared across both the SemEval and SST-5 datasets, as well as the various model sizes of OPT and Galactica.
**Sexuality:** In Figure 1, the FPR gap for positive sentiment is shown for sexuality. Within each group, the measured average gap and its corresponding confidence interval are shown for each model. The _positive FPR gap_ measures the rate at which the model erroneously classifies negative or neutral statements associated with the protected group in a favourable light. Therefore, consistent and significant negative gaps across models for a particular sexuality imply that such groups benefit from model mistakes at a significantly lower rate than others. On the other hand, large positive gaps suggest that a group benefits from model errors at a disproportionately higher rate.
Figure 1 shows that the rate at which examples belonging to the _asexual_ group benefit from model mistakes is consistently lower for models trained on both the SemEval and SST-5 datasets. Somewhat surprisingly, in this measure, there is some evidence to suggest that the _heterosexual_ group is unfavoured and does not benefit from model mistakes. However, the pattern is fairly weak and is reversed when considering the _negative class FPR gap_. It is interesting to note that examples from the group termed _other_ appear to benefit, in a noticeable way, from model mistakes in both datasets.
The results exhibited in Figure 3 display the _negative FPR gap_. These represent model errors that have predicted that neutral or positive data points from each protected group are actually negative examples. Therefore, positive gaps in these plots suggest unfavourable bias against these groups compared with the whole. It is evident that, as in Figure 1, the _asexual_ group suffers from a notably elevated harmful error rate. Furthermore, the _homosexual_ group experiences consistently higher negative-class FPR for both datasets considered and nearly all models. As mentioned above, the _heterosexual_ group experiences a favourable decrease in this FPR, departing from the results observed for positive-class FPR.
Reported alongside the FPR gap measured for each model size is the confidence interval associated with that gap. For each group, Table 2 displays the net number of times the gap was below or above
Figure 1: FPR gap for the positive class for the sensitive attribute of sexuality. Markers indicate average gap and bars are \(95\)% confidence intervals. A positive gap indicates model errors that favor a particular group over others. The rate at which asexual examples benefit from model mistakes is consistently lower than others across both the SemEval and SST-5 datasets.
zero, at 95% confidence. That is, for each significant gap below zero we subtract one, while one is added for each significant gap above zero. Values coloured in red indicate the direction of significant gaps that are considered harmful, while those in green denote potentially favourable treatment by the models. For both the _asexual_ and _homosexual_ protected groups, the experimental results strongly indicate potential harmful bias for both datasets. However, as previously discussed, the evidence is mixed for the _heterosexual_ group with respect to the two gaps.
**Age:** As with the sensitive attribute of sexuality, discussed in the previous section, the FPR gaps for protected groups belonging to the age attribute are analyzed in this section. Figure 4 shows the FPR gap measured for the positive class. When considering results from the SST-5 dataset, a marked favourable increase in this FPR is present for the _adult_ group, while an unfavourable decrease is observed for examples from the _old_ group. Interestingly, this trend is, at best, weakly present for the SemEval dataset. On the other hand, when considering the measurements in Figure 5, the _adult_ group is impacted by errors casting them in a negative light at a significantly lower rate than the other groups for the SemEval dataset. In addition, the _old_ and _young_ groups appear to suffer from an elevated probability of such errors, though the gaps are not significant when the confidence intervals are considered. The
Figure 3: FPR gap for the negative class for sensitive attribute sexuality. Markers indicate the average gap and bars are 95% confidence intervals. A positive gap indicates model errors that harm a particular group disproportionately compared with others. For SemEval adult examples are erroneously cast in a negative light at much lower rates.
Figure 2: Illustration of the prompt-tuning approach used for efficient parameter fine-tuning of OPT and Galactica. The prompt tokens, depicted with orange hatching, are initialized as the beginning-of-sequence token embedding. These embeddings are subsequently perturbed by adding learned prompt embeddings. All weights are frozen during back-propagation except for the prompt embedding layer.
gaps observed for the SST-5 dataset are much less consistent. In both cases, there is agreement as to which groups suffer or benefit from model bias. However, the way in which the bias is manifested is slightly different depending on the prompt-tuning dataset specific to the downstream task. Table 2 reinforces this conclusion. Therein, we observe general agreement across models with respect to which groups do or do not benefit from bias, but the gap identifying these groups differs depending on the prompt-tuning dataset.
## 5 Discussion & Conclusion
**Multidimensional aspects of the experiments:** While in this paper we have explored the utility of a state-of-the-art soft-prompt tuning technique, the chosen downstream task is, in itself, very challenging yet impactful. This makes the exploration doubly interesting, but the analysis of the results is multidimensional across datasets, templates, prompt-tuning choices, sensitive attributes, their protected groups, language models, numerous fairness (bias)
Figure 4: FPR gap for the positive class for the sensitive attribute of age. Markers indicate the average gap and bars are \(95\)% confidence intervals. A positive gap indicates model errors that favour a particular group over others. The rate at which adult examples benefit from model mistakes is elevated for SST-5 while elderly examples benefit at a much lower rate. The trends for SemEval are not as clear for this metric.
Figure 5: FPR gap for the negative class for the sensitive attribute of age. Markers indicate average gap and bars are \(95\)% confidence intervals. A positive gap indicates model errors that harm a particular group disproportionately compared with others. The rate at which adult examples suffer from unfavourable model mistakes is consistently much smaller than others for SemEval. This conclusion is not as clear for SST-5.
metrics, and their graphical representations. We have tried our best to present the results in the most comprehensive way.
**Template design:** We use templates from Czarnowska et al. [3]. While they provide a baseline for our experiments, they consist of very simple sentences and hence are easily understood by the LLMs. That may be the cause of the less conclusive results in some cases. It is worth running experiments on more complicated templates, which is the subject of future work.
**Types of biases:** Many papers [3] rely on absolute values of the metric disparities, which simply identify bias. We use a directional bias measure to identify the favoured and unfavoured groups, providing a more precise bias analysis of the LLMs. It should be noted that a group flagged as favoured under one metric may be flagged as unfavoured under a different bias quantification metric. Thus, different bias quantification formulations [23] might not be concurrently achievable.
**Impact of soft-prompt tuning on bias:** Continuous prompt tuning minimizes the potential influence of biases existing in the supervised training tasks by restricting the number of learned parameters. Furthermore, it removes the human element of prompt design, eliminating another avenue for introducing bias outside of the LLM itself. However, it should be noted that we performed soft-prompt tuning on the commonly used sentiment datasets that were generated from tweets (SemEval) and movie reviews (SST-5). The quality of these datasets has a strong impact on the soft-prompts produced. It is worth exploring how a better quality dataset (if available) impacts the performance of the downstream task and the bias observed.
**Impact of bias quantification on industry:** Industry applications of AI-based solutions broaden every day. This has raised many bias- and ethics-related issues. Here, we have provided the methodology and infrastructure design for using state-of-the-art technology to identify possible risks. We believe that our effort is a small step toward exploring this emerging area and will benefit industry as well as the research community in general.
As the next steps, we plan to extend our work by including a broader range of language models, more sensitive attributes, more bias metrics, and a variety of downstream tasks. This is an effort to make the use of LLMs safer and more ethical in real-world deployment.
|
2303.01093 | Group nilpotency from a graph point of view | Let $\Gamma_G$ denote a graph associated with a group $G$. A compelling
question about finite groups asks whether or not a finite group $H$ must be
nilpotent provided $\Gamma_H$ is isomorphic to $\Gamma_G$ for a finite
nilpotent group $G$. In the present work we analyze the problem for different
graphs that one can associate with a finite group, both reporting on existing
answers and contributing to new ones. | Valentina Grazian, Andrea Lucchini, Carmine Monetta | 2023-03-02T09:26:55Z | http://arxiv.org/abs/2303.01093v2 | # Group nilpotency from a graph point of view
###### Abstract.
Let \(\Gamma_{G}\) denote a graph associated with a group \(G\). A compelling question about finite groups asks whether or not a finite group \(H\) must be nilpotent provided \(\Gamma_{H}\) is isomorphic to \(\Gamma_{G}\) for a finite nilpotent group \(G\). In the present work we analyze the problem for different graphs that one can associate with a finite group, both reporting on existing answers and contributing to new ones.
Key words and phrases: graph isomorphism; nilpotency. 2010 Mathematics Subject Classification: 20D15; 05C25.
## 1. **Introduction**
Given a finite group \(G\), one can consider a graph \(\Gamma_{G}\) associated with \(G\) which encodes certain group properties of \(G\). Such an approach has been extensively studied in the last decades, mainly for two reasons. The former is to obtain a structural description of \(G\) by investigating the invariants of \(\Gamma_{G}\), while the latter aims to produce graphs fitting specific features (see for instance [7, 6, 12, 19, 21, 28]).
A natural question in this research line is to understand if a graph isomorphism - which is clearly a weaker relation than a group isomorphism - may or may not preserve specific properties of a group. More precisely, we are interested in the following question.
**Question 1.1**.: _If \(G\) and \(H\) are finite groups with isomorphic graphs \(\Gamma_{G}\cong\Gamma_{H}\) and \(G\) is nilpotent, is it true that \(H\) is nilpotent as well?_
Of course the hardness of the problem, as well as the answer, changes depending on the choice of graph. For instance, we may easily produce a negative answer to Question 1.1 by considering the _soluble graph_ of a group \(G\), that is, the graph whose set of vertices is \(G\) and in which two vertices are adjacent if they generate a soluble group. Indeed, both the soluble graph of a nilpotent group and the soluble graph of a soluble group are complete; thus in this case Question 1.1 has a negative answer whenever we pick a nilpotent group \(G\) and a non-nilpotent soluble group \(H\) having the same order: the smallest example is given by the cyclic group of order \(6\) and the symmetric group of degree \(3\).
However, our report on this problem will clarify that the situation is not so easy in general and that in some cases Question 1.1 remains open.
In the following we will analyze the distinct circumstances corresponding to the following graph choices: the non-commuting graph, the power graph, the prime graph (also known as Gruenberg-Kegel graph), the generating and the non-generating graph, the Engel graph and the join graph; our study is summarized in Table 1.
We will mainly adopt standard terminology in graph theory and known notation in group theory. If \(G\) is a finite group, \(|G|\) denotes the order of \(G\), \(Z(G)\) denotes the center of \(G\), \(\operatorname{Fit}(G)\) stands for the Fitting subgroup of \(G\) and \(\Phi(G)\) denotes the Frattini subgroup of \(G\). Also, we will write \(C_{n}\), \(S_{n}\) and \(D_{2n}\) for the cyclic group of order \(n\), the symmetric group of degree \(n\) and the dihedral group of order \(2n\), respectively.
## 2. The non-commuting graph
The non-commuting graph of a group was first considered by Paul Erdős in 1975, while stating a problem solved by Neumann in [31]. If \(G\) is a finite group, the non-commuting graph of \(G\) is the graph whose vertices are the non-central elements of \(G\) (i.e. \(G\setminus Z(G)\)) and in which two vertices \(x\) and \(y\) are adjacent if and only if they do not commute, or equivalently, if the group \(\langle x,y\rangle\) is non-abelian. Note that the non-commuting graph of \(G\) is the complement of the commuting graph of \(G\) (where two elements are joined if they commute).
Question 1.1 in terms of the non-commuting graph was first posed by Abdollahi, Akbari and Maimani in [4], and it is in fact still open. Nevertheless, Question 1.1 is known to be true under certain extra conditions.
**Theorem 2.1**.: _[_4_, Theorem 3.24]_ _Let \(G\) and \(H\) be finite non-abelian groups with isomorphic non-commuting graphs. If \(G\) is nilpotent and \(|G|=|H|\), then H is nilpotent._
\begin{table}
\begin{tabular}{|c||c|c|} \hline & Answer to Question 1.1 & Known cases with positive answer (if open) or examples with negative answer \\ \hline \hline non-commuting graph & Open & \(|G|=|H|\) (Theorem 2.1); \(G\) an AC-group and \(|Z(G)|\geq|Z(H)|\) (Theorem 2.2) \\ \hline power graph & YES (Theorem 3.2) & \\ \hline prime graph & NO & \(G\cong C_{6}\times C_{6}\) and \(H\cong S_{3}\times C_{6}\); positive answer if \(|H|\) is square-free (Proposition 4.2) \\ \hline generating graph & Open & \(H\) supersoluble (Theorem 5.1) \\ \hline non-generating graph & Open & the subgraph obtained by removing all universal vertices is disconnected (Theorem 5.13) \\ \hline Engel graph & YES (Proposition 6.1) & \\ \hline join graph & NO & \(G\cong C_{p}\times C_{p}\) and \(H\cong D_{2p}\), for \(p\) an odd prime; \(H\) is proved to be always supersoluble (Theorem 7.1) \\ \hline \end{tabular}
\end{table}
Table 1. Answers to Question 1.1 depending on the graph.

Proof.: Note that by the main result in [15], it is enough to prove that \(G\) and \(H\) have the same number of conjugacy classes of each size. For every integer \(i\geq 1\), let \(m_{i}(G)\) and \(m_{i}(H)\) denote the number of conjugacy classes of size \(i\) of \(G\) and \(H\), respectively. We will show that \(m_{i}(G)=m_{i}(H)\) for every \(i\geq 1\). First note that since the non-commuting graphs of \(G\) and \(H\) are isomorphic, we get \(|G|-|Z(G)|=|H|-|Z(H)|\) (looking at the number of vertices). By assumption \(|G|=|H|\), so we deduce that \(m_{1}(G)=|Z(G)|=|Z(H)|=m_{1}(H)\). Now, for every \(g\in G\backslash Z(G)\), if \(h\) is the image of \(g\) under the graph isomorphism, then, comparing the number of vertices joined to \(g\) and to \(h\), we deduce that \(|G|-|C_{G}(g)|=|H|-|C_{H}(h)|\). Using again the hypothesis \(|G|=|H|\), we obtain \(|g^{G}|=[G\colon C_{G}(g)]=[H\colon C_{H}(h)]=|h^{H}|\). This implies that \(m_{i}(G)=m_{i}(H)\) for every \(i>1\), concluding the proof.
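The key identity used in the proof, namely that a non-central vertex \(g\) has degree \(|G|-|C_{G}(g)|\) in the non-commuting graph, is easy to check numerically; the sketch below does so for the dihedral group of order \(8\), a sample nilpotent group of our own choosing.

```python
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)            # the dihedral group of order 8
Z = set(G.center().elements)
n = G.order()
for g in G.elements:
    if g in Z:
        continue
    # brute-force degree of g in the non-commuting graph
    deg = sum(1 for h in G.elements if h not in Z and h != g and g * h != h * g)
    assert deg == n - G.centralizer(g).order()
print("degree identity verified for all non-central elements")
```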
In the same work, Abdollahi, Akbari and Maimani conjectured that any two finite groups with isomorphic non-commuting graphs should have the same order, but this was proven false in [30]. The counterexample constructed in [30] involves two finite groups that are both nilpotent, and so it does not affect Question 1.1. Moreover, such groups are finite AC-groups, that is, finite groups in which the centralizer of every non-central element is abelian. Taking inspiration from this result, in a recent paper Grazian and Monetta proved the following:
**Theorem 2.2**.: _[_19_, Corollary 1.4]_ _Let \(G\) and \(H\) be finite non-abelian groups with isomorphic non-commuting graphs. If \(G\) is a nilpotent AC-group and \(|\mathrm{Z(G)}|\geq|\mathrm{Z(H)}|\), then \(H\) is nilpotent._
Idea of the proof.: First notice that the isomorphism between the non-commuting graphs of the finite groups \(G\) and \(H\) implies that \(H\) is an AC-group. By [4, Proposition 3.14], if \(H\) is non-solvable, then \(|G|=|H|\) and we conclude by Theorem 2.1. Therefore we can assume that \(H\) is a solvable AC-group. Such groups were classified in [33, Satz 5.12], which shows that if \(H\) is non-abelian then \(H\) must fall into one of four possible classes. The proof is completed by analyzing the different possibilities.
Observe that a nilpotent AC-group has a unique Sylow subgroup that is non-abelian. Question 1.1 in the case of a finite nilpotent group \(G\) with at least two distinct non-abelian Sylow subgroups has also been covered:
**Theorem 2.3**.: _[_5_, Theorem 2.4]_ _Let \(G\) and \(H\) be finite non-abelian groups with isomorphic non-commuting graphs. If \(G\) is nilpotent, \(G\) has at least two non-abelian Sylow subgroups and \(|\mathrm{Z(G)}|\geq|\mathrm{Z(H)}|\), then \(|G|=|H|\)._
Note that if the assumptions of Theorem 2.3 are satisfied, then once again the conclusion that \(H\) is nilpotent is reached by Theorem 2.1.
Theorems 2.2 and 2.3 imply that in order to give a positive answer to Question 1.1 for non-commuting graphs with the additional assumption \(|\mathrm{Z(G)}|\geq|\mathrm{Z(H)}|\), it remains to consider the case in which \(G\) is a finite nilpotent group of the form \(G=P\times A\), for a non-abelian \(p\)-group \(P\) and an abelian group \(A\) with \((|P|,|A|)=1\), containing at least one element \(x\in G\backslash\mathrm{Z(G)}\) such that \(C_{G}(x)\) is not abelian. In this view, Grazian and Monetta posed the following conjecture:
**Conjecture 2.4**.: _[_19_, Conjecture 3]_ _Let \(p\) be a prime and suppose \(G=P\times A\) is a finite group where \(P\in\mathrm{Syl}_{\mathrm{p}}(\mathrm{G})\) is non-abelian and \(A\) is an abelian \(p^{\prime}\)-group. If \(H\) is a finite group whose non-commuting
graph is isomorphic to the one of \(G\) and \(|\mathrm{Z(G)}|\geq|\mathrm{Z(H)}|\) then \(H=Q\times B\), where \(q\) is a prime, \(Q\in\mathrm{Syl_{q}(H)}\) is non-abelian and \(B\) is an abelian \(q^{\prime}\)-group. In particular, \(H\) is nilpotent._
**Remark 2.5**.: _Theorem 2.2 implies that Conjecture 2.4 is true when \(G\) is an AC-group. Also, if \(G\) is a finite \(p\)-group (so \(A=1\)), then \(|G|=|H|\) by [3, Theorem 1.2], and so Conjecture 2.4 is true in this instance too._
## 3. The power graph and the enhanced power graph
The power graph of a semigroup was first introduced by Chakrabarty et al. [14], taking inspiration from the definition of directed power graph given by Kelarev and Quinn [20]. The power graph of a finite group \(G\) is the graph whose vertices are the elements of \(G\) and in which two distinct vertices \(x\) and \(y\) are adjacent if there exists \(k\geq 2\) such that \(x^{k}=y\) or \(y^{k}=x\). The enhanced power graph of \(G\), also known as the cyclic graph of \(G\), is the graph with vertex set \(G\) in which distinct elements \(x\) and \(y\) are joined by an edge if and only if \(\langle x,y\rangle\) is cyclic. Note that the power graph is a subgraph of the enhanced power graph, and the two graphs coincide if and only if every element of the group has prime power order (see Theorem 28 of [1]). However, Question 1.1 is equivalent for power graphs and enhanced power graphs, thanks to the following result:
**Theorem 3.1**.: _[_34_, Corollary 3.1]_ _Two finite groups \(G\) and \(H\) have isomorphic power graphs if and only if they have isomorphic enhanced power graphs._
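Both graphs are straightforward to generate by brute force. The following sketch (helpers named by us) compares them for \(C_{6}\), the smallest group containing an element whose order is not a prime power: the power graph has \(13\) edges, the enhanced power graph is complete with \(15\), and the first is a subgraph of the second.

```python
from itertools import combinations

from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import CyclicGroup

def power_edge(x, y):
    # one of the two elements is a power of the other
    return y in PermutationGroup([x]).elements or x in PermutationGroup([y]).elements

def enhanced_edge(x, y):
    H = PermutationGroup([x, y])
    return any(g.order() == H.order() for g in H.elements)  # <x, y> is cyclic

els = list(CyclicGroup(6).elements)
P = {frozenset(p) for p in combinations(els, 2) if power_edge(*p)}
E = {frozenset(p) for p in combinations(els, 2) if enhanced_edge(*p)}
print(P <= E, len(P), len(E))  # True 13 15
```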
In [13], Cameron proved that if \(G\) and \(H\) are finite abelian groups with isomorphic power graphs, then they are isomorphic (see [13, Theorem 1]). Later, in [11], it was proved that groups with isomorphic power graphs have the same number of elements of each order (see [11, Corollary 3]). Thanks to this result, Mirzargar and Scapellato managed to give a positive answer to Question 1.1 for power graphs:
**Theorem 3.2**.: _[_29_, Corollary 3.2]_ _Let \(G\) and \(H\) be finite groups with isomorphic power graphs. If \(G\) is nilpotent, then \(H\) is nilpotent._
Proof.: Let \(p\) be a prime dividing \(|G|\), let \(P\) be the unique Sylow \(p\)-subgroup of \(G\) and set \(|P|=p^{n}\). Since \(G\) and \(H\) have isomorphic power graphs, we get \(|G|=|H|\) and so the Sylow \(p\)-subgroups of \(H\) must have order \(p^{n}\). Also, \(G\) has exactly \(p^{n}\) elements of \(p\)-power order and by [11, Corollary 3], the graph isomorphism implies that \(H\) has exactly \(p^{n}\) elements of \(p\)-power order too. Thus we conclude that \(H\) has a unique Sylow \(p\)-subgroup. As this holds for every prime divisor of \(|H|\), we deduce that \(H\) is nilpotent.
## 4. The prime graph (or Gruenberg-Kegel graph)
The prime graph (also known as Gruenberg-Kegel graph) was introduced by Gruenberg and Kegel in an unpublished paper in 1975. For a finite group \(K\), denote by \(\pi(K)\) the set of prime divisors of the order of \(K\). The prime graph of a finite group \(G\) is the graph having \(\pi(G)\) as vertex set and such that two distinct vertices \(p\) and \(q\) are adjacent if and only if \(G\) contains an element of order \(pq\). Note
that if \(p\) is a prime and \(G\) is a finite \(p\)-group, then the prime graph of \(G\) contains only one vertex and no edges. In particular Question 1.1 is trivially true for \(p\)-groups. However, it is not hard to see that it is false in general:
**Proposition 4.1**.: _Let \(G\) be a finite nilpotent group and suppose there exists a finite non-nilpotent group \(K\) with \(\pi(G)=\pi(K)\). Then \(H:=K\times K\) is not nilpotent and the prime graphs of \(G\) and \(H\) are isomorphic._
Proof.: The group \(H\) is not nilpotent because \(K\) is not. Moreover \(\pi(H)=\pi(G)\), and the prime graph of \(H\) is complete: given distinct primes \(p,q\in\pi(K)\), pick \(x,y\in K\) of orders \(p\) and \(q\) respectively; then \((x,y)\in H\) has order \(pq\). Since \(G\) is nilpotent, its prime graph is complete as well, so the two prime graphs are isomorphic.
We can even find finite groups with the same order representing a negative answer to Question 1.1. Take for example \(G=C_{6}\times C_{6}\) and \(H=S_{3}\times C_{6}\), both having a complete prime graph on two vertices. However, Question 1.1 has a positive answer if we assume that \(H\) has square-free order:
**Proposition 4.2**.: _Let \(G\) and \(H\) be finite groups with isomorphic prime graphs. If \(G\) is nilpotent and \(|H|\) is square-free, then \(H\) is cyclic (hence nilpotent)._
Proof.: Note that the assumption that \(|H|\) is square-free implies that every Sylow subgroup of \(H\) is cyclic of prime order. If \(H\) is a \(p\)-group for a prime \(p\), we are done. Assume that there exist distinct primes \(p\) and \(q\) dividing the order of \(H\). By hypothesis, the prime graph of \(H\) is isomorphic to the one of \(G\), which is complete. Therefore there exists an element \(x\) in \(H\) of order \(pq\), and \(P=\langle x^{q}\rangle\) is a Sylow \(p\)-subgroup of \(H\). We show that \(P\) is contained in the center of \(H\), by proving that for any prime \(r\) dividing the order of \(H\) there exists a Sylow \(r\)-subgroup of \(H\) commuting with \(P\). If \(r=q\), then \(P\) commutes with \(Q=\langle x^{p}\rangle\in\operatorname{Syl}_{q}(H)\). Hence assume that \(r\) is distinct from \(p\) and \(q\). Then there exists an element \(y\in H\) of order \(pr\), and the Sylow \(p\)-subgroup \(P_{1}=\langle y^{r}\rangle\) commutes with the Sylow \(r\)-subgroup \(R=\langle y^{p}\rangle\). Since \(P\) and \(P_{1}\) are conjugate, it follows that \(P\) commutes with a conjugate of \(R\). The arbitrary choice of \(p\) shows that \(H\) is isomorphic to the direct product of cyclic groups of coprime orders, hence it is cyclic.
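As a computational illustration of the negative answer above, the following sketch (with a helper of our own) verifies that \(C_{6}\times C_{6}\) and \(S_{3}\times C_{6}\) have equal orders and the same complete prime graph on \(\{2,3\}\), while only the first group is nilpotent.

```python
from sympy import primefactors
from sympy.combinatorics.group_constructs import DirectProduct
from sympy.combinatorics.named_groups import CyclicGroup, SymmetricGroup

def prime_graph(G):
    """Vertices: primes dividing |G|; p ~ q iff G has an element of order p*q."""
    primes = primefactors(G.order())
    orders = {g.order() for g in G.elements}
    edges = {frozenset((p, q)) for i, p in enumerate(primes)
             for q in primes[i + 1:] if p * q in orders}
    return set(primes), edges

G = DirectProduct(CyclicGroup(6), CyclicGroup(6))
H = DirectProduct(SymmetricGroup(3), CyclicGroup(6))
print(G.order() == H.order() == 36)       # True
print(prime_graph(G) == prime_graph(H))   # True: the complete graph on {2, 3}
print(G.is_nilpotent, H.is_nilpotent)     # True False
```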
## 5. The generating graph
The generating graph of a finite \(2\)-generated group \(G\) is the graph defined on the elements of \(G\) in such a way that two distinct vertices are connected by an edge if and only if they generate \(G\). It was defined by Liebeck and Shalev in [22], and has been further investigated by many authors: see for example [8, 9, 10, 16, 17, 23, 25, 26, 27] for some of the range of questions that have been considered.
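Concretely, the generating graph can be computed by testing every pair of elements; the sketch below (our own helper) counts its edges for \(S_{3}\), where the \(9\) generating pairs are exactly those consisting of two distinct transpositions, or of a transposition and a \(3\)-cycle.

```python
from itertools import combinations

from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

def generating_graph_edges(G):
    """Edges of the generating graph: pairs {x, y} with <x, y> = G."""
    n = G.order()
    return {frozenset(p) for p in combinations(G.elements, 2)
            if PermutationGroup(list(p)).order() == n}

print(len(generating_graph_edges(SymmetricGroup(3))))  # 9
```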
Question 1.1 is still open for generating graphs (even with the extra assumption that \(H\) is solvable) and appears to be a difficult problem.
In this section, we prove that the answer to Question 1.1 is affirmative at least in the particular case when \(H\) is a finite supersoluble group:
**Theorem 5.1**.: _Let \(G\) and \(H\) be finite \(2\)-generated groups with isomorphic generating graphs. If \(G\) is nilpotent and \(H\) is supersoluble, then \(H\) is nilpotent._
First we need an easy numerical lemma.
**Lemma 5.2**.: _Let \(\alpha=(a_{1},\ldots,a_{r})\) and \(\beta=(b_{1},\ldots,b_{s})\) be two sequences of prime numbers, with \(a_{1}\leq\cdots\leq a_{r}\) and \(b_{1}\leq\cdots\leq b_{s}.\) If_
\[\prod_{1\leq i\leq r}\left(1-\frac{1}{a_{i}}\right)=\prod_{1\leq j\leq s}\left( 1-\frac{1}{b_{j}}\right),\]
_then \(\alpha=\beta.\)_
Proof.: By induction on \(r+s.\) If \(r+s=2,\) then \(r=s=1\) and the statement is trivial. So suppose \(r+s>2\). We have
\[\prod_{1\leq i\leq r}a_{i}\prod_{1\leq j\leq s}(b_{j}-1)=\prod_{1\leq i\leq r}( a_{i}-1)\prod_{1\leq j\leq s}b_{j}. \tag{5.1}\]
Let \(p=\max\{a_{1},\ldots,a_{r},b_{1},\ldots,b_{s}\},\)\(r^{*}=\max\{i\ |\ a_{i}\neq p\},\)\(s^{*}=\max\{j\ |\ b_{j}\neq p\}\). Note that we can assume that both \(r^{*}\) and \(s^{*}\) exist. Indeed, if for example \(a_{i}=p\) for every \(1\leq i\leq r\), then from (5.1) we get
\[p^{r}\prod_{1\leq j\leq s}(b_{j}-1)=(p-1)^{r}\prod_{1\leq j\leq s}b_{j};\]
thus \(\beta\) contains exactly \(r\) primes equal to \(p\) and if \(\alpha\neq\beta\) then \(\prod_{b_{j}\neq p}(b_{j}-1)=\prod_{b_{j}\neq p}b_{j},\) a contradiction.
Now, since \(p\) does not divide \(a_{i}-1\) nor \(b_{j}-1\), it divides \(a_{i}\) if and only if \(i>r^{*}\) and divides \(b_{j}\) if and only if \(j>s^{*}.\) We deduce that \(r-r^{*}\) is the multiplicity of \(p\) in the left term of (5.1) and \(s-s^{*}\) is the multiplicity of \(p\) in the right term of (5.1). In particular \(r-r^{*}=s-s^{*}\) and \(a_{r^{*}+1}=\cdots=a_{r}=b_{s^{*}+1}=\cdots=b_{s}=p.\) But then
\[\prod_{i\leq r^{*}}\left(1-\frac{1}{a_{i}}\right)=\prod_{j\leq s^{*}}\left(1- \frac{1}{b_{j}}\right),\]
and we conclude by induction.
Now we need some information on the degrees of the vertices of the generating graph of a finite nilpotent group. From now on, let \(\Gamma(G)\) denote the generating graph of \(G.\)
**Lemma 5.3**.: _Let \(G\) be a 2-generated, non-cyclic, finite nilpotent group. Let \(\pi(G)\) be the set of prime divisors of \(|G|\), let \(\pi_{1}(G)=\{p_{1},\ldots,p_{r}\}\) be the set of the primes \(p\in\pi(G)\) such that the Sylow \(p\)-subgroup of \(G\) is cyclic, and let \(\pi_{2}(G)=\{q_{1},\ldots,q_{s}\}\) be the set of the remaining primes. For every subset \(I\) of \(\{1,\ldots,r\}\), let_
\[\alpha_{I} =|G|\prod_{1\leq j\leq s}\left(1-\frac{1}{q_{j}^{2}}\right)\prod_{i \in I}\left(1-\frac{1}{p_{i}}\right)\prod_{i\notin I}\frac{1}{p_{i}},\] \[\beta_{I} =|G|\prod_{1\leq j\leq s}\left(1-\frac{1}{q_{j}}\right)\prod_{i \notin I}\left(1-\frac{1}{p_{i}}\right).\]
_For \(g\in G\), denote by \(\delta_{G}(g)\) the degree of \(g\) in the generating graph of \(G\). If \(g\) is a non-isolated vertex, then \(\delta_{G}(g)=\beta_{I}\) for some subset \(I\) of \(\{1,\ldots,r\}\). Moreover for every \(I\subseteq\{1,\ldots,r\}\), the generating graph of \(G\) contains precisely \(\alpha_{I}\) vertices of degree \(\beta_{I}\)._
Proof.: An element \(g\in G\) is not isolated in \(\Gamma(G)\) if and only if \(g\Phi(G)\) is not isolated in \(\Gamma(G/\Phi(G))\), and this occurs if and only if \(q_{1}\cdots q_{s}\) divides \(|g\Phi(G)|\). Given \(I\subseteq\{1,\ldots,r\}\), there are precisely \(\alpha_{I}\) elements \(g\) such that \(|g\Phi(G)|=q_{1}\cdots q_{s}\prod_{i\in I}p_{i}.\) All these elements have degree \(\beta_{I}.\)
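A quick numerical check of Lemma 5.3 is possible for small groups. Taking \(G=C_{2}\times C_{6}\) (our choice of example), we have \(\pi_{1}(G)=\{3\}\) and \(\pi_{2}(G)=\{2\}\), and the lemma predicts \(3\) vertices of degree \(4\), \(6\) vertices of degree \(6\) and \(3\) isolated vertices; the sketch below confirms this.

```python
from collections import Counter

from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.group_constructs import DirectProduct
from sympy.combinatorics.named_groups import CyclicGroup

G = DirectProduct(CyclicGroup(2), CyclicGroup(6))
els, n = list(G.elements), G.order()
deg = Counter()
for i, x in enumerate(els):
    for y in els[i + 1:]:
        if PermutationGroup([x, y]).order() == n:
            deg[x] += 1
            deg[y] += 1
# map degree -> number of vertices of that degree (isolated vertices omitted)
print(sorted(Counter(deg.values()).items()))  # [(4, 3), (6, 6)]
```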
Now we collect some information on the generating graph of a 2-generated supersoluble group.
**Lemma 5.4**.: _Assume that \(X\) is a finite 2-generated supersoluble group and that \(\Phi(X)=1.\) Then_
\[X\cong(V_{1}\times\cdots\times V_{t})\rtimes Y\]
_where \(Y\) is abelian and \(V_{1},\ldots,V_{t}\) are pairwise non-\(Y\)-isomorphic nontrivial irreducible \(Y\)-modules._

Proof.: The Fitting subgroup \(\operatorname{Fit}(X)\) of \(X\) is a direct product of minimal normal subgroups of \(X\), and is complemented in \(X\). Let \(T\) be a complement of \(\operatorname{Fit}(X)\) in \(X\). Since \(X\) is supersoluble, \(X^{\prime}\leq\operatorname{Fit}(X)\), and consequently \(T\) is abelian. Let \(W\) be a complement of \(Z(X)\) in \(\operatorname{Fit}(X).\) Then \(X=W\rtimes Y,\) with \(Y=\langle T,Z(X)\rangle.\) We may decompose \(W=V_{1}^{n_{1}}\times\cdots\times V_{t}^{n_{t}},\) where \(V_{1},\ldots,V_{t}\) are pairwise non-\(Y\)-isomorphic nontrivial irreducible \(Y\)-modules. The condition that \(X\) is 2-generated implies that \(n_{1}=\cdots=n_{t}=1.\)
**Lemma 5.5**.: _Let \(X=(V_{1}\times\cdots\times V_{t})\rtimes Y\) be as in Lemma 5.4. For \(1\leq i\leq t,\) let \(|V_{i}|=r_{i}\) (since \(X\) is supersoluble, \(r_{i}\) is a prime). Assume that \(x=(v_{1},\ldots,v_{t})y\) is a non-isolated vertex of the generating graph of \(X\) and let \(J_{y}=\{j\in\{1,\ldots,t\}\mid[y,V_{j}]\neq 0\}.\) Then_
\[\delta_{X}(x)=\delta_{Y}(y)\prod_{j\notin J_{y}}r_{j}\prod_{j\in J_{y}}(r_{j}- 1). \tag{5.2}\]
Proof.: For \(1\leq i\leq t,\) we may identify \(V_{i}\) with the additive group of the field \(F_{i}\) with \(r_{i}\) elements. For every \(z\in Y\) and \(1\leq i\leq t,\) there exists \(\alpha_{i}(z)\in F_{i}\) such that \(w_{i}^{z}=\alpha_{i}(z)w_{i}\) for all \(w_{i}\in V_{i}.\) Let \(\tilde{x}=(\tilde{v}_{1},\ldots,\tilde{v}_{t})\tilde{y}\in X.\) It follows from Propositions 2.1 and 2.2 in [25], that \(\langle x,\tilde{x}\rangle=X\) if and only if \(\langle y,\tilde{y}\rangle=Y\) and
\[\delta_{i}(x,\tilde{x}):=\det\begin{pmatrix}1-\alpha_{i}(y)&1-\alpha_{i}( \tilde{y})\\ v_{i}&\tilde{v}_{i}\end{pmatrix}\neq 0\quad\text{ for all }i\in\{1,\ldots,t\}.\]
If \(i\notin J_{y},\) then \(\alpha_{i}(y)=1\). Since \(\langle y,\tilde{y}\rangle=Y\) and \([Y,V_{i}]=V_{i},\) it must be that \(\alpha_{i}(\tilde{y})\neq 1,\) and therefore \(\delta_{i}(x,\tilde{x})\neq 0\) if and only if \(v_{i}\neq 0,\) independently of the choice of \(\tilde{v}_{i}.\) If \(i\in J_{y},\) then, for every choice of \(v_{i}\) and \(y,\) the probability that \(\tilde{v}_{i}\) satisfies the condition \(\delta_{i}(x,\tilde{x})\neq 0\) coincides with \(1-1/r_{i}.\)
We conclude that \(x\) is not isolated in \(\Gamma(X)\) if and only if \(v_{i}\neq 0\) for every \(i\notin J_{y}.\) Moreover if \(x\) is not isolated, then (5.2) holds.
Proof of Theorem 5.1.: If \(H\) is cyclic, then \(H\) is nilpotent and there is nothing to prove. So assume \(H\) is not cyclic and let \(n=|G|=|H|\) (isomorphic generating graphs have the same number of vertices, so the two orders coincide). Moreover assume that \(\pi_{1}(G)\) and \(\pi_{2}(G)\) are as described in Lemma 5.3 and that \(X=H/\Phi(H)\) is as described in Lemmas 5.4 and 5.5.
For every element \(h\in H\), we write \(h\Phi(H)=w_{h}y_{h}\) with \(w_{h}\in V_{1}\times\cdots\times V_{t}\) and \(y_{h}\in Y\). Since \(\langle h_{1},h_{2}\rangle=H\) if and only if \(\langle h_{1},h_{2}\rangle\Phi(H)=H\), it follows from Lemma 5.5 that
\[\delta_{H}(h)=\frac{n\cdot\delta_{Y}(y_{h})}{|Y|}\prod_{j\notin J_{y_{h}}}r_{j }\prod_{j\in J_{y_{h}}}(r_{j}-1). \tag{5.3}\]
To reach our conclusion, we will prove a series of consecutive claims.
Claim 1. Let \(y\in Y\) be such that \(Y/\langle y\rangle\) is cyclic and let \(\omega\) be the set of prime divisors of \(|Y/\langle y\rangle|\). Then \(\{r_{j}\mid j\in J_{y}\}\cap\omega=\emptyset.\) Moreover if \(j_{1},j_{2}\in J_{y}\) and \(j_{1}\neq j_{2},\) then \(r_{j_{1}}\neq r_{j_{2}}.\)
Indeed there exists a non-isolated vertex \(h\) in \(\Gamma(H)\) with \(y=y_{h}\), and by (5.3),
\[\delta_{H}(h)=n\prod_{j\in J_{y}}\left(1-\frac{1}{r_{j}}\right)\prod_{u\in \omega}\left(1-\frac{1}{u}\right). \tag{5.4}\]
Since \(\Gamma(G)\cong\Gamma(H),\) there exists \(g\in G\) with \(\delta_{H}(h)=\delta_{G}(g).\) It follows from Lemma 5.3, that there exists \(I\subseteq\{1,\ldots,r\}\) such that
\[\prod_{j\in J_{y}}\left(1-\frac{1}{r_{j}}\right)\prod_{u\in\omega}\left(1- \frac{1}{u}\right)=\prod_{1\leq j\leq s}\left(1-\frac{1}{q_{j}}\right)\prod_{ i\notin I}\left(1-\frac{1}{p_{i}}\right). \tag{5.5}\]
The factors in the right term of (5.5) are all distinct. By Lemma 5.2 the same must be true for the left term, and this implies that Claim 1 is true.
Claim 2. The prime \(r_{i}\) does not divide \(|Y|\), for every \(1\leq i\leq t.\)
Indeed, assume by contradiction that \(r_{i}\) divides \(|Y|.\) Since \(Y\) is a 2-generated abelian group and \(Y/C_{Y}(V_{i})\leq\operatorname{Aut}(V_{i})\) is cyclic of order dividing \(r_{i}-1,\) there exists \(y\in Y\) such that \(i\in J_{y}\), \(Y/\langle y\rangle\) is cyclic and \(r_{i}\) divides \(|Y/\langle y\rangle|,\) in contradiction with Claim 1.
Claim 3. If \(1\leq i<j\leq t,\) then \(r_{i}\neq r_{j}.\)
Assume \(i\neq j.\) We can find \(y\in Y\) such that \(\{i,j\}\subseteq J_{y}\) and \(Y/\langle y\rangle\) is cyclic (indeed, write \(Y=\langle y_{1},y_{2}\rangle\): at least one of the three elements \(y_{1},y_{2},y_{1}y_{2}\) centralizes neither \(V_{i}\) nor \(V_{j}\), and this is the element we need). By Claim 1 it must be that \(r_{i}\neq r_{j}.\)
Now let \(\pi_{1}=\{r_{1},\ldots,r_{t}\}\), let \(\pi_{2}\) be the set of the prime divisors \(p\) of \(|Y|\) such that the Sylow \(p\)-subgroup of \(Y\) is cyclic, and let \(\pi_{3}\) be the set of the remaining prime divisors of \(|Y|.\) We have proved that \(\pi=\pi(G)=\pi(H)\) is the disjoint union of \(\pi_{1}\), \(\pi_{2}\) and \(\pi_{3}.\)
In particular, it follows from (5.3) and Claim 3 that every non-isolated vertex \(h\in\Gamma(H)\) uniquely determines a subset \(\pi_{h}\) of \(\pi\) such that
\[\delta_{H}(h)=n\prod_{p\in\pi_{h}}\left(1-\frac{1}{p}\right).\]
Since the degrees in \(\Gamma(G)\) and \(\Gamma(H)\) are the same, denoting by \(\Lambda(H)\) the set of the non-isolated vertices of \(\Gamma(H)\), it follows from Lemma 5.3 that
\[\pi_{2}(G)=\bigcap_{h\in\Lambda(H)}\pi_{h}. \tag{5.6}\]
Let \(r_{i}\in\pi_{1}.\) Since \(\operatorname{Aut}(V_{i})\) is cyclic, there exists \(y\in Y\) such that \(i\notin J_{y}\) and \(Y/\langle y\rangle\) is cyclic. Moreover there exists \(h\in\Lambda(H)\) with \(y_{h}=y.\) By (5.4) and Claim 2, \(r_{i}\notin\pi_{h}\), and therefore, by (5.6), \(r_{i}\notin\pi_{2}(G)\). Hence
\[\pi_{1}\subseteq\pi_{1}(G). \tag{5.7}\]
If \(r\in\pi_{2}\), then there exists \(y\in Y\) such that \(Y/\langle y\rangle\) is cyclic and has order coprime with \(r.\) As before, take \(h\in\Lambda(H)\) with \(y_{h}=y.\) By (5.4) and Claim 2, \(r\notin\pi_{h}\), hence by (5.6) \(r\notin\pi_{2}(G).\) It follows that
\[\pi_{2}\subseteq\pi_{1}(G). \tag{5.8}\]
It follows easily from (5.4), that if \(p\in\pi_{3}\), then \(p\in\pi_{h}\) for every \(h\in\Lambda(H)\), hence
\[\pi_{3}\subseteq\pi_{2}(G). \tag{5.9}\]
Since \(\pi\) is the disjoint union of \(\pi_{1},\pi_{2},\pi_{3}\), but also the disjoint union of \(\pi_{1}(G)\) and \(\pi_{2}(G)\), combining (5.7), (5.8) and (5.9), we conclude
\[\pi_{1}(G)=\pi_{1}\cup\pi_{2}\text{ and }\pi_{2}(G)=\pi_{3}.\]
Let \(\eta_{1},\eta_{2}\) be, respectively, the number of edges of \(\Gamma(G)\) and \(\Gamma(H).\) Notice that \(2\eta_{1}=|G|^{2}P_{G}(2)\) and \(2\eta_{2}=|H|^{2}P_{H}(2)\), where, given a \(2\)-generated finite group \(T\), we denote by \(P_{T}(2)\) the probability that a pair of uniformly randomly chosen elements of \(T\) generates \(T\). Since \(\Gamma(G)\cong\Gamma(H)\), we must have that \(\eta_{1}=\eta_{2}\) and consequently \(P_{G}(2)=P_{H}(2).\) It follows from [18, Satz 4] that
\[P_{H}(2) =\prod_{p\in\pi_{1}}\left(1-\frac{1}{p}\right)\prod_{p\in\pi_{2} }\left(1-\frac{1}{p^{2}}\right)\prod_{p\in\pi_{3}}\left(1-\frac{1}{p}\right) \left(1-\frac{1}{p^{2}}\right),\] \[P_{G}(2) =\prod_{p\in\pi_{1}(G)}\left(1-\frac{1}{p^{2}}\right)\prod_{p\in \pi_{2}(G)}\left(1-\frac{1}{p}\right)\left(1-\frac{1}{p^{2}}\right).\]
Since \(\pi_{1}\cup\pi_{2}=\pi_{1}(G)\) and \(\pi_{3}=\pi_{2}(G)\), we deduce
\[\prod_{p\in\pi_{1}}\left(1+\frac{1}{p}\right)=1\]
and consequently \(\pi_{1}=\emptyset.\) Therefore \(H/\Phi(H)\cong Y\) is abelian and we conclude that \(H\) is nilpotent.
### The non-generating graph
In this subsection we study the complement of the generating graph of a group \(G\), which is called the non-generating graph of \(G\). An interesting subgraph of the non-generating graph of \(G\) is the subgraph \(\Delta(G)\) obtained by removing all universal vertices (i.e. the vertices joined to every other vertex). Note that the vertex set of \(\Delta(G)\) is contained in the set \(G\backslash\Phi(G)\), but in general it can be smaller. For example, in \(\Delta(S_{4})\) the elements \((12)(34)\), \((13)(24)\) and \((14)(23)\) do not belong to the vertex-set.
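The \(S_{4}\) example is easy to confirm by brute force: the sketch below finds the universal vertices of the non-generating graph, which turn out to be the identity together with the three double transpositions.

```python
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)
els, n = list(G.elements), G.order()
universal = [x for x in els
             if all(PermutationGroup([x, y]).order() < n for y in els if y != x)]
print(len(universal), sorted(x.order() for x in universal))  # 4 [1, 2, 2, 2]
```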
In this subsection we will show that Question 1.1 has a positive answer whenever \(\Delta(G)\) is disconnected. We start by analysing the graph \(\Delta(G)\) when \(G\) is either a cyclic group or a \(p\)-group.
**Lemma 5.6**.: _Let \(G\) be a cyclic group of order \(n\). Then \(\Delta(G)\) contains \(n\) vertices, and at least two of them are isolated._
Proof.: Suppose \(G=\langle x\rangle\) is a cyclic group of order \(n\). For every element \(g\in G\), we have \(\langle g,x\rangle=G\). Thus none of the elements of \(G\) is a universal vertex, implying that \(\Delta(G)\) contains \(n\) vertices. Now, if \(n=2\), then the elements \(1\) and \(x\) are isolated vertices, while for \(n\geq 3\), the elements \(x\) and \(x^{-1}\) are distinct isolated vertices. So in any case there are at least two isolated vertices.
**Lemma 5.7**.: _Let \(p\) be a prime and let \(P\) be a \(2\)-generated \(p\)-group of order \(p^{n}\) that is not cyclic. Then \(\Delta(P)\) contains \(p+1\) connected components, each one complete and containing \(p^{n-2}(p-1)\) vertices. In particular \(\Delta(P)\) is disconnected and contains isolated vertices if and only if \(P\cong C_{2}\times C_{2}\)._
Proof.: Note that \(P\) contains \(p+1\) maximal subgroups \(M_{1},\ldots M_{p+1}\) and the following holds:
* for every \(x,y\in M_{i}\), \(x\neq y\), we have \(\langle x,y\rangle\leq M_{i}<P\), so \(x\) and \(y\) are joined;
* for every \(x\in M_{i}\), \(y\in M_{j}\) with \(i\neq j\), we have \(\langle x,y\rangle=P\), so \(x\) and \(y\) are not joined.
Therefore \(\Delta(P)\) contains \(p+1\) connected components, each one complete and containing \(|M_{i}|-|\Phi(P)|=p^{n-2}(p-1)\) vertices. In particular \(\Delta(P)\) contains isolated vertices if and only if \(p^{n-2}(p-1)=1\), that is equivalent to \(P\cong C_{2}\times C_{2}\).
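The structure described in Lemma 5.7 can be verified directly on a small example. For \(P\) the dihedral group of order \(8\) (so \(p=2\), \(n=3\)), the graph \(\Delta(P)\) should consist of \(p+1=3\) complete components with \(p^{n-2}(p-1)=2\) vertices each; a minimal sketch, with helpers of our own, follows.

```python
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import DihedralGroup

P = DihedralGroup(4)                      # dihedral group of order 8
els, n = list(P.elements), P.order()
gen = lambda x, y: PermutationGroup([x, y]).order() == n
# vertices of Delta(P): elements that generate P together with some partner
verts = [x for x in els if any(gen(x, y) for y in els if y != x)]
adj = {x: [y for y in verts if y != x and not gen(x, y)] for x in verts}
comps, seen = [], set()
for v in verts:                           # depth-first search for components
    if v in seen:
        continue
    comp, stack = set(), [v]
    while stack:
        w = stack.pop()
        if w not in comp:
            comp.add(w)
            stack.extend(adj[w])
    seen |= comp
    comps.append(comp)
print(len(comps), sorted(len(c) for c in comps))  # 3 [2, 2, 2]
```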
**Lemma 5.8**.: _Suppose \(G\) is a cyclic group and \(P\) is a \(2\)-generated \(p\)-group that is not cyclic, for some prime \(p\). Then \(\Delta(G)\cong\Delta(P)\) if and only if \(G\cong C_{3}\) and \(P\cong C_{2}\times C_{2}\)._
Proof.: By Lemma 5.6, \(\Delta(G)\) contains isolated vertices. Hence by Lemma 5.7 we deduce that \(P\cong C_{2}\times C_{2}\). In particular \(\Delta(G)\cong\Delta(P)\) consists of \(3\) vertices and no edges. Hence again by Lemma 5.6 we conclude that \(G\cong C_{3}\).
Next, we consider the dihedral group \(D_{2p}\), for \(p\) an odd prime:
**Lemma 5.9**.: _Let \(p\) be an odd prime. Then \(\Delta(D_{2p})\) has \(D_{2p}\backslash\{1\}\) as vertex-set and consists of one connected component of size \(p-1\) that is complete and \(p\) isolated vertices (corresponding to the \(p\) involutions of \(D_{2p}\))._
Proof.: Note that the trivial element is a universal vertex (as \(D_{2p}\) is non-cyclic) and it is the only one, so \(\Delta(D_{2p})\) contains \(D_{2p}\backslash\{1\}\) as vertex-set. Now, all involutions of \(D_{2p}\) are isolated vertices, as each of them generates \(D_{2p}\) together with any other non-trivial element. Finally, if \(P\in\operatorname{Syl}_{p}(D_{2p})\), then for every pair of distinct non-trivial elements \(x,y\in P\) we have \(\langle x,y\rangle=P<D_{2p}\), and so the non-trivial elements of \(P\) form a connected component of size \(p-1\), that is complete.
Lucchini and Nemmi characterized all finite \(2\)-generated groups \(G\) for which the graph \(\Delta(G)\) contains isolated vertices:
**Proposition 5.10**.: _[_28_, Proposition 2]_ _Let G be a \(2\)-generated finite group. Then \(\Delta(G)\) has an isolated vertex if and only if one of the following holds:_
1. \(G\) _is cyclic;_
2. \(G\cong C_{2}\times C_{2}\)_; or_
3. \(G\cong D_{2p}\) _for an odd prime_ \(p\)_._
As a consequence of Proposition 5.10 we obtain the following:
**Corollary 5.11**.: _Let \(G\) and \(H\) be finite \(2\)-generated groups with \(\Delta(G)\cong\Delta(H)\). If \(\Delta(G)\) contains isolated vertices and \(G\) is nilpotent, then \(H\) is nilpotent._
Proof.: By Proposition 5.10, the group \(G\) is either cyclic or isomorphic to \(C_{2}\times C_{2}\) and it is enough to prove that \(H\) cannot be isomorphic to the group \(D_{2p}\). Aiming for a contradiction, suppose \(H\cong D_{2p}\). If \(G\cong C_{2}\times C_{2}\), then \(\Delta(H)\) should have \(3\) vertices, and by Lemma 5.9 we should have \(2p-1=3\), which is impossible since \(p\) is odd. Thus \(G\) must be cyclic, say \(G=\langle x\rangle\), and by Lemma 5.6 we deduce that \(|G|=2p-1\geq 5\). Also, by Lemma 5.9 the graph \(\Delta(G)\) has a complete connected component of size \(p-1\) and \(p\) isolated vertices. In particular \(|G|\) is not prime (otherwise all vertices would be isolated). Suppose \(q_{1}\neq q_{2}\) are prime numbers dividing \(|G|\). Then both \(x^{q_{1}}\) and \(x^{q_{2}}\) are joined to the trivial element, but they are not joined to each other (as they generate \(G\) together). However, by Lemma 5.9, \(\Delta(D_{2p})\) cannot have this shape. Thus \(G\) must be a cyclic \(q\)-group for some prime \(q\), of order \(q^{k}=2p-1\) with \(k\geq 2\). In particular the number of isolated vertices is \(q^{k}-q^{k-1}\) and this must equal \(p\). Hence \(q\) divides \(p\), implying \(q=p\); but then \(p^{k-1}(p-1)=p\) forces \(k=2\) and \(p=2\), contradicting the fact that \(p\) is odd.
This proves that \(H\) is either cyclic or isomorphic to \(C_{2}\times C_{2}\), and so in particular it is nilpotent.
More generally, Lucchini and Nemmi described the finite \(2\)-generated groups \(G\) in which \(\Delta(G)\) is disconnected:
**Proposition 5.12**.: _[_28_, Theorem 1]_ _Let G be a \(2\)-generated finite group. Then \(\Delta(G)\) is disconnected if and only if one of the following holds:_
1. \(G\) _is cyclic;_
2. \(G\) _is a_ \(p\)_-group;_
3. \(G\) _is not a_ \(p\)_-group,_ \(G/\Phi(G)\cong(V_{1}\times\cdots\times V_{t})\rtimes Y\)_, where_ \(Y\cong C_{p}\) _for some prime_ \(p\)_, and_ \(V_{1},\ldots,V_{t}\) _are pairwise non-_\(Y\)_-isomorphic non-trivial irreducible_ \(Y\)_-modules; or_
4. \(G\) _is not a_ \(p\)_-group,_ \(G/\Phi(G)\cong(V_{1}\times\cdots\times V_{t})\rtimes Y\)_, where_ \(Y\cong C_{p}\times C_{p}\) _for some prime_ \(p\)_,_ \(V_{1},\ldots,V_{t}\) _are pairwise non-_\(Y\)_-isomorphic non-trivial irreducible_ \(Y\)_-modules and we have_ \(C_{Y}(V_{1}\times\cdots\times V_{t})\cong C_{p}\)_._
We are now ready to prove that Question 1.1 has a positive answer when \(\Delta(G)\) is disconnected:
**Theorem 5.13**.: _Let \(G\) and \(H\) be finite \(2\)-generated groups with \(\Delta(G)\cong\Delta(H)\). If \(\Delta(G)\) is disconnected and \(G\) is nilpotent, then \(H\) is nilpotent._
Proof.: Thanks to Corollary 5.11, we can assume that \(\Delta(G)\) does not have isolated vertices, so in particular \(G\) is not cyclic by Lemma 5.6. By Proposition 5.12, we can suppose that \(G\) is a \(p\)-group of order \(p^{n}\), for some prime \(p\) and integer \(n\geq 2\), and we have to show that \(\Delta(G)\) is not isomorphic to \(\Delta(X)\) whenever \(X\) is a group of type (3) or (4).
By Lemma 5.7, \(\Delta(G)\) contains \(p^{n-2}(p^{2}-1)\) vertices and it is regular of valency \(p^{n-2}(p-1)-1\).
Aiming for a contradiction, suppose that \(H\) is of type (3) or (4), with \(Y\cong C_{q}\) or \(Y\cong C_{q}\times C_{q}\) for some prime \(q\). Note that in both cases \(H\) contains a maximal subgroup \(M\) that is normal in \(H\) and such that \(|H|=|M|q\). Indeed, if \(H\) is of type (3) then \(M/\Phi(H)\cong V_{1}\times\cdots\times V_{t}\), while if \(H\) is of type (4) then \(M/\Phi(H)\cong(V_{1}\times\cdots\times V_{t})\times\langle k\rangle\) where \(\langle k\rangle=C_{Y}(V_{1}\times\cdots\times V_{t})\). Let \(\Omega\) denote the set of vertices of \(\Delta(H)\) corresponding to elements of \(M\). In [28, Lemmas 24 and 25] the authors studied the structure of \(\Delta(H)\), proving that \(\Omega\) is a proper connected component of \(\Delta(H)\). In particular, every vertex \(x\in\Omega\) has degree \(|\Omega|-1\) (as \(x\) is not joined to itself). Also, if \(h\in H\) is such that \(H=M\langle h\rangle\), then every element of \(H\) can be written as \(mh^{j}\) for some \(m\in M\) and \(1\leq j\leq q\), and \(\langle h,mh^{j}\rangle=\langle h,m\rangle<H\) if and only if \(m\in M\backslash\Omega.\) Moreover, if \(m\in M\backslash\Omega\), then \(mh^{j}\) is a vertex if and only if \(h^{j}\neq 1\) (that is equivalent to \(j\neq q\)). This implies that \(h\) has degree \((|M|-|\Omega|)(q-1)-1\).
Since \(\Delta(G)\) and \(\Delta(H)\) are isomorphic, the graph \(\Delta(H)\) must be regular. Hence
\[|\Omega|-1=(|M|-|\Omega|)(q-1)-1\]
that gives
\[|\Omega|=|M|\frac{q-1}{q}.\]
Comparing the valencies of \(\Delta(G)\) and \(\Delta(H)\) we obtain
\[p^{n-2}(p-1)=|\Omega|=|M|\frac{q-1}{q}. \tag{5.10}\]
Also, the total number of vertices of \(\Delta(H)\) is
\[|\Omega|+(|H|-|M|)=|\Omega|+|M|(q-1)=|M|\frac{q-1}{q}+|M|(q-1)=|M|(q-1)\left( \frac{1}{q}+1\right).\]
Dividing the number of vertices of \(\Delta(G)\) and \(\Delta(H)\) by the respective quantities in (5.10), we get
\[\frac{p^{n-2}(p^{2}-1)}{p^{n-2}(p-1)}=\frac{|M|(q-1)\left(\frac{1}{q}+1\right) }{|M|\frac{q-1}{q}}.\]
Simplifying the above equation, we conclude that \(p=q\). Finally, again by (5.10) we obtain \(|M|=p^{n-1}\), and so \(|H|=p|M|=p^{n}\) and \(H\) is a \(p\)-group, a contradiction. This proves the statement.
## 6. The Engel Graph
Following a suggestion given by Cameron (see [12, Section 11.1]), we may define a graph \(\Gamma_{\text{eng}}(G)\), where the vertices are the elements of \(G\) and where two vertices are adjacent if they satisfy a suitable Engel relation; more precisely, if \(x\) and \(y\) are distinct elements of \(G\), then there is an edge joining \(x\) and \(y\) if and only if either \([x,_{r}y]=1\) or \([y,_{r}x]=1\) for some \(r\in\mathbb{N}.\) Cameron proposes to call this graph the Engel graph of \(G,\) although, as he notices, the same term was used by Abdollahi [2] to denote a related but different graph.
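The Engel adjacency can be tested mechanically by iterating commutators, as in the following sketch (helper names are ours; the iteration bound is a practical cutoff, which suffices for nilpotent groups since the iterated commutators vanish within the nilpotency class).

```python
from itertools import combinations

from sympy.combinatorics.named_groups import DihedralGroup

def engel_adjacent(x, y, bound=10):
    """True if [x, y, ..., y] = 1 or [y, x, ..., x] = 1 within `bound` steps."""
    for a, b in ((x, y), (y, x)):
        c = a
        for _ in range(bound):
            c = ~c * ~b * c * b     # c <- [c, b]   (~p is the inverse in sympy)
            if c.is_Identity:
                return True
    return False

G = DihedralGroup(4)                # nilpotent of class 2: Engel graph is complete
print(all(engel_adjacent(x, y) for x, y in combinations(G.elements, 2)))  # True
```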
It is easy to see that the answer to Question 1.1 is affirmative for Engel graphs:
**Proposition 6.1**.: _If \(G\) and \(H\) are finite groups with isomorphic Engel graphs and \(G\) is nilpotent, then \(H\) is nilpotent._
Proof.: Since \(G\) is nilpotent, every element of \(G\) is a right Engel element. Hence the graph \(\Gamma_{\mathrm{eng}}(G)\cong\Gamma_{\mathrm{eng}}(H)\) is complete. Now, by [32, 12.3.4], we conclude that \(H\) is nilpotent.
Cameron proposed to investigate the relation between the Engel graph and the nilpotent graph (where the nilpotent graph \(\Gamma_{\mathrm{nil}}(G)\) has as vertices the elements of \(G\), and \(x\) and \(y\) are adjacent if and only if \(\langle x,y\rangle\) is nilpotent). In particular, he asks (see [12, Question 24]) for which groups \(G\) the two graphs coincide. We answer this question with the following result.
**Theorem 6.2**.: _Let \(G\) be a finite group. Then \(\Gamma_{\mathrm{eng}}(G)=\Gamma_{\mathrm{nil}}(G)\) if and only if \(G\) is nilpotent._
Proof.: We prove by induction on the order of \(G\) that if \(\Gamma_{\mathrm{eng}}(G)=\Gamma_{\mathrm{nil}}(G),\) then \(G\) is nilpotent. The property that \(\Gamma_{\mathrm{eng}}(G)=\Gamma_{\mathrm{nil}}(G)\) is inherited by all the subgroups of \(G.\) So by induction all the proper subgroups of \(G\) are nilpotent, and this implies that \(G\) is soluble. The set of universal vertices of the graph \(\Gamma_{\mathrm{nil}}(G)\) coincides with the hypercenter of \(G\) (see [6, Proposition 2.1]), while the set of universal vertices of \(\Gamma_{\mathrm{eng}}(G)\) is the set of elements that are either left or right Engel and coincides with the Fitting subgroup of \(G\) (see e.g. [32, 12.3.7]). Since \(\Gamma_{\mathrm{eng}}(G)=\Gamma_{\mathrm{nil}}(G),\) it follows that \(G\) is a finite soluble group whose hypercenter and Fitting subgroup coincide, and this is possible only if \(G\) is nilpotent.
Conversely, if \(G\) is nilpotent, then every element of \(G\) is a right Engel element, so \(\Gamma_{\mathrm{eng}}(G)=\Gamma_{\mathrm{nil}}(G)\) is the complete graph on \(|G|\) vertices.
## 7. The Join graph
We conclude our survey with a graph that focuses attention on the subgroup lattice of a finite group. The join graph of a finite group \(G\) was introduced by Lucchini in [24] as follows: it is the graph having as vertex-set the set of proper subgroups of \(G\) and in which two subgroups \(M\) and \(N\) are joined if and only if \(G=\langle M,N\rangle\). In general Question 1.1 has a negative answer for the join graph: consider for example the groups \(C_{p}\times C_{p}\) and \(D_{2p},\) for \(p\) an odd prime. In both cases the join graph consists of a complete graph on the \(p+1\) non-trivial proper subgroups, together with the isolated vertex given by the trivial subgroup (a computational check for \(p=3\) is sketched at the end of this section). However the following holds:
**Theorem 7.1**.: _[_24_, Corollary 5]_ _Let \(G\) and \(H\) be finite groups with isomorphic join graphs. If \(G\) is nilpotent, then \(H\) is supersoluble._
We can say more if the Frattini subgroup of \(H\) is trivial.
**Lemma 7.2**.: _Let \(G\) and \(H\) be finite groups with isomorphic join graphs. If \(\Phi(H)=1\) then \(\Phi(G)=1\). In particular, if \(\Phi(H)=1\) and \(G\) is nilpotent then G is a direct product of elementary abelian groups._
Proof.: Since \(\Phi(H)=1\), the identity subgroup is the unique isolated vertex in the join graph of \(H\). Using the graph isomorphism, we deduce that the join graph of \(G\) has a unique isolated vertex, which must correspond to the trivial subgroup, and so \(\Phi(G)\) is trivial too. Finally, if \(G\) is nilpotent then for every prime divisor \(p\) of \(|G|\), if \(P\in\operatorname{Syl}_{p}(G)\) then \(\Phi(P)\leq\Phi(G)=1\) and so \(P\) is elementary abelian. So all Sylow subgroups of \(G\) are elementary abelian and \(G\) is a direct product of elementary abelian groups.
Recall that a finite group \(K\) is a \(P\)-group if it is either non-cyclic elementary abelian or a semidirect product of an elementary abelian \(p\)-group \(A\) by a group of prime order \(q\neq p\) which induces a non-trivial power automorphism on \(A\). Using Lemma 7.2 and properties of the lattice of \(G\), Lucchini obtained the following result:
**Theorem 7.3**.: _[_24_, Proposition 6]_ _Let \(G\) and \(H\) be finite groups with isomorphic join graphs. If \(G\) is nilpotent and \(\Phi(H)=1\) then \(H\) is a direct product of groups with pairwise coprime orders that are either \(P\)-groups or elementary abelian groups._
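To close, the sketch below (with brute-force helpers of our own) verifies the counterexample mentioned at the beginning of this section for \(p=3\): the join graphs of \(C_{3}\times C_{3}\) and \(S_{3}\) have the same degree sequence, namely a complete graph on the four non-trivial proper subgroups plus the isolated trivial subgroup, although only the first group is nilpotent. Enumerating subgroups via pairs of generators is enough here, since every proper subgroup of either group is cyclic.

```python
from itertools import combinations_with_replacement

from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.group_constructs import DirectProduct
from sympy.combinatorics.named_groups import CyclicGroup, SymmetricGroup

def proper_subgroups(G):
    """All proper subgroups generated by at most two elements (as element sets)."""
    els = list(G.elements)
    subs = {frozenset(PermutationGroup(list(S)).elements)
            for S in combinations_with_replacement(els, 2)}
    return [s for s in subs if len(s) < G.order()]

def join_graph_degrees(G):
    subs, n = proper_subgroups(G), G.order()
    return sorted(sum(1 for N in subs if N != M
                      and PermutationGroup(list(M | N)).order() == n)
                  for M in subs)

G, H = DirectProduct(CyclicGroup(3), CyclicGroup(3)), SymmetricGroup(3)
print(join_graph_degrees(G))           # [0, 3, 3, 3, 3]
print(join_graph_degrees(H))           # [0, 3, 3, 3, 3]
print(G.is_nilpotent, H.is_nilpotent)  # True False
```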
**Acknowledgments**
The authors are partially supported by the "National Group for Algebraic and Geometric Structures, and their Applications" (GNSAGA - INdAM).
|
2306.13249 | Multilevel Monte Carlo methods for the Grad-Shafranov free boundary problem | The equilibrium configuration of a plasma in an axially symmetric reactor is described mathematically by a free boundary problem associated with the celebrated Grad--Shafranov equation. The presence of uncertainty in the model parameters introduces the need to quantify the variability in the predictions. This is often done by computing a large number of model solutions on a computational grid for an ensemble of parameter values and then obtaining estimates for the statistical properties of solutions. In this study, we explore the savings that can be obtained using multilevel Monte Carlo methods, which reduce costs by performing the bulk of the computations on a sequence of spatial grids that are coarser than the one that would typically be used for a simple Monte Carlo simulation. We examine this approach using both a set of uniformly refined grids and a set of adaptively refined grids guided by a discrete error estimator. Numerical experiments show that multilevel methods dramatically reduce the cost of simulation, with cost reductions typically on the order of 60 or more and possibly as large as 200. Adaptive gridding results in more accurate computation of geometric quantities such as x-points associated with the model. | Howard C. Elman, Jiaxing Liang, Tonatiuh Sánchez-Vizuet | 2023-06-23T00:21:49Z | http://arxiv.org/abs/2306.13249v2 |
# Multilevel Monte Carlo methods for the Grad-Shafranov free boundary problem
###### Abstract
The equilibrium configuration of a plasma in an axially symmetric reactor is described mathematically by a free boundary problem associated with the celebrated Grad-Shafranov equation. The presence of uncertainty in the model parameters introduces the need to quantify the variability in the predictions. This is often done by computing a large number of model solutions on a computational grid for an ensemble of parameter values and then obtaining estimates for the statistical properties of solutions. In this study, we explore the savings that can be obtained using multilevel Monte Carlo methods, which reduce costs by performing the bulk of the computations on a sequence of spatial grids that are coarser than the one that would typically be used for a simple Monte Carlo simulation. We examine this approach using both a set of uniformly refined grids and a set of adaptively refined grids guided by a discrete error estimator. Numerical experiments show that multilevel methods dramatically reduce the cost of simulation, with cost reductions typically on the order of 60 or more and possibly as large as 200. Adaptive gridding results in more accurate computation of geometric quantities such as x-points associated with the model.
keywords: Multilevel Monte Carlo Finite-Element, Uncertainty Quantification, Free Boundary Grad-Shafranov problem, Adaptive Finite Element Discretization. MSC [2020]: 65Z05, 65C05, 62P35, 35R35, 35R60
## 1 Introduction
Monte Carlo (MC) techniques are one of the most common strategies for dealing with the quantitative assessment of the accuracy of numerical simulations of physical models with uncertainties. The idea behind these methods is to obtain a large number of samples (typically by numerically solving the associated model) for random realizations of the uncertain parameters, and use these data to gather statistical information about the quantity of interest. However, when the model involves the solution of partial differential equations, the computational effort related to the collection of the data points required can easily become unmanageable. To overcome this difficulty methods like polynomial chaos expansions [48], stochastic Galerkin [18], and stochastic collocation [2] have been used to handle uncertainties associated with a small number of parameters. These techniques, however, often require the development of specialized numerical solvers or rely on the smooth dependence of the model with respect to the parameter values. The multilevel Monte Carlo (MLMC) method was developed [8; 19; 42; 46] as an efficient alternative that does not require additional smoothness assumptions and can take advantage of an existing numerical solver. Given a target
numerical grid (i.e. a grid whose resolution is considered sufficiently fine) MLMC improves the efficiency of the sampling step by offsetting the bulk of the numerical computations to a sequence of _coarser_ grids where the numerical solution is cheaper.
In the particular context of free-boundary Grad-Shafranov computations subject to parameter uncertainty, the authors have shown that the computational cost can be reduced manyfold by employing a strategy based on stochastic collocation [12]; however, due to the latent possibility of plasma-wall contacts, the smoothness of the mapping between coil currents and equilibria cannot be guaranteed. In this paper, our goal is to overcome this difficulty by approximating the expectation of the equilibrium configuration using a multilevel Monte Carlo Finite-Element (MLMC-FE) approach. We will consider two MLMC-FE algorithms: a classical strategy based on uniformly refined meshes and a variation based on meshes refined adaptively using an _a posteriori_ error estimator. As we shall see, both of these approaches greatly reduce computational costs, with the adaptive strategy being somewhat more effective in computing the approximation of the expectation of geometric properties of the equilibrium configuration.
An outline of the paper is as follows. In Section 2, we briefly recall the Grad-Shafranov free boundary problem. Section 3 is devoted to the introduction of the Monte Carlo and multilevel Monte Carlo Finite Element methods. The section concludes with the introduction of an algorithm to compute the optimal number of samples required at each discretization level. In Section 4 we present numerical experiments comparing the effectiveness of these strategies with a Monte Carlo strategy based on a single mesh. Concluding remarks are presented in the final Section 5. For completeness, the technical mathematical and algorithmic aspects of the problem and the methods discussed are included as an appendix.
## 2 The Grad-Shafranov free boundary problem
### The deterministic problem
In a cylindrically symmetric magnetic confinement device, with coordinates \((r,z,\varphi)\), the mathematical expression of the equilibrium condition between the hydrostatic and magnetic forces acting on the plasma results in the celebrated Grad-Shafranov equation [22; 37; 44]. This nonlinear elliptic equation relates the _poloidal flux function_ \(\psi(r,z)\) to the hydrostatic pressure \(p(\psi)\) and the _toroidal field_ \(g(\psi)\) (both of which are assumed to be functions of \(\psi\) only), and the currents \(I_{k}\) going through external coils with cross-sectional area \(S_{k}\). Posed in free space, the equation takes the form
\[-\nabla\cdot\left(\frac{1}{\mu r}\nabla\psi\right)=\left\{\begin{array}{ll}\frac{d}{d\psi}p(\psi)+\frac{1}{2\mu r}\frac{d}{d\psi}g^{2}(\psi)&\mbox{in }\Omega_{p}(\psi)\\ I_{k}/S_{k}&\mbox{in }\Omega_{C_{k}}\\ 0&\mbox{elsewhere.}\end{array}\right. \tag{1a}\]

Above, \(\mu\) is the magnetic permeability, \(\Omega_{C_{k}}\) denotes the area occupied by the \(k\)-th external coil, and \(\Omega_{p}\) is the plasma confinement region, which _is not known a priori_ and must be determined as a problem unknown, making this a _free boundary problem_. A schematic of a cross-section, for \(r>0\), of a tokamak is depicted in Figure 1. The confinement region \(\Omega_{p}\) is characterized as the largest region that contains the magnetic axis (defined as the point where \(\psi\) has a global maximum) and that is bounded by a closed level set \(\psi=\mathrm{constant}\).
Figure 1: The plasma confinement region \(\Omega_{p}\) is enclosed by the violet line. The rectangles represent the external coils \(C_{k}\); the grey curved line represents the exterior wall of the vacuum chamber.
This free boundary problem is ubiquitous in nuclear fusion and several computational codes have been developed over the years (see for instance [14; 21; 25; 26; 29; 32] and references therein).
A common choice in the literature, first proposed in [38], for the free functions \(p(\psi)\) and \(g(\psi)\) in the right hand side of (1a) is
\[\frac{d}{d\psi}p(\psi)=\frac{\beta}{r_{0}}\left(1-\psi_{N}^{\alpha_{1}}\right) ^{\alpha_{2}},\qquad\qquad\frac{1}{2}\frac{d}{d\psi}g^{2}(\psi)=\mu_{0}r_{0}( 1-\beta)\left(1-\psi_{N}^{\alpha_{1}}\right)^{\alpha_{2}}, \tag{1b}\]
where \(r_{0}\) represents the outer radius of the vacuum chamber and \(\psi_{N}\in[0,1]\) is a normalization of the flux \(\psi\) such that \(\psi_{N}=1\) on the plasma boundary \(\partial\Omega_{p}\) and \(\psi_{N}=0\) at the magnetic axis. The parameters \(\alpha_{1}\) and \(\alpha_{2}\) control the behavior of \(\psi\) around the magnetic axis, and \(\beta\) measures the ratio between the hydrostatic pressure and the magnetic pressure in the plasma.
### Incorporating uncertainty
In practice--whether it is due to imperfect measurements, variations in the experimental conditions, etc.--the values of the parameters involved in the model are subject to a degree of uncertainty. Since the predictions of the model depend on the parameter values, the solution itself inherits a degree of randomness. From the mathematical point of view, in the presence of uncertainty in the parameters, the function \(\psi\) (as well as all derived quantities) becomes a random variable. Obtaining a full description of the probability density function associated with \(\psi\) might not be possible, but an approximate picture can be obtained by exploring the parameter space and computing sample approximations of its moments.
In this article, we will consider that the uncertainty in the model (1a) is concentrated in the values of the currents \(I_{k}\) going through the external coils. We will then model the array of currents as a \(d\)-dimensional random variable \(\mathbf{\omega}:=(\omega_{1},\ldots,\omega_{d})\), where \(d\) is the number of confinement coils in the reactor, and the \(k\)-th component of \(\mathbf{\omega}\) is the current going through the \(k\)-th coil. We will consider that \(\mathbf{\omega}\) is uniformly distributed around a baseline vector \(\mathbf{I}=(I_{1},\ldots,I_{d})\) corresponding to the desired current values in a deterministic model. We will often refer to \(\mathbf{I}\) as either the _reference_ or _unperturbed_ currents. Letting \(\tau>0\) denote the size of the possible perturbation in the current values (relative to the components of \(\mathbf{I}\)), the vector \(\mathbf{\omega}\) is then uniformly distributed over the \(d\)-dimensional parameter space
\[W:=\prod_{k=1}^{d}\left[I_{k}-\tau|I_{k}|,I_{k}+\tau|I_{k}|\right]. \tag{2}\]
Since coils are independent of each other, the stochastic random currents \(\{\omega_{k}\}_{k=1}^{d}\) are uncorrelated and the joint density function of \(\mathbf{\omega}\) is given by \(\pi\left(\mathbf{\omega}\right)=\prod_{k=1}^{d}\pi_{k}\left(\omega_{k}\right)= \prod_{k=1}^{d}\frac{1}{2\tau|I_{k}|}\). The equilibrium configuration determined by the solution to (1a) is then the random variable \(\psi(x,y,\mathbf{\omega})\); we will be primarily interested in efficiently constructing an approximation to its expected value
\[\mathbb{E}\left[\psi(x,y,\mathbf{\omega})\right]=\int_{W}\psi(x,y,\mathbf{\omega})\pi( \mathbf{\omega})d\mathbf{\omega}, \tag{3}\]
as well as those of some derived quantities such as the plasma boundary, the location of the x-points, etc.
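As a small illustration of this sampling model, the following sketch (all names are ours, and `solve_equilibrium` is a hypothetical stand-in for a free-boundary solver) draws coil currents uniformly from the box \(W\) in (2) and averages the corresponding solutions to approximate (3).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_currents(I, tau):
    """One draw of omega, uniform on W = prod_k [I_k - tau|I_k|, I_k + tau|I_k|]."""
    I = np.asarray(I, dtype=float)
    return I + tau * np.abs(I) * rng.uniform(-1.0, 1.0, size=I.shape)

def sample_mean_flux(solve_equilibrium, I, tau, N):
    """Monte Carlo approximation of E[psi] from N independent solves."""
    return sum(solve_equilibrium(sample_currents(I, tau)) for _ in range(N)) / N
```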
## 3 Monte Carlo and Multilevel Monte Carlo Finite-Element Methods
We now turn our attention to the numerical approximation of the expected value (3). Since the location and shape of the plasma boundary depend on the values of the coil currents, variations of these values
could lead to contacts between the plasma and the wall or even loss of confinement. This fact translates into a possible non-smoothness of the mapping between coil currents and the solutions of (1a), which may then cause techniques such as stochastic collocation to underperform. Moreover, the computational effort associated with cubature methods scales exponentially with the dimension of the parameter space, seriously limiting their feasibility for estimating (3). This leads to the use of Monte Carlo methods, which are agnostic to both the smoothness of the mapping and the dimensionality of the problem [39], although they have a slow convergence rate (of order \(1/2\) in the number of samples) that tends to make them costly. This will be addressed through the use of a multi-level approach.
### Monte Carlo finite element method
We will describe the method in terms of a generic solution, \(u\), to a PDE involving stochastic parameters and its finite element approximation \(u_{h}\), where \(h\) is the mesh parameter of the discretization. Let \(\{\omega^{(i)}\}_{1\leq i\leq N}\) be a set of \(N\) realizations of the random variable \(\omega\) giving rise to a sample of \(N\) realizations \(u^{(i)}=u(\omega^{(i)})\) and \(u^{(i)}_{h}=u_{h}(\omega^{(i)})\) of the exact solution and its finite element discretization. We will assume that all these functions belong to a functional space \(Z\) endowed with a norm \(\|\cdot\|_{Z}\) (we refer the reader interested in the mathematical details to the Appendix 7.1 and references therein), and we will consider the standard FEM error estimate \(\|u^{(i)}-u^{(i)}_{h}\|_{Z}\leq C^{(i)}h^{p}\), where \(p\) is the order of the FEM discretization and the constant \(C^{(i)}\) depends only on the problem geometry and the particular values of \(\omega^{(i)}\).
The Monte Carlo Finite-Element (MC-FE) estimator \(A_{\text{MC}}(u_{h})\) for \(\mathbb{E}(u)\) is defined as the sample mean
\[A_{\text{MC}}(u_{h}):=\frac{1}{N}\sum_{i=1}^{N}u^{(i)}_{h}. \tag{4}\]
This estimator is easily shown to be unbiased and to satisfy \(\mathbb{E}(A_{\text{MC}})=\mathbb{E}(u_{h})\). A quantity that serves as a foundation for examining the spatial and statistical accuracy of the MC-FE estimator is the _mean squared error_ (MSE) defined as
\[\mathcal{E}^{2}_{A_{\text{MC}}}:=\mathbb{E}\left[\|\mathbb{E}(u)-A_{\text{MC }}(u_{h})\|_{Z}^{2}\right].\]
It can be shown (see, for instance [3, Theorem 4.3]) that, for linear problems, the Monte Carlo estimator accurately approximates the expected value in the sense that
\[\mathcal{E}^{2}_{A_{\text{MC}}}\leq K(N^{-1/2}+h^{p})^{2},\]
where the constant \(K>0\) depends on the problem geometry and the expected values of the problem data. The MSE can be decomposed into terms related to the bias and variance as
\[\mathcal{E}^{2}_{A_{\text{MC}}}=\|\mathbb{E}(u)-\mathbb{E}(u_{h})\|_{Z}^{2}+ \mathbb{E}\left[\|\mathbb{E}(u_{h})-A_{\text{MC}}(u_{h})\|_{Z}^{2}\right]=\| \mathbb{E}(u)-\mathbb{E}(u_{h})\|_{Z}^{2}+\frac{\mathbb{V}(u_{h})}{N}= \mathcal{E}^{2}_{\text{Bias}}+\mathcal{E}^{2}_{\text{Stat}},\]
where \(\mathbb{V}(u):=\mathbb{E}[\|u-\mathbb{E}(u)\|_{Z}^{2}]\) and \(\mathbb{V}(A_{\text{MC}}(u))=\mathbb{V}(u)/N\). The last two terms at the end of the expression above implicitly define the discretization error \(\mathcal{E}_{\text{Bias}}\) and the sampling (or statistical) error \(\mathcal{E}_{\text{Stat}}\) respectively.
If the expectation of \(u\) is different from zero, the mean squared error can be expressed as a percentage through normalization by the factor \(\|\mathbb{E}(u)\|_{Z}^{2}\), leading to the _normalized mean squared error_\(\widetilde{\mathcal{E}}^{2}_{A_{\text{MC}}}\). Since the exact random variable \(u\) is not available, we will approximate the relative mean squared error (nMSE) by
\[\widetilde{\mathcal{E}}^{2}_{A_{\text{MC}}}\approx\frac{\|\mathbb{E}(u)- \mathbb{E}(u_{h})\|_{Z}^{2}}{\|\mathbb{E}(u_{h})\|_{Z}^{2}}+\frac{\mathbb{V}( u_{h})}{N\left\|\mathbb{E}(u_{h})\right\|_{Z}^{2}}=\widetilde{\mathcal{E}}^{2}_{ \text{Bias}}+\widetilde{\mathcal{E}}^{2}_{\text{Stat}}, \tag{5}\]
where \(\widetilde{\mathcal{E}}_{\text{Bias}}\) and \(\widetilde{\mathcal{E}}_{\text{Stat}}\) are relative analogues to the discretization and statistical errors defined above. If the number of grid points for the FEM discretization is \(M\) then, in two dimensions, it is standard to assume that \(\widetilde{\mathcal{E}}_{\text{Bias}}=\mathcal{O}(M^{-p/2})\). Given a target tolerance \(\epsilon\), the contribution of the statistical and discretization errors towards the total nMSE can be controlled by requiring that
\[\widetilde{\mathcal{E}}_{\text{Bias}}^{2}=\mathcal{O}(M^{-p})\leq(1-\theta) \epsilon^{2},\qquad\widetilde{\mathcal{E}}_{\text{Stat}}^{2}=\frac{V_{h}}{N} \leq\theta\epsilon^{2}, \tag{6}\]
where \(\theta\in(0,1)\) is known as the _splitting parameter_, and \(V_{h}:=\mathbb{V}\left(u_{h}\right)/\left\|\mathbb{E}(u_{h})\right\|_{Z}^{2}\). This in turn allows us to estimate the sample size \(N\) and the number of grid points \(M\) required to achieve the desired tolerance as
\[M\geq\left((1-\theta)\epsilon^{2}\right)^{-\frac{1}{p}},\qquad N=\left\lceil \frac{V_{h}}{\theta\epsilon^{2}}\right\rceil. \tag{7}\]
If we assume that the average cost to obtain one sample (i.e. to solve (1a) for one particular value of the coil-currents) is \(C=\mathcal{O}(M^{c})\) for some \(c>0\), then the total computational cost of the MC-FE estimator can be estimated as
\[C(A_{\text{MC}})=\mathcal{O}(NM^{c})=\mathcal{O}\left(\epsilon^{-2-\frac{2c}{ p}}\right). \tag{8}\]
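In code, the work estimates (7)-(8) amount to a few lines; the helper below (our own) returns the grid size, the sample count and the resulting cost estimate for a given tolerance.

```python
import math

def mc_work(eps, theta, p, c, V_h):
    """Grid size M and sample count N from (7), and the cost model N * M**c from (8)."""
    M = math.ceil(((1.0 - theta) * eps**2) ** (-1.0 / p))
    N = math.ceil(V_h / (theta * eps**2))
    return M, N, N * M**c

# e.g. a 1% tolerance with an even error split, p = 2, c = 1 and unit variance:
print(mc_work(eps=1e-2, theta=0.5, p=2, c=1, V_h=1.0))  # (142, 20000, 2840000)
```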
### Multilevel Monte Carlo finite element method
The idea behind multilevel Monte Carlo (MLMC) is to reduce the computational cost associated with sampling--which in our case involves the numerical solution of a non-linear PDE in a target computational mesh--by approximating the expectation of the quantity of interest on the finest mesh by a sequence of control variates on a set of coarse grids that are cheaper to compute [13]. Based on the linearity of expectation, the MLMC estimator expresses the quantity of interest on the finest spatial grid, \(\mathbb{E}\left(u_{h}\right)\), by a telescoping sum involving the numerical approximations of \(u\) on coarser grids. Consequently, MLMC's workload is shifted from the fine grid to coarser grids, making it more efficient than MC [8]. Cf. e.g. [36] for alternative ways to reduce the costs of MC methods.
To construct meshes that are easy to describe for both uniform and adaptive mesh refinement, we will characterize them using the number of grid points rather than the mesh size. We will refer to \(\ell=0,\ldots,L\) as the _level_ of a mesh \(\mathcal{T}_{\ell}\) containing \(\{M_{\ell}\}\) grid points. We will then consider a sequence of meshes \(\mathcal{T}_{0},\ldots,\mathcal{T}_{L}\) with increasing resolution so that \(\{M_{\ell}\}_{0\leq\ell\leq L}\) defines an increasing sequence and \(\mathcal{T}_{L}\) is the finest mesh, and we will denote by \(u_{\ell}\) the approximation of \(u\) on the mesh \(\mathcal{T}_{\ell}\).
The expectation of the function \(u\) can be approximated by the expectation of the finest approximation \(u_{L}\). Since \(u_{L}=u_{0}+(u_{1}-u_{0})+(u_{2}-u_{1})+\cdots+(u_{L}-u_{L-1})\), \(\mathbb{E}\left(u\right)\) can be approximated as the telescoping sum
\[\mathbb{E}(u)\approx\mathbb{E}(u_{L})=\mathbb{E}(u_{0})+\sum_{\ell=1}^{L} \mathbb{E}(u_{\ell}-u_{\ell-1})=\sum_{\ell=0}^{L}\mathbb{E}(Y_{\ell}), \tag{9}\]
where each of the terms
\[Y_{0}:=u_{0}\quad\text{ and }\quad Y_{\ell}:=u_{\ell}-u_{\ell-1}\quad\text{( for }\ell\geq 1) \tag{10}\]
can be regarded as a correction of the coarsest approximation \(u_{0}\). If each of the terms \(\mathbb{E}(Y_{\ell})\) is estimated by gathering \(N_{\ell}\) samples at level \(\ell\) and computing the sample expectations
\[\mathbb{E}(Y_{0})\approx\widehat{Y}_{0}:=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}u_ {0}^{(i)},\qquad\mathbb{E}(Y_{\ell})\approx\widehat{Y}_{\ell}:=\frac{1}{N_{ \ell}}\sum_{i=1}^{N_{\ell}}\left(u_{\ell}^{(i)}-u_{\ell-1}^{(i)}\right)\quad \text{( for }\ell\geq 1),\]
then the MLMC-FE estimator at level \(L\) is an unbiased estimator of \(\mathbb{E}(u_{L})\) and can be written as
\[A_{\text{MLMC}}(u_{L}):=\sum_{\ell=0}^{L}\widehat{Y}_{\ell}=\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}u_{0}^{(i)}+\sum_{\ell=1}^{L}\frac{1}{N_{\ell}}\sum_{i=1}^{N_{\ell}}\left(u_{\ell}^{(i)}-u_{\ell-1}^{(i)}\right). \tag{11}\]
Recalling that \(\mathbb{E}(\widehat{Y}_{\ell})=\mathbb{E}(Y_{\ell})\) and \(\mathbb{V}(\widehat{Y}_{\ell})=\mathbb{V}(Y_{\ell})/N_{\ell}\), we conclude that, for the MLMC-FE estimator, \(\mathbb{E}\left(A_{\text{MLMC}}\right)=\sum_{\ell=0}^{L}\mathbb{E}(\widehat{Y}_{\ell})=\mathbb{E}(u_{L})\) and \(\mathbb{V}\left(A_{\text{MLMC}}\right)=\sum_{\ell=0}^{L}\mathbb{V}(\widehat{Y}_{\ell})=\sum_{\ell=0}^{L}\mathbb{V}(Y_{\ell})/N_{\ell}\).
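In code, once correction samples are available on every level (and have been mapped to a common reference representation, a point discussed below), the estimator (11) is simply a sum of per-level sample means. A minimal sketch, assuming `correction_samples[l]` stacks the \(N_{\ell}\) samples of \(Y_{\ell}\) as rows, with level 0 holding samples of \(u_{0}\):

```python
import numpy as np

def mlmc_estimate(correction_samples):
    """MLMC-FE estimator (11): sum over levels of the per-level sample means."""
    return sum(np.mean(Y_l, axis=0) for Y_l in correction_samples)
```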
Just like for MC-FE, the mean squared error \(\mathcal{E}^{2}_{A_{\text{MLMC}}}\) associated with MLMC-FE can be split into contributions due to bias and variance as
\[\mathcal{E}^{2}_{A_{\text{MLMC}}}=\mathbb{E}\left[\left\|\mathbb{E}(u)-A_{ \text{MLMC}}(u_{L})\right\|_{Z}^{2}\right]=\left\|\mathbb{E}(u)-\mathbb{E}(u_ {L})\right\|_{Z}^{2}+\sum_{\ell=0}^{L}\frac{\mathbb{V}\left(Y_{\ell}\right)}{ N_{\ell}}=\mathcal{E}^{2}_{\text{Bias}}+\mathcal{E}^{2}_{\text{Stat}}.\]
Similarly, approximating \(\mathbb{E}(u)\) by \(\mathbb{E}(u_{L})\), the normalized mean squared error \(\widehat{\mathcal{E}}^{2}_{A_{\text{MLMC}}}\) can be approximated by
\[\widehat{\mathcal{E}}^{2}_{A_{\text{MLMC}}}\approx\frac{\left\|\mathbb{E}(u )-\mathbb{E}(u_{L})\right\|_{Z}^{2}}{\left\|\mathbb{E}(u_{L})\right\|_{Z}^{2}} +\sum_{\ell=0}^{L}\frac{V_{\ell}}{N_{\ell}}=\widehat{\mathcal{E}}^{2}_{\text{ Bias}}+\widehat{\mathcal{E}}^{2}_{\text{Stat}}, \tag{12}\]
where
\[V_{\ell}:=\mathbb{V}\left(Y_{\ell}\right)/\left\|\mathbb{E}(u_{L})\right\|_{Z }^{2}. \tag{13}\]
Just as before, the parameter \(\theta\in(0,1)\) can be used to split the contributions of the relative discretization and statistical errors by requiring that \(\widehat{\mathcal{E}}^{2}_{\text{Bias}}\leq(1-\theta)\epsilon^{2}\) and \(\widehat{\mathcal{E}}^{2}_{\text{Stat}}\leq\theta\epsilon^{2}\), where \(\epsilon\) is a predetermined tolerance such that \(\widehat{\mathcal{E}}^{2}_{A_{\text{MLMC}}}\leq\epsilon^{2}\).
Note that each of the terms \(Y_{\ell}^{(i)}:=u_{\ell}^{(i)}-u_{\ell-1}^{(i)}\) appearing in (11) requires the approximation of \(u^{(i)}\) on adjacent refinement levels _using the same value of the parameter \(\mathbf{\omega}^{(i)}\)_. However, when using a FEM discretization, the numerical implementation of this term does not require the solution of the PDE on the two grids \(\mathcal{T}_{\ell}\) and \(\mathcal{T}_{\ell-1}\). The realization on the coarse grid can be obtained from the one on the fine grid by either Galerkin projection or interpolation. Projection is the more accurate of the two, but requires the solution of a system involving the mass matrix for each realization. Thus we prefer interpolation, for which costs are minimal and which is of second-order accuracy in our application. To avoid introducing correlation across discretization levels, none of the samples involved in the computation of \(Y_{\ell}\) is reused for the finer level \(\ell+1\). In other words, sampling is done such that, for \(n\neq m\), the estimates \(Y_{n}\) and \(Y_{m}\) are uncorrelated. However, for _any particular_\(Y_{\ell}\) the strong correlation between \(u_{\ell}^{(i)}\) and \(u_{\ell-1}^{(i)}\) makes the variance of the correction terms much smaller than the variance of the approximation \(u_{L}\) in the finest mesh, further improving the statistical approximation.
To quantify the computational effort of the MLMC-FE estimator, let \(M_{\ell,i}\) be the number of grid points for the \(i\)-th sample on mesh level \(\ell\). We will assume that the computational cost to obtain one sample of \(u_{\ell}^{(i)}\) is \(C(u_{\ell}^{(i)})=\mathcal{O}(M_{\ell,i}^{c})\), where the exponent \(c>0\) depends on the solver, and will denote the cost of computing the correction term \(Y_{\ell}^{(i)}\) by \(C_{\ell,i}:=C(Y_{\ell}^{(i)})=\mathcal{O}(M_{\ell,i}^{c}+M_{\ell-1,i}^{c})\) for \(\ell\geq 0\), with the convention \(M_{-1,i}=0\). For a nonlinear problem like the one at hand, the particular realization \(\mathbf{\omega}^{(i)}\) will influence the cost, hence for the forthcoming analysis, we will consider the _average_ cost to be of the form
\[C_{\ell}=\mathcal{O}(M_{\ell}^{c}), \tag{14}\]
and will use this to estimate the total cost as
\[C(A_{\rm MLMC})=\sum_{\ell=0}^{L}\sum_{i=1}^{N_{\ell}}C_{\ell,i}=\sum_{\ell=0}^{L }N_{\ell}C_{\ell}=\sum_{\ell=0}^{L}N_{\ell}\cdot\mathcal{O}\left(M_{\ell}^{c} \right).\]
We will use the expression above to estimate the sample size \(N_{\ell}\) through an optimization procedure that minimizes the work of the MLMC-FE estimator utilizing the method of Lagrange multipliers subject to the inequality constraint \(\widehat{\mathcal{E}}_{\rm Stat}^{2}\leq\theta\epsilon^{2}\)[19]. The Lagrangian for this problem is written as
\[\mathcal{L}=\sum_{\ell=0}^{L}\sum_{i=1}^{N_{\ell}}C_{\ell,i}+\frac{1}{\lambda ^{2}}\left(\sum_{\ell=0}^{L}\frac{V_{\ell}}{N_{\ell}}-\theta\epsilon^{2}+t^{2} \right)=\sum_{\ell=0}^{L}N_{\ell}C_{\ell}+\frac{1}{\lambda^{2}}\left(\sum_{ \ell=0}^{L}\frac{V_{\ell}}{N_{\ell}}-\theta\epsilon^{2}+t^{2}\right),\]
where \(\lambda\) is the Lagrange multiplier and the auxiliary variable \(t^{2}\), known as a _slack variable_ in optimization [20; 43], was introduced to transform the inequality constraint into an equality constraint. Treating \(N_{\ell}\) as a continuous variable and under a set of suitable assumptions, known as the Karush-Kuhn-Tucker optimality conditions (or simply KKT conditions) [5; 43], the optimal sample size can be shown [19] to be
\[N_{\ell}=\left\lceil\frac{1}{\theta\epsilon^{2}}\sqrt{\frac{V_{\ell}}{C_{\ell }}}\sum_{k=0}^{L}\sqrt{V_{k}C_{k}}\right\rceil. \tag{15}\]
With this expression for \(N_{\ell}\), the optimal total cost for the MLMC-FE estimator is
\[C(A_{\rm MLMC})\leq\frac{1}{\theta\epsilon^{2}}\left(\sum_{\ell=0}^{L}\sqrt{V _{\ell}C_{\ell}}\right)^{2}+\sum_{\ell=0}^{L}C_{\ell}. \tag{16}\]
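The allocation rule (15) translates directly into a few lines of code. The sketch below is illustrative only and assumes the per-level variances \(V_{\ell}\) and average costs \(C_{\ell}\) have already been estimated.

```python
import numpy as np

def optimal_sample_sizes(V, C, eps, theta=0.5):
    """Per-level sample sizes N_l from (15), given variances V_l and costs C_l."""
    V, C = np.asarray(V, dtype=float), np.asarray(C, dtype=float)
    total = np.sum(np.sqrt(V * C))                      # sum_k sqrt(V_k C_k)
    return np.ceil(np.sqrt(V / C) * total / (theta * eps ** 2)).astype(int)
```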
The formula (15) suggests an iterative procedure for the approximation of \(\mathbb{E}(u)\). Starting from a computational mesh \(\mathcal{T}_{0}\), gather an initial number \(\widetilde{N}_{0}\) of samples \(u_{0}^{(i)}\) and estimate \(\widehat{\mathcal{E}}_{\rm Bias}\), \(\widehat{\mathcal{E}}_{\rm Stat}\), and \(V_{0}\). If \(\widehat{\mathcal{E}}_{\rm Stat}\) is larger than the prescribed tolerance, use (15) to update the sample size and gather additional samples; if \(\widehat{\mathcal{E}}_{\rm Bias}\) is above the prescribed tolerance, then add an additional level of spatial refinement. The process continues, adding discretization levels and/or collecting additional samples, until both \(\widehat{\mathcal{E}}_{\rm Bias}\) and \(\widehat{\mathcal{E}}_{\rm Stat}\) fall below the required tolerance, at which point \(\mathbb{E}(u)\) is approximated using equation (9). This simple algorithm, however, presents one challenge: the term \(V_{\ell}\) in equation (15) requires the sample variance
\[\mathbb{V}(Y_{\ell})=\frac{1}{N_{\ell}-1}\left(\sum_{i=1}^{N_{\ell}}\left\|Y_{ \ell}^{(i)}\right\|_{Z}^{2}-\frac{1}{N_{\ell}}\left\|\sum_{i=1}^{N_{\ell}}Y_{ \ell}^{(i)}\right\|_{Z}^{2}\right). \tag{17}\]
However, these sample variances are available only for pre-existing discretization levels; hence, whenever an additional mesh refinement is needed, \(N_{L+1}\) cannot be obtained from (15), as this formula requires \(V_{L+1}\) to compute \(N_{L+1}\). This inconvenience can be overcome by noting that
\[\mathbb{V}(u-u_{\ell})=\mathbb{E}\left[\|u-u_{\ell}\|_{Z}^{2}\right]-\| \mathbb{E}\left(u-u_{\ell}\right)\|_{Z}^{2}\leq\mathbb{E}\left[\|u-u_{\ell}\|_ {Z}^{2}\right]. \tag{18}\]
Hence, the variance \(\mathbb{V}(u-u_{\ell})\) can be estimated by the expectation of the squared discretization error \(\|u-u_{\ell}\|_{Z}^{2}\). For a uniformly refined grid, we can resort to a standard _a priori_ error estimate and assume that \(\mathbb{E}[\|u-u_{\ell}\|_{Z}^{2}]=\mathcal{O}(M_{\ell}^{-b_{1}})\) for some \(b_{1}>0\). It then follows that \(V_{\ell}=\mathcal{O}(M_{\ell}^{-b_{1}})\) and following [41] we can then approximate \(V_{L+1}\) in terms of the known variance \(V_{L}\) by
\[V_{L+1}=(M_{L+1}/M_{L})^{-b_{1}}\,V_{L}. \tag{19}\]
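The variance bookkeeping in (17)-(19) is equally short in code. The helpers below are ours and use the Euclidean inner product as a stand-in for the weighted norm \(\|\cdot\|_{Z}\); rows of `S` are the vector-valued samples \(Y_{\ell}^{(i)}\).

```python
import numpy as np

def sample_variance(S):
    """Unbiased sample variance (17) for vector-valued samples."""
    S = np.asarray(S, dtype=float)          # shape (N_l, n_dofs)
    n = S.shape[0]
    sum_sq = np.sum(S * S)                  # sum_i ||Y_l^(i)||^2
    sq_sum = np.sum(S.sum(axis=0) ** 2)     # ||sum_i Y_l^(i)||^2
    return (sum_sq - sq_sum / n) / (n - 1)

def extrapolate_variance(V_L, M_L, M_next, b1):
    """A priori extrapolation (19) of the variance to a not-yet-sampled level."""
    return (M_next / M_L) ** (-b1) * V_L
```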
A summary of the preceding analysis on the computational cost to compute the MLMC-FE estimator in terms of the desired relative accuracy \(\epsilon\) was established in full generality in [19, Theorem 1].
### Adaptive multilevel Monte Carlo finite element method
In view of the benefits of approximating the quantity of interest across a sequence of increasingly finer meshes, and with the goal of further reducing the cost of controlling the discretization bias, it is natural to focus the refinement only on those parts of the mesh where the error is concentrated. Our goal is then, starting from a computational mesh \(\mathcal{T}_{0}\), to generate a family of adaptively refined meshes \(\{\mathcal{T}_{\ell}\}_{0\leq\ell\leq L}\) that will produce better approximations of \(\mathbb{E}(u)\) than the ones resulting from consecutive uniform refinements of the initial grid. With this goal in mind, the use of an _a posteriori_ error estimator to guide the construction of the family of meshes has been proposed in the context of multilevel Monte Carlo methods [11; 27; 28; 33].
**Residual error estimator.** A key ingredient in an adaptive solver is a local error estimator. In our case, for each element \(K\) on the mesh \(\mathcal{T}_{\ell}\), we will use the simple residual-based a posteriori error indicator
\[\eta_{K,\ell}(\mathbf{\omega}):=h_{K}^{2}\left\|\nabla\cdot\left(\tfrac{1}{\mu x}\nabla u_{\ell}(\mathbf{\omega})\right)-f(u_{\ell}(\mathbf{\omega}))\right\|_{K}+h_{K}^{3/2}\left\|\,[\![\tfrac{1}{\mu x}\nabla u_{\ell}(\mathbf{\omega})\cdot\mathbf{n}]\!]\,\right\|_{\partial K\setminus\partial\Omega}, \tag{20}\]
where \(\partial K\) is the boundary of the element \(K\), \(h_{K}\) is the diameter of \(K\), \(\mathbf{n}\) is the outward unit normal to the element \(K\), \([\![\cdot]\!]\) denotes the jump across the edge of an interior element, and \(f\) is the source term defined piecewise on the right-hand side of (1a). Following [11; 33], we will further define the mean local and mean global error estimator respectively as
\[\eta_{K,\ell}:=\mathbb{E}\left(\eta_{K,\ell}(\mathbf{\omega})\right)\quad\text{ and }\quad\eta_{\ell}^{2}:=\sum_{K\in\mathcal{T}_{\ell}}\eta_{K,\ell}^{2}. \tag{21}\]
For linear deterministic problems, estimators of this form can be shown to satisfy \(C_{1}\eta_{\ell}\leq\|u-u_{\ell}\|_{Z}\leq C_{2}\eta_{\ell}\) for some constants \(C_{1},C_{2}>0\). Therefore, the error estimator will accurately locate the regions of high error density and will decay at the same rate as the true error [35]. The global error can then be approximated by adding the local estimators over the entire triangulation. Using these error estimators, the adaptive analogue of (18) can be written as
\[\mathbb{V}(u-u_{\ell})\leq\mathbb{E}\left[\|u-u_{\ell}\|_{Z}^{2}\right]\approx \eta_{\ell}^{2}, \tag{22}\]
which then leads to the following adaptive analogue of the extrapolation formula (19)
\[V_{L+1}=(\eta_{L+1}/\eta_{L})^{2}\,V_{L}. \tag{23}\]
This estimate can then be used in combination with (15) to obtain an update for the sample size required at each adaptive level.
**Adaptive solution cycle.** With these definitions in place, we can then describe our strategy, which follows the "\(\mathrm{SOLVE}\to\mathrm{ESTIMATE}\to\mathrm{MARK}\to\mathrm{REFINE}\)" paradigm familiar from deterministic adaptive solvers [7], as:
1. **Solve:** Starting from a fixed number of samples, the problem (1a) is solved on the initial mesh \(\mathcal{T}_{0}\).
2. **Estimate:** The local mean error estimator is approximated from the samples gathered.
3. **Mark:** The set \(\mathcal{M}_{\ell}\) containing the smallest possible number of elements in \(\mathcal{T}_{\ell}\) satisfying \[\sum_{K\in\mathcal{M}_{\ell}}\eta_{K,\ell}^{2}\geq\zeta\eta_{\ell}^{2},\] (24) for some predetermined value \(\zeta\in[0,1]\) is marked for refinement--this marking strategy is known as _Dörfler marking_ in the adaptive finite element community [10] (a sketch of this step follows the list).
4. **Refine:** The elements marked are refined in such a way that the resulting triangulation \(\mathcal{T}_{\ell+1}\) is shape-regular and conforming. Efforts should be made to make sure that the growth of the number of elements is kept at bay. In our case, we used the implementation given in [16] of the algorithms described in [6; 15; 40].
The steps above are repeated until the error estimator \(\eta_{\ell}\) falls below a certain predetermined value.
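The MARK step admits a particularly compact implementation: sort the local indicators in decreasing order and take the shortest prefix whose squared indicators reach the bulk fraction \(\zeta\) in (24). The sketch below is ours, not taken from the implementation in [16].

```python
import numpy as np

def doerfler_mark(eta_K, zeta=0.5):
    """Indices of the smallest element set satisfying the bulk criterion (24)."""
    eta_K = np.asarray(eta_K, dtype=float)
    order = np.argsort(eta_K)[::-1]             # largest local indicators first
    csum = np.cumsum(eta_K[order] ** 2)
    n_marked = int(np.searchsorted(csum, zeta * csum[-1])) + 1
    return order[:n_marked]
```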
**A notion of mesh level.** For uniformly refined grids, the notion of mesh level is natural: starting from a mesh \(\mathcal{T}_{\ell}\), one step of uniform refinement decreases the mesh parameter \(h\) across the grid by a factor of \(1/2\); the resulting mesh is said to have level \(\ell+1\) and is denoted by \(\mathcal{T}_{\ell+1}\). For adaptively refined grids, where the mesh parameter is not constant through the grid, the notion of level does not come so naturally. We will use the fact that, for a uniform refinement, the numerical error decays by a factor of \((1/2)^{p}\) (where \(p\) is the order of the FEM solver) with each successive level to extend the notion of mesh level to adaptively refined grids.
Consider a numerical approximation \(u_{\ell}\) obtained on a mesh \(\mathcal{T}_{\ell}\) with an associated error estimation given by \(\eta_{\ell}\). We will say that a mesh has level \(\ell+1\) and will denote it by \(\mathcal{T}_{\ell+1}\) if it was obtained from \(\mathcal{T}_{\ell}\) by cycling over the adaptive loop using the value \((1/2)^{p}\eta_{\ell}\) as the stopping tolerance. In other words, we will say that an adaptively refined mesh has level \(\ell+1\) if it produces a numerical solution with an error \((1/2)^{p}\) times smaller than one with level \(\ell\), just like in the uniform case. We will refer to \(q:=(1/2)^{p}\) as the decay factor. In terms of discretization accuracy, after \(\ell\) steps of adaptive refinement, an adaptively refined mesh with level \(\ell\) will have an associated error estimation \(\eta_{\ell}=q^{\ell}\eta_{0}\), where \(\eta_{0}\) corresponds to the error estimation at the initial mesh. In our numerical experiments, since the convergence rate of the piecewise linear solver is 2 (when measured in the \(L^{2}\) norm), we shall use a decay factor \(q=1/4\) to define our adaptively refined meshes.
**Deterministic adaptive grids.** Ideally, in the stochastic setting, all the error estimations collected from the totality of samples would be used to drive the adaptive refinement forward and build an optimal set of meshes at every level. However, due to the iterative nature of the algorithm arising from (15), the optimal mesh at every level would have to be corrected with every new batch of samples and the solutions corresponding to all previous realizations \(\omega^{(i)}\) would have to be recomputed. The computational cost of re-sampling in this manner quickly becomes impractical.
Instead, following [27; 28; 34], we will construct a sequence of deterministic adaptive grids with partial knowledge about \(\mathbb{E}(u)\) as follows. Starting from a sample \(\{\omega^{(i)}\}_{1\leq i\leq N}\) (where \(N\) is small and arbitrarily chosen) and a mesh \(\mathcal{T}_{0}\), the PDE is solved and the local error is estimated for every solution \(u_{0}^{(i)}\), resulting in \(N\) local error estimators \(\{\eta_{K,0}(\omega^{(i)})\}_{1\leq i\leq N}\). The mean local and global error estimators \(\eta_{K,0}\) and \(\eta_{0}\) are then approximated by the sample means of the individual estimators. Using this approximation of \(\eta_{K,0}\), the mesh \(\mathcal{T}_{0}\) is refined. This process is continued until the approximated mean error estimator satisfies \(\eta\leq q\eta_{0}\); the resulting mesh is stored and labeled as \(\mathcal{T}_{1}\) (mesh level 1). The previous steps are repeated until a target number of meshes \(\{\mathcal{T}_{\ell}\}_{0\leq\ell\leq L}\) have been generated. The process is described in Algorithm 1. Since the family of meshes produced is constructed using random samples of \(\omega\), they approximately reduce the error for the approximate expectation \(\mathbb{E}(u_{h})\) by a factor of \(q\) with every increasing level. This family of meshes is then kept fixed during the MLMC run.
Algorithm 1: Construction of the deterministic family of adaptively refined meshes \(\{\mathcal{T}_{\ell}\}_{0\leq\ell\leq L}\) (see the description above).
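A hedged Python sketch of this construction follows; it is an illustration of the procedure described above, not the code used in the experiments. The callables `solve`, `local_indicators`, and `refine` are assumed oracles (the FEM solve of (1a), the element-wise indicators (20), and conforming refinement of a marked set, respectively), `omegas` is the fixed set of sampled parameters, and `doerfler_mark` is the helper sketched earlier.

```python
import numpy as np

def build_adaptive_hierarchy(mesh0, omegas, L, solve, local_indicators, refine,
                             q=0.25, zeta=0.5):
    """Deterministic adaptive meshes T_0,...,T_L driven by the sample-mean
    error indicators; each level targets a decay of the global estimator by q."""
    def mean_indicators(mesh):
        # approximate eta_{K,l} by averaging the per-sample indicators (21)
        return np.mean([local_indicators(solve(mesh, w), mesh) for w in omegas], axis=0)

    meshes, mesh = [mesh0], mesh0
    eta_K = mean_indicators(mesh)
    eta = np.sqrt(np.sum(eta_K ** 2))            # global estimator on T_0
    for _ in range(L):
        target = q * eta                         # stopping tolerance for this level
        while eta > target:
            mesh = refine(mesh, doerfler_mark(eta_K, zeta))
            eta_K = mean_indicators(mesh)
            eta = np.sqrt(np.sum(eta_K ** 2))
        meshes.append(mesh)                      # store as the next level
    return meshes
```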
## 4 Numerical results

We compare the running times (or computational cost) and the accuracy of some geometric descriptors that are generated from the approximation of \(\mathbb{E}(\psi)\) obtained with each of the techniques. Following [14], we consider an ITER geometry with 12 coils and a "baseline" vector of target current intensities \(\mathbf{I}\) given by
\[\begin{array}{llll}I_{1}=-1.4\times 10^{6}A,&\quad I_{2}=-9.5\times 10^{6}A,& \quad I_{3}=-2.0388\times 10^{7}A,&\quad I_{4}=-2.0388\times 10^{7}A,\\ I_{5}=-9\times 10^{6}A,&\quad I_{6}=3.564\times 10^{6}A,&\quad I_{7}=5.469\times 10^{6 }A,&\quad I_{8}=-2.266\times 10^{6}A,\\ I_{9}=-6.426\times 10^{6}A,&\quad I_{10}=-4.82\times 10^{6}A,&\quad I_{11}=-7.504 \times 10^{6}A,&\quad I_{12}=1.724\times 10^{7}A.\end{array} \tag{25}\]
We will refer to these values as the _reference currents_. The profiles for \(p\) and \(g\) on the right hand side of (1a) follow the form given in (1b) with the specific values \(r_{0}=6.2m\), \(\beta=0.5978\), \(\alpha_{1}=2\), and \(\alpha_{2}=1.395\). In our experiments, we will take the vector of current intensities to be subject to uncertainty modeled as a uniformly distributed perturbation of magnitude \(\tau=2\%\) centered around the reference values above.
### Experiment description
In this section, we present numerical results comparing the three approaches: MC-FE, uniform MLMC-FE, and adaptive MLMC-FE. For the solution of (1), we used the finite element-based solver FEEQS.m [24] developed by Holger Heumann and collaborators as a lightweight Matlab implementation of the code CEDRES++ [14; 25]. The code implements a piecewise linear finite element discretization of a weak formulation of (1) and employs a globalized variation of Newton's method to resolve the nonlinearity. (The stopping threshold for the relative residual was set to \(5\times 10^{-11}\).) For the solution of the perturbed problems, the initial iterate of Newton's method was taken to be the solution corresponding to the reference currents \(\mathbf{I}\). All tests used the splitting parameter \(\theta=0.5\) in (6). The user-specified tolerances for the normalized mean squared error range from \(\epsilon=2\times 10^{-4}\) to \(8\times 10^{-3}\). Experiments were conducted using MATLAB R2022a on a System 76 Thelio Major with 256GB RAM and a 64-Core @4.6 GHz AMD Threadripper 3 Processor.
To produce an estimate of the number of samples required on each discretization level, equation (15) requires the knowledge of two problem-dependent parameters: the power \(c\) appearing in the estimate for the computational cost (14), and the normalized variance of the correction terms \(V_{\ell}\) as defined in (13). The normalization factor \(\|\mathbb{E}(u_{L})\|_{Z}\) in (13) was estimated on the finest uniform mesh level (\(\ell=5\)) to be approximately \(8.57\times 10^{-1}\). To estimate the value of \(c\), 100 random currents are sampled for different mesh sizes \(M_{\ell}\), the processing times required to obtain the solutions are averaged for each mesh size, and \(c\) is estimated through a regression. Figure 2 shows the behavior of the average cost as a function of the mesh size \(M_{\ell}\); from the data displayed, the power law is estimated to be \(c\approx 1.09\). Note that this cost estimate is based on Matlab timings and not on the complexity analysis of standard linear solution algorithms. The same samples are also used to estimate the sample means, \(\mathbb{E}(Y_{\ell})\) or \(\mathbb{E}(u_{h})\), and variances, \(\mathbb{V}(Y_{\ell})\) or \(\mathbb{V}(u_{h})\) dynamically using Welford's algorithm [47]. As the new samples are gathered, the mean \(m_{w}\) and proxy for the variance, \(s_{w}\), are updated using the following formulas for the \(i\)-th sample, with \(m_{w}^{(0)}=0,s_{w}^{(0)}=0\):
\[m_{w}^{(i)}=m_{w}^{(i-1)}+\frac{u^{(i)}-m_{w}^{(i-1)}}{i},\quad s_{w}^{(i)}=s_ {w}^{(i-1)}+\left\langle u^{(i)}-m_{w}^{(i-1)},u^{(i)}-m_{w}^{(i)}\right\rangle.\]
Using sample size \(i\), the variance is then given by \(V^{(i)}=s_{w}^{(i)}/(i-1)\).
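For reference, one step of this update for vector-valued samples reads as follows; the sketch is ours, with the Euclidean inner product standing in for the inner product used above.

```python
import numpy as np

def welford_update(i, u_i, m, s):
    """One Welford step (i >= 1), matching the formulas above."""
    m_new = m + (u_i - m) / i
    s_new = s + np.dot(u_i - m, u_i - m_new)
    return m_new, s_new

# After i samples, the variance estimate is V = s / (i - 1).
```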
To perform MLMC-FE simulations, the user typically defines and generates a sequence of spatial grids, where, given a tolerance \(\epsilon\), the fineness of the grid is determined by the requirement that the discretization error (\(\widetilde{\mathcal{E}}_{\text{Bias}}^{2}\) in (6)) or an estimate of it be small enough. In this study, we generated two types of grids, a set of _geometry-conforming_ uniformly refined grids, and a set of adaptively refined grids constructed using
the strategy presented in Section 3.3.1 For the uniformly refined grid, we generated a total of six levels of grids. We note that, due to the increasing accuracy of the spline approximation of the curved boundaries, these meshes are not a direct refinement of each other. Instead, each level is characterized by a mesh parameter \(h_{\ell}\) being roughly half of the preceding mesh as determined by the Triangle mesh generator [45]. The adaptive refinement strategy began with the coarsest mesh from the uniform family and applied the weighted \(L_{2}\) a posteriori error estimator specified in (20) and \(q=1/4\) to reflect a similar error decay as for uniform refinement, also using Triangle to generate the desired adaptively refined meshes. The number of grid points for each of these methods on different grid levels is shown in Table 1. Note that the grid sizes for the adaptive meshes are not dramatically smaller than for uniform meshes, which suggests that the solution, as a function of the spatial variables, is fairly smooth.
Footnote 1: The domain components contain curved boundaries, which we handled by treating them as polygonal structures. The mesh generation entails identifying the curved boundaries using piecewise splines and interpolating along these splines with grids of varying fineness.
### Computational cost
Figure 3 shows a variety of computational results, including the error estimator, \(V_{\ell}\), and CPU time for the two versions of MLMC (and times for full MC). To investigate the convergence behavior of the discretization error, we calculate the a posteriori error estimator for both uniform and adaptive meshes in the same experiment, obtaining an estimate of its decay rate before conducting the simulations. The results are displayed in the top left plot in Figure 3, with a dashed line showing a least-squares fit, indicating that the discretization error of both methods exhibits an asymptotic rate of \(\mathcal{O}(M^{-1})\) (or \(p\approx 2\)). The similar convergence rate further indicates that the solution to the problem is smooth and the error is equidistributed, rendering the adaptive strategy comparable to uniform mesh refinement. Note that the estimated error is used for variance extrapolation in (23) during the MLMC simulations.
| Level \(\ell\) | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- |
| Uniform | 2685 | 8019 | 30449 | 120697 | 484080 | 1934365 |
| Adaptive | 2685 | 6090 | 25099 | 103968 | 411913 | 1552282 |

Table 1: The number of grid points \(M_{\ell}\) for both geometry-conforming uniform and adaptive (\(q=1/4\)) meshes as the mesh levels increase from 0 to 5.
Figure 2: Mean CPU time to compute 100 realizations of \(u_{\ell}\), as a function of the number of grid points \(M_{\ell}\), plotted on a logarithmic scale. The fitted curve indicates that the computational cost \(C_{\ell}\) behaves approximately like \(M_{\ell}^{1.09}\).
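The exponent reported in Figure 2 comes from a power-law fit, which amounts to a linear regression in log-log coordinates; a minimal sketch (ours):

```python
import numpy as np

def fit_cost_exponent(M, T):
    """Fit T ~ const * M**c by least squares on log-log axes; returns c."""
    c, _ = np.polyfit(np.log(M), np.log(T), 1)
    return c
```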
The top right plot of Figure 3 shows the behavior of \(V_{\ell}\) of (13) for both uniform and adaptive MLMC-FE methods with \(\epsilon=2\times 10^{-4}\), using six levels of meshes. It can be seen that both methods demonstrate a decreasing trend in the values of \(V_{\ell}\) as the mesh resolution increases, with a power-law decay characterized by \(b\approx 2\) in the least-squares fit. However, there is a pre-asymptotic regime for a small number of grid points where the asymptotic behavior of the adaptive method is not evident, in contrast to the behavior of the uniform method. As the meshes get finer, the plots of \(V_{\ell}\) for the two methods are close to being parallel.
The computational effort for uniform MLMC-FE and MC-FE scales as \(\mathcal{O}(\epsilon^{-2})\) and \(\mathcal{O}(\epsilon^{-3})\) respectively, as indicated by the slopes of the least-squares fit lines for the red and yellow curves. This observation is consistent with the theoretical cost predictions in Theorem 1 (with \(b>c\)) and (8). Theorem 1 also indicates that the majority of computational work is performed on coarse grids. Table 2 shows the sample sizes obtained from (7) for MC-FE and (15) for MLMC-FE, further demonstrating a decrease in \(N_{\ell}C_{\ell}\) as \(\ell\) increases for the multilevel methods. We also found that the computational cost associated with the smallest tolerance \(\epsilon=2\times 10^{-4}\) is so large that we were unable to generate MC-FE results on a fine mesh (\(\ell=5\)) with a large sample size. In contrast, both versions of MLMC-FE successfully generated results with this tolerance.2 Table 3 gives quantitative data on the costs in CPU time for the three methods, as well as the speedups achieved by the two multilevel methods. It can be seen that for small tolerance \(\epsilon\), both these methods reduce the CPU times dramatically, with many examples of speedups greater than a factor of 60 and a best-case speedup of approximately 200.
Footnote 2: Although we could not directly generate results for MC-FE for \(\epsilon=2\times 10^{-4}\), we could estimate the costs. In particular, we found that the variance \(V_{h}\) is close to constant across mesh levels. Consequently, we used (7) to estimate the number of required samples as 8000 in Table 2, approximately four times the number required for \(\epsilon=4\times 10^{-4}\). This number was multiplied by the mean CPU time observed for the computations for Figure 2 (120.3 seconds, the largest entry appearing in the figure) to give the estimated total CPU time in Table 3.
As seen in the bottom left plot of Figure 3, the uniform and adaptive versions of MLMC-FE have similar computational costs of \(\mathcal{O}(\epsilon^{-2})\), as evidenced by the similar decay rate of the error estimator and (22). According to (16), the slightly smaller magnitude of the error estimator for the adaptive MLMC-FE suggests a smaller (or comparable) computational cost in the asymptotic regime. However, when \(\epsilon=4\times 10^{-4}\), the adaptive MLMC-FE method requires approximately twice as much CPU time (\(1.79\times 10^{3}\) seconds) compared to the uniform MLMC-FE approach (\(9.29\times 10^{2}\) seconds) due to a notable increase in \(V_{\ell}\) around \(M_{\ell}=10^{4}\). This also causes the speedups achieved using adaptive refinement to be somewhat smaller than for uniform refinement. Thus, the traditional advantage of adaptive mesh refinement is not clearly present. We also attribute this to the apparent smoothness of the solution. We will demonstrate some advantages of the adaptive strategy in Section 4.3.
The bottom right plot of Figure 3 shows that the nMSE tolerance \(\epsilon\) of the MC-FE approach declines at \(\mathcal{O}(N^{-0.51})\), which is consistent with the well-known square root convergence rate. This rate holds since \(\mathbb{V}(u_{h})\) remains nearly constant across all levels.
### Properties of geometric parameters
Next, we will explore the plasma boundaries and geometric descriptors of the expected poloidal flux \(\psi\) resulting from the three methods. To ensure a fair comparison, we will use the results obtained from the MC-FE on the finest uniform mesh as a reference benchmark.
**Plasma boundary.** In Figure 4, we investigate the plasma boundary for the expected poloidal flux in (3). We present the plasma boundaries obtained from 50 random currents in light violet curves and the plasma boundary of the expected solution in dark violet. In Figure 5 we present plots depicting the x-points and plasma boundaries of the expected solution \(\psi\) computed using only samples and corrections from increasingly finer grids for both uniform and adaptive MLMC-FE approaches. The data was obtained with tolerance \(\epsilon=4\times 10^{-4}\). As can be seen when moving from left to right in Figure 5, the result obtained using the information from the coarsest level (leftmost column) is progressively corrected with information from increasingly finer grids, leading to the desired result depicted in the rightmost column.
Among the three methods, MC-FE yields the smoothest plasma boundaries in the vicinity of the x-point, followed by adaptive MLMC-FE, while the MLMC-FE approach on geometry-conforming uniform meshes manifests the most pronounced irregularity in the plasma boundary. The boundary of the expected solution generated with the uniform grid MLMC-FE method exhibits irregularities, as can be seen in Figure 4. These large deformations can be primarily attributed to the additional challenges arising from the curved boundaries; Section 7.2 in the Appendix addresses this point in more detail. The top row of Figure 5 demonstrates that using a geometry-conforming mesh provides a more accurate approximation of the curved structure (in black) of the configuration than that in the bottom row. These observations underscore the challenge of striking a balance between preserving geometric fidelity when dealing with curved boundaries and the desired statistical accuracy of the solution.
Figure 3: Top left: weighted \(L_{2}\) error (with weight \(\mu x\)) of estimator \(C\eta_{\ell}\) vs. number of grid points \(M_{\ell}\) plot. Top right: normalized variance \(V_{\ell}\) vs. number of grid points \(M_{\ell}\) plot. Bottom left: CPU time in seconds vs. tolerance \(\epsilon\). Bottom right: Monte Carlo convergence rate estimate with tolerance \(\epsilon\) vs. sample size \(N\). This plot is generated from Table 2.
**Geometric descriptors.** Table 4 reports some geometric parameters of the expected poloidal flux \(\psi\) in (3). It is observed that these parameters are consistent across different simulation techniques, with agreement typically up to two or, in some cases, one significant digit. Despite the advantages of low computational cost, the MLMC-FE-based methods may encounter difficulties in accurately determining the locations of x-points and magnetic axis. Note that the x-points, which correspond to saddle points of the piecewise linear approximation of \(\psi\), can only be located at the nodes of the mesh. The numerical identification of their exact locations, which often relies on changes in the sign of the discrete gradient, can be challenging; see [4; 12; 29] for discussions of the computational difficulties.
In summary, simulations using uniform MLMC-FE on non-nested geometry-conforming uniform meshes may encounter substantial difficulty in accurately identifying the x-point and may yield less accurate quantities, especially for the plasma boundary, when compared to the results obtained from MC-FE. On the other hand, the adaptive MLMC-FE approach on a nested adaptively refined mesh set produces results that closely align with those of MC-FE at a much lower computational cost.
## 5 Concluding remarks
The objective of this study is to evaluate the performance of MC-FE and several variants of MLMC-FE for the Grad-Shafranov free boundary problem under high-dimensional uncertainties in the currents. This model introduces some complexities, most notably complex boundary structures, that must be treated carefully in the context of multilevel methods. We handled this for uniform MLMC-FE using a sequence of uniform meshes that conform to the curved boundaries in the geometry.
We also found that a feature of adaptive MLMC-FE is that adaptive gridding leads to a somewhat better representation of geometric quantities; in contrast, errors introduced by computations on uniform coarse meshes lead to distortions of some features. The traditional advantage of adaptive refinement, reduced computational cost, was less clearly present. But the overall advantage of the MLMC approach in reducing costs is dramatic.
## 6 Acknowledgements
All the free boundary computations were carried out using the code FEEQS.M [24; 25]. The authors are deeply grateful to Holger Heumann, the INRIA CASTOR team, and all the development team of the CEDRES++ free boundary solver for sharing access to the code and for helping us get up to speed with its usage.
Howard Elman has been partially supported by the U. S. Department of Energy under grant DE-SC0018149 and by the U. S. National Science Foundation under grant DMS1819115. Jiaxing Liang has been partially funded by the U. S. Department of Energy under grant DE-SC0018149. Tonatiuh Sanchez-Vizuet has been partially funded by the U. S. National Science Foundation through the grant NSF-DMS-2137305 "LEAPS-MPS: Hybridizable discontinuous Galerkin methods for non-linear integro-differential boundary value problems in magnetic plasma confinement".
Figure 4: The overlaid plasma boundaries of 50 random realizations are displayed in the top row as violet curves. The solid violet line is the plasma boundary of the expected poloidal flux generated with tolerance \(\epsilon=4\times 10^{-4}\). The inner and outer walls of the reactor are displayed in solid black and dark red respectively. The bottom row shows the regions close to the x-points in more detail. The dark green dots are the x-points of the expected solution. Each column from left to right corresponds to: simulation with the Monte Carlo approach, MLMC simulation on geometry-conforming uniform meshes, and adaptive MLMC simulation. All simulations were performed using the discretization level \(\ell=5\).
## 7 Appendix
We now describe some technical mathematical and implementation details related to the work presented. This additional material is provided for the sake of completeness but is not needed to follow the discussion from the previous sections.
### Weak formulation of the Grad-Shafranov equation
_Deterministic problem._ We will start by introducing the weak formulation for the Grad-Shafranov problem in the case where all the parameters involved are deterministic. Consider a semi-circle centered at the origin, with boundary \(\Gamma\) and radius \(\rho\), such that it fully contains all the relevant reactor components depicted in Figure 1. If \(\Omega\) denotes the region surrounded by \(\Gamma\) then, by construction, for any point in \(\Omega^{c}\) the right-hand side of (1a) will vanish identically. We will then, following [23], consider the space of real-valued functions
\[Z:=\left\{u:\Omega\to\mathbb{R}\,\bigg{|}\,\int_{\Omega}u^{2}x\,dxdy<\infty\,, \,\int_{\Omega}\frac{|\nabla u|^{2}}{x}dxdy<\infty;\,\,\,\text{and}\,\,\,u(0, y)=0\right\}\cap C^{0}\left(\overline{\Omega}\right). \tag{26}\]
| | MC-FE | Uniform MLMC-FE | Adaptive MLMC-FE |
| --- | --- | --- | --- |
| x-point | (5.14, -3.29) | (5.14, -3.29) | (5.14, -3.28) |
| magnetic axis | (6.41, 0.61) | (6.44, 0.56) | (6.46, 0.54) |
| strike points | (4.16, -3.71), (5.56, -4.22) | (4.16, -3.71), (5.56, -4.22) | (4.16, -3.71), (5.56, -4.21) |
| inverse aspect ratio | 0.32 | 0.32 | 0.32 |
| elongation | 1.86 | 1.87 | 1.86 |
| upper triangularity | 0.43 | 0.43 | 0.43 |
| lower triangularity | 0.53 | 0.53 | 0.53 |

Table 4: Geometric parameters of the expected poloidal flux \(\psi\) from MC-FE, MLMC-FE with the geometry-conforming uniform mesh set, and adaptive MLMC-FE. The results are generated with an nMSE tolerance of \(4\times 10^{-4}\).
Figure 5: The violet curves represent the expected plasma boundaries for simulations using increasingly finer grids when \(\epsilon=4\times 10^{-4}\) using the sample sizes specified in Table 2. Each sub-plot focuses on a region near the x-point, maintaining the same zoom-in ratio as the second row of Figure 4. The dark green dots denote the locations of the x-point. The top row shows the results of MLMC-FE on a set of geometry-conforming uniform meshes, while the bottom row displays the results for adaptive MLMC.
This space arises naturally when testing equation (1a) using the weighted \(L_{2}\) inner product defined by
\[\langle u,v\rangle:=\int_{\Omega}uvx\,dxdy,\]
which leads to the finite energy requirement appearing in the second inequality in the definition (26). The third requirement (\(u(0,y)=0\)) is a result of the anti-symmetry of the problem with respect to reflections across the axis of symmetry of the reactor and has the effect of ensuring that the quantity
\[\|u\|_{Z}:=\left(\int_{\Omega}\frac{|\nabla u|^{2}}{x}dxdy\right)^{1/2} \tag{27}\]
does indeed define a norm in the space \(Z\). We will refer to this norm as the _energy norm_. The space \(Z\) defined in (26) endowed with the energy norm (27) is the natural function space to look for variational solutions to the deterministic linearized Grad-Shafranov equation (1). The variational formulation of the problem, as derived in [1, 17, 30], is that of finding \(\psi\in Z\) such that for every test function \(\varphi\in Z\) it holds that:
\[\int_{\Omega}\frac{1}{\mu x}\nabla\psi\cdot\nabla\varphi\,dxdy-\int_{\Omega_{p}}\left(xp^{\prime}(\psi)+\frac{1}{\mu_{0}x}ff^{\prime}(\psi)\right)\varphi\,dxdy+\int_{\Gamma}\psi\,N\,\varphi\,dc\] \[\quad+\int_{\Gamma}\int_{\Gamma}\left(\psi(\mathbf{x}_{1})-\psi(\mathbf{x}_{2})\right)M(\mathbf{x}_{1},\mathbf{x}_{2})\left(\varphi(\mathbf{x}_{1})-\varphi(\mathbf{x}_{2})\right)\,dc(\mathbf{x}_{1})\,dc(\mathbf{x}_{2})=\sum_{k=1}^{M_{c}}\frac{I_{k}}{S_{k}}\int_{\Omega_{C_{k}}}\varphi\,dxdy. \tag{28}\]
Above, the magnetic permeability \(\mu\) is either a function of \(\psi\) inside a region occupied by a ferromagnetic material, \(\mu=\mu(|\nabla\psi|^{2}/x^{2})\), or a constant \(\mu=\mu_{0}\) elsewhere, \(\mathbf{x}_{i}=(x_{i},y_{i})\) denote position vectors, \(\Omega_{C_{k}}\) denotes the region occupied by the \(k\)-th coil, and the total number of coils is denoted by \(M_{c}\). The following quantities, appearing on the left-hand side of (28), are related to the Green's function associated with the operator on the left-hand side of (1a)
\[N:=\frac{1}{x}\left(\frac{1}{\delta_{+}}+\frac{1}{\delta_{-}}- \frac{1}{\rho}\right),\qquad\delta_{\pm}:=\sqrt{x^{2}+(\rho\pm y)^{2}}\,, \qquad\kappa(\mathbf{x}_{1},\mathbf{x}_{2}):=\sqrt{\frac{4x_{1}x_{2}}{(x_{1}+x_{2})^{2 }+(y_{1}-y_{2})^{2}}}\,,\] \[M(\mathbf{x}_{1},\mathbf{x}_{2}):=\frac{\kappa(\mathbf{x}_{1},\mathbf{x}_{2})}{2 \pi(x_{1}x_{2})^{3/2}}\left(\frac{2-\kappa^{2}(\mathbf{x}_{1},\mathbf{x}_{2})}{2-2 \kappa^{2}(\mathbf{x}_{1},\mathbf{x}_{2})}E(\kappa(\mathbf{x}_{1},\mathbf{x}_{2}))-K(\kappa( \mathbf{x}_{1},\mathbf{x}_{2}))\right)\,,\]
where \(E(\kappa(\mathbf{x}_{1},\mathbf{x}_{2}))\) and \(K(\kappa(\mathbf{x}_{1},\mathbf{x}_{2}))\) are complete elliptic integrals of the second and first kind respectively [31].
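For readers who wish to evaluate these kernels numerically, the formulas above translate directly to SciPy. One caveat worth double-checking against [31]: `scipy.special.ellipk` and `ellipe` take the parameter \(m=\kappa^{2}\) rather than the modulus \(\kappa\). The sketch below is ours.

```python
import numpy as np
from scipy.special import ellipe, ellipk

def kernel_M(x1, y1, x2, y2):
    """Boundary kernel M(x1, x2) as written above; singular when the two
    points coincide (kappa -> 1)."""
    k = np.sqrt(4.0 * x1 * x2 / ((x1 + x2) ** 2 + (y1 - y2) ** 2))
    m = k ** 2  # SciPy convention: K(k) = ellipk(k**2), E(k) = ellipe(k**2)
    return (k / (2.0 * np.pi * (x1 * x2) ** 1.5)) * (
        (2.0 - k ** 2) / (2.0 - 2.0 * k ** 2) * ellipe(m) - ellipk(m)
    )
```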
_Accounting for stochasticity._ We now consider the stochasticity in the currents and allow the vector of currents to be a \(d\)-dimensional random variable \(\mathbf{\omega}\) uniformly distributed over the parameter space \(W\) defined in (2). It is clear that in this case for any particular realization of the currents \(\mathbf{\omega}\) we will obtain a _different_ equilibrium configuration \(\psi(\mathbf{\omega})\) that belongs to the Banach space \(Z\) defined in (26). Moreover, since for every \(\mathbf{\omega}\in W\) the resulting equilibrium configuration has finite energy, it then holds for the expected energy of the equilibrium that
\[\mathbb{E}\left(\|\psi\|_{Z}^{2}\right)<\infty.\]
In mathematical terms, we say that the stochasticity of the currents transforms the solution \(\psi\) to (1) into a Banach space-valued random variable with finite expected energy. If \(\mathbf{\omega}\) belongs to a complete and separable probability space \((W,\Sigma,\mathbb{P})\), the class of such random variables forms what is known as a _Bochner space_[9].
In our particular case, solutions to (1) are mappings from the parameter space \(W\) to the Banach space \(Z\) that, as functions of \(\omega\), belong to the Bochner space
\[L^{2}(W,\Sigma,\mathbb{P};Z):=\{u:\,W\to Z\,\big{|}\,\,u\text{ strongly measurable},\,\,\,\|u\|_{L^{2}(W,Z)}<\infty\},\]
where the norm \(\|\cdot\|_{L^{2}(W,Z)}\) is precisely defined in terms of the expected energy
\[\|u\|_{L^{2}(W,Z)}:=\left(\int_{W}\|u(\cdot,\omega)\|_{Z}^{2}\,d\mathbb{P}(\omega)\right)^{1/2}=\left(\mathbb{E}\left(\|u(\cdot,\omega)\|_{Z}^{2}\right)\right)^{1/2}.\]
### Meshing curved domains and their effect on MLMC estimations
When the curved boundaries of the computational domain are approximated by polygonal mesh boundaries, a _geometric error_ is introduced. In the deterministic setting this geometric error has the undesired consequence of hindering the decay of the discretization error since, as the mesh is refined, the discretized computational domain does not converge to the semicircle bounded by \(\Gamma\). In the stochastic setting the geometric error manifests itself in rendering the Monte Carlo estimator biased and inconsistent. The inconsistency stems from the fact that, as both the sample size and the mesh level increase, the Monte Carlo estimator does not converge to the expectation of the random variable \(u\) satisfying the free boundary problem. Instead, the estimator converges to the expectation of the random variable that satisfies a perturbation of (28) where the curve \(\Gamma\) is not a semicircle, but the initial polygonal approximation. If the initial mesh is fine enough, this geometric bias will likely be too small to affect the estimation.
On the other hand, if an exact descriptor of the curved boundaries is available, the aforementioned difficulty can be overcome by re-sampling the curved boundaries when building the sequence of finer grids, thus allowing for a resolution of the curved structures consistent with the respective mesh parameter. If an exact descriptor is not available it is possible to approximate it with, for instance, a cubic spline representation that interpolates the original polygonal representation. This spline surrogate is then used to re-sample the boundary as the mesh is refined. This strategy, which gives rise to what we refer to as _geometry conforming meshes_, was implemented for the numerical experiments with _uniformly refined meshes_ and can be seen in use in the top row of Figure 5, where the curved boundary is represented more accurately as the mesh is refined.
Nevertheless, even if the approximation to the problem geometry is now consistent with the discretization error, this approach creates additional challenges. Since the approximations to the curved boundaries are not fixed across mesh levels, the sequence of meshes is no longer nested--not even in the case of uniform refinements. Moreover, because the sequence of discrete domains no longer coincides across levels, the domains of definition of the respective discrete solutions will not overlap, and an extrapolation step may be needed in order to compute the multilevel Monte Carlo estimator on a common computational domain. This strategy, used in our numerical experiments, introduces an additional extrapolation error. In our case this is evident, for instance, in the fact that the plasma boundary of the expected solution \(\mathbb{E}[\psi_{h}]\) is considerably less regular in the geometry-conforming case than it is in the non-geometry-conforming one. This can be seen in Figure 5. The extrapolation error can be taken care of through careful post-processing. One option is to project or interpolate the numerical solutions into a subdomain common to all grids so that no extrapolation is needed for evaluation. This strategy was employed to produce Figure 6, successfully eliminating the spurious oscillations in the plasma boundary. However, doing this requires considerable computational work and reduces the time savings obtained from MLMC.
One further difficulty is that the re-sampling of the boundaries is impossible to perform in a straightforward fashion in the case of adaptively refined meshes. Thus, the geometric approximation remains fixed at the initial level of refinement. This can be seen in the bottom row of Figure 5, where the solid black line represents the polygonal approximation to the curved boundary of the divertor. The approximation improves as the mesh is refined for the uniformly refined mesh, but stays fixed for the adaptive strategy.
### Mathematical results on multilevel Monte Carlo
The preceding analysis on the computational cost to compute the MLMC-FE estimator in terms of the desired relative accuracy \(\epsilon\) is based on the following theorem (established and proven in [8]), which quantifies the distribution of the computational effort across discretization levels in terms of the relation between the decay rate of the variance of the correction terms, \(V_{\ell}\), and the increase of the computational cost, \(C_{\ell}\), as the mesh is refined. In particular, it states that if the variance of the correction terms decays faster than the increase of computational cost, the dominant computational expense takes place on the coarsest grid--see also [19].
**Theorem 1**.: _Suppose there exist positive constants \(a,b,c\) such that \(a\geq\frac{1}{2}\min(b,c)\),_
1. \(\|\mathbb{E}\left(u-u_{\ell}\right)\|_{Z}=\mathcal{O}\left(M_{\ell}^{-a}\right)\)_,_
2. \(V_{\ell}=\mathcal{O}\left(M_{\ell}^{-b}\right)\)_,_
3. \(C_{\ell}=\mathcal{O}\left(M_{\ell}^{c}\right)\)_._
_Then for any positive \(\epsilon<e^{-1}\) small enough, there exist a level \(L\) and sample sizes \(N_{\ell}\) for which the multilevel estimator \(A_{MLMC}\) has an nMSE with_
\[\frac{\mathbb{E}\left[\|\mathbb{E}(u)-A_{MLMC}(u_{L})\|_{Z}^{2}\right]}{\|\mathbb{E}(u)\|_{Z}^{2}}<\epsilon^{2},\]
_and the total computation cost \(C\) with bound_
\[C\left(A_{MLMC}\right)=\left\{\begin{array}{ll}\mathcal{O}\left(\epsilon^{- 2}\right),&b>c,\\ \mathcal{O}\left(\epsilon^{-2}\left(\log\epsilon\right)^{2}\right),&b=c,\\ \mathcal{O}\left(\epsilon^{-2-\frac{(c-b)}{a}}\right),&0<b<c.\end{array}\right.\]
Figure 6: The violet curves represent the expected plasma boundaries for post-processed simulations using increasingly finer grids when \(\epsilon=4\times 10^{-4}\), with the sample sizes specified in Table 2. Each sub-plot focuses on a region near the x-point, maintaining the same zoom-in ratio as the second row of Figure 4. The dark green dots denote the locations of the x-point.
2305.12407 | Federated Offline Policy Learning with Heterogeneous Observational Data | We consider the problem of learning personalized decision policies on
observational data from heterogeneous data sources. Moreover, we examine this
problem in the federated setting where a central server aims to learn a policy
on the data distributed across the heterogeneous sources without exchanging
their raw data. We present a federated policy learning algorithm based on
aggregation of local policies trained with doubly robust offline policy
evaluation and learning strategies. We provide a novel regret analysis for our
approach that establishes a finite-sample upper bound on a notion of global
regret across a distribution of clients. In addition, for any individual
client, we establish a corresponding local regret upper bound characterized by
the presence of distribution shift relative to all other clients. We support
our theoretical findings with experimental results. Our analysis and
experiments provide insights into the value of heterogeneous client
participation in federation for policy learning in heterogeneous settings. | Aldo Gael Carranza, Susan Athey | 2023-05-21T09:08:09Z | http://arxiv.org/abs/2305.12407v1 | # Federated Offline Policy Learning
###### Abstract
We consider the problem of learning personalized decision policies on observational data from heterogeneous data sources. Moreover, we examine this problem in the federated setting where a central server aims to learn a policy on the data distributed across the heterogeneous sources without exchanging their raw data. We present a federated policy learning algorithm based on aggregation of local policies trained with doubly robust offline policy evaluation and learning strategies. We provide a novel regret analysis for our approach that establishes a finite-sample upper bound on a notion of global regret across a distribution of clients. In addition, for any individual client, we establish a corresponding local regret upper bound characterized by the presence of distribution shift relative to all other clients. We support our theoretical findings with experimental results. Our analysis and experiments provide insights into the value of heterogeneous client participation in federation for policy learning in heterogeneous settings.
## 1 Introduction
The problem of offline policy learning from logged observational data has received significant interest in recent years as an effective approach for learning personalized decision policies in applications where obtaining online, real-time data is impractical, such as healthcare, digital advertising, and public policy (Athey and Wager, 2021; Zhou et al., 2022). Typically, the data used for offline policy learning is assumed to originate from a single source distribution. However, in practice, we often have the opportunity to leverage multiple datasets collected from past experiments with potentially different sampling strategies on varied populations and environments (Agarwal et al., 2017). For instance, digital platforms may conduct multiple A/B tests where outcomes may differ across experiments due to underlying non-stationarities. Similarly, hospitals may conduct different clinical trials on distinct patient populations across experiments. Leveraging multiple heterogeneous data sources, with their more diverse and extensive coverage of the decision space, can help address the unique challenges of learning generalizable personalized decision policies.
However, several constraints, such as privacy concerns, legal restrictions, proprietary interests, or competitive barriers, can hinder the consolidation of datasets across sources. Offline policy learning methods are flexible to policy class constraints to prevent personalizing policies on sensitive data, but they still require access to this data during learning (Kitagawa and Tetenov, 2018). Federated learning (Kairouz et al., 2021) has emerged as a solution to such obstacles by offering a framework for training models in a decentralized manner, thereby minimizing systemic risks and costs associated with traditional, centralized machine learning. Federated learning techniques applied to policy learning can enable platforms to learn targeted policies without centrally storing sensitive user information. It can also allow institutions to collaborate on developing more generalizable policies across varying environments without sharing sensitive data, such as clinical patient data in hospitals. Thus, it would be greatly beneficial to make offline policy learning amenable to federated settings.
A critical challenge in offline policy learning with heterogeneous observational data is the presence of distribution shift, which may arise due to changes in user populations, environments, or logging policies. Moreover, in federated supervised learning, distributional mismatches among heterogeneous clients have been shown to degrade performance (Li et al., 2020; Karimireddy et al., 2020). Therefore, to make policy learning amenable to federated settings, it is important to understand and address the performance degradation in policies resulting from distribution shifts among heterogeneous clients.
Our contributions in this work are summarized as follows:
* We introduce the problem of learning personalized decision policies on observational data that is distributed across multiple heterogeneous sources. Moreover, we consider the federated setting where the data sources may be restricted from exchanging their raw data.
* For this novel problem setting, we present a regret analysis that distinguishes between the global regret of the central server and the local regret of the clients. For both regret notions, we provide finite-sample upper bounds characterized by expressions of client heterogeneity.
* We provide a federated algorithm for offline policy learning based on the federated averaging algorithm with local model updates given by online cost-sensitive classification oracles. We experimentally verify the effect of distribution shift on local regret and we present design choices to overcome local performance degradation due to distribution shift.
## 2 Related Work
**Offline Policy Learning.** Offline policy learning with observational data has seen significant interest and advancement in recent years. Swaminathan and Joachims (2015); Kitagawa and Tetenov (2018) formulate the framework for learning structured decision policies using offline policy value estimation methods with inverse propensity weighting. However, guarantees assume known propensity scores, which may not always be the case in real-world applications. Athey and Wager (2021) addresses this limitation by establishing optimal regret rates using doubly robust estimators with estimated propensities. Kallus (2018) focuses on using the observational data to directly find optimal weights for the target optimal policy. Zhou et al. (2022) extends policy learning to offline multi-action settings. Zhan et al. (2021) tackles policy learning with adaptively collected data using generalized augmented inverse propensity weighted estimators, achieving minimax rate optimal regret guarantees even with diminishing exploration in the data. Most similar to our heterogeneous data source setting is the work on offline policy evaluation and learning using data from multiple historical logging policies considered by Agarwal et al. (2017); He et al. (2019); Kallus et al. (2021). All of these works assume that the underlying environment is fixed and only the historical logging policies differ. Lastly, we mention that contextual bandits (Li et al., 2010) have seen many recent approaches that rely on access to offline policy learning oracles (Bietti et al., 2021; Krishnamurthy et al., 2021; Simchi-Levi and Xu, 2022; Carranza et al., 2022). However, the focus there is on developing adaptive action-assignment algorithms that balance the exploration-exploitation trade-off.
**Federated Learning.** Kairouz et al. (2021); Wang et al. (2019) offer comprehensive surveys on federated learning and its challenges. A particularly important study on federated learning is the work of Mohri et al. (2019), which proposes an agnostic supervised federated learning framework optimized for target distributions formed by a mixture of client distributions. Our analysis significantly relies on objects from their framework, such as weighted Rademacher complexity and skewness measures. Similarly, Wei et al. (2021) establish excess risk bounds for federated learning under data heterogeneity. Li et al. (2019, 2020); Karimireddy et al. (2020) explore the effect of client heterogeneity in federated learning. Contextual bandits in federated settings have been extensively explored by Agarwal et al. (2020); Dubey and Pentland (2020); Huang et al. (2021); Agarwal et al. (2023). However, to the best of our knowledge, the problem of offline policy evaluation and learning in federated settings remains largely unexplored. The closest work is that of Xiong et al. (2021), which studies federated methods for estimating average treatment effects across heterogeneous populations and treatment assignment propensities. Since offline policy evaluation relies on similar counterfactual analysis tools, this work indicates that heterogeneous data in federated settings can similarly affect policy evaluation strategies.
## 3 Preliminaries
### Setting
We introduce the problem of multi-action offline policy learning with observational bandit feedback data from multiple heterogeneous data sources. Throughout the paper, we refer to a heterogeneous data source as a _client_ and the central planner that aggregates client data/models as the _central server_.
Let \(\mathcal{X}\subset\mathbb{R}^{p}\) be the context space, \(\mathcal{A}=\{a_{1},\ldots,a_{d}\}\) be the finite action space with \(d\) actions, and \(\mathcal{Y}\subset\mathbb{R}\) be the reward space. A personalized decision policy \(\pi:\mathcal{X}\to\mathcal{A}\) is a mapping from the context space \(\mathcal{X}\) to actions \(\mathcal{A}\). We assume there is a central server and a finite set of clients \(\mathcal{C}\), with each client \(c\in\mathcal{C}\) having a data-generating distribution \(\mathcal{D}_{c}\) defined over \(\mathcal{X}\times\mathcal{Y}^{d}\) governing how their contexts and all potential reward outcomes are generated. Moreover, the central server specifies a fixed distribution \(\lambda\) over the set of clients \(\mathcal{C}\) describing how clients are sampled or aggregated.1
Footnote 1: Clients are sampled in the cross-device FL setting and aggregated in the cross-silo FL settings.
The central server seeks to train a decision policy that performs well on the global data-generating distribution \(\mathcal{D}_{\lambda}=\sum_{c\in\mathcal{C}}\lambda_{c}\mathcal{D}_{c}\). At the same time, the central server does not want this policy to perform poorly on the local distribution of any individual client. In the following section, we introduce the policy performance measures we consider throughout the paper.
### Objective
We consider the immediate reward gained by a client by taking actions according to any given policy. Additionally, we extend this metric to a global version that captures the rewards gained across the distribution of clients observed by the central server.
**Definition 1**.: The _local policy value_ under client \(c\) and the _global policy value_ under client distribution \(\lambda\) of a policy \(\pi\) are, respectively,
\[Q_{c}(\pi)\coloneqq\mathop{\mathbb{E}}_{Z^{c}\sim\mathcal{D}_{c}}[Y^{c}(\pi(X^{c}))]\quad\text{and}\quad Q_{\lambda}(\pi)\coloneqq\mathop{\mathbb{E}}_{c\sim\lambda}\mathop{\mathbb{E}}_{Z^{c}\sim\mathcal{D}_{c}}[Y^{c}(\pi(X^{c}))], \tag{1}\]
where the expectations are taken with respect to \(Z^{c}=(X^{c},Y^{c}(a_{1}),\ldots,Y^{c}(a_{d}))\sim\mathcal{D}_{c}\) and \(c\sim\lambda\).
The performance of a policy is typically characterized by a notion of regret against an optimal policy in a specified policy class \(\Pi\subset\{\pi:\mathcal{X}\to\mathcal{A}\}\). We define local and global versions of regret based on their respective versions of policy values.
**Definition 2**.: The _local regret_ under client \(c\) and the _global regret_ under client distribution \(\lambda\) of a policy \(\pi\) relative to policy class \(\Pi\) are, respectively,
\[R_{c}(\pi)\coloneqq\max_{\pi^{\prime}\in\Pi}Q_{c}(\pi^{\prime})-Q_{c}(\pi) \quad\text{and}\quad R_{\lambda}(\pi)\coloneqq\max_{\pi^{\prime}\in\Pi}Q_{ \lambda}(\pi^{\prime})-Q_{\lambda}(\pi). \tag{2}\]
Thus, the objective of the central server is to determine a policy in the specified policy class \(\Pi\) that minimizes global regret. On the other hand, the central server also aims to characterize the corresponding local regret of any client under the obtained policy, since this quantity captures the client's corresponding individual utility to a global policy.
### Data-Generating Process
We assume each client \(c\in\mathcal{C}\) has a local observational data set \(\{(X^{c}_{i},A^{c}_{i},Y^{c}_{i})\}_{i=1}^{n_{c}}\subset\mathcal{X}\times \mathcal{A}\times\mathcal{Y}\) consisting of \(n_{c}\in\mathbb{N}\) triples of contexts, actions, and rewards collected using a local experimental stochastic policy \(e_{c}:\mathcal{X}\to\Delta(\mathcal{A})\) in the following manner. For the \(i\)-th data point of client \(c\),
1. nature samples a context and potential outcomes vector \((X^{c}_{i},Y^{c}_{i}(a_{1}),\ldots,Y^{c}_{i}(a_{d}))\sim\mathcal{D}_{c}\);
2. client \(c\) is assigned action \(A^{c}_{i}\sim e_{c}(X^{c}_{i})\);
3. client \(c\) observes the realized outcome \(Y^{c}_{i}=Y^{c}_{i}(A^{c}_{i})\) ;
4. client \(c\) logs the data tuple \((X^{c}_{i},A^{c}_{i},Y^{c}_{i})\) locally.2
Note that although the counterfactual outcomes \(Y_{i}^{c}(a)\) for all \(a\in\mathcal{A}\backslash\{A_{i}^{c}\}\) exist in the data-generating process, they are not observed in the realized data. All clients only observe the outcomes associated with their assigned treatments. For this reason, such observational data is also referred to as bandit feedback data (Swaminathan and Joachims, 2015). We let \(n\coloneqq\sum_{c\in\mathcal{C}}n_{c}\) denote the _total sample size_ across clients.
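To make the logging process concrete, the following minimal Python sketch simulates one client's bandit-feedback log. It is not taken from any released implementation: the epsilon-greedy logging policy, the noise scale, and the names `mu` and `eta` are our illustrative assumptions. The logger is greedy in the context only, so unconfoundedness holds, and it reserves uniform exploration mass so that every action has propensity at least \(\eta/d\) (overlap).

```python
import numpy as np

def simulate_client_log(rng, n, p, d, mu, eta=0.1):
    """Simulate one client's bandit-feedback log following steps 1-4 above."""
    X = rng.uniform(-1.0, 1.0, size=(n, p))                    # step 1: contexts
    Ypot = np.array([mu(x) for x in X]) + rng.normal(0, 0.1, size=(n, d))
    probs = np.full((n, d), eta / d)                           # uniform exploration mass
    greedy = np.array([int(np.argmax(mu(x))) for x in X])      # depends on X only
    probs[np.arange(n), greedy] += 1.0 - eta
    A = np.array([rng.choice(d, p=pr) for pr in probs])        # step 2: assigned actions
    Y = Ypot[np.arange(n), A]                                  # step 3: realized outcomes only
    return X, A, Y, probs[np.arange(n), A]                     # step 4: logged tuples (+ true propensities)

rng = np.random.default_rng(0)
theta = rng.uniform(-1, 1, size=(4, 10))                       # d=4 actions, p=10 features
X, A, Y, prop = simulate_client_log(rng, n=500, p=10, d=4, mu=lambda x: theta @ x)
```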
It will be useful to define the distributions of the entire data-generating process that includes how actions are sampled in the data. Suppose \(p_{X^{c},\vec{Y}^{c}}(x,y)\) for \((x,y)\in\mathcal{X}\times\mathcal{Y}^{d}\) is the probability density function of the local data-generating distribution \(\mathcal{D}_{c}\). The local historical policy \(e_{c}\) induces a _complete_ local data-generating distribution \(\bar{\mathcal{D}}_{c}\) with probability density function
\[p_{X^{c},A^{c},\vec{Y}^{c}}(x,a,y)=p_{X^{c}}(x)e_{c}(a|x)p_{\vec{Y}^{c}|X^{c}}( y|x) \tag{3}\]
for \((x,a,y)\in\mathcal{X}\times\mathcal{A}\times\mathcal{Y}^{d}\). Given this construction of the complete client distributions, we also define the complete global data-generating distribution \(\bar{\mathcal{D}}_{\lambda}\coloneqq\sum_{c\in\mathcal{C}}\lambda_{c}\bar{ \mathcal{D}}_{c}\). Measures of distribution shift of the complete client distributions from the complete global distribution will be important quantities in our theoretical results. In particular, we will consider their total variation distance and Kullback Leibler divergence, denoted as \(\mathrm{TV}(\bar{\mathcal{D}}_{c},\bar{\mathcal{D}}_{\lambda})\) and \(\mathrm{KL}(\bar{\mathcal{D}}_{c}||\bar{\mathcal{D}}_{\lambda})\) for any \(c\in\mathcal{C}\), respectively.
### Data Assumptions
We make the following assumptions on the entire data-generating process of any given client.
**Assumption 1**.: _The complete data-generating distribution \((X^{c},A^{c},Y^{c}(a_{1}),\ldots,Y^{c}(a_{d}))\sim\bar{\mathcal{D}}_{c}\) of any client \(c\in\mathcal{C}\) satisfies:_
1. _Boundedness: The marginal distribution of_ \(\bar{\mathcal{D}}_{c}\) _on the set of potential outcomes_ \(\mathcal{Y}^{d}\) _has bounded support, i.e., there exists some_ \(B_{c}>0\) _such that_ \(|Y^{c}(a)|\leq B_{c}\) _for all_ \(a\in\mathcal{A}\)_._
2. _Unconfoundedness: Potential outcomes are independent of the observed action conditional on the observed context, i.e.,_ \((Y^{c}(a_{1}),\ldots,Y^{c}(a_{d}))\perp\!\!\!\perp A^{c}\mid X^{c}\)_._
3. _Overlap: For any given context, every action has a non-zero probability of being sampled, i.e., there exists some_ \(\eta_{c}>0\) _such that_ \(e_{c}(a|x)=\mathbb{P}(A^{c}=a|X^{c}=x)\geq\eta_{c}\) _for any_ \(a\in\mathcal{A}\) _and_ \(x\in\mathcal{X}\)_._
Note that the boundedness assumption is not essential. We only impose boundedness for simplicity. We can instead rely on light-tail distributional assumptions such as sub-Gaussian potential outcomes as in (Athey and Wager, 2021). Unconfoundedness and overlap are necessary to ensure policy value estimation using inverse propensity-weighted strategies. These conditions are satisfied in many experimental settings such as randomized controlled trials or A/B tests. However, strict uniform overlap may not be entirely necessary, as our results could be extended to the setting with vanishing overlap as explored in (Zhan et al., 2021).
Next, we also impose the following local data scaling assumption on each client.
**Assumption 2**.: _All local sample sizes asymptotically increase with the total sample size, i.e., for each \(c\in\mathcal{C}\), \(n_{c}=\Omega(\nu_{c}(n))\), where \(\nu_{c}\) is an increasing function of the total sample size \(n\)._
This assumption states that, asymptotically, the total sample size cannot increase without increasing across all data sources. We emphasize that this local data scaling assumption is quite benign. We could have any slowly increasing function (e.g., an iterated logarithm) and the asymptotic lower bound condition even allows step-wise increments. We only impose this assumption to ensure that the regret bounds in our analysis scale with respect to the total sample size with sensible constants. However, it does come at the cost of excluding scenarios in which a client always contributes \(O(1)\) data relative to the total data, no matter how much more total data is made available in aggregate.
## 4 Approach
The general approach for the central server is to use the available observational data to determine an appropriate estimator of the global policy value and use this estimator to find an optimal global policy. The general algorithm consists of two steps: 1) learn nuisance parameters to form the global policy estimator; 2) learn the policy that maximizes the global policy value estimate.
### Nuisance Parameters
Given local data, we can define the following functions, which we refer to as _nuisance parameters_, as they will be required to be separately known or estimated in our policy value estimates.
**Definition 3**.: The local _conditional response_ and _inverse conditional propensity_ functions of client \(c\in\mathcal{C}\) with complete data-generating distribution \(Z^{c}=(X^{c},A^{c},Y^{c}(a_{1}),\ldots,Y^{c}(a_{d}))\sim\bar{\mathcal{D}}_{c}\) are defined, respectively, for any \(x\in\mathcal{X}\) and \(a\in\mathcal{A}\) as
\[\mu_{c}(x;a)\coloneqq\mathbb{E}[Y^{c}(a)|X^{c}=x]\quad\text{and}\quad w_{c}(x; a)\coloneqq 1/\,\mathbb{P}(A^{c}=a\mid X^{c}=x) \tag{4}\]
For notational convenience, we let \(\mu_{c}(x)=(\mu_{c}(x;a))_{a\in\mathcal{A}}\) and \(w_{c}(x)=(w_{c}(x;a))_{a\in\mathcal{A}}\).
In our estimation strategy, each client must estimate the conditional response and inverse conditional propensity functions when they are unknown. Following the literature on double machine learning (Chernozhukov et al., 2018), we make the following high-level assumptions on the estimators for these local nuisance parameters.
**Assumption 3**.: _The local estimates \(\hat{\mu}_{c}\) and \(\hat{w}_{c}\) of \(\mu_{c}\) and \(w_{c}\), respectively, trained on \(m\) local data points satisfy the following squared error bounds:_
\[\mathbb{E}_{\mathcal{D}_{c}}\left[\left\|\hat{\mu}_{c}(X^{c})-\mu_{c}(X^{c}) \right\|_{2}^{2}\right]\leq\frac{o(1)}{m^{\zeta_{\mu}}},\quad\mathbb{E}_{ \mathcal{D}_{c}}\left[\left\|\hat{w}_{c}(X^{c})-w_{c}(X^{c})\right\|_{2}^{2} \right]\leq\frac{o(1)}{m^{\zeta_{w}}} \tag{5}\]
_for some \(0<\zeta_{\mu},\zeta_{w}<1\) with \(\zeta_{\mu}+\zeta_{w}\geq 1\)._
Given sufficient regularity on the nuisance parameters, we can easily construct estimators that satisfy these rate conditions. Moreover, this condition is general and flexible enough to allow one to trade off the accuracies of estimating the nuisance parameters. This is an important property to have in offline policy learning, where distribution shift in the batch data can complicate reward estimation.
### Policy Value Estimator
For any client \(c\in\mathcal{C}\), given any local observable data point \((X^{c},A^{c},Y^{c})\) drawn from \(\bar{\mathcal{D}}_{c}\), we define the local _augmented inverse propensity weighted_ (AIPW) scores for each \(a\in\mathcal{A}\) to be
\[\Gamma^{c}(a)=\mu_{c}(X^{c};a)+\left(Y^{c}-\mu_{c}(X^{c};a)\right)w_{c}(X^{c}; a)\mathbf{1}\{A^{c}=a\}. \tag{6}\]
One can readily show that \(Q_{c}(\pi)=\mathbb{E}_{\bar{\mathcal{D}}_{c}}[\Gamma^{c}(\pi(X^{c}))]\) and therefore \(Q_{\lambda}(\pi)=\mathbb{E}_{c\sim\lambda}\,\mathbb{E}_{\bar{\mathcal{D}}_{c}}[\Gamma^{c}(\pi(X^{c}))]\) (see Lemma 6). Accordingly, our approach is to estimate the AIPW scores to form the global policy value estimator, which can then be used in an optimization procedure.
Suppose we have nuisance parameter estimates \(\hat{\mu}_{c}\) and \(\hat{w}_{c}\) that satisfy Assumption 3. Then, for each local data point \((X^{c}_{i},A^{c}_{i},Y^{c}_{i})\) in the observational data set of client \(c\in\mathcal{C}\), we define the approximate local AIPW scores for each \(a\in\mathcal{A}\) to be
\[\hat{\Gamma}^{c}_{i}(a)=\hat{\mu}_{c}(X^{c}_{i};a)+\left(Y^{c}_{i}-\hat{\mu}_ {c}(X^{c}_{i};a)\right)\hat{w}_{c}(X^{c}_{i};a)\mathbf{1}\{A^{c}_{i}=a\}. \tag{7}\]
Using these estimates, we can define the doubly robust global policy value estimate to be
\[\hat{Q}_{\lambda}(\pi)=\underset{c\sim\lambda}{\mathbb{E}}[\hat{Q}_{c}(\pi)], \text{ where }\hat{Q}_{c}(\pi)=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}\hat{\Gamma}^{c}_{i}(\pi(X^{ c}_{i})). \tag{8}\]
This estimator is doubly robust in the sense that it is accurate as long as one of the nuisance parameter estimates is accurate for each client. The doubly robust estimator is more generalizable than a direct estimator, and it has significantly lower variance than a standard inverse propensity weighted estimator (Robins et al., 1994). Lastly, to ensure we can use the same data to estimate the nuisance parameters and construct the policy value estimates, we utilize a cross-fitting strategy, as described in Zhou et al. (2022). See more details on the cross-fitting estimation strategy in Appendix F.
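As an illustration, the sketch below computes the approximate AIPW scores of Eq. (7) and the global estimate of Eq. (8). It assumes `mu_hat` and `w_hat` are cross-fitted nuisance estimators returning \((n,d)\) arrays; the function names are ours and not from any released code.

```python
import numpy as np

def aipw_scores(X, A, Y, mu_hat, w_hat):
    """Approximate AIPW scores of Eq. (7); returns an (n, d) array."""
    mu, w = mu_hat(X), w_hat(X)          # (n, d) response and inverse-propensity estimates
    idx = np.arange(len(Y))
    scores = mu.copy()
    scores[idx, A] += (Y - mu[idx, A]) * w[idx, A]   # correction only on the observed action
    return scores

def global_value_estimate(actions_by_client, scores_by_client, lam):
    """Doubly robust global estimate of Eq. (8): a lambda-weighted average of local means."""
    local = [s[np.arange(len(a)), a].mean()          # local estimate for each client
             for a, s in zip(actions_by_client, scores_by_client)]
    return float(np.dot(lam, local))
```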
### Optimization Objective
The objective of the central server is to find a policy that maximizes the global policy value estimate: \(\hat{\pi}_{\lambda}=\arg\max_{\pi\in\Pi}\hat{Q}_{\lambda}(\pi)\). In the federated setting, the central server does not have access to client raw data to estimate local policy values nor does it have access to the local policy values; only model updates can be shared through the network. In Section 6, we discuss an optimization procedure for parametric policy classes that manages these communication constraints.
## 5 Regret Bounds
We establish regret bounds for the global policy solution \(\hat{\pi}_{\lambda}\) to the optimization objective above. Refer to the Appendix for a more detailed discussion and proofs of the statements in this section. First, we introduce important quantities that appear in our regret bounds.
### Complexity and Skewness
**Policy Class Complexity.** The following quantity provides a measure of policy class complexity based on a variation of the classical entropy integral introduced by Dudley (1967), and it is useful in establishing a class-dependent regret bound. See Appendix B.1.1 for more details on its definition.
**Definition 4** (Entropy integral).: Let \(\operatorname{H}(\pi_{1},\pi_{2};\tilde{x})\coloneqq\frac{1}{\tilde{n}}\sum_{i=1}^{\tilde{n}}\mathbf{1}\{\pi_{1}(\tilde{x}_{i})\neq\pi_{2}(\tilde{x}_{i})\}\) be the Hamming distance between any two policies \(\pi_{1},\pi_{2}\in\Pi\) given a covariate set \(\tilde{x}\subset\mathcal{X}\) of size \(\tilde{n}\in\mathbb{N}\). The _entropy integral_ of a policy class \(\Pi\) is
\[\kappa(\Pi)\coloneqq\int_{0}^{1}\sqrt{\log N_{\operatorname{H}}(\epsilon^{2}, \Pi)}d\epsilon, \tag{9}\]
where \(N_{\operatorname{H}}(\epsilon^{2},\Pi)\) is the maximal \(\epsilon^{2}\)-covering number of \(\Pi\) under the Hamming distance over covariate sets of arbitrary size.
Only rather weak assumptions on the policy class are needed to ensure the entropy integral is finite, such as sub-exponential growth of its Hamming covering number (Zhou et al., 2022). In the binary action setting, the entropy integral of a policy class relates to its VC-dimension with \(\kappa(\Pi)=\sqrt{\operatorname{VC}(\Pi)}\). For \(D\)-dimensional linear classes, \(\kappa(\Pi)=\mathcal{O}(\sqrt{D}\,)\).
**Client Skewness.** The following quantity measures how far the client distribution is from the empirical distribution of samples. It naturally arises in the generalization bounds of weighted mixture distributions (Mansour et al., 2021).
**Definition 5** (Skewness).: Let \(\bar{n}\coloneqq(n_{c}/n)_{c\in\mathcal{C}}\) be the empirical distribution of samples across clients. The _skewness_ of a given distribution \(\lambda\) over clients is
\[\mathfrak{s}(\lambda\|\bar{n})\coloneqq 1+\chi^{2}(\lambda\|\bar{n}), \tag{10}\]
where \(\chi^{2}(\lambda\|\bar{n})\) is the chi-squared divergence of \(\lambda\) from \(\bar{n}\).
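For intuition, a small numerical sketch of the skewness follows; the five-client split mirrors the sample allocation used later in the experiments, and the specific numbers are illustrative.

```python
import numpy as np

def skewness(lam, n_bar):
    """s(lambda || n_bar) = 1 + chi^2(lambda || n_bar), Eq. (10)."""
    lam, n_bar = np.asarray(lam, float), np.asarray(n_bar, float)
    return 1.0 + np.sum((lam - n_bar) ** 2 / n_bar)

n_bar = np.array([0.02, 0.245, 0.245, 0.245, 0.245])   # client 1 holds few samples
print(skewness(n_bar, n_bar))                           # 1.0  -> rate 1/n in Theorem 1
print(skewness([1, 0, 0, 0, 0], n_bar))                 # 50.0 -> rate 1/n_1
```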
### Global Regret Bound
The following result captures a root-\(n\) finite-sample bound that is typical for the optimal regret bounds in the offline policy learning literature. However, it is moderated by skewness which can also scale with the total sample size.
**Theorem 1** (Global Regret Bound).: _Suppose Assumption 1, 2, and 3 hold. Then, with probability at least \(1-\delta\),_
\[R_{\lambda}(\hat{\pi}_{\lambda})\leq\left(c_{1}\kappa(\Pi)+\sqrt{c_{2}\log(c_{2}/\delta)}\right)\sqrt{V\cdot\frac{\mathfrak{s}(\lambda\|\bar{n})}{n}}+o_{p}\left(\sqrt{\frac{\mathfrak{s}(\lambda\|\bar{n})}{n}}\,\right), \tag{11}\]
_where \(c_{1}\) and \(c_{2}\) are universal constants and_
\[V=\max_{c\in\mathcal{C}}\sup_{\pi\in\Pi}\mathop{\mathbb{E}}_{\bar{\mathcal{D}}_{c}}\big{[}\Gamma^{c}(\pi(X^{c}))^{2}\big{]}. \tag{12}\]
Note that \(V\) captures a notion of the worst-case AIPW score variance across clients. Also, note that if \(\lambda=\bar{n}\), then \(\mathfrak{s}(\lambda\|\bar{n})/n=1/n\), and if \(\lambda=(1,0,\ldots,0)\), then \(\mathfrak{s}(\lambda\|\bar{n})/n=1/n_{1}\). Thus, skewness smoothly interpolates between the uniformly weighted model and the single-source model. Indeed, when all clients are identical and \(\lambda=\bar{n}\), we recover the known bounds of standard offline policy learning. From this, it may seem that the best choice for \(\lambda\) is the empirical sample distribution, but as we observe in the next section, there are terms in the local regret bounds that introduce trade-offs in the choice of client distribution.
### Local Regret Bound
In this next result, we capture the discrepancy in local and global regret due to client heterogeneity.
**Theorem 2** (Local Regret Bound).: _Suppose Assumption 1 holds. Then, for any \(c\in\mathcal{C}\),_
\[R_{c}(\hat{\pi}_{\lambda})\leq U\cdot\mathrm{TV}(\bar{\mathcal{D}}_{c},\bar{ \mathcal{D}}_{\lambda})+R_{\lambda}(\hat{\pi}_{\lambda}), \tag{13}\]
_where \(U=3B/\eta\) with \(B=\max_{c\in\mathcal{C}}B_{c}\) and \(\eta=\min_{c\in\mathcal{C}}\eta_{c}\)._
The first term in this regret bound is inherently irreducible with respect to the sample sizes, and it is due to the distribution shift between the local client distribution and the mixture distribution. The following result demonstrates how this irreducible term can be further tensorized into contributions from distribution shift in the covariates, propensities, and potential outcomes.
**Proposition 1** (Distribution Shift Bound).: _The irreducible distribution shift term in the local regret bound can be further bounded with_
\[\mathrm{TV}(\bar{\mathcal{D}}_{c},\bar{\mathcal{D}}_{\lambda})\leq\mathop{\mathbb{E}}_{k\sim\lambda}\Big{[}\sqrt{\mathrm{KL}(p_{X^{c}}\|p_{X^{k}})}+\sqrt{\mathrm{KL}(e_{c}\|e_{k}\mid p_{X^{c}})}+\sqrt{\mathrm{KL}(p_{\vec{Y}^{c}|X^{c}}\|p_{\vec{Y}^{k}|X^{k}}\mid p_{X^{c}})}\,\Big{]}. \tag{14}\]
This result directly reveals the contribution to local regret from each possible source of distribution shift. If we have prior knowledge that certain parts of the data-generating distribution match, then we can claim tighter bounds on the local regret of clients. We can observe how the design choice on the client distribution \(\lambda\) must balance a trade-off to achieve low skewness and low expected distribution shift across sources. These results are useful to capture the value of information provided by the central server that can shape the incentives for clients to participate in federation (see Appendix E for further discussion).
## 6 Algorithm
We describe our algorithm for finding the optimal global policy \(\hat{\pi}_{\lambda}\) for the optimization problem stated in Section 4.3. We present an algorithm that is amenable to federation. The standard approach for federated learning is the federated averaging (FedAvg) algorithm for parametric models (Konecny et al., 2016; McMahan et al., 2017), which works iteratively by selecting clients to participate in the training round, locally fine-tuning a parametric model on each client using their own data, and then aggregating local model parameters in the central server using a weighted average. Thus, to make standard federated learning strategies suitable for policy optimization, in this work we consider parametric policy classes \(\Pi_{\Theta}=\{\pi_{\theta}:\mathcal{X}\rightarrow\mathcal{A}\mid\theta\in\Theta\}\), and we construct an iterative parametric policy optimization procedure for the local policy updates in a federated averaging procedure.
Observe that the local policy optimization procedure
\[\operatorname*{arg\,max}_{\theta\in\Theta}\left\{\hat{Q}_{c}(\pi)=\frac{1}{n_ {c}}\sum_{i=1}^{n_{c}}\hat{\Gamma}_{i}^{c}\big{(}\pi_{\theta}(X_{i}^{c})\big{)}\right\} \tag{15}\]
is equivalent to cost-sensitive multi-class classification (CSMC) (Beygelzimer et al., 2009; Dudik et al., 2011), where the actions are the labels and the AIPW scores are the negative costs corresponding to the actions. Therefore, we can conduct local policy model updates using widely available online CSMC oracle methods, which are often used in policy learning for contextual bandit applications (Agarwal et al., 2014; Bietti et al., 2021). CSMC methods are based on consistent reductions to binary classification (Beygelzimer et al., 2008) or multiple regressions (Agarwal et al., 2017) such that the optimal model for the reduced problem leads to an optimal cost-sensitive classifier. We make use of the fast online CSMC optimization implementations in the Vowpal Wabbit library (Langford et al., 2023).
Therefore, our algorithm works as follows:
1. Cross-fitted AIPW: Prior to policy learning, each client uses a cross-fitting strategy on their local observational data to estimate local nuisance parameters for constructing their local AIPW score estimates. For more details see Algorithm 3 in Appendix F. The client can then form a local dataset of contexts and action costs, where the costs are negative AIPW scores.
2. FedAvg-CSMC Server: The central server initializes a global model and executes FedAvg on the clients, iteratively sending the global parameters to clients and updating the global model using a weighted average of the received local model updates. See Algorithm 1.
3. FedAvg-CSMC Client: Each time a client receives global parameters, they initialize their local model with the global parameters and use an online CSMC oracle to update the local model on their data for a fixed number of steps. See Algorithm 2.
```
0: local steps \(T\), local batch size \(B\), local data \(\{(X_{i}^{c},\hat{\Gamma}_{i}^{c}(a_{1}),\dots,\hat{\Gamma}_{i}^{c}(a_{d})) \}_{i=1}^{n_{c}}\)
1: Receive global parameters \(\theta_{g}\) from server
2: Initialize local parameters \(\theta_{c}\leftarrow\theta_{g}\)
3:for\(t=1,\dots,T\)do
4:\(\mathcal{B}\leftarrow\) sample a batch of \(B\) local examples
5: Update local parameters using CSMC oracle: \(\theta_{c}\leftarrow\) CSMC\((\theta_{c},\mathcal{B})\)
6:endfor
7: Send local parameters \(\theta_{c}\) to server
```
**Algorithm 2** FedAvg-CSMC: Client-Side
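A compact, self-contained sketch of the whole procedure is given below. The CSMC oracle is replaced by a plain per-action least-squares update on the AIPW costs, a stand-in for the Vowpal Wabbit CSOAA oracle rather than its actual API; the learning rate, round count, and batch size are illustrative assumptions.

```python
import numpy as np

def csmc_update(theta, Xb, Gb, lr=0.05):
    """One cost-sensitive step: regress costs (= negative AIPW scores) on contexts.
    theta has shape (d, p); the induced policy acts via argmin_a <theta[a], x>."""
    costs = -Gb                                     # (B, d) action costs
    grad = (Xb @ theta.T - costs).T @ Xb / len(Xb)  # squared-loss gradient, shape (d, p)
    return theta - lr * grad

def fedavg_csmc(client_data, lam, rounds=50, local_steps=10, batch=32, seed=0):
    """FedAvg with local CSMC updates (Algorithms 1-2), as a sketch."""
    rng = np.random.default_rng(seed)
    d, p = client_data[0][1].shape[1], client_data[0][0].shape[1]
    theta_g = np.zeros((d, p))
    for _ in range(rounds):
        locals_ = []
        for Xc, Gc in client_data:                  # (contexts, AIPW scores) per client
            th = theta_g.copy()
            for _ in range(local_steps):
                idx = rng.choice(len(Xc), size=min(batch, len(Xc)), replace=False)
                th = csmc_update(th, Xc[idx], Gc[idx])
            locals_.append(th)
        theta_g = sum(l * t for l, t in zip(lam, locals_))  # lambda-weighted average
    return theta_g                                  # act via argmin_a <theta_g[a], x>
```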
## 7 Experiments
In our experiments, we compare the empirical local and global regrets under different experimental settings with homogeneous and heterogeneous clients. First, we describe the experimental setup common to all of our experiments.
For the CSMC optimization procedure, we use the cost-sensitive one-against-all (CSOAA) implementation in Vowpal Wabbit (Langford et al., 2023), which performs separate online regressions of costs on contexts for each action. At inference time, the action whose regressor gives the lowest predicted cost is chosen. We consider the parametric policy class induced by linear scores \(\pi_{\theta}(x)=\arg\max_{a\in\mathcal{A}}\langle\theta_{a},x\rangle\) for \(\theta\in\Theta=\mathbb{R}^{d\times p}\), and we use the CSOAA implementation with online linear multiple regression of costs on contexts.
For the environments, we consider the client set \(\mathcal{C}=[C]\) where \(C=5\), context space \(\mathcal{X}=[-1,1]^{p}\) for \(p=10\), and action set \(\mathcal{A}=\{a_{1},\dots,a_{d}\}\) with \(d=4\). For any client \(c\in\mathcal{C}\), we consider the following data-generating process: \(X^{c}\sim\mathrm{Normal}(0,I_{p})\), \(A^{c}\sim\mathrm{Uniform}(\mathcal{A})\), and \(Y^{c}(a)=\mathrm{Normal}(\mu_{c}(X^{c};a),\sigma^{2})\) for all \(a\in\mathcal{A}\), where the choice of reward function \(\mu_{c}(X^{c};a)\) is specified in each experiment below. Thus, any heterogeneities we impose between clients will be solely in their outcome distributions. We found this to be the clearest choice for showing empirical differences. We will evaluate how the different regrets scale with the total sample size. Therefore, for a given total sample size \(n\in\mathbb{N}\), each client \(c\in\mathcal{C}\) is allocated a local sample size determined by some function \(n_{c}=\nu_{c}(n)\). To illustrate the benefits of federation under some sample size heterogeneity, we will have \(n_{1}=\nu_{1}(n)=\lfloor\log n\rfloor\), and all other clients will evenly distribute the rest of the total sample size. Therefore, we will focus on the regret profile of client \(c=1\). Clearly, this entire data-generating process satisfies Assumptions 1, 2, and 3.
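For concreteness, this small sketch reproduces the sample-size allocation just described; how the rounding remainder is absorbed is our assumption.

```python
import numpy as np

def allocate(n, C=5):
    """n_1 = floor(log n); the remaining C-1 clients split the rest evenly."""
    n1 = int(np.floor(np.log(n)))
    rest = (n - n1) // (C - 1)
    sizes = [n1] + [rest] * (C - 1)
    sizes[-1] += n - sum(sizes)        # fold the rounding remainder into the last client
    return sizes

print(allocate(1000))                  # [6, 248, 248, 248, 250]
```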
We consider a training sample size grid in the range up to \(N\) samples, where \(N=1\)K for the homogeneous experiments and \(N=10\)K for the heterogeneous experiments (due to slower convergence). For each sample size \(n\) in our grid, we sample \(n_{c}=\nu_{c}(n)\) training samples from each client distribution \(\bar{\mathcal{D}}_{c}\) we constructed. Moreover, we sample an additional 10K test samples for each client. We train the global model using our FedAvg-CSMC algorithm on all clients. For comparison, we also train a local model with the same number \(n\) of total samples from \(\bar{\mathcal{D}}_{c}\).
**Homogeneous Clients.** First, we consider the homogeneous setting where all clients are identical. For every \(c\in\mathcal{C}\), we set \(\mu_{c}(x;a)=\langle\theta_{a},x\rangle+h(x)\), where the \(\theta_{a}\) are sufficiently separated random vectors in \(\Theta=[-1,1]^{p}\) and \(h(x)\) is a non-linear function of contexts that makes the reward function non-linear and thus misspecified under a linear model. In particular, we choose the step-wise constant function \(h(x)=-\mathbf{1}\{x_{1}>0\}\cdot\max_{a^{\prime}\in\mathcal{A}}\langle\theta_{a^{\prime}},x\rangle\).

Figure 1: Federated offline policy learning algorithm.
We compute the empirical global regret and local regret on the test data. In order to do so, we first learn the best policy by generating 100K noise-free data samples and directly using the CSMC oracle on the contexts and negative rewards as costs. Figure 2(a) plots the local regret for client 1 of the globally trained policy (green) and the global regret of the globally trained policy (orange), all using the empirical mixture \(\lambda=\bar{n}\). For comparison, we also plot the local regret for client 1 of the locally trained policy (blue). The bands show one standard deviation of the regrets over five different runs. As expected, each of these curves is nearly identical since the global and local regrets coincide in this scenario. What is interesting to notice here, however, is that although the globally trained model only used \(n_{1}=\lfloor\log n\rfloor\) samples from client 1, the local regret for client 1 of the globally trained model has the same profile as the local regret for client 1 of the locally trained model using \(n\) samples from client 1. This reinforces the fact that federated learning can successfully leverage data across clients to learn a better model than can be learned with much less local data.
**Heterogeneous Clients.** Next, we consider a setting where one client differs from all other clients. For client 1, we set \(\mu_{1}(x;a)=\sin(\langle\theta_{a},x\rangle)\), and for every other client \(c\in\mathcal{C}\backslash\{1\}\), we set \(\mu_{c}(x;a)=\langle\theta_{a},x\rangle+h(x)\) exactly as in the homogeneous setting, where the \(\theta_{a}\) are sufficiently separated random vectors in \(\Theta=[-1,1]^{p}\). The idea behind this choice is that since the sine function is nearly linear in a neighborhood around the zero value of its argument, there is a wide range of contexts where the best action parameter aligns with the context vectors in the same way it does for the step-wise linear reward \(\mu_{c}\) for \(c\neq 1\). Therefore, there is distribution shift between client 1 and all other clients, but there is some amount of similarity that can be exploited.
We run a similar set of experiments as in the homogeneous setting. We compute the empirical global regret and local regret on the test data. The best local policies are again learned using 100K noise-free samples from the local distributions and directly using the CSMC oracle on the contexts and negative rewards as costs. The best global policy is learned by aggregating each of those 100K noise-free samples and directly using the CSMC oracle, but with each sample weighted by its client weight \(\lambda_{c}\). Figure 2(b) plots the same type of regret curves as in the homogeneous experiment with empirical mixture \(\lambda=\bar{n}\). We observe that the local regret of the globally trained policy suffers significantly from distribution shift. In fact, the locally trained model performs better than the globally trained model at a sufficiently large sample size. Figure 2(c) plots similar regret curves, but with the global policy trained with a skewed mixture \(\lambda=\bar{n}+\bar{\varepsilon}\), where \(\bar{\varepsilon}_{c}=-\bar{n}_{c}/2\) for \(c\neq 1\) and \(\bar{\varepsilon}_{1}=(1-\bar{n}_{1})/2\). Here, we observe that the globally trained model still suffers some amount of local regret degradation, but not to such an extent that the locally trained model beats it. Moreover, the local regret gap decreases with larger sample size. The idea is that this skewness upscales the distribution of client 1 to diminish the amount of distribution shift from the mixture, especially at larger total sample sizes, at the cost of negatively affecting the other, more homogeneous clients. To see how the other clients are affected by this design choice favoring client 1, refer to Appendix G. The takeaway is that the other clients have less distribution shift from the average, so their performance degradation is smaller under the empirical mixture, but their performance further degrades under the skewed mixture.
Figure 2: Empirical regret curves for simulation experiments. Local regrets are for client 1.
## 8 Conclusion
In this study, we presented a novel framework for offline policy learning using heterogeneous observational data. We introduced a federated algorithm and a corresponding novel regret analysis that demonstrates the impact of client heterogeneity on policy performance. We believe our work can contribute to the development of personalized policy learning systems for the benefit of numerous stakeholders that face data ownership and privacy issues.
**Limitations & Future Work.** There are various limitations to our present work that require further consideration. For example, we make certain assumptions on the data-generating process which may not always be satisfied. We discussed how some of these assumptions may be relaxed with additional work. Furthermore, we assumed the mixture distribution \(\lambda\) is known. We could extend our work to a more agnostic setting where this mixture distribution is optimized. Lastly, we leave open the question of whether the bounds we establish are regret optimal. Mohri et al. (2019) discuss how similar skewness-based bounds for distributed supervised learning are optimal. Moreover, in the homogeneous setting, our results immediately reduce to the regret-optimal results obtained in (Athey and Wager, 2021; Zhou et al., 2022). Thus, there is an indication that our bounds are regret optimal. We leave lower bounds for future work.
## Acknowledgements
We thank Sanath Kumar Krishnamurthy and Stefan Wager for the helpful discussions. Additionally, A.G.C. would like to thank Rezsa Farahani for their support and encouragement throughout this work.
|
2305.04265 | An Investigation on Word Embedding Offset Clustering as Relationship
Classification | Vector representations obtained from word embedding are the source of many
groundbreaking advances in natural language processing. They yield word
representations that are capable of capturing semantics and analogies of words
within a text corpus. This study is an investigation in an attempt to elicit a
vector representation of relationships between pairs of word vectors. We use
six pooling strategies to represent vector relationships. Different types of
clustering models are applied to analyze which one correctly groups
relationship types. Subtraction pooling coupled with a centroid based
clustering mechanism shows better performances in our experimental setup. This
work aims to provide directions for a word embedding based unsupervised method
to identify the nature of a relationship represented by a pair of words. | Didier Gohourou, Kazuhiro Kuwabara | 2023-05-07T13:03:17Z | http://arxiv.org/abs/2305.04265v1 | # An Investigation on Word Embedding Offset Clustering as Relationship Classification
###### Abstract
Vector representations obtained from word embedding are the source of many groundbreaking advances in natural language processing. They yield word representations that are capable of capturing semantics and analogies of words within a text corpus. This study is an investigation in an attempt to elicit a vector representation of relationships between pairs of word vectors. We use six pooling strategies to represent vector relationships. Different types of clustering models are applied to analyze which one correctly groups relationship types. Subtraction pooling coupled with a centroid based clustering mechanism shows better performances in our experimental setup. This work aims to provide directions for a word embedding based unsupervised method to identify the nature of a relationship represented by a pair of words.
Keywords:Word Embedding Clustering Relationship Representation
## 1 Introduction
The study of pattern recognition in natural language processing requires a numerical representation of text. This is achieved by computing a vector representation for the words that compose text corpora. Words were originally represented using a one-hot encoding vector representation. The approach had shortcomings, including sparse, high-dimensional vectors that are unable to capture the contextual meaning of words. Proposed by Bengio et al., word embedding is a neural language model that addresses the _curse of dimensionality_. The proposed model also provides a distributed representation of words, aligned with linguistic principles such as the context-dependent nature of meaning. Different implementations have spawned from the proposition, including word2vec [7, 4], GloVe [10], and fastText [1]. Those word representation models are the backbone of many natural language processing (NLP) tasks, including named entity recognition, sentiment analysis, and machine translation, which led to state-of-the-art breakthroughs in those respective NLP fields.
Word embedding demonstrated additional properties, such as the ability to capture syntactic and semantic regularities in language [8]. Building from this insight, can we obtain vectors that capture the type of relationship between word embeddings? Can those relationship representations be effectively grouped with clustering models? This study is an attempt to answer those questions. It
explores different _pooling_ approaches to obtain a vector that represents the relationship between word vectors based on their embedding. The study also uses different clustering models in an attempt to score the ability of the relationship vectors to be grouped. Answering those problems will point toward _an unsupervised methodology to classify relationships between words_. Our contributions can be emphasized as follows:
* Explore for a word embedding representation of the relationship for a pair of words.
* Analyze the clustering of those relationship vectors.
* Point toward an unsupervised classification mechanism of relationships between words.
The rest of the study unfolds by first discussing related works including the clustering of word vectors and the elicitation of regularities in word vector space. Then present the methodologies adopted for word embedding relationship representation and the descriptions of the families of clustering algorithms selected. An experiment section follows to detail the data set and specific cluster algorithms used. A discussion analyzes the results and offers directions to apply the findings, before concluding the study.
## 2 Related Work
Vector-space word representations learned by continuous space language models such as word2vec have demonstrated the ability to capture syntactic and semantic regularities in language [7]. In their study, Mikolov et al. created a set of analogy questions in the form _"a is to b as c is to..."_. To answer the analogy question, \(y=x_{b}-x_{a}+x_{c}\) is computed, where \(x_{a}\), \(x_{b}\), and \(x_{c}\) are the embedding vectors for the words \(a\), \(b\), and \(c\), respectively. \(y\) is the continuous space representation of the word expected as the answer. Because \(y\) is not likely to represent an existing word, the answer is considered to be the word whose embedding has the greatest cosine similarity to \(y\). The results showed better performance than prior methodologies on SemEval-2012 Task 2: Measuring Relation Similarity. More recent transformer-based language models have been applied to solve abstract analogy problems [14]. After pointing out the lack of attention to recognizing analogies in NLP, the study emphasized the strengths and limitations of various models on psychometric analogy data sets from educational problems. The transformer models used in the study included BERT, GPT-2, RoBERTa, and GPT-3. Their performances were compared against word embedding models, perplexity-based point-wise mutual information, and random answers. GPT-2 and RoBERTa were the top performers, while BERT performed worse than word embeddings.
The clustering of word vectors has been used to analyze how well an embedding model's word vectors can be grouped by topic or other taxonomies. It has been applied to various domain-specific studies including business [3] and medicine [13]. Zhang et al. [17] demonstrated that incorporating word embedding
with a kernel-based k-mean clustering method provides superior performance within a topic extraction pipeline. In their study, word2vec was used to extract features from a bibliometric data set, claiming the advantage of skipping manual feature engineering. A specifically designed k-mean clustering model is used for topic extraction: a polynomial kernel function integrated into a cosine similarity-based k-mean clustering. The performance of the proposed model was compared to different models used for topic extraction, including a standard k-mean algorithm, principal component analysis, and a fuzzy c-mean algorithm. Closer to our work, an exploration of the word-class distribution in word vector spaces [11] investigated how well distribution models, including Centroid-based, Gaussian Model, Gaussian Mixture Model, k-Nearest Neighbor, Support Vector Machine, and OffSet, can estimate the likelihood of a word belonging to a class. The study experimented with pre-trained vectors from GloVe and classes from the WordNet taxonomy.
While analogy tasks try to find the best word to complete a pair so that it is similar to another pair, our problem focuses on probing for a vector representation of word-pair relationships such that they are efficiently grouped by clustering models. This, by extension, calls for investigating the performance of clustering models on this task.
## 3 Methodology
### Relation Vectors
To obtain a _relationship vector_ from vector representations of words, we experiment with different _pooling_ strategies. Here we define pooling as the mechanism by which we reduce a set of vectors representing different words into a single one. The obtained vector will represent the relationship between the set of pooled vectors. Our first pooling strategy uses the subtraction operator. This strategy is derived from linguistic regularity observations such as \(king-man+woman=queen\). We can deduce \(king-man=queen-woman\), where we consider subtraction the operator that gives a representation of the type of relationship between the word vectors on both sides of the equation. Thus, if \(v_{r}\) is the relation vector representing the relation between the word vectors \(v_{s}\) and \(v_{o}\), we have:
\[v_{r}=v_{s}-v_{o} \tag{1}\]
The second strategy is to apply the absolute value function to each component of the vector resulting from the subtraction.
\[v_{r}=\langle|v_{o_{1}}-v_{s_{1}}|,|v_{o_{2}}-v_{s_{2}}|,...,|v_{o_{n}}-v_{s_{ n}}|\rangle \tag{2}\]
Where \(v_{o_{i}}\) and \(v_{s_{i}}\) are respectively the \(i^{th}\) dimensional coordinate of \(v_{o}\) and \(v_{s}\). The third consist of adding the two vectors involved in the relationship.
\[v_{r}=v_{s}+v_{o}\]
The fourth pooling strategy constitutes a vector whose coordinates are obtained by taking the minimum of each dimensional coordinate of the word vectors involved in the relationship.
\[v_{r}=\langle min(v_{o_{1}},v_{s_{1}}),min(v_{o_{2}},v_{s_{2}}),...,min(v_{o_{n}}, v_{s_{n}})\rangle \tag{4}\]
Conversely, the fifth pooling strategy, named max pooling, consists of creating the relationship vector using the maximum of each dimensional coordinate.
\[v_{r}=\langle max(v_{o_{1}},v_{s_{1}}),max(v_{o_{2}},v_{s_{2}}),...,max(v_{o_{n} },v_{s_{n}})\rangle \tag{5}\]
The last pooling strategy we use in our exploration is the average pooling.
\[v_{r}=\langle(v_{o_{1}}+v_{s_{1}})/2,(v_{o_{2}}+v_{s_{2}})/2,...,(v_{o_{n}}+v_{ s_{n}})/2\rangle \tag{6}\]
Table 1 gives the summary of the pooling strategies considered in this study.
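A minimal NumPy sketch of the six pooling strategies follows; the commented usage assumes GloVe embeddings have been loaded into a dictionary elsewhere.

```python
import numpy as np

def pool(v_s, v_o, strategy):
    """Relation vector v_r for a word pair under the six strategies of Table 1."""
    ops = {
        "sub":  lambda a, b: a - b,
        "abs":  lambda a, b: np.abs(a - b),
        "add":  lambda a, b: a + b,
        "min":  np.minimum,
        "max":  np.maximum,
        "mean": lambda a, b: (a + b) / 2.0,
    }
    return ops[strategy](v_s, v_o)

# emb = {...}  # word -> 100-dim GloVe vector (loading omitted)
# v_r = pool(emb["berlin"], emb["germany"], "sub")
```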
### Clustering
Clustering is an active research field that consists of grouping similar objects into sets. It can be achieved with various algorithms that differ in the mechanisms employed to constitute a cluster. Although more than seventy clustering models, which can be classified into nearly twenty categories, are commonly used [16], we experiment with four of the most widely used clustering mechanisms: centroid-based, hierarchical-based, distribution-based, and density-based.
#### 3.2.1 Centroid-based clustering
represents each cluster by a central vector that is usually randomly initialized and then optimized. k-mean [6] and its variants, including k-mean++ [2] and k-medians, are part of this clustering category.
\begin{table}
\begin{tabular}{l l} \hline \hline Name & Formula \\ \hline Subtraction & \(v_{r}=v_{1}-v_{2}\) \\ Subtraction absolute value & \(|v_{1}-v_{2}|\) \\ Addition & \(v_{r}=v_{1}+v_{2}\) \\ Minimum & \(v_{r}=\langle min(v_{o_{1}},v_{s_{1}}),min(v_{o_{2}},v_{s_{2}}),...,min(v_{o_{n}},v_{s_{n}})\rangle\) \\ Maximum & \(v_{r}=\langle max(v_{o_{1}},v_{s_{1}}),max(v_{o_{2}},v_{s_{2}}),...,max(v_{o_{n}},v_{s_{n}})\rangle\) \\ Mean & \(v_{r}=\langle(v_{o_{1}}+v_{s_{1}})/2,(v_{o_{2}}+v_{s_{2}})/2,...,(v_{o_{n}}+v_{s_{n}})/2\rangle\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pooling mechanisms for relationship vector representation.
#### 3.2.2 Hierarchical clustering
aims to build a hierarchy of clusters, either bottom-up (agglomerative) or top-down (divisive). The agglomerative approach starts by considering each data point as a cluster. Pairs of clusters are then gradually merged, moving up the hierarchy. Conversely, the divisive approach, start by considering all data points as one cluster. Clusters are then progressively split moving down the hierarchy. A measure of dissimilarity is used to determine which clusters should be merged or split. The dissimilarity is measured with a distance metric and a linkage criterion.
#### 3.2.3 Distribution-based clustering
assumes the data points follow a statistical distribution. While distribution-based clustering models can capture statistical relationships within the attributes of the data points, they work efficiently only when the data distribution is known and the model parameters are set accordingly.
#### 3.2.4 Density-based clustering
forms clusters by grouping data points from dense areas. Such clustering algorithms allow arbitrarily shaped clusters, but they might not be precise with data sets of varying densities. Furthermore, outlier data points are considered noise. Common density-based clustering algorithms include Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [5], Ordering Points To Identify the Clustering Structure (OPTICS), and Mean-shift.
## 4 Experiment
This section describes the data set and the selected algorithms used for experimentation. The experiments are conducted with the clustering model implementations of the Scikit-learn [9] and Scipy [15] Python packages.

We use the adjusted Rand score to evaluate the clustering accuracy of each setting.
### Datasets
We use GloVe pre-trained word vectors 1 trained on a corpus composed of the 2014's Wikipedia dump and the 5th edition of the English Gigaword. The training corpus has six billion tokens, four hundred thousand unique words, and is lower-cased. Our experiments use the 100 dimensions pre-trained vectors version.
Footnote 1: [https://nlp.stanford.edu/projects/glove](https://nlp.stanford.edu/projects/glove)
The word pairs are drawn from the word pairs analogy data set of Mikolov et al. [8]. It contains 14 categories of word pairs for analogies tasks. Table 2 provides a sample of the data set word pairs for two categories.
For each word pair, their relation vector is obtained following the different pooling strategies. Figure 1 gives a 2D projection overview of the relationship vectors for each pooling strategy.
Figure 1: TSNE 2D projections of relation vectors for different pooling strategies
### K-mean
K-mean is not only the most used clustering mechanism but probably the most well-known clustering algorithm overall, because of its simple approach. The basic running steps of k-mean are the following:
1. Define the number of clusters and randomly initialize their center points.
2. Each data point is added to the cluster of its closest center point.
3. Center points are recomputed, using the _mean_ of all vectors in a given cluster.
4. The two previous steps are repeated, until the center points respectively converge.
In addition to being simple, k-mean has a linear run time complexity \(O(n)\). Some drawbacks are the need to specify the number of clusters, and the randomized initialization can provide different clustering results on different algorithms run. Table 3 provides the adjusted random scores for K-mean clustering.
### Gaussian mixture
The Gaussian Mixture Model is a widely used clustering model of this type. It assumes that the point in the data set follows a gaussian distribution. The model can use two parameters such as the mean and the standard deviation to describe the shape of the clusters. The parameters are found using the Expectation-Maximization optimization algorithm. The Gaussian Mixture Model clustering works as follows:
\begin{table}
\begin{tabular}{c l} \hline Countries’ capital & Currencies \\ \hline baghdad - iraq & japan - yen \\ bangkok - thailand & korea - won \\ beijing - china & latvia - lats \\ berlin - germany & lithuania - litas \\ bern - switzerland & macedonia - denar \\ \hline \end{tabular}
\end{table}
Table 2: Sample of the word pairs dataset.
\begin{table}
\begin{tabular}{c c c c c c} \hline \(X_{subs}\) & \(X_{add}\) & \(X_{abs}\) & \(X_{min}\) & \(X_{max}\) & \(X_{mean}\) \\ \hline kmean **0.792549** & 0.363478 & 0.296212 & 0.334140 & 0.313051 & 0.363478 \\ \hline \end{tabular}
\end{table}
Table 3: Adjusted Rand scores for K-mean clustering on the pooling strategies.
1. Randomly initialize the Gaussian distribution parameters for each cluster for the selected number of clusters.
2. Compute the probability of each data point to belong to a cluster, given the Gaussian distribution.
3. Update the set of parameters for the Gaussian distributions, based on the data point probabilities. The new parameters are computed, maximizing the probability of data points within the cluster.
4. The last two steps are iteratively repeated until the clusters' centers respectively converge.
The Gaussian mixture can be seen as a more flexible k-mean. Instead of assuming circular cluster shapes as k-mean does, the Gaussian mixture can form ellipse-like clusters.
Table 4 provides the adjusted Rand score for the Gaussian mixture model clustering.
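A corresponding sketch with scikit-learn's GaussianMixture (placeholder data again; the full covariance type is our assumption):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X_subs = np.random.randn(200, 100)                 # placeholder relation vectors
labels = np.repeat(np.arange(14), 15)[:200]

gmm = GaussianMixture(n_components=14, covariance_type="full", random_state=0)
print(adjusted_rand_score(labels, gmm.fit_predict(X_subs)))
```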
### Agglomerative Clustering
The hierarchical agglomerative clustering model can be described as follows:
1. Every point in the data set is initially considered a cluster.
2. Using a distance metric, clusters are jointly merged by pairs of the closest to one another.
3. The previous step is repeated until a unique cluster is formed. The clustering structure is then defined by choosing when to stop the cluster combination.
The dissimilarity is measured with a distance metric and a linkage criterion. We test various configurations of distance metrics as (dis)similarity measures and linkage criteria. The distance metrics used include euclidean distance, cosine similarity, manhattan, l1, and l2. The linkage criterion is the strategy used to merge clusters at different time steps. We experiment with the following linkage criteria:
* ward: minimizes the variance of the clusters being merged,
* average: merges the two clusters with the smallest average distance between all observations of the two sets,
* complete/maximum: merges the two clusters with the smallest maximum distance between all observations of the two sets,
* single: merges the two clusters with the smallest minimum distance between all observations of the two sets.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(X_{subs}\) & \(X_{add}\) & \(X_{abs}\) & \(X_{min}\) & \(X_{max}\) & \(X_{mean}\) \\ \hline gmm **0.854000** & 0.293000 & 0.351000 & 0.330000 & 0.228000 & 0.293000 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The adjusted Rand score of the Gaussian Mixture Model for different pooling strategies.
The adjusted Rand scores of different configurations for the linkage and distance metric parameters are available in Table 5.
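The grid of linkage and metric configurations can be reproduced along these lines (placeholder data; note that recent scikit-learn versions call the distance argument `metric`, formerly `affinity`, and that ward linkage only supports the euclidean distance):

```python
import numpy as np
from itertools import product
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

X = np.random.randn(200, 100)                      # placeholder relation vectors
labels = np.repeat(np.arange(14), 15)[:200]

for linkage, metric in product(["single", "complete", "average"],
                               ["euclidean", "cosine", "manhattan", "l1", "l2"]):
    model = AgglomerativeClustering(n_clusters=14, linkage=linkage, metric=metric)
    print(linkage, metric, adjusted_rand_score(labels, model.fit_predict(X)))

ward = AgglomerativeClustering(n_clusters=14, linkage="ward")  # euclidean only
print("ward euclidean", adjusted_rand_score(labels, ward.fit_predict(X)))
```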
### Dbscan
DBSCAN is the most ubiquitous density-based clustering algorithm. Its mechanism can be summarized as follows [12].
1. Find the points within \(\epsilon\) distance of every point and identify the core points, which are points with more than a minimum number of points within distance \(\epsilon\).
2. Determine the connected components of core points on the neighbor graph, excluding all non-core points.
3. Assign each non-core point to a nearby cluster if it is within \(\epsilon\) distance of a core point; otherwise, consider it a noise point.
\begin{table}
\begin{tabular}{l r r r r r r} \hline & \multicolumn{1}{c}{\(X_{subs}\)} & \multicolumn{1}{c}{\(X_{add}\)} & \multicolumn{1}{c}{\(X_{abs}\)} & \multicolumn{1}{c}{\(X_{min}\)} & \multicolumn{1}{c}{\(X_{max}\)} & \multicolumn{1}{c}{\(X_{mean}\)} \\ \hline (ward, euclidean) & **0.682373** & 0.302749 & 0.342485 & 0.317647 & 0.283547 & 0.302749 \\ (single, euclidean) & 0.006886 & **0.007961** & 0.006723 & 0.004333 & 0.006045 & **0.007961** \\ (complete, euclidean) & **0.502632** & 0.296154 & 0.081222 & 0.167146 & 0.299644 & 0.296154 \\ (average, euclidean) & 0.022881 & **0.293066** & 0.016404 & 0.129966 & 0.249915 & **0.293066** \\ (single, cosine) & 0.004966 & **0.009726** & 0.003661 & 0.008308 & 0.008689 & **0.009726** \\ (complete, cosine) & **0.695384** & 0.309572 & 0.420896 & 0.281730 & 0.356601 & 0.309572 \\ (average, cosine) & **0.612819** & 0.292957 & 0.273538 & 0.319111 & 0.448297 & 0.292957 \\ (single, manhattan) & 0.006766 & 0.007777 & **0.007989** & 0.005402 & 0.003136 & 0.007777 \\ (complete, manhattan) & **0.550522** & 0.301899 & 0.030703 & 0.273196 & 0.309888 & 0.301899 \\ (average, manhattan) & 0.021224 & **0.291166** & 0.016537 & 0.132257 & 0.269386 & **0.291166** \\ (single, l1) & 0.006766 & 0.007777 & **0.007989** & 0.005402 & 0.003136 & 0.007777 \\ (complete, l1) & **0.550522** & 0.301899 & 0.030703 & 0.273196 & 0.309888 & 0.301899 \\ (average, l1) & 0.021224 & **0.291166** & 0.016537 & 0.132257 & 0.269386 & **0.291166** \\ (single, l2) & 0.006886 & **0.007961** & 0.006723 & 0.004333 & 0.006045 & **0.007961** \\ (complete, l2) & **0.502632** & 0.296154 & 0.081222 & 0.167146 & 0.299644 & 0.296154 \\ (average, l2) & 0.022881 & **0.293066** & 0.016404 & 0.129966 & 0.249915 & **0.293066** \\ \hline \end{tabular}
\end{table}
Table 5: The adjusted Rand score for different configurations of distance and linkage parameters of agglomerative clustering, for different pooling strategies
We experimented with different metric types, including euclidean, cosine, and manhattan, as well as different \(\epsilon\) values for considering points as neighbors of a core point. Table 6 provides the adjusted Rand scores for the different experimental configurations of DBSCAN.
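A sketch of the DBSCAN configurations, assuming the table entries such as '025' and '050' denote \(\epsilon\) values of 0.25 and 50, respectively; `min_samples=5` is our assumption, and points labeled -1 are treated as noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

X = np.random.randn(200, 100)                      # placeholder relation vectors
labels = np.repeat(np.arange(14), 15)[:200]

for metric, eps in [("euclidean", 50.0), ("cosine", 0.25),
                    ("cosine", 0.30), ("cosine", 0.50), ("manhattan", 50.0)]:
    pred = DBSCAN(eps=eps, min_samples=5, metric=metric).fit_predict(X)
    print(metric, eps, adjusted_rand_score(labels, pred))
```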
## 5 Discussion
### Results
Our results suggest that the subtraction pooling strategy might be the best operation for word embedding-based word relationship representation, compared to the five other strategies experimented with in this investigation. This finding supports the use of the subtraction operator in word vector-based analogy tasks [8]. Table 7 gives the model configuration with the highest average score on each pooling strategy for the different clustering models.
In addition, the k-mean centroid-based clustering algorithm yields the highest scores for 3 of the 6 pooling strategies. The best score comes from clustering relation vectors from the subtraction pooling strategy using the Gaussian Mixture
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(X_{subs}\) & \(X_{add}\) & \(X_{abs}\) & \(X_{min}\) & \(X_{max}\) & \(X_{mean}\) \\ \hline (euclidean, 050) & **0.028191** 0.028177 & 0.028177 & 0.028177 & 0.028177 & 0.028177 & 0.028177 \\ (cosine, 025) & **0.325427** 0.217478 & 0.014572 & 0.243042 & 0.266950 & 0.217478 \\ (cosine, 030) & **0.627861** 0.043688 & 0.000000 & 0.039113 & 0.261000 & 0.043688 \\ (cosine, 050) & **0.512095** 0.006298 & 0.000000 & 0.003168 & 0.001254 & 0.006298 \\ (manhattan, 050) & **0.028191** 0.028177 & 0.028177 & 0.028177 & 0.028177 & 0.028177 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The adjusted Rand score of DBSCAN for different pooling strategies and (metric, \(\epsilon\)) configurations.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(X_{subs}\) & \(X_{add}\) & \(X_{abs}\) & \(X_{min}\) & \(X_{max}\) & \(X_{mean}\) \\ \hline kmean & **0.792549** & 0.363478 & 0.296212 & 0.334140 & 0.313051 & 0.363478 \\ gmm & **0.853806** & 0.293151 & 0.351499 & 0.329660 & 0.228340 & 0.293151 \\ agglomerative: complete, cosine & **0.695384** & 0.309572 & 0.420896 & 0.281730 & 0.356601 & 0.309572 \\ dbscan: cosine, 0.25 & **0.325427** & 0.217478 & 0.014572 & 0.243042 & 0.266950 & 0.217478 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Configurations with the highest average score from the different clustering models experimented with.
clustering model. Although the Gaussian Mixture Model is a distribution-based clustering mechanism, it is built on top of the centroid-based k-means. This reinforces the superiority of centroid-based clustering in our experimental setup, followed by the agglomerative hierarchical clustering configured with complete linkage and cosine similarity.
### Application
The ability to group representations of similar relationships between pairs of words, especially named entities, points toward an unsupervised approach to categorizing relationships among words, including named entities. This can in turn be used for link categorization when building knowledge graphs, thus alleviating, in some cases, the need for manual labeling of the types of links between nodes. It can go as far as providing data for graph learning models, such as graph convolutional neural networks, that aim at including link types in addition to node types in their learning process.
### Future Work
This work gives two pointers for further steps. One is extending the experimentation with more word relationship representation strategies and clustering models. Additional relationship representations could include learned ones, obtained by applying an autoencoder to the pair of word vectors. Another direction is to derive a formal explanation of the current findings of this exploratory study.
## 6 Conclusion
This study explores possibilities for word embedding-based word-to-word relationship representations and their clustering ability with regard to grouping similar relationships. Different relationship representations are obtained by applying basic operations on the coordinates of pairs of vectors. The subtraction pooling strategy and centroid-based clustering models tend to give better results in our exploratory setup. Further work might extend the exploration or provide a formal explanation of the findings.
|
2304.13430 | The Logic of Logic Programming | Our position is that logic programming is not programming in the Horn clause
sublogic of classical logic, but programming in a logic of (inductive)
definitions. Thus, the similarity between prototypical Prolog programs (e.g.,
member, append, ...) and how inductive definitions are expressed in
mathematical text, is not coincidental but essential. We argue here that this
provides a natural solution to the main lingering semantic questions of Logic
Programming and its extensions. | Marc Denecker, David S. Warren | 2023-04-26T10:35:15Z | http://arxiv.org/abs/2304.13430v1 | # The Logic of Logic Programming
###### Abstract
Our position is that logic programming is not programming in the Horn clause sublogic of classical logic, but programming in a _logic of (inductive) definitions_. Thus, the similarity between prototypical Prolog programs (e.g., member, append,...) and how inductive definitions are expressed in mathematical text, is not coincidental but essential. We argue here that this provides a natural solution to the main lingering semantic questions of Logic Programming and its extensions.
## 1 Introduction
There is much ado about the "declarative semantics" of logic programs. But for a pure positive logic program, virtually every logic programmer around the world will accept that the Least Herbrand Model (LHM) [7] represents the state of affairs of the universe and the predicates as determined by the program.\({}^{1}\) E.g., take the member program:
Footnote 1: Negation in programs is discussed in Section 5.
member(X,[X|T]).
member(X,[H|T]) :- member(X,T).
Assuming the vocabulary consists of predicate member/2, list functor \(|/2\) and the constant symbols \([]\), 0, 1, 2,..., then the domain of the LHM is the Herbrand universe, the set of terms built from these symbols, and the LHM contains \(member(t,l)\) iff \(t\) is a member of list \(l\) (\(l\) not necessarily ending with \([]\)).
Historically, e.g., in [7], logic programs are explained as Horn theories, i.e., sets of material implications. But, Horn theories are satisfied in an extremely broad class of (Herbrand and first-order) structures, most of which do not match at all with the LHM. For instance, the Herbrand structure in which all member atoms are true, satisfies the Horn theory while it is _full of errors_, e.g., member(0,[1,2,3]). It is proven in [7] that the Horn theory logically entails all atomic facts true in the LHM. However, the Horn theory entails _none_ of the intended negative literals (e.g., \(\neg member(0,[1,2,3])\)) and only few of the composite formulas true in the LHM and expected to hold in a theory of list-membership in the given universe. Arguably, the Horn theory is inadequate
as an "axiomatisation" of the LHM and the way programmers interpret their program. This inadequacy emerges not only for the member program, but for virtually all logic programs that logic programmers write.
Logic programming is two languages in one: a declarative and a procedural one. Our goal is to investigate its declarative logic. We argue that such a logic should at least play the following roles. (1) The logic should explain what the states of affairs are that programmers associate with their program. Here in this case, it is the LHM and only the LHM. In this logic, a logic program should be an "axiomatisation" of the LHM, in the sense that mathematical logicians see this. (2) The declarative logic should be related to a range of natural language expressions that closely correspond to the formal expressions, both in syntax and semantics. (3) (and related) The declarative logic should give non-ambiguous insight into the meaning of the main connectives of logic programs (":-", "," and "not"). (4) The declarative logic should explain not only the meaning of finalized logic programs but also the meaning of programs in development, components of programs. It should explain how to interpret the program while the programmer is writing it, and not only when it is finished. E.g., when applying logic programming to query a family database \(DB\) for siblings, the programmer may write:
sibling(X,Y) :- child_of(X,P), child_of(Y,P), X \== Y.
The programmer writes this query without knowledge of \(DB\) and must have a precise understanding of this query on its own, independent of the \(DB\). The declarative logic should explain this. Notice that the LHM semantics itself, for all its virtues, does _not_ provide formal semantics for such stand-alone program components.
The goal of this paper is to present such a declarative logic and argue that it satisfies the above criteria. It is based on existing ideas [2, 3, 4]. A logic program is seen as a combination of definitions of all its predicates, along with an implicit "axiom" that constrains the set of function symbols to be the constructors of the universe. E.g., the two rules of the member program are read as an inductive definition of membership in mathematical text.
**Definition**_We define list membership by induction:_
* _(base case) x is a member of list [x|t];_
* _(inductive case) x is a member of [h|t] if x is a member of t._
This mathematical (non-formal) definition defines the membership relation as the least relation that satisfies these two rules. Equivalently, it is the limit of the _induction process_: the process that starts with the empty set and proceeds by iterated application of these rules until a fixpoint is reached. In this view, the similarity between prototypical logic programs and (non-formal) inductive definitions in mathematical text is not coincidental but essential. As a result, the second condition of the previous paragraph will be satisfied.
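To see the induction process at work on this definition, here is an illustrative (hand-traced) fragment of its first stages; each stage shows only a few of the infinitely many atoms derived over the Herbrand universe:

\[S_{0}=\emptyset\;\Rightarrow\;S_{1}=\{member(2,[2]),\ member(1,[1,2]),\ \ldots\}\;\Rightarrow\;S_{2}=S_{1}\cup\{member(2,[1,2]),\ \ldots\}\]

The base case fires first, putting, e.g., \(member(2,[2])\) in; the inductive case then derives \(member(2,[1,2])\) from \(member(2,[2])\).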
The paper is structured as follows. Section 2 introduces the logic underlying LP and discusses the meaning of the rule operator. Sections 3 and 4 use it to analyze full logic programs and their components. Section 5 considers negation.
The declarative logic of Logic Programming
As explained in the Introduction, we now design the core declarative logic \(\mathcal{L}_{\mathcal{D}}\) (syntax and formal semantics) to formalize a logic program as a combination of an _axiom of the universe_ and a _definition_ of all its predicates.
Most mathematical and philosophical logicians will agree that, to understand an expression, one needs to know its _truth conditions_: the states of affairs in which it is _true_, and the states of affairs in which it is _false_. A logic semantics that specifies this is a _truth conditional semantics_. The common way to formalize this is through a satisfaction relation \(M\models\psi\) (or a truth function \(\psi^{M}\)). Here, \(\psi\) is an expression, theory or program, and \(M\) a structure that is an abstraction of a state of affairs. First-order logic's (FO) satisfaction relation \(\models_{FO}\) specifies a truth conditional semantics. A semantics of this type abstracts all computational and operational aspects and formalizes the answer to the essential question of declarative meaning: _when is an expression true, when is it false?_ In our case, the expressions \(\psi\) will be logic programs \(\Pi\) and their components: an _axiom of the universe_ and component definitions. E.g., a structure \(M\) interpreting the relation member by a relation containing \((0,[1,2,3])\)_will not satisfy_ the member definition and is not a _model_ of the definition. Hence, this definition is _false_ in this structure.
The semantics of FO is based on _first-order structures_, while in Logic Programming, often only _Herbrand structures_ are used. In this paper we will use first-order structures. In particular, we cannot formalize the meaning of an axiom of the universe without considering also structures (abstractions of states of affairs) that _do not_ satisfy it.
**Definition 2.1**.: _A (first-order) structure \(M\) for a (first-order) vocabulary \(\Sigma\) consists of a non-empty universe \(\mathcal{U}^{M}\), and appropriate values \(\sigma^{M}\) in \(\mathcal{U}^{M}\) for all symbols \(\sigma\in\Sigma\) (an element of, or a relation or function of appropriate arity on \(\mathcal{U}^{M}\))._
**Definition 2.2**.: _The Herbrand universe \(HU(\Sigma)\) for \(\Sigma\) is the set of terms over \(\Sigma\). \(M\) is a Herbrand structure of \(\Sigma\) if \(\mathcal{U}^{M}\) is \(HU(\Sigma)\) and constants \(c\) and functors \(f/n\) have Herbrand values, i.e., \(c^{M}=c\) and \(f^{M}(t_{1},\ldots,t_{n})=f(t_{1},\ldots,t_{n})\)._
This is the standard notion of Herbrand structure except when \(\Sigma\) contains no constant symbols. In that case, \(HU(\Sigma)\) is empty and no Herbrand structures exist. Instead, in logic programming semantics, one will often add an arbitrary constant to \(\Sigma\) so that Herbrand structures always exist.
First-order structures do have one complication compared to Herbrand structures: while two different Herbrand structures represent necessarily different states of affairs, all _isomorphic_ first-order structures \(M,N\) (notation \(M\cong N\)) represent the _same_ state of affairs. The following is a natural property for any logic \(\mathcal{L}\) with satisfaction relation \(\models_{\mathcal{L}}\): for any pair of isomorphic structures \(M,N\), and for any expression (theory, program) \(\psi\), \(M\models\psi\) iff \(N\models\psi\). Stated differently, \(M\) is a model of \(\psi\) iff \(N\) is a model of \(\psi\). We will show that our semantics satisfies this constraint (Definition 2.4 and Theorem 2.1).
**Definition 2.3**.: _For a given vocabulary \(\Sigma\), and two (first-order) structures \(M,N\) interpreting at least \(\Sigma\), we say that \(M\) and \(N\) are isomorphic relative to \(\Sigma\) (notation \(M\cong_{\Sigma}N\)) if there exists a 1-1 mapping \(b:\mathcal{U}^{M}\rightarrow\mathcal{U}^{N}\) such that \(b(\sigma^{M})=\sigma^{N}\) for each \(\sigma\in\Sigma\).\({}^{2}\) Two structures \(M,N\) are isomorphic (notation \(M\cong N\)) if they interpret the same set of symbols \(\Sigma\) and \(M\cong_{\Sigma}N\)._
Footnote 2: Here \(b\) is extended to functions and relations on \(\mathcal{U}^{M}\) in the standard way.
Below, we define the syntax and the satisfaction relation \(\models_{\mathcal{D}}\) of the logic \(\mathcal{L}_{\mathcal{D}}\) with a truth conditional semantics for logic programs \(\Pi\) and their components. The outcome should be that \(M\models_{\mathcal{D}}\Pi\) iff \(M\) is (isomorphic to) the LHM of \(\Pi\). The logic \(\mathcal{L}_{\mathcal{D}}\) will turn out to be a sublogic of the logic FO(ID) [3], which contains definitions with, among other things, negation in the body, and integrates them with FO.
**The Herbrand Axiom.** Logic Programming uses constant and function symbols in a radically more restrictive way than in FO. In LP, they are treated as constructors of the universe of the program; in FO, the universe is arbitrary, and constants and functors may denote arbitrary objects and functions in it. Semantically, this leads LP to use Herbrand structures, while FO admits the much broader class of first order structures.
Why does Prolog consider only Herbrand structures? To the programmer, the universe of the program consists of all data structures. Compound terms (e.g., the term [1, 2, 3]) are used as data structures, i.e., containers of data, from which later data can be retrieved through unification. But this works only when functors are interpreted as _constructors_. For example, consider the first-order structure \(M^{\prime}\) for the member Horn theory with universe \(\{a\}\). All constants are interpreted by \(a\), every n-ary function symbol is interpreted as the map of n-tuple \((a,\ldots,a)\) to \(a\), every n-ary predicate is interpreted as \(\{(a,\ldots,a)\}\). Structure \(M^{\prime}\) provably satisfies the Horn theory. The data structures represented by terms _all_ collapse into the same object \(a\); all information stored in them has vanished and is completely lost. This is why constructors are important in logic programming. It is the same in functional programming. Also there, the universe of a program is built as the collection of terms formed by a set of constructors. Of course, functional programs have many defined non-constructor functions as well. In LP, the only such functions are the interpreted ones, e.g., \(+,\times\).
The _Herbrand Axiom for_ a set \(\mathcal{CF}\) of constant and function symbols is syntactically denoted as \(\mathcal{H}(\mathcal{CF})\). It expresses that the universe is \(HU(\mathcal{CF})\) and that symbols in \(\mathcal{CF}\) are its _constructors_.
**Definition 2.4**.: _A structure \(M\) satisfies \(\mathcal{H}(\mathcal{CF})\), notation \(M\models_{\mathcal{D}}\mathcal{H}(\mathcal{CF})\), if \(\mathcal{U}^{M}\) is \(HU(\mathcal{CF})\) and all symbols in \(\mathcal{CF}\) have Herbrand values. Or, if \(M\) is isomorphic to such a structure._
Recall that if \(\mathcal{CF}\) contains no constants, then \(HU(\mathcal{CF})\) is empty and \(\mathcal{H}(\mathcal{CF})\) is inconsistent.
The Herbrand Axiom can be expressed as a combination of the unique name axiom and the domain closure axiom for \(\mathcal{CF}\). The domain closure axiom cannot be expressed in FO but requires second-order logic or inductive definitions. More discussion is beyond the scope of this article.
Not every structure \(M\) satisfying \(\mathcal{H}(\mathcal{CF})\) is a Herbrand structure. If \(M\) interprets a function symbol \(f/n\) not in \(\mathcal{CF}\), \(M\) is not a Herbrand structure and \(f/n\) is not interpreted as a constructor. E.g., take again \(\mathcal{CF}=\{[],|/2,0,1,2,\ldots\}\). Take its unique Herbrand structure and expand it for the numerical product functor \(\times/2\) to the structure \(M_{\times}\) by interpreting \(\times\) by the function \(\times^{M_{\times}}:HU(\mathcal{CF})^{2}\mapsto HU(\mathcal{CF})\) that maps pairs of numbers \(n,m\) to \(n\times m\), the product of \(n\) and \(m\), and maps all other pairs to \([]\). Although \(M_{\times}\) is not a Herbrand structure, it satisfies \(\mathcal{H}(\mathcal{CF})\). Such structures are needed for the semantics of Constraint LP, e.g., CLP(R); they are also used in examples below.
**The simplest definition logic.** Definitions, non-formally, are a common and precise form of human knowledge. E.g., a non-inductive definition of sibling:
**Definition 1**.: _A person \(x\) is a sibling of \(y\) if\({}^{3}\) \(x\) and \(y\) are different and they share a parent._
Footnote 3: In non-formal definitions, “if” is often used where, logically speaking, “iff” is intended.
An inductive definition is that of the reachability relation of a graph.
**Definition 2**.: _We define the reachability relation \(R\) of graph \(G\) by induction:_
* _if_ \((x,y)\in G\) _then_ \((x,y)\in R\)_;_
* _if_ \((x,y)\in R\) _and_ \((y,z)\in G\) _then_ \((x,z)\in R\)_._
(Non-formal) definitions _define_ concepts _in terms of_ other concepts. We call the latter the _parameters_ of the definition. E.g., _sibling_ is defined in terms of _parent_, and the reachability relation \(R\) in terms of \(G\). A definition does not constrain the parameter concepts but derives, for each possible assignment of values for the parameters, a unique value for the defined concept.
Basically a (non-formal, inductive) definition specifies how the value (or extension) of the defined set is obtained from the value of the parameters. It can be explained in two equivalent ways: non-constructively, the defined set is the least set that satisfies the rules _interpreted as material implications_; constructively, the defined set is the result of the _induction process_: starting from the empty set, the rules are iteratively applied in a bottom up fashion until a fixpoint is reached. The constructive and non-constructive methods are well-known to be equivalent.
Importantly, while the above explanations are typically used for inductive definitions, both work for non-inductive definitions as well. E.g., it may be overkill to use an induction process to construct the value of the sibling relation from a given parent relation, but it does construct the correct relation.
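As a minimal executable sketch (Python; not part of the original text), the induction process for the reachability definition can be run to its fixpoint on the graph used in Example 2.1 below:

```python
def reachability(graph):
    """Induction process for Definition 2: iterate the two rules
    bottom-up until a fixpoint is reached."""
    R = set()
    while True:
        new = set(graph)                                               # first rule
        new |= {(x, z) for (x, y) in R for (u, z) in graph if u == y}  # second rule
        if new <= R:
            return R
        R |= new

G = {("a", "b"), ("b", "a"), ("c", "c")}
print(sorted(reachability(G)))
# [('a','a'), ('a','b'), ('b','a'), ('b','b'), ('c','c')] -- matches Example 2.1
```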
We now introduce the formal syntax for expressing definitions.
**Definition 2.5**.: _A formal definition \(D\) is a non-empty set of definitional rules of the form:_
\[A\gets B_{1},\cdots,B_{n}.\]
_where \(A,B_{1},\ldots,B_{n}\) are standard atomic formulas. We allow \(B_{i}\) also to be \(\mathbf{t},\mathbf{f}\) and equality and disequality atoms \(s=t,s\neq t\). All variables are implicitly quantified universally in front of the rule. A predicate symbol is defined by \(D\) if it occurs in the head of a rule; a parameter symbol of \(D\) is any other non-variable symbol that occurs in \(D\). The set of defined predicates is denoted \(Def(D)\), the set of parameters as \(Param(D)\)._
This rule-based definition construct is syntactically similar to Aczel's (propositional) definition logic in [1] and Martin-Löf's (predicate) definition logic in [6]. However, following the intuitionistic tradition, Martin-Löf proposed only a proof theory, not a formal model semantics.
**Definition 2.6**.: _A formal definition \(D\) is inductive if its dependency graph\({}^{4}\) contains a cycle._
Footnote 4: The graph consisting of pairs \((P,Q)\) of predicate symbols such that \(Q\) appears in the head and \(P\) in the body of a rule in \(D\).
E.g., the formal representation of the sibling definition:
\[D_{s}=\left\{\begin{array}{l} sibling(x,y)\gets child\_of(x,z),child\_of(y, z),x\neq y\end{array}\right\}\]
E.g., the formal representation of the non-formal definition of reachability:
\[D_{R}=\left\{\begin{array}{l}R(x,y)\gets G(x,y)\\ R(x,z)\gets R(x,y),G(y,z)\end{array}\right\}\]
E.g., an inductive definition of the product of a list of numbers:
\[D_{L}=\left\{\begin{array}{l} Listproduct([],1)\leftarrow\\ Listproduct([h|t],h\times p)\gets Listproduct(t,p)\end{array}\right\}\]
A formal definition can define multiple predicates by simultaneous induction. For a non-formal example, take the inductive definition of even and odd numbers: 0 is even; if n is even then n+1 is odd; if n is odd, n+1 is even.
From now on, sets \(D\) of rules can be seen as Horn theories and as definitions, and they can be evaluated in two satisfaction relations \(\models_{FO}\) and \(\models_{\mathcal{D}}\).
Below, we define the satisfaction relation \(\models_{\mathcal{D}}\) for definitions \(D\), basically by copy paste of the above non-constructive explanation of non-formal definitions.
**Definition 2.7**.: _Let \(D\) be a definition and \(M\) a first-order structure interpreting all symbols in \(D\). We define that \(M\) satisfies \(D\) (notation \(M\models_{\mathcal{D}}D\)) if \(M\models_{FO}D\) and for all structures \(N\) with the same universe and values of parameters in \(Param(D)\) as \(M\), if \(N\models_{FO}D\) then it holds that \(p^{M}\subseteq p^{N}\) for all \(p\in Def(D)\)._
A definition does not in itself incorporate an assumption about the nature of constants and functors as constructors. This is why \(\models_{\mathcal{D}}\) needs to be defined in terms of first-order structures. Thanks to this, it specifies the semantics of the definition \(D_{L}\) of \(Listproduct\) which uses the non-constructor function \(\times\).
Definition 2.7 makes use of the non-constructive characterisation of definitions. Alternatively, the value \(p^{M}\) of defined predicates can be characterized constructively, stating that \(M\) is to be the limit of a (potentially transfinite) induction process \(M_{0},M_{1},\ldots,M_{n},M_{n+1},\ldots\) that starts at the structure \(M_{0}\) identical to \(M\) except that defined predicates are interpreted by the empty set. From then on, \(M_{n+1}\) is obtained from \(M_{n}\) by applying one or more or all applicable rules in a bottom-up way (and taking the union of values at limit ordinals). In the constructive interpretation, rules specify the atomic operations of the induction process. Martin-Löf called them _productions_. The induction process for programs with and without negation was formally described in [4].
**Example 2.1**.: _The definition \(D_{R}\) of reachability (given above) contains only predicate symbols and has no Herbrand structures but \(D_{R}\) has infinitely many first-order models. In each model \(M\), the value \(R^{M}\) is the reachability relation (a.k.a. the transitive closure) of \(G^{M}\). Consider \(M\):_
\[\mathcal{U}^{M}=\{a,b,c\},G^{M}=\{(a,b),(b,a),(c,c)\},R^{M}=\{(a,a),(b,b),(c,c),(a,b),(b,a)\}\]
_To verify that \(M\models_{\mathcal{D}}D_{R}\), we can verify that \(R^{M}\) minimally satisfies the Horn theory \(D_{R}\) in \(M\). Alternatively, we can build the induction process of \(D_{R}\) in the context of \(M\) and verify that it constructs \(R^{M}\). The following sequence of elements of \(R\) can be derived by iterated rule application:_
\[\langle(a,b),(b,a),(a,a),(b,b),(c,c)\rangle\]
_It is well-known that reachability cannot be expressed in FO.\({}^{5,6}\)_
Footnote 5: Here is a folk proof that “\(R\) is the reachability relation of \(G\)” is not expressible in FO. Assume it was expressible in FO, by the FO theory \(\Psi\). Choose new constants \(A,B\), consider the FO theory \(\Psi\cup\{R(A,B)\}\cup\{\psi_{n}|n\in\mathbb{N}_{0}\}\) where \(\psi_{n}=\neg(\exists x_{1},\ldots,x_{n-1}:G(A,x_{1})\wedge\ldots\wedge G(x_{ i},x_{i+1})\wedge\ldots\wedge G(x_{n-1},B))\) expresses that there is no path of length \(n\) from \(A\) to \(B\). This theory is unsatisfiable since it states that \(B\) is reachable from \(A\) but there are no finite paths from \(A\) to \(B\). Therefore by the compactness theorem it has a finite subset \(\Omega\) that is unsatisfiable, which is impossible since clearly, its superset \(\Psi\cup\{R(A,B)\}\cup\Omega\) is satisfiable. Indeed, \(\Omega\) “forbids” only a finite number of lengths of paths from \(A\) to \(B\). QED
Footnote 6: Clark completion of a rule set sometimes agrees with its semantics in \(\mathcal{L}_{\mathcal{D}}\) but not for many inductive rule sets, e.g., \(D_{R}\). For example, adding \((a,c),(b,c)\) to \(R^{M}\) yields a model of the Clark completion in which \(R^{M}\) is not the reachability relation.
**Example 2.2**.: _A small part of the (infinite) induction process of the definition \(D_{L}\) of_ Listproduct _in the context of the structure \(M_{\times}\) introduced before is:_
\[\langle([],1),([2],2),([3,2],6),([5,3,2],30),\ldots\rangle\]
Two important theorems follow (no proof provided). Let \(D\) be a definition over vocabulary \(\Sigma\).
**Theorem 2.1**.: _If \(M\cong_{\Sigma}N\) then \(M\models_{\mathcal{D}}D\) iff \(N\models_{\mathcal{D}}D\)._
The following theorem specifies what should be a property of every logic of definitions: given the universe and values of the parameter undefined symbols, \(D\) uniquely determines the values of its defined symbols.
**Theorem 2.2**.: _Every structure interpreting \(Param(D)\) has a unique expansion for the defined symbols that satisfies \(D\)._
## 3 Explaining full logic programs
The position of this paper on the declarative reading of logic programs boils down to the following.
A logic program \(\Pi\) is a theory \(\{{\cal H}({\cal CF}),D\}\) of the logic \({\cal L}_{\cal D}\), consisting of a definition \(D\) that defines every predicate in \(\Pi\) and the Herbrand Axiom \({\cal H}({\cal CF})\) for a set \({\cal CF}\) of constants and functors.
It is well-known that each program \(\Pi\) has a unique least Herbrand interpretation \(LHM^{{\cal CF}}(\Pi)\), where \({\cal CF}\) contains (at least) all constant and function symbols in \(\Pi\)[7]. Let \(\Sigma\) be the set of all symbols in \(\Pi\) and \({\cal CF}\).
**Theorem 3.1**.: _A structure \(M\) satisfies \(D\) and \({\cal H}({\cal CF})\) iff \(M\cong_{\Sigma}LHM^{{\cal CF}}(\Pi)\).\({}^{7}\)_
Footnote 7: This theorem reassures us that the LHM semantics is correct as a model semantics for \({\cal L}_{\cal D}\). Importantly, the original paper [7] that introduced LHM did not mention definitions and presented the LHM as the _denotation_ of the Horn theory, the set of entailed atomic formulas.
Thus, any \(\Sigma\)-structure \(M\) isomorphic with the LHM is also a model of \(\Pi\), moreover any extension of such \(M\) with _arbitrary values_ for any set of additional symbols, is also a model of \(\Pi\). These are the only models of \(\Pi\). The satisfaction relation \(\models_{\cal D}\) implements the principle of isomorphism. Importantly, it also implements the natural principle that \(\Pi\) contains no information about symbols not occurring in \(\Pi\).
**Example 3.1**.: _Taking \({\cal CF}=\{|/2\}\) for the_ member _program results in an inconsistent \({\cal H}({\cal CF})\). But, for any extended \({\cal CF}\) with at least one constant, this program determines the correct membership relation within the intended universe. All predicates other than_ member _are unconstrained by it._
**Example 3.2**.: _A homework problem commonly given early in introductory Prolog courses is to define family relationships, such as sibling, grandparent, or ancestor, using just a binary child_of relation, and to test it using the student's own family. To define sibling, a student may submit:_
sibling(X,Y) :- child_of(X,P), child_of(Y,P), X \== Y.
child_of(tessa,david).
child_of(jonah,david).
_The LHM is the unique state of affairs that the student had in mind. The universe is as described by \({\cal H}(\{tessa,jonah,david\})\). The definition can be interpreted non-constructively through minimal satisfaction, or constructively through the induction process which constructs the intended relations in at most 4 steps. The rules can be interpreted as productions in the induction process. The facts of_ child_of _behave, not as a conjunction of true facts, but as an exhaustive enumeration involving that same set of facts. The rule for_ sibling _behaves, not as a weak material implication, but as a necessary and sufficient condition to be a sibling._
## 4 Explaining components of logic programs
The previous section defines a program \(\Pi\) as consisting of two modules only: \(D\) and \(\mathcal{H}(\mathcal{CF})\). A program can be large and complex, in which case it may really only be understood by its programmer(s) in a piecemeal way. Therefore, a large program must be able to be split into a collection of natural, meaningful components which the programmer understands, and develops more or less independently of each other. The definitional view shines bright light on this.
Taking the constructive view of definitions, rules are to be viewed as productions in the induction process. We define predicates by describing how their values must be constructed from the values of parameters. The basic operations are bottom-up execution of the rules. As such, single rules are not truth functional expressions. Satisfaction of a rule in a structure is not defined. Of course a rule entails a material implication, but this captures only a small bit of how to understand it.
This leads us to the following question: what are the least components of logic programs for which it makes sense to ask the question: _when is it true, when is it false_? It is not the rule, that much is clear.
The "formulas" of the logic, the basic truth functional components of a program are its (sub)definitions, i.e., rule subsets of \(D\) that define one or more predicates. Take the family program of Example 3.2. To the programmer, the program clearly consists of three components: the (implicit) Herbrand Axiom, and the definitions of child_of and of sibling. The first definition defines child_of by exhaustive enumeration and contains no knowledge of sibling. The second defines sibling in terms of the parameter child_of and contains no knowledge of the value of this parameter.
Not every partition \(D_{1},\ldots,D_{n}\) of \(D\) yields a sensible modularization in subdefinitions of it. A subdefinition \(D_{i}\) that defines one or more predicates, should contain all rules of \(D\) involved in the induction process of these predicates. Thus, formally, a component definition \(D_{i}\) of \(D\) should contain all rules of \(D\) with the same predicates in the head. For the same reason, if two or more predicates are defined by simultaneous induction, their rules should not be spread out over multiple subdefinitions but be concentrated in one definition; otherwise, the induction process cannot be computed. Thus, there should not be cycles in the dependency relation over multiple subdefinitions. In summary, the definitional view on logic programming suggests that a program \(\Pi\) can be naturally split in modules \(D_{1},\ldots,D_{n},\mathcal{H}(\mathcal{CF})\), such that each predicate is defined in exactly one module \(D_{i}\) and there are no cycles in the dependency relation involving predicates defined in different modules. The following theorem proves the correctness of this hypothesis.
**Theorem 4.1**.: _For every structure \(M\) interpreting all symbols of \(\Pi\) and \(\mathcal{CF}\), the following statements are equivalent:_
1. \(M\cong_{\Sigma_{D}}LHM^{\mathcal{CF}}(\Pi)\)_;_
2. \(M\) _satisfies_ \(D\) _and_ \(\mathcal{H}(\mathcal{CF})\)_;_
3. \(M\) _satisfies_ \(D_{1},\ldots,D_{n}\) _and_ \(\mathcal{H}(\mathcal{CF})\)_._
A proof of this theorem can be found in [3]. The theorem says that \(\Pi\) is logically equivalent to \(\mathcal{H}(\mathcal{CF})\) and the conjunction of the definitions \(D_{1},\ldots,D_{n}\). This indeed shows that a programmer can develop such modules \(D_{i}\) and reason with them independently of each other.
## 5 Negation
The nature of negation (as failure) is probably the most troubling question in the history of Logic Programming.\({}^{8}\) For 50 years now, the general conviction has been that the negation not in Prolog cannot be classical negation \(\neg\). Where does that idea come from? It comes from the fact that Horn theories do not entail the falsity of atoms. E.g., consider the query
Footnote 8: The LP community seems much less worried about the nature of the rule operator, while for more than a hundred years, logicians have known that there are considerable troubles with material implication.
?- member(0,[1,2,3])
no

The answer "no" expresses that the member Horn logic theory does not entail the truth of member(0,[1,2,3]), but neither does it entail its falsity. As a consequence, Prolog would be unsound if, in the following query, the symbol not were interpreted as classical negation \(\neg\):
?- not member(0,[1,2,3])
yes

This is the first and main reason why not is believed to be non-classical negation.
But the definitional view sheds a completely different light on the issue. Definitions augmented with the Herbrand Axiom do indeed semantically entail the falsity of many defined atomic facts. In particular, since the LHM is the unique model (modulo isomorphism), any defined atom \(A\) that is false in the LHM is false in _every_ model. Therefore, this theory semantically entails the truth of the classically negated fact \(\neg A\). In particular, member(0,[1,2,3]) is false in _every_ structure that satisfies the member program (as defined in this paper), and hence, \(\neg\)member(0,[1,2,3]) is semantically entailed by the program.
And so, in the following procedure compress/2 which removes duplicates in a list, not can and should be interpreted as classical negation \(\neg\):
compress([],[]).
compress([X|T],[X|T1]) :- compress(T,T1), not member(X,T1).
compress([X|T],T1) :- compress(T,T1), member(X,T1).
The resulting program (defining compress and member) is a stratified program.
_Stratification_ is also a natural principle of great importance in mathematics and science: it is the principle that once a concept is well-defined, it can be used to define new concepts in terms of it. It is a key building principle of science. Mathematical logicians have studied infinite, even transfinite stratified stacks of definitions, called _iterated inductive definitions_[5, 6]. The well-known principle of definition by structural induction is an instance of this [4]. In [2, 3, 4], it was argued that the well-founded semantics implements a semantic stratification principle, and that logic programs under this semantics can be viewed as a finite description of this type of definition.
To conclude, the definitional view on logic programs sheds a different light on the nature of language constructs: negation as failure is indeed classical negation! It is the rule operator that is non-classical: much stronger than a material implication, it is a production operator in the induction process.
We conclude by noting that we have realized the four goals put forward in the Introduction.
|
2307.05702 | Qubit Recycling in Entanglement Distillation | Quantum entanglement distillation is a process to extract a small number of
high-fidelity entanglement from a large number of low-fidelity ones, which in
essence is to trade yield (or survival rate) for fidelity. Among existing
distillation approaches, Gisin's local filtering protocol is commonly adopted
in photonic quantum systems for distilling entangled photons in polarization
basis. Yet, the performance of Gisin's filter is cursed by the same fundamental
trade-off between fidelity and yield. To address this challenge, in this work,
we propose a protocol to recycle the disposed photons and improve their
fidelity by a designed (and optimized) local operator. The key parameters of
the proposed protocol are calculated by solving a constrained optimization
problem. In so doing, we achieve significantly higher yield of high-fidelity
entanglement pairs. We further evaluate the performance of our designed
protocol under two common configurations of Gisin's filter, namely full filter
and partial filter. Compared with existing distillation protocols, the results
demonstrate that our design achieves as much as 31.2% gain in yield under the
same fidelity, while only incurring moderate system complexity in terms of
invested hardware and extra signaling for synchronization. | Stuart Pelletier, Ruozhou Yu, George Rouskas, Jianqing Liu | 2023-07-11T18:11:06Z | http://arxiv.org/abs/2307.05702v1 | # Qubit Recycling in Entanglement Distillation
###### Abstract
Quantum entanglement distillation is a process to extract a small number of high-fidelity entanglement from a large number of low-fidelity ones, which in essence is to trade yield (or survival rate) for fidelity. Among existing distillation approaches, Gisin's local filtering protocol is commonly adopted in photonic quantum systems for distilling entangled photons in polarization basis. Yet, the performance of Gisin's filter is cursed by the same fundamental trade-off between fidelity and yield. To address this challenge, in this work, we propose a protocol to recycle the disposed photons and improve their fidelity by a designed (and optimized) local operator. The key parameters of the proposed protocol are calculated by solving a constrained optimization problem. In so doing, we achieve significantly higher yield of high-fidelity entanglement pairs. We further evaluate the performance of our designed protocol under two common configurations of Gisin's filter, namely full filter and partial filter. Compared with existing distillation protocols, the results demonstrate that our design achieves as much as 31.2% gain in yield under the same fidelity, while only incurring moderate system complexity in terms of invested hardware and extra signaling for synchronization.
Entanglement distillation, Gisin's local filter, POVM, Optimization, Protocol design
## I Introduction
Quantum entanglement as a physical phenomenon in the microscopic world once troubled Einstein who called it "spooky action at a distance," but it was later validated by the well-known Bell inequality test. Nowadays, despite many unanswered scientific questions around quantum entanglement, quantum networks have been widely engineered and deployed around the globe. The common goal of all these quantum networks is to distribute entanglement in large volume and high quality [1], as entanglement is central to numerous applications in future quantum internet such as quantum teleportation, quantum computation, and quantum cryptography [2, 3].
When interacting with the environment like quantum memory and fibre channels, quantum entanglement unavoidably experiences coherence degradation that may lead to entanglement sudden death [4]. The common way to cope with decoherence is entanglement distillation, by which a smaller number of highly entangled states are extracted from a large number of weakly entangled states [5]. Among existing entanglement distillation protocols, Bennett's controlled-NOT (CNOT) operation [6] and Gisin's local filtering operation [7] are featured as mainstream approaches. Compared with Bennett's approach, Gisin's local filter has two appealing merits: (1) only local operations are needed (i.e., no classical communications); (2) only a single copy of the entangled state is needed (i.e., no ancilla entanglements are scarified).
Since its inception in 1996, Gisin's local filter has been extensively researched in both theory and experiments for entanglement distillation. In principle, a pair of weakly entangled qubits (and likewise for multipartite (>2 qubits) entanglement, such as the GHZ state) can become strongly entangled when passing through Gisin's filters. Any qubits reflected by the filter, however, will have their entanglement weakened, or in some cases, destroyed. Such qubits can either be measured or discarded as they are deemed useless at that point. While this uselessness holds true in many (ideal) cases, for some input states and/or under certain (practical) filter configurations, these reflected qubits are shown to have non-zero concurrence, i.e., they are still entangled despite weak strength. A natural question to ask is whether such reflected qubits can be recycled and turned into strongly entangled states. One can obviously anticipate a much higher yield of usable entanglement if the answer to this question is affirmative.
To this end, we present in this paper a novel protocol -- consisting of a non-unitary transformation and multi-party agreement on coincidence count -- to harvest and improve the weakly entangled qubits that are reflected by Gisin's filters. To search for the optimal non-unitary operator, we formulate a constrained optimization problem that maximizes the high-fidelity survival rate, i.e., the total entanglement yield with the minimum requirement on their fidelity. The protocol is integrated into and examined under two common filter-based entanglement distillation setups, namely the full filtering and partial filtering schemes. Based on numerical simulations, we demonstrate the superior performance of our qubit-recycling protocol in terms of high-fidelity survival rate compared to existing filter schemes.
The paper is organized as follows. To begin with, we survey the recent advances in entanglement distillation in Section II. Next, we introduce the basic concepts that are relevant to our research problem in Section III. We then describe the principle and design details of our proposed protocol in Section IV. To evaluate the performance of the protocol, we present the simulation results in Section V. Lastly, we conclude the paper in Section VI with an outlook for the future work.
## II Related Works
In this section, we review recent advancements in entanglement distillation that have contributed to the ongoing development of the field. We organize our discussion into three subtopics: (1) distillation of multipartite states, which
extends the scope of entanglement distillation beyond simple bipartite systems; (2) distillation using hyperentanglement, an emerging approach that utilizes multiple modes of entanglement to enhance distillation; and (3) distillation using reset-and-reuse operations in a quantum computer, a novel methodology that employs the inherent capabilities of quantum computing hardware to facilitate the distillation process by recycling and re-entangling ancilla qubits. By examining these recent developments, we aim to provide an overview of the current state of entanglement distillation research and highlight the significance and novelty of our proposed qubit recycling protocol.
#### II-B1 Distillation of Multipartite Entanglement States
The distillation of multipartite entangled states, such as GHZ states, has garnered attention due to the advantages of entanglement being shared between more than two parties. Huang et al. [8] proposed a single-copy-based distillation scheme for amplitude-damped W states and amplitude-damped GHZ states. De Bone et al. [9] investigated the creation and distillation of GHZ states out of nonperfect Bell pairs. They introduced a heuristic dynamic programming algorithm to optimize protocols for creating and purifying GHZ states.
#### II-B2 Distillation Utilizing Hyperentanglement
Utilizing hyperentanglement has been explored as a promising technique for enhancing entanglement distillation schemes. Zhou and Sheng [10] proposed an efficient two-step entanglement purification protocol for polarization entanglement using a single copy of states by utilizing hyperentanglement in the time bin and spatial modes. Ecker et al. [11] experimentally demonstrated single-copy entanglement distillation using pairs of single photons entangled in both the polarization and energy-time domains.
#### II-B3 Reset-and-Reuse
In recent work by Germain et al. [12], the authors explore the potential of a reset-and-reuse operation in quantum computers to substantially reduce yield loss in entanglement distillation protocols. They implement multi-pass distillation schemes, specifically BBPSSW and DEJMPS, and test them on the IBM-Q environment. This reset-and-reuse feature shows a significant minimization in the number of qubits required for distillation, bringing the number of qubits required per pass down from exponential to constant -- a notably large improvement. It should be noted that such a reset-and-reuse operation, while available in quantum computers, is not currently available in a quantum network setting, as there are many challenges associated with re-entangling distance-separated ancillary qubits after measurement. Our work proposes a novel single-copy qubit recycling protocol which does not require any such re-entangling and can thus be used by a quantum network with currently available hardware.
## III Preliminaries
### _Gisin's Local Filter_
In the demonstrative experiment by Kwiat [13], Gisin's local filter was realized by a series of coated glass slabs, tilted against the vertical axis by the Brewster's angle, as shown by an example in Fig. 1. By adjusting the configuration of these slabs (e.g., angles and coated materials), the transmission probability \(T_{H}\) (resp., \(T_{V}\)) for horizontally (resp. vertically) polarized incident photons can be tuned, owing to the well-known polarization-dependent reflectivity [14]. As a result, undesired states (i.e., noises) can be selectively blocked (and reflected in another direction), thus leaving the surviving photons to be more concentrated in the desired entangled states. In theory, the Gisin's local filter can be modeled as a positive operator-valued measurement (POVM), namely \(\{M_{0},M_{1}\}\) where \(M_{0}=\left(\begin{smallmatrix}\alpha&0\\ 0&\beta\end{smallmatrix}\right)\) and \(M_{1}=I-M_{0}\) are positive semi-definite Hermitian. \(M_{0}\) (\(M_{1}\) likewise) is realized by the projector \(m_{0}=\sqrt{\alpha}|0\rangle\langle 0|+\sqrt{\beta}|1\rangle\langle 1|\) and \(M_{0}=m_{0}m_{0}^{\dagger}\). When implementing the POVM (or Gisin's filter) in photonic systems, \(\alpha\) and \(\beta\) respectively denote the transmission probability \(T_{H}\) and \(T_{V}\) of the glass slabs. That is to say, the design of Gisin's local filter is boiled down to the construction of \(\alpha\)'s and \(\beta\)'s.
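As a minimal numerical sketch (not from the original paper; parameter values are illustrative), the POVM of a single filter can be assembled from the projector \(m_{0}\) and checked for completeness in a few lines:

```python
import numpy as np

def gisin_filter(T_H, T_V):
    """POVM {M0, M1} of a Gisin filter with transmission probabilities
    T_H = alpha and T_V = beta, built from the projector m0."""
    m0 = np.sqrt(T_H) * np.diag([1.0, 0.0]) + np.sqrt(T_V) * np.diag([0.0, 1.0])
    M0 = m0 @ m0.conj().T                     # = diag(alpha, beta)
    M1 = np.eye(2) - M0
    assert np.allclose(M0 + M1, np.eye(2))    # completeness: M0 + M1 = I
    return M0, M1

M0, M1 = gisin_filter(T_H=0.9, T_V=0.3)
```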
### _Channel Decoherence Model_
In this work, we consider a (photonic) quantum network that distributes EPR pairs between any two arbitrary nodes. An entanglement source (ES) generates EPR pairs by directing a laser beam at a BBO (beta-barium borate) crystal. Without loss of generality, the EPR pair in the state of \(|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\) or \(\rho=|\Phi^{+}\rangle\langle\Phi^{+}|\) is assumed.
Then, each qubit of the EPR pair is distributed to Alice and Bob through independent decoherence channels. We consider the amplitude damping model in which state \(|1\rangle\) may decay into \(|0\rangle\). Mathematically, an amplitude damping channel \(\mathcal{E}\) is described by the following super-operators, a.k.a, Kraus operators:
\[E_{0}^{i}=\begin{bmatrix}1&0\\ 0&\sqrt{1-\gamma_{i}}\end{bmatrix},\quad E_{1}^{i}=\begin{bmatrix}0&\sqrt{\gamma_{i}}\\ 0&0\end{bmatrix}, \tag{1}\]
where \(i\in\{A,B\}\) and \(\gamma_{i}=1-e^{-t_{i}/T_{1}}\) is a time-dependent damping factor, in which \(T_{1}\) is the characteristic relaxation time for the \(|1\rangle\) state to decay into \(|0\rangle\). Denote \(\tilde{\gamma}_{i}=1-\gamma_{i}\). After channel decoherence, the received state at Alice and Bob is
\[\rho^{\prime}=\mathcal{E}(\rho)=\sum_{j=0}^{1}\sum_{k=0}^{1}\left(E_{j}^{A} \otimes E_{k}^{B}\right)\rho\left(E_{j}^{A}\otimes E_{k}^{B}\right)^{\dagger}. \tag{2}\]
For the sake of notation simplicity, in the remainder of this paper, we consider the same fading channel for ES-A and ES-B, i.e., \(\gamma=\gamma_{A}=\gamma_{B}\).

Figure 1: Gisin’s filter implemented by a Brewster plate.
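A short numerical sketch of Eqs. (1)-(2) (illustrative only; \(\gamma\) is arbitrary here) applies the damping channel to \(|\Phi^{+}\rangle\) and evaluates the fidelity \(\langle\Phi^{+}|\rho^{\prime}|\Phi^{+}\rangle\), to which the Uhlmann fidelity reduces for a pure target state:

```python
import numpy as np

def kraus(gamma):
    """Amplitude-damping Kraus operators of Eq. (1)."""
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return [E0, E1]

phi_plus = np.zeros(4)
phi_plus[[0, 3]] = 1.0 / np.sqrt(2.0)   # |Phi+> = (|00> + |11>)/sqrt(2)
rho = np.outer(phi_plus, phi_plus)

gamma = 0.2                              # illustrative damping factor
rho_p = sum(np.kron(Ej, Ek) @ rho @ np.kron(Ej, Ek).conj().T
            for Ej in kraus(gamma) for Ek in kraus(gamma))   # Eq. (2)
print(phi_plus @ rho_p @ phi_plus)       # fidelity <Phi+|rho'|Phi+>
```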
## IV Design Principles of Qubit Recycling
In this section, we consider two common entanglement distillation setups in the literature, with one being that both Alice and Bob implement Gisin's local filters (coined as "full filtering") while the other being that either Alice or Bob implements a Gisin's local filter (coined as "partial filtering"). While both setups have their merits, we will investigate the best use case of our proposed qubit-recycling idea and how much gain it can offer.
### _Qubit Recycling under Full Filtering_
#### IV-A1 Typical full filtering design
To offset the decoherence incurred by the amplitude damping channel and restore the received state \(\rho^{\prime}\) closer to its original entanglement state \(\rho\), Alice and Bob implement Gisin's local filters, which are mathematically defined as the POVMs \(\{M_{A,0},M_{A,1}\}\) and \(\{M_{B,0},M_{B,1}\}\) respectively, for entanglement distillation. We consider the local filters performed by Alice and Bob described by the operation:
\[M_{i,0}=\begin{bmatrix}\alpha_{i}&0\\ 0&\beta_{i}\end{bmatrix},M_{i,1}=\begin{bmatrix}\beta_{i}&0\\ 0&\alpha_{i}\end{bmatrix}, \tag{3}\]
where \(\alpha_{i},\beta_{i}\in(0,1)\) and \(\alpha_{i}+\beta_{i}=1\), complying with the POVM property. In existing work, full filtering schemes have been widely explored, wherein Alice and Bob each distills her/his respective qubit independently. This process is mathematically described by applying POVMs on both qubits. We refer to the state after undergoing both filters, i.e., the state Alice and Bob want to keep, as
\[\tilde{\rho}_{11}=\frac{1}{S_{11}}(\sqrt{M_{A,1}}\otimes\sqrt{M_{B,1}})\rho^{ \prime}(\sqrt{M_{A,1}}\otimes\sqrt{M_{B,1}})^{\dagger}. \tag{4}\]
where \(S_{11}\) is the normalization factor, \(S_{11}=\text{Tr}\{(\sqrt{M_{A,1}}\otimes\sqrt{M_{B,1}})\rho^{\prime}(\sqrt{M_{A,1}}\otimes\sqrt{M_{B,1}})^{\dagger}\}\). The value of \(S_{11}\) represents the likelihood that both Alice's and Bob's qubits pass through the Gisin's local filters, and thus can be considered as the success probability, or survival rate, of the distillation process. Note that as we consider identical channels for ES-A and ES-B, that is \(\gamma_{A}=\gamma_{B}\), Alice's and Bob's filters will have the same configurations. Therefore, we can drop the subscripts for A and B and simply let \(\alpha=\alpha_{A}=\alpha_{B}\) (likewise for \(\beta\)).
The calculation of the POVM parameters \(\{\alpha,\beta\}\) is usually performed by solving a constrained optimization problem that seeks to maximize the high-fidelity yield, i.e. the success probability while meeting a minimum requirement on the entanglement fidelity. The reason for posing a hard constraint on fidelity is because some quantum applications (e.g., QKD) have a stringent requirement on the minimum fidelity to be considered usable (e.g., satisfying a minimum secret key rate) [15]. Mathematically,
\[\{\alpha^{*},\beta^{*}\}=\operatorname*{arg\,max}_{\{\alpha,\beta\}}S_{11}; \quad\text{s.t.}\operatorname{Tr}\left[\sqrt{\sqrt{\rho}\tilde{\rho}_{11} \sqrt{\rho}}\right]^{2}\geq F_{th}. \tag{5}\]
This problem is a typical multivariate quadratic optimization problem, which can be easily proven to be convex by checking the second order derivatives of the objective and constraint functions. By Slater's condition, the necessary and sufficient conditions for a solution \(\{\alpha^{*},\beta^{*}\}\) to be the optimal solution are the KKT conditions.
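Since, with the symmetric choice \(\beta=1-\alpha\), only one free parameter remains, even a simple one-dimensional sweep recovers the optimum; the following sketch (reusing `rho_p` and `phi_plus` from the Section III sketch; \(F_{th}\) is illustrative) is one way to do it numerically:

```python
import numpy as np

def full_filter_stats(alpha, rho_p, phi_plus):
    """Survival rate S11 of Eq. (4) and post-filter fidelity for a
    symmetric configuration with beta = 1 - alpha."""
    beta = 1.0 - alpha
    m1 = np.diag([np.sqrt(beta), np.sqrt(alpha)])  # sqrt of M_{i,1} in Eq. (3)
    K = np.kron(m1, m1)
    out = K @ rho_p @ K.conj().T
    S11 = np.real(np.trace(out))
    F = np.real(phi_plus @ (out / S11) @ phi_plus)
    return S11, F

F_th, best = 0.95, None
for a in np.linspace(0.01, 0.99, 99):
    S11, F = full_filter_stats(a, rho_p, phi_plus)
    if F >= F_th and (best is None or S11 > best[0]):
        best = (S11, F, a)   # feasible point with the largest survival rate
```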
#### IV-A2 Residual entanglement in reflected qubits
When Alice's and Bob's local filters are configured using the parameters \(\{\alpha^{*},\beta^{*}\}\), a photon pair passing through both filters is guaranteed to have the desired fidelity level. Yet, even with such an optimal filter configuration, the reflected qubit(s) could still be usable in the sense that a certain degree of entanglement remains.
**Proposition 1**.: _Suppose an EPR pair passes through an amplitude damping channel with parameter \(\gamma\) and is filtered using Gisin's local filter with a POVM with parameters \(\{\alpha,\beta\}\). The resulting state of the reflected photons, \(\tilde{\rho}_{00}\), is entangled when \(\alpha,\beta\neq 0\) and \(\gamma\neq 1\)._
Proof.: Note that \(\tilde{\rho}_{00}=\)
\[\begin{bmatrix}\alpha^{2}\left(\frac{1}{2}+\frac{\gamma^{2}}{2}\right)&0&0& \frac{1}{2}\alpha\beta\left(1-\gamma\right)\\ 0&\frac{1}{2}\alpha\beta\left(1-\gamma\right)\gamma&0&0\\ 0&0&\frac{1}{2}\alpha\beta\left(1-\gamma\right)\gamma&0\\ \frac{1}{2}\alpha\beta\left(1-\gamma\right)&0&0&\frac{1}{2}\beta^{2}\left(1- \gamma\right)^{2}\end{bmatrix}.\]
This density matrix is separable if and only if its partial transpose is positive [16]. This is called the PPT condition, which is equivalent to the condition that its partial transpose has exclusively non-negative eigenvalues. In other words, if at least one of its eigenvalues is negative, then the state \(\tilde{\rho}_{00}\) is entangled. Note that its partial transpose1 is the density matrix
Footnote 1: The partial transpose generally is taken with respect to one qubit, corresponding to either Alice’s or Bob’s qubit. However, the eigenvalues of the partial transpose are invariant under which qubit the partial transpose is taken on, because the partial transpose with respect to Alice’s qubit is equal to the transpose of the partial transpose taken with respect to Bob’s qubit. In this case, then, since the partial transpose is symmetric, it is the same partial transpose matrix for both Alice’s and Bob’s qubits.
\[\begin{bmatrix}\alpha^{2}\left(\frac{1}{2}+\frac{\gamma^{2}}{2}\right)&0&0&0\\ 0&\frac{1}{2}\alpha\beta(1-\gamma)\gamma&\frac{1}{2}\alpha\beta(1-\gamma)&0\\ 0&\frac{1}{2}\alpha\beta(1-\gamma)&\frac{1}{2}\alpha\beta(1-\gamma)\gamma&0\\ 0&0&0&\frac{1}{2}\beta^{2}(1-\gamma)^{2}\end{bmatrix}\]
which has eigenvalues
\[\begin{split}\lambda_{1}&=-\frac{1}{2}\alpha\beta(-1+\gamma)^{2}, &\lambda_{2}=\frac{1}{2}\beta^{2}(-1+\gamma)^{2},\\ \lambda_{3}&=\frac{1}{2}\alpha^{2}(1+\gamma^{2}),&\lambda_{4}=\frac{1}{2} \alpha\beta(1-\gamma^{2}).\end{split} \tag{6}\]
Note that \(\lambda_{2},\lambda_{3}\) and \(\lambda_{4}\) all take on non-negative values for all \(\alpha,\beta,\gamma\in[0,1]\). The eigenvalue \(\lambda_{1}\), however, takes on a negative value except when \(\alpha=0\), \(\beta=0\) or \(\gamma=1\). Therefore, \(\tilde{\rho}_{00}\), is entangled when \(\alpha,\beta\neq 0\) and \(\gamma\neq 1\).
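The PPT test of the proof is easy to check numerically; the following sketch (illustrative values for \(\alpha,\beta,\gamma\)) builds the unnormalized \(\tilde{\rho}_{00}\), takes the partial transpose on Bob's qubit, and confirms that the smallest eigenvalue equals \(\lambda_{1}\) of Eq. (6):

```python
import numpy as np

def rho_00(alpha, beta, gamma):
    """Unnormalized doubly-reflected state from the proof of Proposition 1."""
    d = 0.5 * alpha * beta * (1 - gamma)
    return np.array([
        [alpha**2 * (0.5 + 0.5 * gamma**2), 0,         0,         d],
        [0,                                 d * gamma, 0,         0],
        [0,                                 0,         d * gamma, 0],
        [d,                                 0,         0,         0.5 * beta**2 * (1 - gamma)**2],
    ])

def partial_transpose_B(rho):
    """Partial transpose over Bob's qubit of a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

alpha, beta, gamma = 0.7, 0.3, 0.2
eigs = np.linalg.eigvalsh(partial_transpose_B(rho_00(alpha, beta, gamma)))
lam1 = -0.5 * alpha * beta * (1 - gamma) ** 2   # Eq. (6)
print(eigs.min(), lam1)  # both negative => the reflected pair is entangled
```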
#### IV-A3 Recycling reflected qubits
In light of the remaining usable entanglement in the reflected qubits, we propose a second
Gisin's local filter, denoted as Filter\({}_{\text{A/B},2}\), to harvest them. The basic idea is shown in Fig. 2, in which the reflected qubits are distilled by another filter. Then, the two light paths are integrated and analyzed by a single-photon avalanche detector (SPAD). Note that a small portion of the reflected qubits from Filter\({}_{,1}\) will be reflected by Filter\({}_{,2}\) again. While they can be looped back for further recycling, we choose to measure them as their entanglement strength becomes much weaker than that observed when they are only reflected once. Technically, by calculating the concurrence following Proposition 1, we can show that the entanglement strength progressively deteriorates as qubits are reflected by each subsequent filter.
To determine the optimal configurations of Filter\({}_{,2}\), let us first define an outcome space for Filter\({}_{,1}\) as \(\Omega_{1}\) = \(\{T_{A,1}T_{B,1}\), \(T_{A,1}R_{B,1}\), \(R_{A,1}T_{B,1}\), \(R_{A,1}R_{B,1}\}\). For example, the outcome \(\omega=R_{A,1}T_{B,1}\) implies that Alice's qubit is reflected while Bob's is transmitted. In the traditional full filtering scheme, this outcome would be considered a failure because no coincidence click is observed. In addition, we can define the outcome space for the second-tier local filters \(\Omega_{2}\) = \(\{\emptyset_{A,2}\emptyset_{B,2}\), \(T_{A,2}\emptyset_{B,2}\), \(R_{A,2}\emptyset_{B,2}\), \(\emptyset_{A,2}T_{B,2}\), \(\emptyset_{A,2}R_{B,2}\), \(T_{A,2}T_{B,2}\), \(T_{A,2}R_{B,2}\), \(R_{A,2}T_{B,2}\), \(R_{A,2}R_{B,2}\}\), in which \(\emptyset\) is a null event indicating that no qubit arrives at this filter. Among these possible outcomes, we collect the outcomes which result in the final distilled entanglement in a set \(\Omega_{\angle}\) = \(\{T_{A,1}T_{B,1}\wedge\emptyset_{A,2}\emptyset_{B,2}\), \(T_{A,1}R_{B,1}\wedge\emptyset_{A,2}T_{B,2}\), \(R_{A,1}T_{B,1}\wedge T_{A,2}\emptyset_{B,2}\), \(R_{A,1}R_{B,1}\wedge T_{A,2}T_{B,2}\}\), which gives us the survival rate \(P_{\angle}=\sum_{i=1}^{4}\Pr(\omega_{i}\in\Omega_{\angle})\).
Specifically, the survival rates for the four cases in \(\Omega_{\angle}\) are respectively calculated as follows
\[\begin{split}&\Pr(T_{A,1}T_{B,1}\land\emptyset_{A,2}\emptyset_{B,2})=S_{11}\,,\\ &\Pr(T_{A,1}R_{B,1}\land\emptyset_{A,2}T_{B,2})=\Pr\{(\sqrt{M_{A,1}}\otimes\sqrt{M_{B,0}})\rho^{\prime}(\sqrt{M_{A,1}}\otimes\sqrt{M_{B,0}})^{\dagger}\}\\ &\qquad\qquad\qquad\qquad\qquad\times\Pr\{(I\otimes\sqrt{M^{\prime}_{B,1}})\tilde{\rho}_{10}(I\otimes\sqrt{M^{\prime}_{B,1}})^{\dagger}\}\,,\\ &\Pr(R_{A,1}T_{B,1}\land T_{A,2}\emptyset_{B,2})=\Pr\{(\sqrt{M_{A,0}}\otimes\sqrt{M_{B,1}})\rho^{\prime}(\sqrt{M_{A,0}}\otimes\sqrt{M_{B,1}})^{\dagger}\}\\ &\qquad\qquad\qquad\qquad\qquad\times\Pr\{(\sqrt{M^{\prime}_{A,1}}\otimes I)\tilde{\rho}_{01}(\sqrt{M^{\prime}_{A,1}}\otimes I)^{\dagger}\}\,,\\ &\Pr(R_{A,1}R_{B,1}\land T_{A,2}T_{B,2})=\Pr\{(\sqrt{M_{A,0}}\otimes\sqrt{M_{B,0}})\rho^{\prime}(\sqrt{M_{A,0}}\otimes\sqrt{M_{B,0}})^{\dagger}\}\\ &\qquad\qquad\qquad\qquad\qquad\times\Pr\{(\sqrt{M^{\prime}_{A,1}}\otimes\sqrt{M^{\prime}_{B,1}})\tilde{\rho}_{00}(\sqrt{M^{\prime}_{A,1}}\otimes\sqrt{M^{\prime}_{B,1}})^{\dagger}\}\,.\end{split}\]
The optimal parameters of the second filter in the full filtering scheme are then obtained by solving the constrained optimization problem
\[\{\alpha^{\prime*},\beta^{\prime*}\}=\operatorname*{arg\,max}_{\{\alpha^{\prime},\beta^{\prime}\}}\sum_{i=1}^{4}\,\operatorname*{Pr}(\omega_{i}\in\Omega_{\angle})\,\cdot\,\mathbb{1}(\operatorname{Tr}\left[\sqrt{\sqrt{\rho}\hat{\rho}_{\omega_{i}}\sqrt{\rho}}\right]^{2}\geq F_{th})\,, \tag{7}\]
where \(\hat{\rho}_{\omega_{i}}\) denotes the normalized output state associated with the outcome \(\omega_{i}\). In the partial filtering scheme, in which a local filter is placed on only one side of the entangled pair, the corresponding first- and second-filter optimizations are formulated analogously; in particular, optimizing the second filter leads to the following analogous constrained optimization problem.
\[\{\alpha^{\prime*},\beta^{\prime*}\}=\operatorname*{arg\,max}_{\{\alpha^{\prime},\beta^{\prime}\}}\sum_{i=1}^{2}\,\operatorname*{Pr}(\omega^{\prime}_{i}\in\Omega^{\prime}_{\angle})\,\cdot\,\mathbb{1}(\operatorname{Tr}\left[\sqrt{\sqrt{\rho}\hat{\rho}_{1,\omega^{\prime}_{i}}\sqrt{\rho}}\right]^{2}\geq F_{th})\,. \tag{10}\]
## V Performance Evaluation
### _Simulation Methodology_
In order to evaluate the performance of our proposed qubit recycling protocol, we developed a simulation model which solves the constrained optimization problems (5), (7), (9), and (10). The simulation is implemented in Python, and consists of the following steps:
1. **Initialization**: At the beginning of the simulation, the initial parameters and constraints of the problem are defined. The quantum system \(\rho\) is prepared, a range of \(\gamma\) values to evaluate is defined, and the threshold \(F_{th}\) is fixed. Specifically, \(F_{th}\) values of 0.7 and 0.9 were selected.
2. **First filter parameter optimization**: The simulation first assumes a single-filter model as a benchmark, and refines the parameters of the local POVM operator Filter\({}_{,1}\) through an iterative optimization algorithm. The optimization process iterates through the given \(\gamma\) value range for the given \(F_{th}\) value and finds the \(\{\alpha,\beta\}\) values which maximize (5) and (9), respectively.
3. **Second filter parameter optimization**: Given the optimized \(\{\alpha,\beta\}\) values corresponding to a given \(\gamma\) and \(F_{th}\) for Filter\({}_{,1}\), a second filter Filter\({}_{,2}\) is optimized using similar iterative methods to solve (7) and (10).
4. **Evaluation**: The optimized local operators are then applied to the prepared quantum system, and the survival rate and fidelity of the resulting entanglement pairs are calculated, both for the normal filtering case (i.e., the benchmark) and for the filtering-with-recycling case, for comparison. Specifically, the normal filtering case is separately instantiated with the full filtering and partial filtering schemes.
By following the aforementioned simulation methodology, we are able to determine the optimal design of our local operator for recycling the disposed photons, achieving a significant increase in high-fidelity survival rate over the optimized benchmark scheme. In the following subsections, we will discuss the specific results obtained for the full filtering and partial filtering schemes.
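To make the methodology concrete, the following minimal Python sketch implements the first-filter grid search of step 2 for a single \((\gamma, F_{th})\) pair. It assumes a \(|\Phi^{+}\rangle\) source, standard amplitude-damping Kraus operators on each arm, and a diagonal filter with transmission amplitudes \((\alpha,\beta)\) on the \((|0\rangle,|1\rangle)\) components; these conventions, and all names below, are illustrative rather than taken from our actual implementation.

```python
import numpy as np
from itertools import product

# Maximally entangled target |Phi+> = (|00> + |11>)/sqrt(2), as a density matrix.
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5

def damp_both_arms(rho, gamma):
    """Amplitude-damping channel with parameter gamma applied to each qubit."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return sum(np.kron(KA, KB) @ rho @ np.kron(KA, KB).T
               for KA, KB in product((K0, K1), repeat=2))

def transmit(rho, alpha, beta):
    """Pass both qubits through the filter diag(alpha, beta); return the
    normalised post-selected state and the survival probability."""
    sqrtM = np.kron(np.diag([alpha, beta]), np.diag([alpha, beta]))
    sub = sqrtM @ rho @ sqrtM.T
    p = np.trace(sub).real
    return (sub / p if p > 0 else sub), p

def optimize_first_filter(gamma, F_th, grid=np.linspace(0.01, 1.0, 100)):
    """Grid search for (alpha, beta) maximising survival s.t. fidelity >= F_th."""
    rho_p = damp_both_arms(bell, gamma)
    best_p, best_ab = 0.0, None
    for a, b in product(grid, repeat=2):
        rho_f, p = transmit(rho_p, a, b)
        if p > best_p and np.trace(bell @ rho_f).real >= F_th:
            best_p, best_ab = p, (a, b)
    return best_p, best_ab

print(optimize_first_filter(gamma=0.2, F_th=0.9))
```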
### _Full Filter Results_
Our simulation results demonstrate that the full filtering scheme with qubit recycling achieves a significant improvement in survival rate compared to the benchmark single-filter protocol, as shown in Fig. 3(a) and Fig. 3(b). For the \(F_{th}=0.7\) case, our design adds 20.8% to 31.2% additional survival rate compared to the benchmark, for \(\gamma\in(0.3676,0.4059)\). Similarly, for the \(F_{th}=0.9\) case we observe a survival rate addition between 30.6% and 31.2%, for \(\gamma\in(0.1056,0.1085)\).
The limited range of \(\gamma\) values is easily interpreted: \(\gamma\) values below this range already produce states with fidelity above the threshold without any filtering, so the optimal choice is to not use Gisin's local filter at all. In other words, the channel introduces such an insignificant amount of noise that the entanglement can simply pass through the channel without any filtering and still maintain high fidelity. For \(\gamma\) values above this range, the amplitude damping effect is so strong that there exist no \(\{\alpha,\beta\}\) values for which filtering achieves a fidelity greater than \(F_{th}\). To achieve high-fidelity entanglement in the high-\(\gamma\) range, one could cascade filters in series with Filter\({}_{,1}\), which constitutes an orthogonal research topic.
An analysis of the contributions of different filtering events to the final survival rate reveals that the improvement comes mostly from the \(\tilde{\rho}_{10}\) and \(\tilde{\rho}_{01}\) cases, implying that one photon is reflected in one arm while its entangled counterpart is transmitted in the other arm, as shown in the histogram in Fig. 3(c). This indicates that our proposed qubit recycling protocol effectively recycles the disposed photons in these cases, leading to a higher overall survival rate without compromising the fidelity of the entangled photon pairs.
### _Partial Filter Results_
Our simulation results for the partial filtering scheme show an improvement in survival rate of a similar degree to the full filtering scheme, as can be seen in Fig. 3(d) and Fig. 3(e). Specifically, for the \(F_{th}=0.7\) case, the partial filtering scheme adds between 20.5% and 25.0% to the benchmark survival rate, for \(\gamma\in(0.3676,0.3824)\). For the \(F_{th}=0.9\) case, we similarly observe an additional 24.3%-25.0% increase in survival rate, for \(\gamma\in(0.1056,0.1079)\).
We note an observed tradeoff between the full and partial filtering schemes. The \(\gamma\) ranges for which the partial filtering scheme is a viable design are proper subsets of the corresponding full filter's \(\gamma\) ranges; however, within those ranges the overall survival rates are significantly higher for the partial filtering scheme. For the \(F_{th}=0.7\) case, the highest survival rate using the full filtering scheme is 56.1%, while the corresponding partial filtering scheme reaches a 74.9% survival rate. A similar difference is observed in the \(F_{th}=0.9\) case, where the full filtering scheme has a maximum survival rate of 56.2%, and the corresponding partial filtering scheme has a survival rate of 75.0%.
This tradeoff can be explained by the increase in the probability of photons being initially transmitted through the first filter. Given that the filter is on only one side of the entanglement pair in the partial filtering scheme, compared to both sides in the full filtering scheme, the probability of being transmitted is much greater. This leaves less room for filtering, though, which explains the smaller \(\gamma\) ranges for which we see a gain in survival rate. These differences in the contributions of the transmitted photons can be seen by comparing the histograms in Fig. 3(c) and Fig. 3(f).
### _Synchronization and Multi-party Agreement_
The results confirm the effectiveness of our qubit recycling protocol in enhancing the performance of entanglement distillation in both the full filtering and partial filtering schemes. However, both schemes suffer from a potential synchronization challenge. It occurs in the \(\tilde{\rho}_{10}\) and \(\tilde{\rho}_{01}\) cases in the full filtering scheme, or the \(\tilde{\rho}_{0}\) case in the partial filtering scheme, where one photon in an entangled pair passes through its first filter (or is not filtered, in the case of the partial filtering scheme), while the corresponding photon reflects off of its respective first filter and subsequently passes through its second filter. As a result, the arrival times of the photons at Alice's and Bob's detectors will be different, leading to a discrepancy in their timesheets. When Alice and Bob compare their timesheets to identify photon coincidences, this discrepancy may cause difficulties in recognizing these events as coincidences, potentially leading them to be incorrectly discarded.
This time discrepancy can be avoided if Alice and Bob each measure the distance of their respective recycled light paths and share this information with each other, as well as the entanglement source. Alice and Bob can then compensate for the time difference for the recycled photons. In addition, the entanglement source can also use this information to emit photons only at intervals which are not equal to the interval between the arrivals of the entangled photons in these cases. This allows Alice and Bob to be certain that any photons arriving with such an interval between them can in fact be labeled a coincidence pair.
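As an illustration of this compensation step, the sketch below shifts Bob's timestamps by the measured extra flight time of his recycled light path before the timesheets are compared. The function name, the aligned-list pairing, and the numerical values are illustrative assumptions, not part of our protocol specification:

```python
import numpy as np

C = 299_792_458.0   # speed of light in vacuum, m/s

def match_coincidences(t_alice, t_bob, extra_path_bob, window=1e-9):
    """Shift Bob's timestamps by the extra flight time of his recycled light
    path, then label pairs within the coincidence window. (Illustrative only:
    events are assumed pre-paired; a real search would match nearest events.)"""
    t_bob_shifted = np.asarray(t_bob) - extra_path_bob/C
    return [(ta, tb) for ta, tb in zip(t_alice, t_bob_shifted)
            if abs(ta - tb) <= window]

# Example: Bob's recycled photons travel an extra 3 m before detection.
print(match_coincidences([0.0, 5e-9], [1.0007e-8, 1.5007e-8], extra_path_bob=3.0))
```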
Furthermore, it is important to note that in the full filtering scheme, even if the \(\tilde{\rho}_{10}\) and \(\tilde{\rho}_{01}\) cases are excluded, the inclusion of the \(\tilde{\rho}_{00}\) case alone still results in a benefit in survival rate, albeit a smaller one. Specifically, we see a 6.06% increase in survival rate for \(F_{th}\) = 0.7, and a 6.24% increase for \(F_{th}\) = 0.9. This is illustrated by Fig. 3(c).
## VI Conclusion and Future Work
In this paper, we have presented a novel qubit recycling protocol for improving the yield of high-fidelity entangled qubits in photonic quantum systems. By employing a second local filter, our approach effectively reclaims discarded entangled qubits, resulting in a substantial increase in the yield of high-fidelity entanglement pairs. Our proposed protocol achieves up to a 31.2% gain in high-fidelity survival rate while incurring only moderate system complexity in terms of invested hardware and extra signaling for synchronization. Our work demonstrates the potential of qubit recycling in quantum entanglement distillation, which could have implications for the development of scalable and robust quantum communication networks.
An avenue for future work is to examine the applications of qubit recycling in different network models (e.g., multipartite entanglement or non-symmetric noise channels). Another avenue is examining the local filter with a zero-valued parameter, which breaks entanglement in the reflected photons. In some network models, using such a filter can be optimal, so finding a use for these photons could lead to improvement over our proposed protocol.
Figure 3: (a, b) The survival rates with respect to \(\gamma\) for given \(F_{th}\) values for the full filtering scheme and (d, e) for the partial filtering scheme. A breakdown of which outcomes contribute to survival rate for a given \(\gamma\) and \(F_{th}\) is plotted for (c) the full filtering and (f) the partial filtering schemes. |
2305.13719 | Can electromagnetic charge inhabit in Rastall gravity? | One of the eminent generalizations of theory of general relativity is the
Rastall gravity which was constructed based on the assumption of the
non-conserved energy-momentum tensor of the matter field. Although several
black hole solutions in Rastall gravity coupled to the electromagnetic field
have been presented in the literature, in the current paper we argue
that the Rastall gravity with non-conserved energy-momentum tensor (with
$\lambda\neq0$ and $R\neq0$) cannot couple to the electrodynamics, i.e., the
electromagnetically charged black hole solution cannot be obtained in this
case. This statement holds for both linear and nonlinear electrodynamics
with electric, magnetic, or dyonic charges coupled to Rastall gravity. | Bobir Toshmatov, Zdeněk Stuchlík, Bobomurat Ahmedov | 2023-05-23T06:14:28Z | http://arxiv.org/abs/2305.13719v1 | # Can electromagnetic charge inhabit in Rastall gravity?
###### Abstract
One of the eminent generalizations of the theory of general relativity is Rastall gravity, which was constructed based on the assumption of a non-conserved energy-momentum tensor of the matter field. Although several black hole solutions in Rastall gravity coupled to the electromagnetic field have been presented in the literature, in the current paper we argue that Rastall gravity with non-conserved energy-momentum tensor (with \(\lambda\neq 0\) and \(R\neq 0\)) cannot couple to the electrodynamics, i.e., the electromagnetically charged black hole solution cannot be obtained in this case. This statement holds for both linear and nonlinear electrodynamics with electric, magnetic, or dyonic charges coupled to Rastall gravity.
keywords: General relativity, Rastall gravity, electrodynamics
## 1 Introduction
General relativity is the prevailing theory of gravity, which describes the force as a result of massive objects warping spacetime. It is well known that the theory of general relativity is one of the simplest and most elegant theories of gravitation and that it has explained several hidden mysteries of nature and the universe. The correctness of general relativity has been verified by several experimental tests in the weak-field regime [1]. However, its correctness in strong gravitational fields is still under debate, as recent observational breakthroughs such as the detection of gravitational waves from the coalescence of two black holes or neutron stars in binaries [2; 3; 4; 5; 6] and the images of the supermassive black holes in the centers of the Milky Way [7] and M87 [8] galaxies have still left some room for alternative and modified theories of gravity. Furthermore, there remain many shortcomings on the theoretical and observational sides, such as the accelerated expansion of the universe and the existence of dark matter and dark energy, on account of which modification of general relativity seems inevitable.
Modified and alternative theories of gravity have been constructed in mainly two ways. In the first method, the fundamental assumptions of general relativity are kept but new additional terms are added to the Lagrangian density of general relativity, on account of which all field equations are modified by additional terms. In the second method, the fundamental assumptions of general relativity themselves are changed. Rastall gravity belongs to the latter group. It is clear that one of the fundamental assumptions of general relativity is the null covariant divergence of the energy-momentum tensor. Rastall followed the second method and attempted to modify the theory of general relativity by assuming that the covariant divergence of the energy-momentum tensor varies directly as the derivative of the Ricci scalar [9; 10]. Clearly, this explicitly represents a nonminimal coupling between matter and geometry in curved spacetimes. In flat spacetimes, the energy-momentum tensor satisfies a conservation law, as the curvature of spacetime vanishes. As a result, the field equations become equivalent to the vacuum Einstein equations. Even in a curved spacetime with a matter field, the field equations of Rastall gravity can be written as the Einstein equations in terms of the effective energy-momentum tensor which we will present in the next section. From this point of view, Rastall gravity can be considered as equivalent to general relativity, as stated in [11]. Moreover, these theories are equivalent in vacuum and in the non-vacuum case with conserved energy-momentum tensors. However, for a matter field with non-conserved energy-momentum tensor, these two theories are different. In spite of that, there are still various issues on which these two theories lead to different consequences [12]. Over the last decades, Rastall gravity has gained the attention of scientists and, as a result, a vast number of papers related to new solutions, or to the properties of black hole solutions in Rastall gravity, have been published [13; 14; 15; 16; 17]. Among them are works related to charged solutions [18; 19; 20; 21; 22; 23; 24]. Unsurprisingly, in this paper we present results that question the possibility of obtaining charged black hole solutions in Rastall gravity.
We study the possibility of constructing black hole solutions in Rastall gravity coupled to electrodynamics. The paper is organized as follows: In section 2 we revisit Rastall gravity and the equations of motion. Section 3 is devoted to a system of Rastall gravity coupled to the Maxwell electrodynamics. In section 4 we repeat the calculations presented in the previous section for the nonlinear electrodynamics. Finally, in sections 5 and 6 we discuss and summarize the main results obtained in the paper. Throughout the paper, we use geometrized units in which the Newtonian gravitational constant \(G_{N}\) and the speed of light \(c\) are set as \(G_{N}=c=1\).
## 2 Rastall gravity revisited
According to Rastall's theory of gravity, the divergence of the energy-momentum tensor, \(T^{\mu\nu}\), is assumed to be given by [9]
\[\nabla_{\nu}T^{\mu\nu}=\lambda\nabla^{\mu}R\, \tag{1}\]
where \(\lambda\) is a free parameter, the so-called Rastall coupling parameter, and \(R\) is the Ricci scalar. The assumption of a non-divergence-free energy-momentum tensor, varying directly as the derivative of the Ricci scalar, represents a nonminimal coupling between matter and geometry. In other words, if there is no matter field (vacuum), the theory of general relativity is recovered. Thus, being a generalization of general relativity, Rastall gravity includes both general relativity, in which the energy-momentum tensor is conserved (corresponding to \(\lambda=0\) or \(R=\text{const}\)., the inner region in Fig. 1), and the case in which the energy-momentum tensor is not conserved (corresponding to \(\lambda\neq 0\) and \(R\neq\text{const}\)., the outer region in Fig. 1).
According to the assumption of a non-divergence-free energy-momentum tensor in the curved spacetime, Rastall obtained a consistent set of field equations
\[\tilde{G}_{\mu\nu}\equiv G_{\mu\nu}+\lambda\kappa g_{\mu\nu}R=\kappa T_{\mu \nu}\, \tag{2}\]
with the Einstein tensor defined by
\[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\, \tag{3}\]
where \(\kappa=8\pi\). In the case of general relativity, which corresponds to \(\lambda=0\), equation (2) reduces to the well-known Einstein field equations \(G_{\mu\nu}=\kappa T_{\mu\nu}\). The trace of the field equation (2) results in the following equation:
\[R=\frac{\kappa}{4\kappa\lambda-1}T\,\qquad\kappa\lambda\neq\frac{1}{4}. \tag{4}\]
In terms of the expression (4), one can rewrite the field equation (2) in the alternative form as
\[G_{\mu\nu}=\kappa\tilde{T}_{\mu\nu}\, \tag{5}\]
where
\[\tilde{T}_{\mu\nu}=T_{\mu\nu}-\frac{\kappa\lambda}{4\kappa\lambda-1}g_{\mu \nu}T. \tag{6}\]
One can see from (5) that the Rastall term in the field equation can be considered as a new additional matter field to the right-hand side of the Einstein equations. Therefore, in this sense, Rastall gravity can be considered formally equivalent to general relativity [11]. Now by choosing the appropriate matter field and energy-momentum tensor and fixing the background ansatz,
Figure 1: Schematic picture of the Rastall gravity that generalizes general relativity (GR) is presented.
one can construct the solution in Rastall gravity. For simplicity, we choose the static, spherically symmetric spacetime as
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d \phi^{2}\right)\,. \tag{7}\]
For the spacetime line element (7) non-zero components of the left hand side of the field equations (2) are given as
\[\tilde{G}^{t}_{\phantom{t}t}=\frac{rf^{\prime}+f-1}{r^{2}}+\lambda \kappa R=\tilde{G}^{r}_{\phantom{t}r}\,,\] \[\tilde{G}^{\theta}_{\phantom{t}\theta}=\frac{f^{\prime\prime}}{2 }+\frac{f^{\prime}}{r}+\lambda\kappa R=\tilde{G}^{\phi}_{\phantom{t}\phi}\,, \tag{8}\]
where the Ricci scalar equals
\[R=-f^{\prime\prime}-\frac{2\left(2rf^{\prime}+f-1\right)}{r^{2}}\,. \tag{9}\]
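As a consistency check of (8) and (9), the Ricci scalar of the line element (7) can be recomputed from first principles. The following sympy sketch assumes the standard curvature sign conventions (which match (9); conventions vary in the literature):

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
f = sp.Function('f')(r)
x = [t, r, th, ph]
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # line element (7)
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbols Gamma^a_{bc} of the metric g.
    return sp.Rational(1, 2)*sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
           + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d])) for d in range(4))

def Ricci(b, c):
    # Ricci tensor R_{bc} contracted from the Riemann tensor.
    return sum(sp.diff(Gamma(a, b, c), x[a]) - sp.diff(Gamma(a, b, a), x[c])
           + sum(Gamma(a, a, d)*Gamma(d, b, c) - Gamma(a, c, d)*Gamma(d, b, a)
                 for d in range(4)) for a in range(4))

# The Ricci tensor is diagonal here, so the scalar is a simple contraction.
R = sum(ginv[b, b]*Ricci(b, b) for b in range(4))
print(sp.simplify(R - (-sp.diff(f, r, 2) - 2*(2*r*sp.diff(f, r) + f - 1)/r**2)))  # -> 0
```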
In the following sections, we solve these equations for electromagnetically charged matter fields and examine whether such charged solutions can exist in Rastall gravity.
## 3 Maxwell electrodynamics coupled to Rastall gravity
Since in the previous section all equations related to the geometry of the spacetime were given, in this section we solve these equations by taking the energy-momentum tensor of the matter field to be that of Maxwell electrodynamics:
\[T_{\mu\nu}=\frac{2}{\kappa}\left(F^{\phantom{\mu}\alpha}_{\mu}F_{\nu\alpha}- \frac{1}{4}g_{\mu\nu}F_{\delta\gamma}F^{\delta\gamma}\right)\,, \tag{10}\]
where the electromagnetic field tensor is determined by the vector potential \(A_{\mu}\) as \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). The explicit forms of the energy-momentum tensor components are identical for both the electrically and magnetically charged cases:
\[T^{t}_{\phantom{t}t}=T^{r}_{\phantom{r}r}=-\frac{Q^{2}}{\kappa r^{4}}\,\,, \qquad T^{\theta}_{\phantom{t}\theta}=T^{\phi}_{\phantom{t}\phi}=\frac{Q^{2}}{ \kappa r^{4}}\,. \tag{11}\]
By simplifying the field equations (2) with (8) and (11), we obtain the following two independent differential equations:
\[\frac{1-rf^{\prime}-f}{r^{2}}-\lambda\kappa R=\frac{Q^{2}}{r^{4}}, \tag{12}\]
\[\frac{f^{\prime\prime}}{2}+\frac{f^{\prime}}{r}+\lambda\kappa R=\frac{Q^{2}}{ r^{4}}\,. \tag{13}\]
If the above differential equations are solved separately in the general relativity limit (\(\lambda=0\)), both equations (12) and (13) result in the well-known Reissner-Nordstrom solution
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\,. \tag{14}\]
From the fact that the Reissner-Nordstrom solution (14) is obtained by solving both differential equations (12) and (13), we are convinced that linear electrodynamics can couple to general relativity.
In the next step, we solve the differential equations (12) and (13) separately in Rastall gravity (\(\lambda\neq 0\)). By solving the first differential equation (12), we obtain the metric function as following:
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+cr^{-2+\frac{1}{\lambda\kappa}}\,, \tag{15}\]
while from the second differential equation (13) we arrive at the metric function
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+cr^{\frac{4\lambda\kappa}{1-2\lambda\kappa}}\,, \tag{16}\]
where \(c\) is the integration constant. For \(f(r)\) to be the solution, both metric functions (15) and (16) must be identical. This can happen only in the following two cases:
1. the integration constant is zero (\(c=0\)). In this case both solutions reduce to the Reissner-Nordstrom black hole solution. We should note that the Reissner-Nordstrom solution is also a solution of Rastall gravity only because it is a solution of general relativity coupled to the Maxwell electrodynamics. We strengthen this statement with the fact that for the Reissner-Nordstrom solution the Ricci scalar vanishes (\(R=0\)) and, consequently, the energy-momentum tensor becomes conserved and all field equations (including (1), (2) and (5)) take the form of the ones of pure general relativity;
2. exponents of \(r\) in the last terms of both metric functions (15) and (16) are equal when \(\lambda\kappa=1/4\). In this case the metric function takes the de-Sitter-like form as (it was obtained in [21]) \[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+cr^{2}\,.\] (17) However, even if the metric function (17) satisfies both differential equations (12) and (13), it does not satisfy the trace of the field equation (4), as
\(\lambda\kappa=1/4\) is the forbidden value (see also [9]). This means that even though the metric function (17) satisfies both differential equations (12) and (13), it fails to satisfy the trace equation (4), which is a necessary condition for any valid solution of the field equations. Hence, it can be concluded that the metric function (17) is not a valid solution.
Thus, as we have discussed in the above two cases, the charged black hole solution can exist only in general relativity, which is a subset of Rastall gravity.
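The exponents appearing in (15) and (16) can be verified symbolically by inserting the ansatz \(f(r)=1-2M/r+Q^{2}/r^{2}+cr^{s}\) into (12) and (13). A minimal sympy sketch (the symbol `lk` below stands for the product \(\lambda\kappa\)):

```python
import sympy as sp

r, M, Q, c, s = sp.symbols('r M Q c s', positive=True)
lk = sp.symbols('lk', positive=True)   # shorthand for the product lambda*kappa

f = 1 - 2*M/r + Q**2/r**2 + c*r**s     # Reissner-Nordstrom plus a power-law tail
R = -sp.diff(f, r, 2) - 2*(2*r*sp.diff(f, r) + f - 1)/r**2   # Ricci scalar (9)

eq12 = (1 - r*sp.diff(f, r) - f)/r**2 - lk*R - Q**2/r**4         # residual of (12)
eq13 = sp.diff(f, r, 2)/2 + sp.diff(f, r)/r + lk*R - Q**2/r**4   # residual of (13)

# The RN part cancels identically; the leftover fixes the exponent s of the tail
# (s = -1 merely renormalises the mass term and can be discarded).
print(sp.solve(sp.simplify(eq12*r**(2 - s)), s))   # [-1, (1 - 2*lk)/lk], i.e. s = 1/lk - 2
print(sp.solve(sp.simplify(eq13*r**(2 - s)), s))   # [-1, 4*lk/(1 - 2*lk)]
```

Both exponents coincide only for \(\lambda\kappa=1/4\), in agreement with case 2 above.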
## 4 Nonlinear electrodynamics coupled to Rastall gravity
We have shown in the previous section that Maxwell electrodynamics cannot couple to Rastall gravity to produce a charged black hole solution. As a continuation, we generalize the scenario to the nonlinear electrodynamics coupled to Rastall gravity. We again adopt a static, spherically symmetric form of the spacetime. The energy-momentum tensor of the nonlinear electrodynamics is given by
\[T_{\mu\nu}=\frac{2}{\kappa}\left({\cal L}_{F}F_{\mu}^{\ \alpha}F_{\nu\alpha}- \frac{1}{4}g_{\mu\nu}{\cal L}\right)\,, \tag{18}\]
where \({\cal L}_{F}\) denotes the derivative of the Lagrangian density of the nonlinear electrodynamics with respect to the invariant \(F\equiv F_{\mu\nu}F^{\mu\nu}\), \({\cal L}_{F}=\partial_{F}{\cal L}\). In general, the electromagnetic four-potential, \(A_{\mu}\), is written for spherically symmetric spacetimes in terms of the 1-forms of the Carter tetrad for the background line element (7) as [25]
\[A=-\frac{\varphi}{\sqrt{f}}\,\omega^{t}-\frac{Q_{m}\cot\theta}{r}\,\omega^{\phi}\,. \tag{19}\]
where the 1-forms of the Carter tetrad are defined as
\[\omega^{t}=\sqrt{f}dt\,\qquad\omega^{r}=\frac{1}{\sqrt{f}}dr\,\] \[\omega^{\theta}=rd\theta\,\qquad\omega^{\phi}=r\sin\theta d \phi. \tag{20}\]
Thus, the electromagnetic field four-potential (19) can be written explicitly as
\[A=\varphi(r)dt-Q_{m}\cos\theta d\phi\, \tag{21}\]
where the time and azimuthal components of the potential (21) correspond to the electrically and magnetically charged black holes, respectively, \(\varphi\) and \(Q_{m}\) being the electric scalar potential and the magnetic charge. Unlike the case of linear electrodynamics in the previous section, in the case of nonlinear electrodynamics electrically and magnetically charged electromagnetic fields do not generate identical solutions [26; 27; 28; 29; 30] and their characteristic oscillations are not isospectral [31; 32; 33; 34; 35]. Therefore, below we present the construction of the electrically and magnetically charged black hole solutions in Rastall gravity coupled to the nonlinear electrodynamics separately.
### Electrically charged black holes
In the case of the electrically charged black hole, the time component of the four-potential (21) of the electromagnetic field survives as
\[A_{\mu}=\{\varphi(r),\ 0,\ 0,\ 0\}. \tag{22}\]
After calculating the components of the electromagnetic field tensor, we find the electromagnetic field invariant as
\[F_{e}=-2\varphi^{\prime 2}. \tag{23}\]
The nonvanishing components of the energy-momentum tensor of the nonlinear electrodynamics are found to be
\[T^{t}_{\ t} = -\frac{{\cal L}-2{\cal L}_{F}F_{e}}{2\kappa}=T^{r}_{\ r}\,,\] \[T^{\theta}_{\ \theta} = -\frac{{\cal L}}{2\kappa}=T^{\phi}_{\ \phi}. \tag{24}\]
By solving the field equations (2) with (8) and (24), we obtain the following expressions for the Lagrangian density of the nonlinear electrodynamics:
\[{\cal L} = -f^{\prime\prime}-\frac{2f^{\prime}}{r}-2\lambda\kappa R, \tag{25}\] \[{\cal L}_{F} = -\frac{r^{2}f^{\prime\prime}-2f+2}{2r^{2}F_{e}}. \tag{26}\]
It can be easily noticed from the expressions (25) and (26) that the Lagrangian density of the nonlinear electrodynamics is a function of the Rastall coupling parameter (\(\partial{\cal L}/\partial\lambda\neq 0\)), while its derivative with respect to the electromagnetic invariant, \({\cal L}_{F}\), is independent of the Rastall coupling parameter (\(\partial{\cal L}_{F}/\partial\lambda=0\)). At this point, we come across a problem, as the expressions (25) and (26) contradict each other. Mathematically, if the Lagrangian density is a function of the parameter \(\lambda\), then its derivative with respect to the electromagnetic field invariant \(F_{e}\), which is itself not a function of \(\lambda\) (\(\partial F_{e}/\partial\lambda=0\)), must also be a function of \(\lambda\), since \({\cal L}_{F}={\cal L}^{\prime}/F_{e}^{\prime}\), with the prime denoting the derivative with respect to the radial coordinate \(r\). However, this is not the case here. In short, if the differential equations (25) and (26) are solved separately for the metric function with a given Lagrangian density of the nonlinear electrodynamics, the first solution (the solution of (25)) contains a term with the Rastall parameter, while in the second one (the solution of (26)) no such term appears. Therefore, there are two cases to address this contradiction:
1. \(\lambda=0\), which corresponds to general relativity coupled to the nonlinear electrodynamics, as the energy-momentum tensor becomes conserved. The obtained solution can be considered a solution of Rastall gravity only because general relativity is a subset of Rastall gravity;
2. \(R=0\), which corresponds to general relativity coupled to the nonlinear electrodynamics. However, since \(R=0\) implies the solution to be Reissner-Nordstrom-like, which is a solution of general relativity coupled to the linear electrodynamics, this system does not provide any solution either in general relativity coupled to the nonlinear electrodynamics or in Rastall gravity coupled to the nonlinear electrodynamics.
Thus, in both cases the final solution is the one of general relativity and not of Rastall gravity with non-conserved energy-momentum tensor. Hence, we are convinced that one cannot obtain an electrically charged black hole solution in Rastall gravity with \(\lambda\neq 0\) and \(R\neq 0\) coupled to the nonlinear electrodynamics.
### Magnetically charged black holes
In the magnetically charged black hole solution, the four potential of the electromagnetic field (21) is given by
\[A_{\mu}=\{0,0,0,-Q_{m}\cos\theta\}. \tag{27}\]
In this case, the electromagnetic field invariant takes the following form:
\[F_{m}=\frac{2Q_{m}^{2}}{r^{4}}. \tag{28}\]
The independent components of the energy-momentum tensor of the nonlinear electrodynamics are given as
\[T^{t}_{\ t} = -\frac{\mathcal{L}}{2\kappa}=T^{r}_{\ r}\,,\] \[T^{\theta}_{\ \theta} = -\frac{\mathcal{L}-2\mathcal{L}_{F}F_{m}}{2\kappa}=T^{\phi}_{\ \phi}. \tag{29}\]
By solving the field equations (2) with (8) and (29), we obtain the following expressions for the Lagrangian density:
\[\mathcal{L} = \frac{2\left(1-rf^{\prime}-f\right)}{r^{2}}-2\lambda\kappa R, \tag{30}\] \[\mathcal{L}_{F} = \frac{r^{2}f^{\prime\prime}-2f+2}{2r^{2}F_{m}}. \tag{31}\]
Thus, the situation is the same as in the case of the electrically charged black hole solution in the previous subsection, and consequently the conclusion is also similar. Since the results in both the electrically and magnetically charged cases are the same, it is obvious that the conclusion remains unchanged in the dyonically charged case too. Therefore, we do not report this case here.
## 5 Discussion
In the previous sections, we have shown by solving the field equations (2) that the electromagnetic field cannot couple to Rastall gravity in black hole solutions. However, in that process, we have not used the equation for the non-conserved energy-momentum tensor in Rastall gravity (1) or the Maxwell equations of the nonlinear electrodynamics, which are given by
\[\nabla_{\mu}\left(\mathcal{L}_{F}F^{\mu r}\right)=0. \tag{32}\]
These two equations contradict each other, as equation (32) requires the covariant divergence of the right-hand side of the field equation (2) to vanish, while the covariant divergence of the left-hand side of (2) is nonzero via (1). To overcome this problem, we must consider either the \(\lambda=0\) or the \(R=0\) case. The former case represents the absence of the Rastall term in the equations of motion, i.e., it corresponds to general relativity with Einstein equations coupled to the nonlinear electrodynamics. Solutions of this type have been obtained and well studied in the vast literature, such as [26; 27; 28]. In the latter case, when the Ricci scalar (or the trace of the energy-momentum tensor) vanishes, there is only one solution, which corresponds to the Reissner-Nordstrom-like form
\[R=0\quad\Rightarrow\quad f(r)=1-\frac{2M}{r}+\frac{c}{r^{2}}. \tag{33}\]
with \(c\) being the integration constant. From this solution it turns out that, to have all field equations satisfied, the following equation must also give the same metric function as (33):
\[R_{\mu\nu}=\kappa T_{\mu\nu}. \tag{34}\]
Although the field equations have changed, they still do not represent the general Rastall gravity with nonzero \(\lambda\). The solution of the field equation (34) for the Maxwell electrodynamics is the Reissner-Nordstrom black hole solution, which is again independent of the Rastall parameter.
By solving equations (34) for the nonlinear electrodynamics (18) with metric function (33), we obtain that the Lagrangian density of the nonlinear electrodynamics is linearly related to the electromagnetic field invariant, \(F\), as
\[\mathcal{L}\propto F,\qquad\mathcal{L}_{F}\propto\text{constant}\, \tag{35}\]
whose solution is independent of the Rastall parameter. From this approach too, we have confirmed that Rastall gravity (with \(\lambda\neq 0\) and \(R\neq 0\)) cannot coexist with electrodynamics.
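Both steps of this argument, namely that \(R=0\) forces the metric function (33) and that (30) then yields a Lagrangian density linear in the invariant, can be verified symbolically. A minimal sympy sketch:

```python
import sympy as sp

r, M, c, Qm = sp.symbols('r M c Q_m', positive=True)
f = sp.Function('f')

# R = 0, with R as in (9), is an Euler-type ODE whose general solution is (33).
Req = -f(r).diff(r, 2) - 2*(2*r*f(r).diff(r) + f(r) - 1)/r**2
print(sp.dsolve(sp.Eq(Req, 0), f(r)))   # f(r) = 1 + C1/r + C2/r**2

# Inserting (33) into (30) with R = 0: the Lagrangian density scales as 1/r^4,
# hence linearly in the invariant F_m = 2*Q_m**2/r**4, as claimed in (35).
f33 = 1 - 2*M/r + c/r**2
L = sp.simplify(2*(1 - r*sp.diff(f33, r) - f33)/r**2)
print(L, sp.simplify(L/(2*Qm**2/r**4)))   # -> 2*c/r**4 and the constant c/Q_m**2
```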
## 6 Conclusion
Rastall gravity is a simple mathematical generalization of general relativity based on a non-minimal coupling between the geometry and the energy-momentum tensor of the matter field. The original physical motivation of this theory was that the local energy-momentum conservation law in flat spacetime does not necessarily imply its conservation in a curved spacetime with a matter field. This in turn implies several new interesting properties of the gravitational theory. One such property has been addressed in this paper. Although in much of the literature new black hole solutions have been obtained within the framework of Rastall gravity coupled to electrodynamics (sometimes with an additional second matter field), we have shown here that in this system one cannot obtain electromagnetically (electrically, magnetically, or dyonically) charged black hole solutions in Rastall gravity with non-conserved energy-momentum tensor (\(\lambda\neq 0\) and \(R\neq 0\)) coupled not only to the linear but also to the nonlinear electrodynamics. In other words, the electromagnetically charged black hole solutions cannot be obtained in the regime of Rastall gravity corresponding to the outer region of Fig. 1.
## Acknowledgement
BT acknowledges the support of the program "CZ.02.2.69/0.0/0.0/18-053/0017871: Podpora mezinarodni mobility na Slezske univerzite v Opave" at the Institute of Physics, Silesian University in Opava. The authors acknowledge the support of the Ministry of Innovative Development of the Republic of Uzbekistan, Grants No. F-FA-2021-432 and MRB-2021-527.
|
2306.12157 | Rigorous derivation of the Efimov effect in a simple model | We consider a system of three identical bosons in $\mathbb{R}^3$ with
two-body zero-range interactions and a three-body hard-core repulsion of a
given radius $a>0$. Using a quadratic form approach we prove that the
corresponding Hamiltonian is self-adjoint and bounded from below for any value
of $a$. In particular this means that the hard-core repulsion is sufficient to
prevent the fall to the center phenomenon found by Minlos and Faddeev in their
seminal work on the three-body problem in 1961. Furthermore, in the case of
infinite two-body scattering length, also known as unitary limit, we prove the
Efimov effect, \emph{i.e.}, we show that the Hamiltonian has an infinite
sequence of negative eigenvalues $E_n$ accumulating at zero and fulfilling the
asymptotic geometrical law $E_{n+1} / E_n \to e^{-\frac{2\pi}{s_0}}$
for $n\to +\infty$, where $s_0\approx 1.00624$. | Davide Fermi, Daniele Ferretti, Alessandro Teta | 2023-06-21T10:11:28Z | http://arxiv.org/abs/2306.12157v1 | # Rigorous derivation of the Efimov effect in a simple model
###### Abstract.
We consider a system of three identical bosons in \(\mathbb{R}^{3}\) with two-body zero-range interactions and a three-body hard-core repulsion of a given radius \(a>0\). Using a quadratic form approach we prove that the corresponding Hamiltonian is self-adjoint and bounded from below for any value of \(a\). In particular this means that the hard-core repulsion is sufficient to prevent the fall to the center phenomenon found by Minlos and Faddeev in their seminal work on the three-body problem in 1961. Furthermore, in the case of infinite two-body scattering length, also known as the unitary limit, we prove the Efimov effect, _i.e._, we show that the Hamiltonian has an infinite sequence of negative eigenvalues \(\mathsf{E}_{n}\) accumulating at zero and fulfilling the asymptotic geometrical law \(\mathsf{E}_{n+1}/\mathsf{E}_{n}\to\mathrm{e}^{-\frac{2\pi}{s_{0}}}\) for \(n\to+\infty\), where \(s_{0}\approx 1.00624\).
_Keywords: zero-range interactions, three-body Hamiltonians, Efimov effect. MSC 2020: 81Q10; 81Q15; 70F07; 46N50._
The authors acknowledge the support of: MUR grant Dipartimento di Eccellenza 2023-2027 of Dipartimento di Matematica, Politecnico di Milano, and Gran Sasso Science Institute, and Dipartimento di Matematica "G. Castelnuovo", "Sapienza" Universita di Roma; GNFM Gruppo Nazionale per la Fisica Matematica - INdAM; the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC CoG UniCoSM, grant agreement n. 724939).
## 1. Introduction
The Efimov effect is an interesting physical phenomenon occurring in three-particle quantum systems in dimension three ([10, 11], see also [25]). It consists in the appearance of an infinite sequence of negative eigenvalues \(\mathsf{E}_{n}\), with \(\mathsf{E}_{n}\to 0\) for \(n\to\infty\), of the three-body Hamiltonian if the two-particle subsystems do not have bound states and at least two of them exhibit a zero-energy resonance (or, equivalently, an infinite two-body scattering length). A remarkable feature of the effect is that the distribution of eigenvalues satisfies the universal geometrical law
\[\lim_{n\,\to\,+\infty}\frac{\mathsf{E}_{n+1}}{\mathsf{E}_{n}}\ =\ \mathrm{e}^{-\frac{2\pi}{s}}\,, \tag{1.1}\]
where the parameter \(s>0\) depends only on the mass ratios and, possibly, on the statistics of the particles.
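For three identical bosons, the value \(s_{0}\approx 1.00624\) quoted in the abstract is the positive root of a standard transcendental equation from the physics literature (not derived in the present paper). The following short numerical sketch, assuming NumPy/SciPy, recovers it together with the geometrical ratio \(\mathrm{e}^{-2\pi/s_{0}}\approx 1/515\):

```python
import numpy as np
from scipy.optimize import brentq

# Positive root of the standard unitary-limit equation for three identical
# bosons, s*cosh(pi*s/2) = (8/sqrt(3))*sinh(pi*s/6) (from the physics
# literature, not derived in the present paper).
f = lambda s: s*np.cosh(np.pi*s/2) - (8/np.sqrt(3))*np.sinh(np.pi*s/6)
s0 = brentq(f, 0.5, 1.5)
print(s0, np.exp(-2*np.pi/s0))   # s0 ~ 1.00624, e^{-2*pi/s0} ~ 1/515
```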
According to an intuitive physical picture, the three-particle bound states (or trimers) associated to the eigenvalues are determined by a long range, attractive effective interaction of kinetic origin, which is produced by the resonance condition and does not depend on the details of the two-body potentials. Roughly speaking, in a trimer the attraction between two particles is mediated by the third one, which is moving back and forth between the two. It should also be stressed that the Efimov effect disappears if the two-body potentials become more attractive causing the destruction of the zero-energy resonance. For interesting experimental evidence of Efimov quantum states see, _e.g._, [19].
The first mathematical result on the Efimov effect was obtained by Yafaev in 1974 [33]. He studied a symmetrized form of the Faddeev equations for the bound states of the three-particle Hamiltonian and proved the existence of an infinite number of negative eigenvalues. In 1993 Sobolev [29] used a slightly different symmetrization of the equations and proved the asymptotics
\[\lim_{z\,\to\,0^{-}}\frac{\mathsf{N}(z)}{|\mathrm{log}|z||}=\frac{s}{2\pi}\,, \tag{1.2}\]
where \(\mathsf{N}(z)\) denotes the number of eigenvalues smaller than \(z<0\). Note that (1.2) is consistent with the law (1.1). In the same year Tamura [31] obtained the same result under more general conditions on the two-body potentials. Other mathematical proofs of the effect were obtained by Ovchinnikov and Sigal in 1979 [27] and Tamura in 1991 [30] using a variational approach based on the Born-Oppenheimer approximation. For more recent results on the subject, see [5] (for the case of two identical fermions and a different particle), [15] (for a two-dimensional variant of the problem) and [16].
We notice that in the above mentioned mathematical results a rigorous derivation of the law (1.1) is lacking. It is also worth observing that, before the seminal works of Efimov, Minlos and Faddeev [23, 24] studied the problem of constructing the Hamiltonian for a system of three bosons with zero-range interactions in dimension three. It was known that such a Hamiltonian cannot be defined considering only pairwise zero-range interactions. Minlos and Faddeev showed that a self-adjoint Hamiltonian can be constructed by imposing suitable two-body boundary conditions at the coincidence hyperplanes, _i.e._, when the positions of two particles coincide, and also a three-body boundary condition at the triple-coincidence point, when the positions of all three particles coincide. They also proved that the Hamiltonian is unbounded from below, due to the presence of an infinite sequence of negative eigenvalues diverging to \(-\infty\). Such an instability property can be seen as a fall to the center phenomenon and it is due to the fact that the interaction becomes too strong and attractive when the three particles are very close to each other. A further interesting result of the analysis of Minlos and Faddeev, even if it is not explicitly emphasized, is the proof of the Efimov effect in the case of infinite two-body scattering length
(corresponding to the resonant case), with a rigorous derivation of the law (1.1). This in particular shows that the occurrence of the Efimov effect can be obtained also with zero-range interactions, the only crucial condition being the presence of an infinite two-body scattering length. Such a result is somewhat tainted by the fact that the Hamiltonian is unbounded from below and therefore unsatisfactory from the physical point of view.
Our aim is to present a mathematical proof of the Efimov effect and law (1.1) for a bounded from below Hamiltonian obtained by a slight modification of the Minlos and Faddeev Hamiltonian. We mention that the problem of constructing a lower bounded Hamiltonian for a three-body system with zero-range interactions has been recently approached in the literature (see, _e.g._, [4, 13, 12, 22]). The idea is to introduce an effective three-body force acting only when the three particles are close to each other, preventing the fall to the center phenomenon.
In the present work we consider a Hamiltonian with two-body zero-range interactions and another type of three-body interaction. More precisely, the effective three-body force is replaced by a three-body hard-core repulsion. We shall prove that such a Hamiltonian is self-adjoint and bounded from below and then prove the Efimov effect, _i.e._, the existence of an infinite sequence of negative eigenvalues satisfying (1.1) when the two-body scattering length is infinite.
Our work can be viewed as an attempt to make rigorous the original physical argument of Efimov. Indeed, Efimov takes into account three identical bosons and his approach is based on the replacement of the two-body potential with a boundary condition, which is essentially equivalent to consider a two-body zero-range interaction. Then, he introduces hyper-spherical coordinates and shows that if the two-body scattering length is infinite then the problem becomes separable and in the equation for the hyper-radius \(\mathsf{R}\) the long range, attractive effective potential \(-(\mathsf{s}_{0}^{2}+1/4)/\mathsf{R}^{2}\) appears. The behavior for small \(\mathsf{R}\) of this potential is too singular and an extra boundary condition at short distance must be imposed. After this ad hoc procedure, he obtains the infinite sequence of negative eigenvalues satisfying the law (1.1) as a consequence of the large \(\mathsf{R}\) behavior of the effective potential.
The self-adjoint and bounded from below Hamiltonian constructed in this paper can be considered as the rigorous counterpart of the ad hoc regularization scheme mentioned above. Furthermore, we show that the eigenvalues and eigenvectors found in a formal way in the physical literature are in fact eigenvalues and eigenvectors of our Hamiltonian in a rigorous sense and, accordingly, we obtain a mathematical proof of (1.1).
Let us introduce some notation. Here and in the sequel: \(\mathsf{x}_{1},\mathsf{x}_{2},\mathsf{x}_{3}\in\mathbb{R}^{3}\) are the coordinates of the three bosons in a fixed inertial reference frame; the units of measure employed are such that \(\hbar=\mathsf{m}_{1}=\mathsf{m}_{2}=\mathsf{m}_{3}=1\). It is convenient to introduce the system of Jacobi coordinates \(\mathbf{r}_{\mathrm{cm}},\mathbf{x},\mathbf{y}\in\mathbb{R}^{3}\) defined as
\[\mathbf{r}_{\mathrm{cm}}:=\frac{\mathbf{x}_{1}+\mathbf{x}_{2}+\mathbf{x}_{3} }{3}\,,\qquad\mathbf{x}:=\mathbf{x}_{2}-\mathbf{x}_{1}\,,\qquad\mathbf{y}:= \frac{2}{\sqrt{3}}\left(\mathbf{x}_{3}-\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{ 2}\right)\,.\]
Correspondingly, we have \(\mathbf{x}_{1}=\mathbf{r}_{\mathrm{cm}}-\frac{1}{2}\,\mathbf{x}-\frac{1}{2 \sqrt{3}}\,\mathbf{y}\), \(\mathbf{x}_{2}=\mathbf{r}_{\mathrm{cm}}+\frac{1}{2}\,\mathbf{x}-\frac{1}{2 \sqrt{3}}\,\mathbf{y}\), \(\mathbf{x}_{3}=\mathbf{r}_{\mathrm{cm}}+\frac{1}{\sqrt{3}}\,\mathbf{y}\). The transpositions \(\sigma_{\mathrm{ij}}\) (\(\mathsf{i},\mathsf{j}\in\{1,2,3\}\)) exchanging the \(\mathsf{i}^{\mathrm{th}}\) and the \(\mathsf{j}^{\mathrm{th}}\) particles are represented by the following changes of coordinates \(\sigma_{12}:(\mathbf{r}_{\mathrm{cm}},\mathbf{x},\mathbf{y})\ \to(\mathbf{r}_{\mathrm{cm}},-\mathbf{x}, \mathbf{y})\,,\ \sigma_{23}:(\mathbf{r}_{\mathrm{cm}},\mathbf{x},\mathbf{y})\ \to(\mathbf{r}_{\mathrm{cm}},\frac{1}{2}\,\mathbf{x}+\frac{\sqrt{3}}{2}\, \mathbf{y},\frac{\sqrt{3}}{2}\,\mathbf{x}-\frac{1}{2}\,\mathbf{y})\), and \(\sigma_{31}:(\mathbf{r}_{\mathrm{cm}},\mathbf{x},\mathbf{y})\ \to(\mathbf{r}_{\mathrm{cm}},\frac{1}{2}\,\mathbf{x}-\frac{\sqrt{3}}{2}\, \mathbf{y},-\frac{\sqrt{3}}{2}\,\mathbf{x}-\frac{1}{2}\,\mathbf{y})\).
Upon factorizing the center of mass coordinate \(\mathbf{r}_{\mathrm{cm}}\) (_i.e._, adopting the center-of-mass reference frame) the heuristic Hamiltonian describing our three-boson system is expressed by
\[\mathsf{H}=-\Delta_{\mathbf{x}}-\Delta_{\mathbf{y}}+V_{\mathrm{a}}^{\mathrm{ hc}}(\mathbf{x},\mathbf{y})+\delta(\mathbf{x})+\delta\Big{(}\tfrac{1}{2}\, \mathbf{x}-\tfrac{\sqrt{3}}{2}\,\mathbf{y}\Big{)}+\delta\Big{(}\tfrac{1}{2} \,\mathbf{x}+\tfrac{\sqrt{3}}{2}\,\mathbf{y}\Big{)}\,, \tag{1.3}\]
where, at a formal level, \(V_{a}^{\rm hc}\) indicates a hard-core potential corresponding to a Dirichlet boundary condition on the hyper-sphere of radius \(a\) in \(\mathbb{R}^{6}\), centered at \((\mathbf{x},\mathbf{y})=(\mathsf{0},\mathsf{0})\), and the "\(\delta\)-potentials" represent the zero-range interactions between the pairs of particles \((1,2)\), \((2,3)\) and \((3,1)\), respectively. Notice that
\[({\bf x}_{1}-{\bf x}_{2})^{2}+({\bf x}_{2}-{\bf x}_{3})^{2}+({\bf x}_{3}-{\bf x }_{1})^{2}=\tfrac{3}{2}\,(|{\bf x}|^{2}+|{\bf y}|^{2}),\]
therefore the hard-core potential \(V_{a}^{\rm hc}\) plays the role of preventing the three bosons from reaching the triple-coincidence point \(\mathbf{x}_{1}=\mathbf{x}_{2}=\mathbf{x}_{3}\), avoiding the above mentioned fall to the center phenomenon.
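The identity above is elementary but easy to cross-check symbolically; a minimal sympy sketch (component names are arbitrary):

```python
import sympy as sp

x1 = sp.Matrix(sp.symbols('a1:4', real=True))   # position of particle 1
x2 = sp.Matrix(sp.symbols('b1:4', real=True))   # position of particle 2
x3 = sp.Matrix(sp.symbols('c1:4', real=True))   # position of particle 3

# Jacobi coordinates as defined above (centre-of-mass frame).
x = x2 - x1
y = (2/sp.sqrt(3))*(x3 - (x1 + x2)/2)

lhs = sum((u - v).dot(u - v) for u, v in ((x1, x2), (x2, x3), (x3, x1)))
rhs = sp.Rational(3, 2)*(x.dot(x) + y.dot(y))
print(sp.expand(lhs - rhs))   # -> 0
```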
The bosonic Hilbert space of states for our system is
\[{\rm L}_{s}^{2}(\Omega_{a}):=\left\{\psi\in{\rm L}^{2}(\Omega_{a})\,\Big{|}\, \psi({\bf x},{\bf y})=\psi(-{\bf x},{\bf y})=\psi\Big{(}\tfrac{1}{2}\,{\bf x} +\tfrac{\sqrt{3}}{2}\,{\bf y},\tfrac{\sqrt{3}}{2}\,{\bf x}-\tfrac{1}{2}\,{ \bf y}\Big{)}\right\}, \tag{1.4}\]
where
\[\Omega_{a}:=\left\{({\bf x},{\bf y})\in\mathbb{R}^{6}\,\,\Big{|}\,\,|{\bf x}|^ {2}+|{\bf y}|^{2}>a^{2}\,\right\}. \tag{1.5}\]
Definition (1.4) encodes the symmetry by exchange given by \(\sigma_{12}\) and \(\sigma_{23}\) which clearly imply also the condition corresponding to the exchange performed by \(\sigma_{31}\), _i.e._\(\psi({\bf x},{\bf y})=\psi\Big{(}\tfrac{1}{2}\,{\bf x}-\tfrac{\sqrt{3}}{2}\,{ \bf y},-\tfrac{\sqrt{3}}{2}\,{\bf x}-\tfrac{1}{2}\,{\bf y}\Big{)}\). In the following we shall construct the rigorous counterpart of (1.3) as a self-adjoint and bounded from below operator in \({\rm L}_{s}^{2}(\Omega_{a})\). The first step is to interpret the formal unperturbed operator \(\,-\Delta_{\bf x}-\Delta_{\bf y}+V_{a}^{\rm hc}({\bf x},{\bf y})\,\) as the Dirichlet Laplacian in \(\Omega_{a}\), namely
\[\operatorname{dom}\!\left(H_{D}\right)={\rm L}_{s}^{2}(\Omega_{a})\cap H_{0}^ {1}(\Omega_{a})\cap H^{2}(\Omega_{a})\,,\qquad H_{D}\psi=(-\Delta_{\bf x}- \Delta_{\bf y})\psi\,. \tag{1.6}\]
It is well known that (1.6) is the self-adjoint and positive operator uniquely defined by the positive quadratic form
\[\operatorname{dom}\!\left(Q_{D}\right):={\rm L}_{s}^{2}(\Omega_{a})\cap H_{0} ^{1}(\Omega_{a})\,,\qquad Q_{D}[\psi]:=\int_{\Omega_{a}}\!\!\!{\rm d}{\bf x}\, \,{\rm d}{\bf y}\,\Big{(}|\nabla_{\bf x}\psi|^{2}+|\nabla_{\bf y}\psi|^{2} \Big{)}\,. \tag{1.7}\]
The second, and more relevant, step is to define a self-adjoint perturbation of the Dirichlet Laplacian \(H_{D}\) supported by the coincidence hyperplanes
\[\pi_{12}=\left\{({\bf x},{\bf y})\in\Omega_{a}\,\Big{|}\,{\bf x}={\sf 0} \right\},\quad\quad\pi_{23}=\left\{({\bf x},{\bf y})\in\Omega_{a}\,\Big{|}\,{ \bf x}=\sqrt{3}\,{\bf y}\right\},\quad\quad\pi_{31}=\left\{({\bf x},{\bf y}) \in\Omega_{a}\,\Big{|}\,{\bf x}=-\sqrt{3}\,{\bf y}\right\}.\]
Following the analogy with the one particle case [1], a natural attempt is to construct an operator which, roughly speaking, acts as the Dirichlet Laplacian \(H_{D}\) outside the hyperplanes and is characterized by a (singular) boundary condition on each hyperplane. Specifically, given \(\alpha\in\mathbb{R}\), we demand that
\[\psi({\bf x},{\bf y})=\frac{\xi({\bf y})}{4\pi|{\bf x}|}+\alpha\,\xi({\bf y})+{\sf o}(1)\,,\qquad\text{for fixed}\,\,\,{\bf y}\in{\rm B}_{a}^{\rm c}\,\,\,\text{and}\,\,\,|{\bf x}|\,\,\to\,{\sf 0}. \tag{1.8}\]
where \({\rm B}_{a}^{\rm c}:=\mathbb{R}^{3}\setminus{\rm B}_{a}\) and \({\rm B}_{a}=\left\{{\bf y}\in\mathbb{R}^{3}\,\big{|}\,|{\bf y}|<a\right\}\). The above condition describes the interaction between particles 1 and 2. Due to the bosonic symmetry requirements, it also accounts for the interactions between the other two admissible pairs of particles. We recall that \(-\alpha^{-1}\) has the physical meaning of two-body scattering length. The strategy for the mathematical proof is based on a quadratic form approach, similar to the one adopted for the construction of singular perturbations of a given self-adjoint and positive operator in analogous contexts (see, _e.g._, [32, 8, 4]).
More precisely, starting from the formal Hamiltonian (1.3) characterized by the boundary condition (1.8), we construct the corresponding quadratic form \(Q_{D,\alpha}\) and we formulate our main results (section 2).
The main technical part of the paper is the proof that \(Q_{D,\alpha}\) is closed and bounded from below in \({\rm L}_{s}^{2}(\Omega_{a})\) (section 3).
Then, we define the Hamiltonian \(H_{D,\alpha}\) of our system as the unique self-adjoint and bounded from below operator associated to \(Q_{D,\alpha}\) and we give an explicit characterization of the domain and action of \(H_{D,\alpha}\), providing also an expression for the associated resolvent operator (section 4).
Finally we show that for \(\alpha=0\) the Efimov effect occurs and the law (1.1) holds (section 5).
In Appendix A we collect some useful representation formulas for the integral kernel of the resolvent of the free Laplacian in \(\mathbb{R}^{6}\) and for the integral kernel of the resolvent of the Dirichlet Laplacian in \(\Omega_{a}\).
In Appendix B we recall the solution of the eigenvalue problem in the case \(\alpha=0\) following the treatment usually given in the physical literature.
## 2. Formulation of the main results
In this section we first give a heuristic argument to derive the quadratic form associated to the formal Hamiltonian (1.3) and then we formulate our main results.
Let us introduce the potentials \(G^{\lambda}_{ij}\xi_{ij}\) (\(\lambda>0\)) produced by a suitable charge \(\xi_{ij}\) with support concentrated on the hyperplane \(\pi_{ij}\):
\[\big(G^{\lambda}_{ij}\xi_{ij}\big)(\mathbf{X})=\int_{\Omega_{a}}\!\mathrm{d}\mathbf{X}^{\prime}\;\mathrm{R}^{\lambda}_{D}\big(\mathbf{X};\mathbf{X}^{\prime}\big)\,\big(\xi_{ij}\,\delta_{\pi_{ij}}\big)(\mathbf{X}^{\prime})\,, \tag{2.1}\]
where \(\mathrm{R}^{\lambda}_{D}\) denotes the integral kernel of the resolvent \((H_{D}+\lambda)^{-1}\) and \(\delta_{\pi_{ij}}\) is the Dirac distribution supported on the hyperplane \(\pi_{ij}\).
Taking this into account and noting that
\[\int_{\mathbb{R}^{3}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{x},\mathbf{y};\mathbf{x}',\mathbf{y}'\big)=\frac{1}{(2\pi)^{6}}\int_{\mathbb{R}^{3}}\!\mathrm{d}\mathbf{y}'\int_{\mathbb{R}^{6}}\!\mathrm{d}\mathbf{K}\;\frac{\mathrm{e}^{\mathrm{i}\mathbf{K}\cdot(\mathbf{X}-\mathbf{X}')}}{|\mathbf{K}|^{2}+\lambda}=\frac{1}{(2\pi)^{3}}\int_{\mathbb{R}^{3}}\!\mathrm{d}\mathbf{h}\;\frac{\mathrm{e}^{\mathrm{i}\mathbf{h}\cdot(\mathbf{x}-\mathbf{x}')}}{|\mathbf{h}|^{2}+\lambda}=\frac{\mathrm{e}^{-\sqrt{\lambda}\,|\mathbf{x}-\mathbf{x}'|}}{4\pi\,|\mathbf{x}-\mathbf{x}'|}\,,\]
where in the second equality the integration over \(\mathbf{y}'\) produces a Dirac delta in the momentum conjugate to \(\mathbf{y}\),
from Eq. (2.1) we infer the following:
\[\big(\mathrm{G}^{\lambda}_{12}\xi_{12}\big)(\mathbf{X})=\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{\mathrm{D}}\big(\mathbf{x},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\xi(\mathbf{y}')\]
\[=\xi(\mathbf{y})\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{x},\mathbf{y};\mathbf{0},\mathbf{y}'\big)+\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{x},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\big[\xi(\mathbf{y}')-\xi(\mathbf{y})\big]+\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{g}^{\lambda}\big(\mathbf{x},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\xi(\mathbf{y}')\,.\]
Notice that, due to the singularity for \(|\mathbf{x}|\,\to\,0\), the potential \(\mathrm{G}^{\lambda}_{12}\xi_{12}\) does not belong to \(\mathrm{H}^{1}(\Omega_{a})\). Of course, the same is true for \(\mathrm{G}^{\lambda}_{23}\xi_{23}\) and \(\mathrm{G}^{\lambda}_{31}\xi_{31}\), due to the analogous singularities for \(|\mathbf{x}-\sqrt{3}\mathbf{y}|\,\to\,0\) and for \(|\mathbf{x}+\sqrt{3}\mathbf{y}|\,\to\,0\), respectively. Moreover, such a singular behavior is exactly of the form appearing in (1.8). This fact suggests writing a generic element of the operator domain as
\[\psi=\varphi^{\lambda}+\mathrm{G}^{\lambda}\xi,\qquad\text{with}\,\,\varphi^{ \lambda}\in\mathrm{dom}\big{(}\mathrm{H}_{\mathrm{D}}\big{)}\,.\]
In view of the previous arguments, Eq. (1.8) is equivalent to
\[\varphi^{\lambda}(\mathbf{0},\mathbf{y})=\Big(\alpha+\tfrac{\sqrt{\lambda}}{4\pi}\Big)\xi(\mathbf{y})+\xi(\mathbf{y})\int_{\mathrm{B}_{a}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)-\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\big[\xi(\mathbf{y}')-\xi(\mathbf{y})\big]\]
\[-\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{g}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\xi(\mathbf{y}')-\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\Big[\mathrm{R}^{\lambda}_{\mathrm{D}}\Big(\mathbf{0},\mathbf{y};\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\Big)+\mathrm{R}^{\lambda}_{\mathrm{D}}\Big(\mathbf{0},\mathbf{y};-\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\Big)\Big]\xi(\mathbf{y}')\,. \tag{2.7}\]
We now proceed to compute the quadratic form associated to the formal Hamiltonian \(\mathrm{H}\) introduced in (1.3). For functions of the form \(\psi=\varphi^{\lambda}+\mathrm{G}^{\lambda}\xi\), taking into account that \((\mathrm{H}_{\mathrm{D}}+\lambda)\mathrm{G}^{\lambda}\xi=0\) outside the two-particle coincidence hyperplanes, a heuristic computation yields
\[\big{\langle}\psi\big{|}(\mathrm{H}+\lambda)\psi\big{\rangle} =\lim_{\varepsilon\,\to\,0^{+}}\int_{\Omega_{\mathrm{a}, \epsilon}}\!\!\mathrm{d}\boldsymbol{X}\,\,\overline{\psi}\,\big{(}\mathrm{H}_{ \mathrm{D}}+\lambda\big{)}\psi=\lim_{\varepsilon\,\to\,0^{+}}\int_{\Omega_{ \mathrm{a},\epsilon}}\!\!\mathrm{d}\boldsymbol{X}\,\,\overline{(\varphi^{\lambda}+ \mathrm{G}^{\lambda}\xi)}\,\,\big{(}\mathrm{H}_{\mathrm{D}}+\lambda\big{)}( \varphi^{\lambda}+\mathrm{G}^{\lambda}\xi)\] \[=\big{\langle}\varphi^{\lambda}\big{|}(\mathrm{H}_{\mathrm{D}}+ \lambda)\varphi^{\lambda}\big{\rangle}+\lim_{\varepsilon\,\to\,0^{+}}\int_{ \Omega_{\mathrm{a},\epsilon}}\!\!\mathrm{d}\boldsymbol{X}\,\,\overline{(\mathrm{G}^{ \lambda}\xi)}\,\,\big{(}\mathrm{H}_{\mathrm{D}}+\lambda\big{)}\varphi^{\lambda}\,, \tag{2.8}\]
where \(\Omega_{a,\varepsilon}=\Omega_{a}\,\cap\,\{|\mathbf{x}|>\varepsilon\}\,\cap\,\{|\mathbf{x}-\sqrt{3}\mathbf{y}|>\varepsilon\}\,\cap\,\{|\mathbf{x}+\sqrt{3}\mathbf{y}|>\varepsilon\}\). By means of Eqs. (2.4), (2.5), (2.6) and of the bosonic symmetry of \(\varphi^{\lambda}\) one has
\[\lim_{\varepsilon\,\to\,0^{+}}\int_{\Omega_{a,\varepsilon}}\!\mathrm{d}\mathbf{X}\;\overline{(\mathrm{G}^{\lambda}\xi)}\,\big(\mathrm{H}_{\mathrm{D}}+\lambda\big)\varphi^{\lambda}=3\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\;\overline{\xi(\mathbf{y})}\;\varphi^{\lambda}(\mathbf{0},\mathbf{y})=3\,\Phi^{\lambda}_{\alpha}[\xi]\,,\]
where the last equality is obtained inserting the boundary condition (2.7) and where we have introduced the quadratic form in \(\mathrm{L}^{2}(\mathrm{B}_{a}^{c})\)
\[\Phi^{\lambda}_{\alpha}[\xi]:=\Big(\alpha+\tfrac{\sqrt{\lambda}}{4\pi}\Big)\left\|\xi\right\|^{2}_{\mathrm{L}^{2}}+\Phi^{\lambda}_{1}[\xi]+\Phi^{\lambda}_{2}[\xi]+\Phi^{\lambda}_{3}[\xi]+\Phi^{\lambda}_{4}[\xi]\,, \tag{2.11}\]
\[\mathrm{dom}\big(\Phi^{\lambda}_{\alpha}\big):=\mathrm{H}^{1/2}(\mathrm{B}_{a}^{c})\cap\mathrm{L}^{2}_{w}(\mathrm{B}_{a}^{c})\,, \tag{2.12}\]
with
\[\Phi_{1}^{\lambda}[\xi]:=\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\Big(\int_{\mathrm{B}_{a}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\Big)\,|\xi(\mathbf{y})|^{2}\,, \tag{2.13}\]
\[\Phi_{2}^{\lambda}[\xi]:=\frac{1}{2}\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\big|\xi(\mathbf{y})-\xi(\mathbf{y}')\big|^{2}\,, \tag{2.14}\]
\[\Phi_{3}^{\lambda}[\xi]:=-\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{g}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\overline{\xi(\mathbf{y})}\,\xi(\mathbf{y}')\,, \tag{2.15}\]
\[\Phi_{4}^{\lambda}[\xi]:=-\,2\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{\mathrm{D}}^{\lambda}\Big(\mathbf{0},\mathbf{y};\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\Big)\,\overline{\xi(\mathbf{y})}\,\xi(\mathbf{y}')\,. \tag{2.16}\]
Here, for a fixed \(b>a\), we have set
\[w(\mathbf{y}):=\frac{1}{|\mathbf{y}|-a}\ \ \text{for}\ a<|\mathbf{y}|<b\,,\qquad w(\mathbf{y}):=1\ \ \text{for}\ |\mathbf{y}|\geqslant b\,, \tag{2.17}\]
\[\left\|\xi\right\|^{2}_{\mathrm{L}^{2}_{w}}:=\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\;w(\mathbf{y})\,|\xi(\mathbf{y})|^{2}\,, \tag{2.18}\]
\[\left\|\xi\right\|^{2}_{\mathrm{H}^{1/2}}:=\left\|\xi\right\|^{2}_{\mathrm{L}^{2}}+\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\;\frac{|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}}{|\mathbf{y}-\mathbf{y}'|^{4}}\,. \tag{2.19}\]
We are now in a position to define the quadratic form in \(\mathrm{L}_{\mathrm{s}}^{2}(\Omega_{a})\)
\[\mathrm{Q}_{\mathrm{D},\alpha}[\uppsi]:=\mathrm{Q}_{\mathrm{D}}\left[ \varphi^{\lambda}\right]+\lambda\left\|\varphi^{\lambda}\right\|_{\mathrm{L}^{2 }}^{2}-\lambda\left\|\psi\right\|_{\mathrm{L}^{2}}^{2}+3\,\Phi^{\lambda}_{ \alpha}[\xi]\,, \tag{2.20}\] \[\mathrm{dom}\big{(}\mathrm{Q}_{\mathrm{D},\alpha}\big{)}:=\left\{ \uppsi=\varphi^{\lambda}+\mathrm{G}^{\lambda}\xi\ \Big{|}\ \varphi^{\lambda}\!\!\in\! \mathrm{L}_{\mathrm{s}}^{2}(\Omega_{\mathrm{a}})\cap\mathrm{H}_{0}^{1}(\Omega_{ \mathrm{a}})\,,\ \ \xi\!\!\in\!\mathrm{dom}\big{(}\Phi^{\lambda}_{\alpha}\big{)}\,,\ \ \lambda\!\!>\!\!0\right\}\,, \tag{2.21}\]
where \(\mathrm{Q}_{\mathrm{D}}\) is the Dirichlet quadratic form defined in (1.7).
We stress that the definition of the quadratic forms (2.11), (2.12) and (2.20), (2.21) is the starting point of our rigorous analysis.
Before proceeding, let us mention an equivalent characterization of the potential \(\mathrm{G}^{\lambda}\xi\). This will play a role in some of the proofs presented in the following sections. Let \(\tau_{ij}:\mathrm{dom}\big(\mathrm{H}_{\mathrm{D}}\big)\subset\mathrm{H}^{2}(\Omega_{a})\ \to\ \mathrm{H}^{1/2}(\pi_{ij})\) be the Sobolev trace operator defined as the unique bounded extension of the evaluation map \(\tau_{ij}\varphi:=\varphi\!\restriction\!\pi_{ij}\) acting on smooth functions \(\varphi\in\mathrm{C}_{\mathrm{c}}^{\infty}(\Omega_{a})\). We set
\[\boldsymbol{\tau}:=\tau_{12}\oplus\tau_{23}\oplus\tau_{31}:\mathrm{dom}\big{(} \mathrm{H}_{\mathrm{D}}\big{)}\ \to\ \mathrm{L}^{2}(\pi_{12})\oplus\mathrm{L}^{2}(\pi_{23})\oplus\mathrm{L}^{2}( \pi_{31})\,.\]
It is crucial to notice that the range of \(\boldsymbol{\tau}\) actually keeps track of the bosonic symmetry encoded in \(\mathrm{dom}\big(\mathrm{H}_{\mathrm{D}}\big)\subset\mathrm{L}_{\mathrm{s}}^{2}(\Omega_{a})\). Taking this into account, noting that \(\mathrm{R}_{\mathrm{D}}^{\lambda}:\mathrm{L}_{\mathrm{s}}^{2}(\Omega_{a})\ \to\ \mathrm{dom}\big(\mathrm{H}_{\mathrm{D}}\big)\) and using the natural embedding \(\mathrm{H}^{1/2}(\pi_{ij})\hookrightarrow\mathrm{L}^{2}(\pi_{ij})\), it is easy to check that
\[\mathrm{G}^{\lambda}=\big{(}\boldsymbol{\tau}\,\mathrm{R}_{\mathrm{D}}^{ \lambda}\big{)}^{*}\ :\ \mathrm{dom}\big{(}\mathrm{G}^{\lambda}\big{)}\subset \mathrm{L}^{2}(\pi_{12})\oplus\mathrm{L}^{2}(\pi_{23})\oplus\mathrm{L}^{2}( \pi_{31})\ \to\ \mathrm{L}_{\mathrm{s}}^{2}(\Omega_{\mathrm{a}})\,, \tag{2.22}\]
where, in compliance with (2.6) and (2.12), we put
\[\mathrm{dom}\big{(}\mathrm{G}^{\lambda}\big{)}:=\left\{\xi=(\xi_{12},\xi_{23},\xi_{31})\ \Big{|}\ \xi_{12}(\mathbf{y})=\xi(\mathbf{y})\,,\ \xi_{23}(\mathbf{y})=\xi_{31}(\mathbf{y})=\xi(-2 \mathbf{y})\,,\ \ \xi\!\in\!\mathrm{dom}\big{(}\Phi^{\lambda}_{\alpha}\big{)}\right\}\,.\]
Accordingly, with a slight abuse of notation we can rephrase Eq. (2.5) as
\[\mathrm{G}^{\lambda}\boldsymbol{\xi}\equiv\mathrm{G}^{\lambda}\xi\,,\qquad\text{for all}\ \ \boldsymbol{\xi}=(\xi_{12},\xi_{23},\xi_{31})\in\mathrm{dom}\big(\mathrm{G}^{\lambda}\big)\,.\]
In the sequel we shall refer especially to the bounded operator
\[\tau\equiv\tau_{12}:\mathrm{dom}\big{(}\mathrm{H}_{\mathrm{D}}\big{)}\ \to\ \mathrm{L}^{2}(\pi_{12})\equiv\mathrm{L}^{2}(\mathrm{B}_{\mathrm{a}}^{\mathrm{c} })\,. \tag{2.23}\]
In the rest of this section we formulate the main results of the paper.
**Theorem 2.1** (Closedness and lower-boundedness of \(\mathrm{Q}_{\mathrm{D},\alpha}\)).:
_(i) The quadratic form_ \(\Phi^{\lambda}_{\alpha}\) _in_ \(\mathrm{L}^{2}(\mathrm{B}_{\mathrm{a}}^{\mathrm{c}})\) _defined by (_2.11_), (_2.12_) is closed and bounded from below for any_ \(\lambda>0\)_. More precisely, there exists a constant_ \(\mathrm{B}>0\) _such that_
\[\big|\Phi^{\lambda}_{\alpha}[\xi]\big|\leqslant\mathrm{B}\left(\sqrt{\lambda}\left\|\xi\right\|_{\mathrm{L}^{2}}^{2}+\left\|\xi\right\|_{\mathrm{L}^{2}_{w}}^{2}+\left\|\xi\right\|_{\mathrm{H}^{1/2}}^{2}\right),\qquad\text{for any}\ \,\lambda>0\,. \tag{2.24}\]
_Furthermore, there exist_ \(\lambda_{0}>0\)_,_ \(\Lambda_{0}>0\) _and_ \(\Lambda(\lambda)>0\)_, with_ \(\Lambda(\lambda)=\mathrm{o}(1)\) _for_ \(\lambda\ \to+\infty\)_, such that_
\[\Phi^{\lambda}_{\alpha}[\xi]\geqslant\Lambda_{0}\sqrt{\lambda}\left\|\xi\right\|_{\mathrm{L}^{2}}^{2}+\Lambda(\lambda)\left(\left\|\xi\right\|_{\mathrm{L}^{2}_{w}}^{2}+\left\|\xi\right\|_{\mathrm{H}^{1/2}}^{2}\right),\qquad\text{for any}\ \,\lambda>\lambda_{0}\,. \tag{2.25}\]
_(ii) The quadratic form_ \(\mathrm{Q}_{\mathrm{D},\alpha}\) _in_ \(\mathrm{L}_{\mathrm{s}}^{2}(\Omega_{\mathrm{a}})\) _defined by (_2.20_), (_2.21_) is independent of_ \(\lambda\)_, closed and lower-bounded._
Let us define \(\Gamma^{\lambda}_{\alpha}\), \(\lambda>0\), as the unique self-adjoint and lower-bounded operator in \(\mathrm{L}^{2}(\mathrm{B}_{\mathrm{a}}^{\mathrm{c}})\) associated with the quadratic form \(\Phi^{\lambda}_{\alpha}\). Notice that \(\Gamma^{\lambda}_{\alpha}\) is positive and has a bounded inverse whenever \(\lambda>\lambda_{0}\) (with \(\lambda_{0}\) as in Theorem 2.1). We also recall that \(\tau\) is the Sobolev trace operator on the coincidence hyperplane \(\pi_{12}\), see (2.23).
**Theorem 2.2** (Characterization of the Hamiltonian).:
_The self-adjoint and bounded from below operator \(H_{D,\alpha}\) in \(L^{2}_{s}(\Omega_{\alpha})\) uniquely associated to the quadratic form \(Q_{D,\alpha}\) is characterized as follows:_
\[\mathrm{dom}\big{(}H_{D,\alpha}):=\Big{\{}\psi=\varphi^{\lambda}+ G^{\lambda}\xi\in\mathrm{dom}\big{(}Q_{D,\alpha}\big{)}\,\,\Big{|}\, \varphi^{\lambda}\!\in\!\mathrm{dom}\big{(}H_{D}\big{)}\,,\,\,\xi\in\mathrm{ dom}\big{(}\Gamma^{\lambda}_{\alpha}\big{)}\,,\,\tau\varphi^{\lambda}=\Gamma^{\lambda}_{ \alpha}\xi\,,\,\,\lambda\!>\!0\,\Big{\}}\,,\] \[(H_{D,\alpha}+\lambda)\psi=(H_{D}+\lambda)\varphi^{\lambda}\,. \tag{2.26}\]
_For any \(\lambda>\lambda_{0}\) (with \(\lambda_{0}\) as in Theorem 2.1), the associated resolvent operator \(R^{\lambda}_{D,\alpha}:=(H_{D,\alpha}+\lambda)^{-1}\) is given by the Krein formula_
\[R^{\lambda}_{D,\alpha}=R^{\lambda}_{D}+G^{\lambda}\left(\Gamma^{\lambda}_{ \alpha}\right)^{-1}\tau R^{\lambda}_{D}\,. \tag{2.27}\]
**Remark 2.3**.: As a consequence of the previous Theorem, one immediately sees that \(\Psi_{\mu}\in\mathrm{dom}\big{(}H_{D,\alpha}\big{)}\) is an eigenvector of \(H_{D,\alpha}\) associated to the negative eigenvalue \(-\mu\), with \(\mu>0\), if and only if
\[\Psi_{\mu}=G^{\mu}\xi_{\mu}\,,\qquad\xi_{\mu}\in\mathrm{dom}\left(\Gamma^{ \mu}_{\alpha}\right)\,,\qquad\Gamma^{\mu}_{\alpha}\xi_{\mu}=0\,. \tag{2.28}\]
The last result concerns the proof of the Efimov effect in the case of infinite two-body scattering length, _i.e._, when \(\alpha=0\), also known as the unitary limit.
**Theorem 2.4** (Efimov effect).:
_The Hamiltonian \(H_{D,0}\) has an infinite sequence of negative eigenvalues \(E_{n}\) accumulating at zero and fulfilling_
\[E_{n}=-\,\frac{4}{a^{2}}\,\mathrm{e}^{\frac{2}{s_{0}}\,(\theta-n\pi)}\Big(1+o(1)\Big)\,,\qquad\text{for}\,\,\,n\,\,\to\,+\infty \tag{2.29}\]
_where \(\theta=\arg\Gamma(1+\mathrm{i}s_{0})\) and \(s_{0}\approx 1.00624\) is the unique positive solution of the equation_
\[-s\cosh\big{(}\tfrac{\pi}{2}\,s\big{)}+\tfrac{8}{\sqrt{3}}\,\sinh\big{(} \tfrac{\pi}{6}\,s\big{)}=0\,. \tag{2.30}\]
_In particular, the geometrical law (1.1) holds. Furthermore, the eigenvector associated to \(E_{n}\) is given by_
\[\Psi_{n}\big{(}\mathbf{x},\mathbf{y}\big{)}=\psi_{n}\big{(}|\mathbf{x}|,| \mathbf{y}|\big{)}+\psi_{n}\Big{(}\Big{|}\!-\!\tfrac{1}{2}\,\mathbf{x}+\tfrac {\sqrt{3}}{2}\,\mathbf{y}\Big{|}\,,\Big{|}\tfrac{\sqrt{3}}{2}\,\mathbf{x}+ \tfrac{1}{2}\,\mathbf{y}\Big{|}\Big{)}+\psi_{n}\Big{(}\Big{|}\tfrac{1}{2}\, \mathbf{x}+\tfrac{\sqrt{3}}{2}\,\mathbf{y}\Big{|}\,,\Big{|}\tfrac{\sqrt{3}}{2 }\,\mathbf{x}-\tfrac{1}{2}\,\mathbf{y}\Big{|}\Big{)}\,, \tag{2.31}\]
_where_
\[\psi_{n}(\tau,\rho)=\frac{C_{n}}{4\pi\tau\rho}\,\frac{\sinh\big{(}s_{0}\arctan \tfrac{\rho}{\tau}\big{)}}{\sinh\big{(}\tfrac{\pi}{2}\,s_{0}\big{)}}\,K_{ \mathrm{i}s_{0}}\Big{(}\tfrac{t_{n}}{a}\sqrt{\tau^{2}+\rho^{2}}\Big{)}\,, \tag{2.32}\]
\(\tau=|\mathbf{x}|\)_, \(\rho=|\mathbf{y}|\), \(C_{n}\) is a normalization constant, \(K_{\mathrm{i}s_{0}}\) is the modified Bessel function of the second kind with imaginary order and \(t_{n}\) is the \(n\)-th positive simple root of the equation \(K_{\mathrm{i}s_{0}}(t)=0\)._
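For orientation, the asymptotics (2.29) makes the geometrical law quantitative: the ratio of two consecutive eigenvalues satisfies
\[\frac{E_{n+1}}{E_{n}}=\mathrm{e}^{-\frac{2\pi}{s_{0}}}\big(1+o(1)\big)\,,\qquad \mathrm{e}^{\frac{2\pi}{s_{0}}}\approx 515\,,\]
which is the universal Efimov scaling factor for energies corresponding to \(s_{0}\approx 1.00624\).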
## 3. Analysis of the quadratic form
As a first step we derive upper and lower bounds for the quadratic form \(\Phi^{\lambda}_{\alpha}[\xi]\) defined in (2.11), (2.12). The estimates reported in the forthcoming Lemma 3.1 ultimately account for the main results stated in Theorem 2.1.
**Lemma 3.1**.: _For any \(\lambda>0\) there holds_
\[\Phi^{\lambda}_{\mathrm{i}}[\xi]>0\,,\qquad\text{for}\,\,\,\mathrm{i}=1,2,3\,. \tag{3.1}\]
_Moreover, there exist positive constants \(A_{i}(\lambda)\)\((i=1,2)\) and \(B_{i}\)\((i=1,2,3,4)\) such that:_
\[A_{1}(\lambda)\,\|\xi\|_{L^{2}_{w}}^{2}\leqslant\,\Phi_{1}^{\lambda}[\xi]+\|\xi\|_{L^{2}}^{2}\,\leqslant\,B_{1}\,\|\xi\|_{L^{2}_{w}}^{2}\,; \tag{3.2}\]
\[A_{2}(\lambda)\,\|\xi\|_{H^{1/2}}^{2}\leqslant\,\Phi_{2}^{\lambda}[\xi]+\|\xi\|_{L^{2}}^{2}\,\leqslant\,B_{2}\,\|\xi\|_{H^{1/2}}^{2}\,; \tag{3.3}\]
\[0\leqslant\Phi_{3}^{\lambda}[\xi]\,\leqslant\,B_{3}\,\|\xi\|_{L^{2}}^{2}\,; \tag{3.4}\]
\[\left|\Phi_{4}^{\lambda}[\xi]\right|\leqslant\,B_{4}\,\|\xi\|_{L^{2}}^{2}\,. \tag{3.5}\]
_In particular, the constants \(A_{1}(\lambda)\) and \(A_{2}(\lambda)\) fulfill_
\[A_{1}(\lambda)=o(1)\,,\qquad A_{2}(\lambda)=o(1)\,,\qquad\quad\text{for}\ \ \lambda\,\to\,+\infty\,. \tag{3.6}\]
Proof.: We discuss separately the terms \(\Phi_{i}^{\lambda}[\xi]\) (\(i=1,2,3,4\)) defined in (2.13)–(2.16).
1) Estimates for \(\Phi_{1}^{\lambda}[\xi]\). Making reference to the definition (2.13), we first consider the decomposition
\[\Phi_{1}^{\lambda}[\xi]=\int_{\mathrm{B}_{b}\cap\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\Big(\int_{\mathrm{B}_{a}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\Big)|\xi(\mathbf{y})|^{2}+\int_{\mathrm{B}_{b}^{c}}\!\mathrm{d}\mathbf{y}\,\Big(\int_{\mathrm{B}_{a}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\Big)|\xi(\mathbf{y})|^{2}\,,\]
where \(b>a\) is the radius fixed in the definition (2.17) of the weight \(w\). The first term can be bounded from above and from below using the explicit expression (A.2) for \(\mathrm{R}_{0}^{\lambda}\) and the monotonicity of the map \(t\mapsto t^{2}K_{2}(t)\), which produce two-sided estimates proportional to \(\int_{\mathrm{B}_{b}\cap\mathrm{B}_{a}^{c}}\mathrm{d}\mathbf{y}\,\frac{|\xi(\mathbf{y})|^{2}}{|\mathbf{y}|-a}\); for the second term,
by computations similar to those outlined above (recall, in particular, that \(t^{2}K_{2}(t)\leqslant 2\) for all \(t\geqslant 0\)), we infer
\[0\leqslant\int_{\mathrm{B}_{\mathrm{b}}^{c}}\!\!\!\mathrm{d}\boldsymbol{y}\, \left(\int_{\mathrm{B}_{a}}\!\!\!\mathrm{d}\boldsymbol{y}^{\prime}\,\mathrm{R} _{0}^{\lambda}\!\big{(}0,\boldsymbol{y};0,\boldsymbol{y}^{\prime}\big{)}\right) \left|\xi(\boldsymbol{y})\right|^{2}\leqslant\frac{1}{4\pi^{2}(\mathrm{b}- \boldsymbol{a})}\int_{\mathrm{B}_{\mathrm{b}}^{c}}\!\!\!\mathrm{d}\boldsymbol{y }\,\left|\xi(\boldsymbol{y})\right|^{2}.\]
Summing up, we obtain
\[\frac{1}{4\pi^{2}}\left[a+\frac{b^{2}-a^{2}}{2b}\,\log\!\left(\frac{b-a}{b+a}\right)\right]\lambda\,(a+b)\,K_{2}\big(\sqrt{\lambda}\,(a+b)\big)\int_{\mathrm{B}_{b}\cap\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\frac{|\xi(\mathbf{y})|^{2}}{|\mathbf{y}|-a}\]
\[\leqslant\Phi_{1}^{\lambda}[\xi]\leqslant\frac{1}{4\pi^{2}}\Bigg(\int_{\mathrm{B}_{b}\cap\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\frac{|\xi(\mathbf{y})|^{2}}{|\mathbf{y}|-a}+\frac{1}{b-a}\int_{\mathrm{B}_{b}^{c}}\!\mathrm{d}\mathbf{y}\,|\xi(\mathbf{y})|^{2}\Bigg)\,.\]
From here we readily deduce (3.2), recalling the basic relations (2.17)–(2.19) and noting that
\[\int_{\mathrm{B}_{b}\cap\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\frac{|\xi(\mathbf{y})|^{2}}{|\mathbf{y}|-a}=\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\ w(\mathbf{y})\,|\xi(\mathbf{y})|^{2}-\int_{\mathrm{B}_{b}^{c}}\!\mathrm{d}\mathbf{y}\,|\xi(\mathbf{y})|^{2}\geqslant\left\|\xi\right\|_{\mathrm{L}_{w}^{2}}^{2}-\left\|\xi\right\|_{\mathrm{L}^{2}}^{2}\,.\]
The claim in (3.6) regarding the constant \(\mathrm{A}_{1}(\lambda)\) follows by elementary considerations, noting that the map \(\mathrm{t}\mapsto\mathrm{t}^{2}\,\mathrm{K}_{2}(\mathrm{t})\) vanishes with exponential rate in the limit \(\mathrm{t}\,\to\,+\infty\)[26, Eq. 10.40.2].
_2) Estimates for \(\Phi_{2}^{\lambda}[\xi]\)._ Recall the explicit expression (2.14). By arguments similar to those described before, it is easy to see that \(\Phi_{2}^{\lambda}[\xi]\geqslant 0\). Next, let us fix arbitrarily \(\varepsilon>0\) and consider the set
\[\Delta_{a,\varepsilon}:=\left\{(\mathbf{y},\mathbf{y}')\in\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}\ \middle|\ |\mathbf{y}-\mathbf{y}'|<\varepsilon\right\}.\]
We re-write the definition (2.14) accordingly as
\[\Phi_{2}^{\lambda}[\xi]=\frac{1}{2}\int_{\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}+\frac{1}{2}\int_{(\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c})\setminus\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}\,.\]
On one side, considerations analogous to those reported in part 1) of this proof yield
\[\frac{\lambda\varepsilon^{2}\,K_{2}\big(\sqrt{\lambda}\,\varepsilon\big)}{(2\pi)^{3}\,|\mathbf{y}-\mathbf{y}'|^{4}}\leqslant\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\leqslant\frac{2}{(2\pi)^{3}\,|\mathbf{y}-\mathbf{y}'|^{4}}\,,\qquad\text{for all}\ \,(\mathbf{y},\mathbf{y}')\in\Delta_{a,\varepsilon}\,,\]
which implies, in turn,
\[\frac{\lambda\varepsilon^{2}\,K_{2}\big(\sqrt{\lambda}\,\varepsilon\big)}{(2\pi)^{3}}\int_{\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\frac{|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}}{|\mathbf{y}-\mathbf{y}'|^{4}}\]
\[\leqslant\int_{\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}\leqslant\frac{1}{4\pi^{3}}\int_{\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\frac{|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}}{|\mathbf{y}-\mathbf{y}'|^{4}}\,.\]
On the other side, since
\[0\leqslant\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\leqslant\frac{2}{(2\pi)^{3}\,|\mathbf{y}-\mathbf{y}'|^{4}}\,,\qquad\text{for all}\ (\mathbf{y},\mathbf{y}')\in(\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c})\setminus\Delta_{a,\varepsilon}\,,\]
we readily get
\[0\leqslant\int_{(\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c})\setminus\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}\leqslant\frac{1}{4\pi^{3}}\int_{(\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c})\setminus\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\frac{|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}}{|\mathbf{y}-\mathbf{y}'|^{4}}\,.\]
The above arguments imply
\[\frac{\lambda\varepsilon^{2}\,K_{2}\big(\sqrt{\lambda}\,\varepsilon\big)}{16\pi^{3}}\int_{\Delta_{a,\varepsilon}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\frac{|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}}{|\mathbf{y}-\mathbf{y}'|^{4}}\leqslant\Phi_{2}^{\lambda}[\xi]\leqslant\frac{1}{8\pi^{3}}\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\frac{|\xi(\mathbf{y})-\xi(\mathbf{y}')|^{2}}{|\mathbf{y}-\mathbf{y}'|^{4}}\,.\]
Recalling the definition (2.19) of the \(H^{1/2}\)-norm, and noting that the contribution of the region \((\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c})\setminus\Delta_{a,\varepsilon}\) to the double integral is controlled by \(\|\xi\|^{2}_{L^{2}}\) (since \(\int_{|\mathbf{y}-\mathbf{y}'|>\varepsilon}\mathrm{d}\mathbf{y}'\,|\mathbf{y}-\mathbf{y}'|^{-4}=4\pi/\varepsilon\)), we deduce (3.3); the claim (3.6) for \(A_{2}(\lambda)\) follows again from the exponential decay of \(t\mapsto t^{2}\,K_{2}(t)\).

_3) Estimates for \(\Phi_{3}^{\lambda}[\xi]\)._ Referring to the definition (2.15), we use the series representation (A.8) for \(g^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\) and the decomposition of \(\xi\) into spherical harmonics, \(\xi(\mathbf{y})=\sum_{\ell=0}^{\infty}\sum_{|m|\leqslant\ell}\xi_{\ell,m}(r)\,Y_{\ell,m}(\boldsymbol{\omega})\), with \(r=|\mathbf{y}|\) and \(\boldsymbol{\omega}=\mathbf{y}/|\mathbf{y}|\); in particular, this yields \(\Phi_{3}^{\lambda}[\xi]\geqslant 0\). By the Cauchy–Schwarz inequality and the monotonicity of the decreasing map \(t\mapsto t^{\nu}K_{\nu}(t)\) for any fixed \(\nu>0\), we get
\[\left|\int_{a}^{\infty}\!\mathrm{d}r\ K_{\ell+2}\big(\sqrt{\lambda}\,r\big)\,\xi_{\ell,m}(r)\right|^{2}\leqslant\left(\frac{1}{\lambda^{\ell+2}}\,\sup_{r>a}\Big(\big(\sqrt{\lambda}\,r\big)^{\ell+2}\,K_{\ell+2}\big(\sqrt{\lambda}\,r\big)\Big)^{2}\int_{a}^{\infty}\!\mathrm{d}r\ \frac{1}{r^{2\ell+6}}\right)\left(\int_{a}^{\infty}\!\mathrm{d}r\ r^{2}\,|\xi_{\ell,m}(r)|^{2}\right)\leqslant\frac{K_{\ell+2}^{2}\big(\sqrt{\lambda}\,a\big)}{a\,(2\ell+5)}\int_{a}^{\infty}\!\mathrm{d}r\ r^{2}\,|\xi_{\ell,m}(r)|^{2}\,.\]
Taking also into account that, for any fixed \(\nu>0\), the map \(t\in\mathbb{R}_{+}\mapsto I_{\nu}(t)\,K_{\nu}(t)\) is continuous, positive and strictly decreasing with \(\lim_{t\to 0^{+}}I_{\nu}(t)\,K_{\nu}(t)=\frac{1}{2\nu}\) (see [3] and [26, Eq. 10.29.2 and §10.37, together with §10.30(i)]), we finally obtain
\[\Phi_{3}^{\lambda}[\xi]\leqslant\frac{2}{\pi^{2}a}\sum_{\ell=0}^{\infty}\,\sum_{|m|\leqslant\ell}\frac{\ell+2}{(2\ell+1)(2\ell+5)}\ I_{\ell+2}\big(\sqrt{\lambda}\,a\big)\,K_{\ell+2}\big(\sqrt{\lambda}\,a\big)\int_{a}^{\infty}\!\mathrm{d}r\ r^{2}\,|\xi_{\ell,m}(r)|^{2}\]
\[\leqslant\frac{1}{\pi^{2}a}\sum_{\ell=0}^{\infty}\,\sum_{|m|\leqslant\ell}\frac{1}{(2\ell+1)(2\ell+5)}\int_{a}^{\infty}\!\mathrm{d}r\ r^{2}\,|\xi_{\ell,m}(r)|^{2}\leqslant\frac{1}{5\pi^{2}a}\sum_{\ell=0}^{\infty}\,\sum_{|m|\leqslant\ell}\int_{a}^{\infty}\!\mathrm{d}r\ r^{2}\,|\xi_{\ell,m}(r)|^{2}=\frac{1}{5\pi^{2}a}\ \|\xi\|_{L^{2}}^{2}\,,\]
which proves the upper bound in Eq. (3.4).
_4) Estimates for \(\Phi_{4}^{\lambda}[\xi]\)._ Let us refer to the definition (2.16) and recall that \(\mathsf{R}_{D}^{\lambda}(\mathbf{X},\mathbf{X}^{\prime})\) is the integral kernel associated to the resolvent operator of the Dirichlet Laplacian \(H_{D}\) on \(\Omega_{a}\subset\mathbb{R}^{6}\). The corresponding heat kernel \(K_{D}(t;\mathbf{X},\mathbf{X}^{\prime})\) is known to fulfill the following Gaussian upper bound, for all \(t>0\), \(\mathbf{X},\mathbf{X}^{\prime}\in\Omega_{a}\) and some suitable \(c_{1},c_{2}>0\) (see, _e.g._, [7, p. 89, Corollary 3.2.8] and [17, 34]):
\[0\leqslant K_{D}(t;\mathbf{X},\mathbf{X}^{\prime})\leqslant\frac{c_{1}}{t^{3 }}\,\mathrm{e}^{-\frac{|\mathbf{X}-\mathbf{X}^{\prime}|^{2}}{4c_{2}t}}.\]
Taking this into account and using a well-known integral representation for the Bessel function \(K_{2}\)[26, Eq. 10.32.10], by classical arguments [7, p. 101, Lemma 3.4.3] we deduce
\[\left|R_{D}^{\lambda}(\mathbf{X},\mathbf{X}^{\prime})\right|\leqslant\int_{0}^ {\infty}\!\!\!\mathrm{d}t\ e^{-\lambda t}\left|K_{D}(t;\mathbf{X},\mathbf{X}^ {\prime})\right|\leqslant c_{1}\!\int_{0}^{\infty}\!\!\frac{\mathrm{d}t}{t^{3} }\,\mathrm{e}^{-\lambda t-\frac{|\mathbf{X}-\mathbf{X}^{\prime}|^{2}}{4c_{2}t }}\leqslant\frac{8\,c_{1}c_{2}\,\lambda}{|\mathbf{X}-\mathbf{X}^{\prime}|^{2}} \,K_{2}\big{(}\sqrt{\lambda/c_{2}}\,|\mathbf{X}-\mathbf{X}^{\prime}|\big{)}\,. \tag{3.8}\]
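For the reader's convenience, the last inequality in (3.8) rests on the Laplace-type integral (a direct consequence of the representation [26, Eq. 10.32.10]), written here with the shorthand \(b:=|\mathbf{X}-\mathbf{X}'|\) introduced only for this display:
\[\int_{0}^{\infty}\!\frac{\mathrm{d}t}{t^{3}}\;\mathrm{e}^{-\lambda t-\frac{b^{2}}{4c_{2}t}}=\frac{8\,c_{2}\,\lambda}{b^{2}}\,K_{2}\big(\sqrt{\lambda/c_{2}}\;b\big)\,.\]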
Then, using the elementary inequality \(\mathbf{y}\cdot\mathbf{y}^{\prime}\geqslant-\big{(}|\mathbf{y}|^{2}+|\mathbf{y }^{\prime}|^{2}\big{)}/2\) and recalling that the map \(t\in\mathbb{R}_{+}\mapsto t^{2}\,K_{2}(t)\) is decreasing with \(\lim_{t\to 0^{+}}t^{2}\,K_{2}(t)=2\), we infer
\[\left|R_{D}^{\lambda}\left(0,\mathbf{y};\frac{\sqrt{3}}{2}\, \mathbf{y}^{\prime},-\frac{1}{2}\,\mathbf{y}^{\prime}\right)\right|\leqslant \frac{8\,c_{1}\,c_{2}\,\lambda(|\mathbf{y}|^{2}+\mathbf{y}\cdot\mathbf{y}^{ \prime}+|\mathbf{y}^{\prime}|^{2})}{(|\mathbf{y}|^{2}+\mathbf{y}\cdot\mathbf{y} ^{\prime}+|\mathbf{y}^{\prime}|^{2})^{2}}\,K_{2}\bigg{(}\sqrt{\lambda/c_{2}(| \mathbf{y}|^{2}+\mathbf{y}\cdot\mathbf{y}^{\prime}+|\mathbf{y}^{\prime}|^{2})} \bigg{)}\\ \leqslant\frac{32\,c_{1}\,c_{2}\,\lambda a^{2}\,K_{2}\big{(}\sqrt{ \lambda/c_{2}}\ a\big{)}}{\big{(}|\mathbf{y}|^{2}+|\mathbf{y}^{\prime}|^{2} \big{)}^{2}}\leqslant\frac{64\,c_{1}\,c_{2}^{2}}{\big{(}|\mathbf{y}|^{2}+| \mathbf{y}^{\prime}|^{2}\big{)}^{2}}\,.\]
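Here we have used that, for \(\mathbf{X}=(\mathbf{0},\mathbf{y})\) and \(\mathbf{X}'=\big(\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\big)\),
\[|\mathbf{X}-\mathbf{X}'|^{2}=\tfrac{3}{4}\,|\mathbf{y}'|^{2}+\big|\mathbf{y}+\tfrac{1}{2}\,\mathbf{y}'\big|^{2}=|\mathbf{y}|^{2}+\mathbf{y}\cdot\mathbf{y}'+|\mathbf{y}'|^{2}\geqslant\tfrac{1}{2}\big(|\mathbf{y}|^{2}+|\mathbf{y}'|^{2}\big)\geqslant a^{2}\,,\qquad\text{for}\ \mathbf{y},\mathbf{y}'\in\mathrm{B}_{a}^{c}\,,\]
which also accounts for the appearance of the argument \(\sqrt{\lambda/c_{2}}\;a\) in the second inequality.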
On account of the above arguments, by the Cauchy–Schwarz inequality and basic symmetry considerations, from (2.16) we infer
\[\big|\Phi_{4}^{\lambda}[\xi]\big|=\bigg|\,2\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,R_{D}^{\lambda}\Big(\mathbf{0},\mathbf{y};\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\Big)\,\overline{\xi(\mathbf{y})}\,\xi(\mathbf{y}')\bigg|\]
\[\leqslant 64\,c_{1}\,c_{2}^{2}\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\frac{|\xi(\mathbf{y})|^{2}}{\big(|\mathbf{y}|^{2}+|\mathbf{y}'|^{2}\big)^{2}}\leqslant 64\,c_{1}\,c_{2}^{2}\left(\sup_{r>a}\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\frac{1}{(r^{2}+|\mathbf{y}'|^{2})^{2}}\right)\|\xi\|_{L^{2}}^{2}\,.\]
This in turn implies the thesis (3.5), in view of the fact that
\[\sup_{r>a}\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\frac{1}{(r^{2}+|\mathbf{y}'|^{2})^{2}}\leqslant 4\pi\,\sup_{r>a}\int_{0}^{\infty}\!\mathrm{d}\rho\;\frac{\rho^{2}}{(r^{2}+\rho^{2})^{2}}=\sup_{r>a}\frac{\pi^{2}}{r}=\frac{\pi^{2}}{a}<+\infty\,.\]
## 4. The Hamiltonian
Proof of Theorem 2.2.: Let us first introduce the sesquilinear form defined by the polarization identity, starting from Eq. (2.20). With respect to the decompositions \(\psi_{1}=\varphi_{1}^{\lambda}+\mathrm{G}^{\lambda}\xi_{1}\) and \(\psi_{2}=\varphi_{2}^{\lambda}+\mathrm{G}^{\lambda}\xi_{2}\), this is given by
\[\mathrm{Q}_{\mathrm{D},\alpha}\big[\psi_{1},\psi_{2}\big]=\frac{1}{4}\Big(\mathrm{Q}_{\mathrm{D},\alpha}\big[\psi_{1}+\psi_{2}\big]-\mathrm{Q}_{\mathrm{D},\alpha}\big[\psi_{1}-\psi_{2}\big]-\mathrm{i}\,\mathrm{Q}_{\mathrm{D},\alpha}\big[\psi_{1}+\mathrm{i}\psi_{2}\big]+\mathrm{i}\,\mathrm{Q}_{\mathrm{D},\alpha}\big[\psi_{1}-\mathrm{i}\psi_{2}\big]\Big)\]
\[=\mathrm{Q}_{\mathrm{D}}\big[\varphi_{1}^{\lambda},\varphi_{2}^{\lambda}\big]+\lambda\,\big\langle\varphi_{1}^{\lambda}\big|\varphi_{2}^{\lambda}\big\rangle-\lambda\,\big\langle\psi_{1}\big|\psi_{2}\big\rangle+3\,\Phi_{\alpha}^{\lambda}\big[\xi_{1},\xi_{2}\big]\,,\]
where we have set
\[\mathrm{Q}_{\mathrm{D}}\big{[}\varphi_{1}^{\lambda},\varphi_{2}^ {\lambda}\big{]}:=\int_{\Omega_{a}}\!\!\!\mathrm{d}\mathbf{x}\,\mathrm{d} \mathbf{y}\,\left(\overline{\nabla_{x}\varphi_{1}^{\lambda}(\mathbf{x}, \mathbf{y})}\cdot\nabla_{x}\varphi_{2}^{\lambda}(\mathbf{x},\mathbf{y})+ \overline{\nabla_{y}\varphi_{1}^{\lambda}(\mathbf{x},\mathbf{y})}\cdot\nabla_ {y}\varphi_{2}^{\lambda}(\mathbf{x},\mathbf{y})\right),\] \[\Phi_{\alpha}^{\lambda}\big{[}\xi_{1},\xi_{2}\big{]}:=\left( \alpha+\frac{\sqrt{\lambda}}{4\pi}\right)\left\langle\xi_{1}|\xi_{2}\right\rangle +\Phi_{1}^{\lambda}\big{[}\xi_{1},\xi_{2}\big{]}+\Phi_{2}^{\lambda}\big{[}\xi_ {1},\xi_{2}\big{]}+\Phi_{3}^{\lambda}\big{[}\xi_{1},\xi_{2}\big{]}+\Phi_{4}^{ \lambda}\big{[}\xi_{1},\xi_{2}\big{]}\,,\]
and
\[\Phi_{1}^{\lambda}\big[\xi_{1},\xi_{2}\big]:=\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\left(\int_{\mathrm{B}_{a}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\right)\overline{\xi_{1}(\mathbf{y})}\,\xi_{2}(\mathbf{y})\,,\]
\[\Phi_{2}^{\lambda}\big[\xi_{1},\xi_{2}\big]:=\frac{1}{2}\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\overline{\big(\xi_{1}(\mathbf{y})-\xi_{1}(\mathbf{y}')\big)}\,\big(\xi_{2}(\mathbf{y})-\xi_{2}(\mathbf{y}')\big)\,,\]
\[\Phi_{3}^{\lambda}\big[\xi_{1},\xi_{2}\big]:=-\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{g}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\overline{\xi_{1}(\mathbf{y})}\,\xi_{2}(\mathbf{y}')\,,\]
\[\Phi_{4}^{\lambda}\big[\xi_{1},\xi_{2}\big]:=-\,2\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{\mathrm{D}}^{\lambda}\Big(\mathbf{0},\mathbf{y};\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\Big)\,\overline{\xi_{1}(\mathbf{y})}\,\xi_{2}(\mathbf{y}')\,.\]
Since we have already proved that the form \(\mathrm{Q}_{\mathrm{D},\alpha}\) is closed and lower bounded, there exists a unique associated self-adjoint and lower bounded operator \(\mathrm{H}_{\mathrm{D},\alpha}\). Moreover, if \(\psi_{2}\in\mathrm{dom}\big(\mathrm{H}_{\mathrm{D},\alpha}\big)\) then there exists an element \(w:=\mathrm{H}_{\mathrm{D},\alpha}\psi_{2}\in\mathrm{L}^{2}(\Omega_{a})\) such that \(\mathrm{Q}_{\mathrm{D},\alpha}\big[\psi_{1},\psi_{2}\big]=\left\langle\psi_{1}|w\right\rangle\) for any \(\psi_{1}\in\mathrm{dom}\big(\mathrm{Q}_{\mathrm{D},\alpha}\big)\).
For \(\xi_{1}=0\), _i.e._\(\psi_{1}=\varphi_{1}^{\lambda}\in\mathrm{dom}\big{(}\mathrm{Q}_{\mathrm{D}}\big{)}\), we have
\[\mathrm{Q}_{\mathrm{D},\alpha}\big{[}\varphi_{1}^{\lambda},\psi_{2}\big{]}= \mathrm{Q}_{\mathrm{D}}\big{[}\varphi_{1}^{\lambda},\varphi_{2}^{\lambda} \big{]}+\lambda\left\langle\varphi_{1}^{\lambda}\big{|}\varphi_{2}^{\lambda} \right\rangle-\lambda\left\langle\varphi_{1}^{\lambda}\big{|}\psi_{2}\right\rangle =\left\langle\varphi_{1}^{\lambda}\big{|}w\right\rangle.\]
Thus, \(\varphi_{2}^{\lambda}\in\mathrm{dom}\big{(}\mathrm{H}_{\mathrm{D}}\big{)}\) and \(w=\mathrm{H}_{\mathrm{D}}\varphi_{2}^{\lambda}-\lambda\,\mathrm{G}^{\lambda} \xi_{2}\), which entails the identity
\[\big{(}\mathrm{H}_{\mathrm{D},\alpha}+\lambda\big{)}\psi_{2}=\big{(}\mathrm{H}_ {\mathrm{D}}+\lambda\big{)}\varphi_{2}^{\lambda}\,.\]
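Indeed, with \(w=\mathrm{H}_{\mathrm{D}}\varphi_{2}^{\lambda}-\lambda\,\mathrm{G}^{\lambda}\xi_{2}\) and \(\psi_{2}=\varphi_{2}^{\lambda}+\mathrm{G}^{\lambda}\xi_{2}\) one simply computes
\[\big(\mathrm{H}_{\mathrm{D},\alpha}+\lambda\big)\psi_{2}=w+\lambda\,\psi_{2}=\mathrm{H}_{\mathrm{D}}\varphi_{2}^{\lambda}-\lambda\,\mathrm{G}^{\lambda}\xi_{2}+\lambda\,\varphi_{2}^{\lambda}+\lambda\,\mathrm{G}^{\lambda}\xi_{2}=\big(\mathrm{H}_{\mathrm{D}}+\lambda\big)\varphi_{2}^{\lambda}\,.\]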
For \(\xi_{1}\neq 0\), demanding that \(\mathrm{Q}_{\mathrm{D},\alpha}[\psi_{1},\psi_{2}]=\left\langle\psi_{1}|w\right\rangle\) with \(w=\mathrm{H}_{\mathrm{D}}\varphi_{2}^{\lambda}-\lambda\,\mathrm{G}^{\lambda} \xi_{2}\) as before, we obtain
\[\left\langle\mathrm{G}^{\lambda}\xi_{1}\big{|}(\mathrm{H}_{\mathrm{D}}+\lambda) \varphi_{2}^{\lambda}\right\rangle=3\,\Phi_{\alpha}^{\lambda}\big{[}\xi_{1}, \xi_{2}\big{]}\,.\]
On account of the bosonic exchange symmetries (see (2.4) and (2.6)), from the basic identity (2.22) we readily deduce the following, for all \(\xi_{1}\in\mathrm{dom}\big{(}\Phi_{\alpha}^{\lambda}\big{)}\) and \(\varphi_{2}^{\lambda}\in\mathrm{dom}\big{(}\mathrm{H}_{\mathrm{D}}\big{)}\):
\[\left\langle\mathrm{G}^{\lambda}\xi_{1}\big|\big(\mathrm{H}_{\mathrm{D}}+\lambda\big)\,\varphi_{2}^{\lambda}\right\rangle_{\mathrm{L}^{2}(\Omega_{a})}=3\,\langle\xi_{1}\,|\,\tau\,\varphi_{2}^{\lambda}\rangle_{\mathrm{L}^{2}(\mathrm{B}_{a}^{c})}=3\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\ \overline{\xi_{1}(\mathbf{y})}\,\big(\tau\varphi_{2}^{\lambda}\big)(\mathbf{y})\,, \tag{4.1}\]
where \(\tau\) is the Sobolev trace operator introduced in (2.23). Let us also remark that, since \(\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y}';\mathbf{0},\mathbf{y}\big)=\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\), the bilinear form \(\Phi_{2}^{\lambda}\big[\xi_{1},\xi_{2}\big]\) defined above can be conveniently rephrased as
\[\Phi_{2}^{\lambda}\big[\xi_{1},\xi_{2}\big]=\int_{\mathrm{B}_{a}^{c}\times\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\mathrm{d}\mathbf{y}'\,\mathrm{R}_{0}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\overline{\xi_{1}(\mathbf{y})}\,\big(\xi_{2}(\mathbf{y})-\xi_{2}(\mathbf{y}')\big)\,.\]
Summing up, we obtain
\[\big\langle\xi_{1}\big|\tau\varphi_{2}^{\lambda}\big\rangle=\Phi_{\alpha}^{\lambda}\big[\xi_{1},\xi_{2}\big]\,,\qquad\text{for all}\ \ \xi_{1}\in\mathrm{dom}\big(\Phi_{\alpha}^{\lambda}\big)\,.\]
Since \(\Gamma^{\lambda}_{\alpha}\) is the operator uniquely associated to the quadratic form \(\Phi^{\lambda}_{\alpha}\), this means that \(\xi_{2}\in\mathrm{dom}\big(\Gamma^{\lambda}_{\alpha}\big)\) and \(\tau\varphi_{2}^{\lambda}=\Gamma^{\lambda}_{\alpha}\xi_{2}\), which proves the characterization (2.26). As for the Krein formula, fix \(\lambda>\lambda_{0}\), so that \(\Gamma^{\lambda}_{\alpha}\) has a bounded inverse, and let \(w\in\mathrm{L}^{2}_{\mathrm{s}}(\Omega_{a})\). Then \(\psi:=\mathrm{R}^{\lambda}_{\mathrm{D}}w+\mathrm{G}^{\lambda}\big(\Gamma^{\lambda}_{\alpha}\big)^{-1}\tau\,\mathrm{R}^{\lambda}_{\mathrm{D}}w\) belongs to \(\mathrm{dom}\big(\mathrm{H}_{\mathrm{D},\alpha}\big)\), since \(\tau\big(\mathrm{R}^{\lambda}_{\mathrm{D}}w\big)=\Gamma^{\lambda}_{\alpha}\big(\Gamma^{\lambda}_{\alpha}\big)^{-1}\tau\,\mathrm{R}^{\lambda}_{\mathrm{D}}w\), and \(\big(\mathrm{H}_{\mathrm{D},\alpha}+\lambda\big)\psi=\big(\mathrm{H}_{\mathrm{D}}+\lambda\big)\mathrm{R}^{\lambda}_{\mathrm{D}}w=w\). This yields (2.27). ∎

For later purposes, we need some further information on the operator \(\Gamma^{\lambda}_{\alpha}\). In view of the boundary condition (2.7), its action on sufficiently regular charges is given by
\[\big(\Gamma^{\lambda}_{\alpha}\xi\big)(\mathbf{y})=\Big(\alpha+\tfrac{\sqrt{\lambda}}{4\pi}\Big)\xi(\mathbf{y})+\xi(\mathbf{y})\!\int_{\mathrm{B}_{a}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)-\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\big[\xi(\mathbf{y}')-\xi(\mathbf{y})\big]\]
\[\qquad\qquad-\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{g}^{\lambda}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\xi(\mathbf{y}')-2\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{\mathrm{D}}\Big(\mathbf{0},\mathbf{y};\tfrac{\sqrt{3}}{2}\,\mathbf{y}',-\tfrac{1}{2}\,\mathbf{y}'\Big)\,\xi(\mathbf{y}')\,. \tag{4.2}\]

**Lemma 4.1**.: _For any \(\lambda>0\) one has \(H^{1}_{0}(\mathrm{B}_{a}^{c})\subseteq\mathrm{dom}\big(\Gamma^{\lambda}_{\alpha}\big)\), and each term in (4.2) defines a bounded operator from \(H^{1}_{0}(\mathrm{B}_{a}^{c})\) to \(L^{2}(\mathrm{B}_{a}^{c})\)._

Proof.: _1)-2) On the first two terms in (4.2)._ The first term is trivially a bounded multiplication operator. By the estimates of part _1)_ of the proof of Lemma 3.1, the second term behaves like a bounded multiple of \(\frac{\xi(\mathbf{y})}{|\mathbf{y}|-a}\) near \(\partial\mathrm{B}_{a}\), and \(\frac{\xi}{|\mathbf{y}|-a}\in L^{2}(\mathrm{B}_{a}^{c})\) for \(\xi\in H^{1}(\mathrm{B}_{a}^{c})\) if and only if \(\xi\in H^{1}_{0}(\mathrm{B}_{a}^{c})\) [20, Example 9.12].

_3) On the third term in (4.2)._ Let \(\tilde{\xi}\in H^{1}(\mathbb{R}^{3})\) denote the extension of \(\xi\in H^{1}_{0}(\mathrm{B}_{a}^{c})\) vanishing on \(\mathrm{B}_{a}\), and set
\[\mathcal{H}^{\lambda}(\mathbf{y}):=\frac{1}{|\mathbf{y}|^{4}}\left[1-\frac{1}{2}\,t^{2}\,K_{2}(t)\right]_{t=\sqrt{\lambda}\,|\mathbf{y}|}.\]
Using the explicit expression (A.2) for \(\mathrm{R}^{\lambda}_{0}\), the term under analysis can be decomposed as
\[(\mathcal{L}\xi)(\mathbf{y}):=-\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathrm{R}^{\lambda}_{0}\big(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}'\big)\,\big[\xi(\mathbf{y}')-\xi(\mathbf{y})\big]\]
\[=\frac{1}{4\pi}\,\big((-\Delta)^{1/2}\tilde{\xi}\,\big)(\mathbf{y})-\frac{\xi(\mathbf{y})}{4\pi^{3}\,(|\mathbf{y}|-a)}\left[(|\mathbf{y}|-a)\int_{\mathrm{B}_{a}}\!\frac{\mathrm{d}\mathbf{y}'}{|\mathbf{y}-\mathbf{y}'|^{4}}\right]+\frac{1}{4\pi^{3}}\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,\mathcal{H}^{\lambda}(\mathbf{y}-\mathbf{y}')\,\big[\xi(\mathbf{y}')-\xi(\mathbf{y})\big]\,,\]
where \((-\Delta)^{1/2}:H^{1}(\mathbb{R}^{3})\to L^{2}(\mathbb{R}^{3})\) is the square root of the Laplacian [9, §3]. Keeping in mind that the function between square brackets is upper and lower bounded, and recalling again that \(\frac{\xi}{|\mathbf{y}|-a}\in L^{2}(\mathrm{B}_{a}^{c})\) for \(\xi\in H^{1}(\mathrm{B}_{a}^{c})\) if and only if \(\xi\in H^{1}_{0}(\mathrm{B}_{a}^{c})\), we see that the first two terms of the decomposition define bounded operators from \(H^{1}_{0}(\mathrm{B}_{a}^{c})\) to \(L^{2}(\mathrm{B}_{a}^{c})\). On the other side, noting that the function \(h(t):=1-\frac{1}{2}\,t^{2}\,K_{2}(t)\) (\(t>0\)) is strictly increasing with \(h'(t)=\frac{1}{2}\,t^{2}\,K_{1}(t)\), \(\lim_{t\to 0^{+}}h(t)/t=0\) and \(\lim_{t\to+\infty}h(t)=1\) [26, Eqs. 10.29.4, 10.30.2 and 10.25.3], we infer by direct inspection that \(\mathcal{H}^{\lambda}\in L^{1}(\mathbb{R}^{3})\).1 As a consequence, by elementary estimates and Young's convolution inequality [18, Thm. 4.5.1] we infer
Footnote 1: More precisely, integrating by parts and using a known integral identity for the Bessel function [14, p. 676, Eq. 6.561.16], we obtain \(\|\mathcal{H}^{\lambda}\|_{L^{1}}=\int_{\mathbb{R}^{3}}\!\mathrm{d}\mathbf{y}\,\frac{1}{|\mathbf{y}|^{4}}\left[1-\frac{1}{2}\,t^{2}\,K_{2}(t)\right]_{t=\sqrt{\lambda}\,|\mathbf{y}|}=4\pi\sqrt{\lambda}\int_{0}^{\infty}\!\mathrm{d}t\,\frac{h(t)}{t^{2}}=4\pi\sqrt{\lambda}\int_{0}^{\infty}\!\mathrm{d}t\,\frac{h'(t)}{t}=2\pi\sqrt{\lambda}\int_{0}^{\infty}\!\mathrm{d}t\ t\,K_{1}(t)=\pi^{2}\sqrt{\lambda}\,.\)
\[\int_{\mathrm{B}_{a}^{c}}\!\!\mathrm{d}\mathbf{y}\,\left|\int_{\mathrm{B }_{a}^{c}}\!\!\mathrm{d}\mathbf{y}^{\prime}\,\mathcal{H}^{\lambda}(\mathbf{y}-\mathbf{y}^{ \prime})\,\left|\,\xi(\mathbf{y})\right|^{2}\right.\] \[\leqslant 4\,\|\mathcal{H}^{\lambda}\|_{L^{1}}^{2}\,\left\|\xi \right\|_{L^{2}}^{2}\,.\]
Summing up, the previous results allow us to infer that the third term in (4.2) defines a bounded operator from \(H^{1}_{0}(\mathrm{B}_{a}^{c})\) to \(L^{2}(\mathrm{B}_{a}^{c})\).
_4) On the fourth term in (4.2)._ Retracing the arguments described in step _3)_ of the proof of Lemma 3.1, it can be shown that the term under analysis is a bounded operator in \(L^{2}(\mathrm{B}_{a}^{c})\). More precisely, decomposing \(\xi\) into spherical harmonics \(Y_{\ell,m}\) and exploiting the explicit representation (A.8) for \(g^{\lambda}(\mathbf{X},\mathbf{X}')\), from the summation formula (3.7) and the orthonormality of the spherical harmonics we deduce
\[\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\,\left|\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}'\,g^{\lambda}(\mathbf{0},\mathbf{y};\mathbf{0},\mathbf{y}')\,\xi(\mathbf{y}')\right|^{2}\]
\[=\frac{4}{\pi^{4}}\int_{a}^{\infty}\!\mathrm{d}r\int_{\mathbb{S}^{2}}\!\mathrm{d}\boldsymbol{\omega}\,\left|\sum_{\ell=0}^{\infty}\frac{\ell+2}{2\ell+1}\,\frac{\mathrm{I}_{\ell+2}\big(\sqrt{\lambda}\,a\big)}{\mathrm{K}_{\ell+2}\big(\sqrt{\lambda}\,a\big)}\,\frac{\mathrm{K}_{\ell+2}\big(\sqrt{\lambda}\,r\big)}{r}\sum_{|m|\leqslant\ell}Y_{\ell,m}(\boldsymbol{\omega})\int_{a}^{\infty}\!\mathrm{d}r'\;K_{\ell+2}\big(\sqrt{\lambda}\,r'\big)\,\xi_{\ell,m}(r')\right|^{2}\]
\[=\frac{4}{\pi^{4}}\int_{a}^{\infty}\!\mathrm{d}r\,\sum_{\ell=0}^{\infty}\frac{(\ell+2)^{2}}{(2\ell+1)^{2}}\,\frac{\mathrm{I}_{\ell+2}^{2}\big(\sqrt{\lambda}\,a\big)}{\mathrm{K}_{\ell+2}^{2}\big(\sqrt{\lambda}\,a\big)}\,\frac{\mathrm{K}_{\ell+2}^{2}\big(\sqrt{\lambda}\,r\big)}{r^{2}}\sum_{|m|\leqslant\ell}\left|\int_{a}^{\infty}\!\mathrm{d}r'\;K_{\ell+2}\big(\sqrt{\lambda}\,r'\big)\,\xi_{\ell,m}(r')\right|^{2}.\]
Then, recalling that the function \(t\mapsto t^{\nu}\,K_{\nu}(t)\) (\(t\in\mathbb{R}_{+}\), \(\nu>0\)) is decreasing and noting that the map \(t\mapsto I_{\nu}(t)\,K_{\nu}(t)\) (\(t\in\mathbb{R}_{+}\), \(\nu>0\)) is decreasing as well with \(\lim_{t\to 0^{+}}I_{\nu}(t)\,K_{\nu}(t)=\frac{1}{2\nu}\) (see [3] and [26, Eq. 10.29.2 and §10.37, together with §10.30(i)]), we obtain
\[\int_{\mathbb{B}^{c}_{a}}\left|\int_{\mathbb{B}^{c}_{a}}\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(\psi_{\mu}=G_{12}^{\mu}\xi_{\mu}\), for some suitable \(\xi_{\mu}\in L^{2}(B_{a}^{c})\), so that \(\Psi_{\mu}\) is decomposed in terms of the so-called Faddeev components. To simplify the notation, from now on we drop the dependence on \(\mu\). We look for \(\psi\) depending only on the radial variables \(r=|\mathbf{x}|\) and \(\rho=|\mathbf{y}|\), so with an abuse of notation we set \(\psi=\psi(r,\rho)\). Hence we have
\[\big{(}-\Delta_{\mathbf{x}}-\Delta_{\mathbf{y}}+\mu\big{)}\psi=-\,\frac{1}{r^{2 }}\frac{\partial}{\partial r}\left(r^{2}\,\frac{\partial\psi}{\partial r} \right)-\frac{1}{\rho^{2}}\frac{\partial}{\partial\rho}\left(\rho^{2}\,\frac{ \partial\psi}{\partial\rho}\right)+\mu\,\psi=0\,,\qquad\mathrm{in}\,\,\,D_{a}\,, \tag{5.2}\]
where
\[D_{a}=\big{\{}(r,\rho)\in\mathbb{R}_{+}\times\mathbb{R}_{+}\,\big{|}\,\,r^{2} +\rho^{2}>a^{2}\big{\}}\,. \tag{5.3}\]
Moreover, we impose the Dirichlet boundary condition
\[\psi(r,\rho)=0\qquad\text{on}\,\,\,\big\{(r,\rho)\in\partial D_{a}\,\big|\,r^{2}+\rho^{2}=a^{2}\big\}\,, \tag{5.4}\]
and the (singular) boundary condition \(\Psi=\frac{\xi(\rho)}{4\pi r}+o(1)\) for \(r\,\to\,0\), see (1.8). Taking into account that \(\xi(\rho)=4\pi\,(r\,\psi)(0,\rho)\), in view of (5.1) the boundary condition reads
\[\lim_{r\,\to\,0}\left[\psi(r,\rho)-\frac{(r\,\psi)(0,\rho)}{r}\right]+2\,\psi \big{(}\tfrac{\sqrt{3}}{2}\,\rho,\tfrac{1}{2}\,\rho\big{)}=0\,,\qquad\mathrm{ for}\,\,\,\rho>a\,. \tag{5.5}\]
It turns out that the function \(\psi\) can be explicitly determined as a solution in \(L^{2}(\mathbb{R}^{6})\) to the boundary value problem (5.2), (5.4), (5.5). We outline the construction in Appendix B for the convenience of the reader. Here we simply state the result. Let \(K_{\mathrm{i}s_{0}}:\mathbb{R}_{+}\,\to\,\mathbb{R}\) be the modified Bessel function of the second kind with imaginary order, where \(s_{0}\) is the unique positive solution of (2.30).
Let \(\{t_{n}\}_{n\in\mathbb{N}}\) be the sequence of positive simple roots of the equation \(K_{\mathrm{is}_{0}}(t)=0\), where \(t_{n}\,\to\,0\) for \(n\,\to\,+\infty\). Taking into account that the asymptotic expansion of \(K_{\mathrm{is}_{0}}(t)\) for \(t\,\to\,0\) is given by [26, Eq. 10.45.7]
\[K_{\mathrm{is}_{0}}(t)=-\sqrt{\tfrac{\pi}{s_{0}\sinh(\pi s_{0})}}\sin\big{(}s_ {0}\log\tfrac{t}{2}-\theta\big{)}+O(t^{2})\,,\qquad\theta=\arg\Gamma(1+\mathrm{ is}_{0})\,, \tag{5.6}\]
one also has the following asymptotic behavior
\[t_{n}=2\,e^{\frac{\theta-n\pi}{s_{0}}}(1+\epsilon_{n})\,,\qquad\quad\mathrm{ with}\,\,\,\epsilon_{n}\to 0\,\,\,\mathrm{for}\,\,\,n\to+\infty\,. \tag{5.7}\]
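The relation (5.7) follows from (5.6): neglecting the \(O(t^{2})\) remainder, the roots of \(K_{\mathrm{i}s_{0}}\) are determined by the vanishing of the sine factor, \(s_{0}\log\frac{t_{n}}{2}-\theta=-n\pi\), i.e.
\[t_{n}\simeq 2\,e^{\frac{\theta-n\pi}{s_{0}}}\,;\]
combined with the relation \(\mu_{n}=(t_{n}/a)^{2}\) in (5.8) below, this is precisely the source of the eigenvalue asymptotics (2.29).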
Then we have that, for each
\[\mu=\mu_{n}=\left(\frac{t_{n}}{a}\right)^{2}\,, \tag{5.8}\]
the boundary value problem (5.2), (5.4), (5.5) has a solution in \(L^{2}(\mathbb{R}^{6})\) given by (2.32), _i.e._,
\[\psi_{n}(r,\rho)=\frac{C_{n}}{4\pi\,r\rho}\,\,\frac{\sinh\big{(}s_{0}\arctan \frac{\rho}{r}\big{)}}{\sinh\big{(}s_{0}\frac{\pi}{2}\big{)}}\,\,K_{\mathrm{is} _{0}}\Big{(}\tfrac{t_{n}}{a}\sqrt{r^{2}+\rho^{2}}\Big{)}\,,\]
where \(C_{n}\) is an arbitrary constant. We define the charge distribution associated to \(\psi_{n}\) as
\[\xi_{n}(\rho):=4\pi\lim_{r\,\to\,0^{+}}r\,\psi_{n}(r,\rho)=\frac{C_{n}}{\rho} \,K_{\mathrm{is}_{0}}\big{(}\tfrac{t_{n}}{a}\,\rho\big{)}\qquad(\rho>a)\,. \tag{5.9}\]
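The limit in (5.9) is computed directly from (2.32): as \(r\,\to\,0^{+}\) one has \(\arctan\frac{\rho}{r}\,\to\,\frac{\pi}{2}\) and \(\sqrt{r^{2}+\rho^{2}}\,\to\,\rho\), so that
\[4\pi\,r\,\psi_{n}(r,\rho)\;\to\;\frac{C_{n}}{\rho}\,\frac{\sinh\big(s_{0}\,\tfrac{\pi}{2}\big)}{\sinh\big(s_{0}\,\tfrac{\pi}{2}\big)}\,K_{\mathrm{i}s_{0}}\big(\tfrac{t_{n}}{a}\,\rho\big)=\frac{C_{n}}{\rho}\,K_{\mathrm{i}s_{0}}\big(\tfrac{t_{n}}{a}\,\rho\big)\,.\]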
Let us stress that \(\xi_{n}\) actually keeps track of the Dirichlet boundary condition (5.4) for \(\psi_{n}\); in fact, we have \(\xi_{n}(a)=\frac{C_{n}}{a}\,K_{\mathrm{is}_{0}}(t_{n})=0\). With a slight abuse of notation, in the sequel we refer to the function
\[\xi_{n}(\mathbf{y})\equiv\xi_{n}\big{(}|\mathbf{y}|\big{)}\in L^{2}(B_{a}^{c})\,.\]
The next step is to show that the function \(\psi_{n}\) can be written as the potential generated by the charge (5.9) distributed on the hyperplane \(\pi_{12}\), _i.e._ to prove the following lemma.
**Lemma 5.1**.: _For any fixed \(n\in\{1,2,3,\ldots\}\), let \(\Psi_{n}\) and \(\xi_{n}\) be as in Eqs. (2.31) and (5.9), respectively. Then, \(\xi_{n}\in\,\)dom\(\left(\Gamma_{0}^{\mathrm{\mu_{n}}}\right)\) and_
\[\Psi_{n}=G^{\mathrm{\mu_{n}}}\,\xi_{n}\,. \tag{5.10}\]
Proof.: Many of the arguments presented in this proof rely on direct inspection of the explicit expressions (2.31), (2.32) and (5.9). In particular, we shall often refer to a well-known integral representation of the Bessel function \(K_{\mathrm{i}s_{0}}\), namely [26, Eq. 10.32.9]
\[K_{\mathrm{i}s_{0}}(t)=\int_{0}^{\infty}\!\mathrm{d}z\ \cos(s_{0}z)\,e^{-t\cosh z}\qquad(t>0)\,. \tag{5.11}\]
Firstly, using (5.11) it is easy to check that \(K_{\mathrm{i}s_{0}}\) is smooth on the (open) positive real semi-axis and that it vanishes with exponential rate at infinity, together with all its derivatives. This ensures, in particular, that \(\xi_{n}\in H^{1}\big(\mathrm{B}_{a}^{c}\big)\). To say more, given that all the zeros of \(K_{\mathrm{i}s_{0}}\) are simple [26, §10.21(i)], we have \(\frac{\xi_{n}(\rho)}{\rho-a}\in L^{2}(\mathrm{B}_{a}^{c})\). In view of [20, Example 9.12] and of the previous considerations, we deduce that \(\xi_{n}\in H^{1}_{0}(\mathrm{B}_{a}^{c})\), which implies in turn \(\xi_{n}\in\mathrm{dom}\big(\Gamma_{0}^{\lambda}\big)\subset\mathrm{dom}\big(\Phi_{0}^{\lambda}\big)\) by Lemma 4.1. Incidentally, we remark that \(G^{\mu_{n}}\xi_{n}\) is well defined since \(\xi_{n}\in\mathrm{dom}\big(\Phi_{0}^{\lambda}\big)\), see (2.5) and (2.22).
Let us now proceed to prove (5.10), to be regarded as an identity between elements of \(L^{2}_{s}(\Omega_{a})\). To this end it suffices to show that, for all \(\varphi\in\mathrm{dom}\big(H_{D}\big)\), there holds
\[\big\langle\Psi_{n}\big|(H_{D}+\mu_{n})\varphi\big\rangle=\big\langle G^{\mu_{n}}\xi_{n}\big|(H_{D}+\mu_{n})\varphi\big\rangle\,,\]
since \(-\mu_{n}<0\) belongs to the resolvent set of \(H_{D}\), so that \((H_{D}+\mu_{n})\,\mathrm{dom}\big(H_{D}\big)=L^{2}_{s}(\Omega_{a})\). From (2.22) we deduce \(\left\langle G^{\mu_{n}}\xi_{n}\big|(H_{D}+\mu_{n})\varphi\right\rangle=3\left\langle\xi_{n}\big|\tau\varphi\right\rangle\), where \(\tau\) is the Sobolev trace on \(\pi_{12}\) (see (2.23)). Then, the thesis follows as soon as we prove that, for all \(\varphi\in\mathrm{dom}\big(H_{D}\big)\),
\[\left\langle\Psi_{n}\big{|}(H_{D}+\mu_{n})\varphi\right\rangle=3\left\langle \xi_{n}\big{|}\tau\varphi\right\rangle.\]
Equivalently, due to the bosonic symmetry (see (2.31)), we must show that
\[\left\langle\psi_{n}\big{|}(H_{D}+\mu_{n})\varphi\right\rangle=\left\langle \xi_{n}\big{|}\tau\varphi\right\rangle. \tag{5.12}\]
As an intermediate step, we first derive (5.12) for all \(\varphi\in\mathcal{D}(\Omega_{a})\), where
\[\mathcal{D}(\Omega_{a}):=\big\{\varphi\in C^{\infty}\big(\overline{\Omega}_{a}\big)\ \big|\ \varphi\!\restriction\!\partial\Omega_{a}=0\,,\ \operatorname{supp}\varphi\ \text{compact}\big\}\,.\]
Using Green's second identity, we infer
\[\big\langle\psi_{n}\big|(H_{D}+\mu_{n})\varphi\big\rangle=\lim_{\varepsilon\,\to\,0^{+}}\int_{\Omega_{a}\setminus\pi_{12}^{\varepsilon}}\!\mathrm{d}\mathbf{X}\;\overline{\psi_{n}}\,\big(-\Delta+\mu_{n}\big)\varphi=\lim_{\varepsilon\,\to\,0^{+}}\int_{\partial\pi_{12}^{\varepsilon}}\!\mathrm{d}\sigma\,\Big(\overline{\psi_{n}}\,\frac{\partial\varphi}{\partial n}-\overline{\frac{\partial\psi_{n}}{\partial n}}\,\varphi\Big)\,,\]
where \(\pi_{12}^{\varepsilon}:=\big\{(\mathbf{x},\mathbf{y})\in\Omega_{a}\,\big|\,|\mathbf{x}|<\varepsilon\big\}\), \(n\) is the outward unit normal on \(\partial\pi_{12}^{\varepsilon}\) and we used \((-\Delta+\mu_{n})\psi_{n}=0\) in \(\Omega_{a}\setminus\pi_{12}\). On the
one hand, using the explicit expression (2.32) for \(\psi_{n}\) together with the identity (5.11), we get
\[\bigg|\int_{\partial\pi_{12}^{\varepsilon}}\!\mathrm{d}\sigma\;\overline{\psi_{n}}\,\frac{\partial\varphi}{\partial n}\bigg|\leqslant C\,\varepsilon\;\xrightarrow[\ \varepsilon\,\to\,0^{+}\ ]{}\;0\,,\]
for a suitable constant \(C>0\) depending on \(\varphi\) and \(n\), since \(|\psi_{n}(r,\rho)|\) grows at most like \(1/r\) for small \(r\) (uniformly on the compact support of \(\varphi\)) while the surface measure of \(\partial\pi_{12}^{\varepsilon}\) is of order \(\varepsilon^{2}\).
Similar arguments yield
\[-\lim_{\varepsilon\,\to\,0^{+}}\int_{\partial\pi_{12}^{\varepsilon}}\!\mathrm{d}\sigma\;\overline{\frac{\partial\psi_{n}}{\partial n}}\;\varphi=\int_{\mathrm{B}_{a}^{c}}\!\mathrm{d}\mathbf{y}\;\overline{\xi_{n}(\mathbf{y})}\;\varphi(\mathbf{0},\mathbf{y})=\big\langle\xi_{n}\big|\tau\varphi\big\rangle\,,\]
so that (5.12) holds for all \(\varphi\in\mathcal{D}(\Omega_{a})\). Since \(\mathcal{D}(\Omega_{a})\) is dense in \(\mathrm{dom}\big(H_{D}\big)\) with respect to the graph norm, (5.12) extends to all \(\varphi\in\mathrm{dom}\big(H_{D}\big)\) and the lemma is proved. ∎

By Lemma 5.1 and the boundary condition (5.5) satisfied by \(\psi_{n}\), the charge \(\xi_{n}\) fulfills \(\Gamma_{0}^{\mu_{n}}\xi_{n}=0\); hence, by Remark 2.3, each \(\Psi_{n}\) is an eigenvector of \(H_{D,0}\) associated to the negative eigenvalue \(E_{n}=-\mu_{n}=-(t_{n}/a)^{2}\). The asymptotic law (2.29) then follows from (5.7) and (5.8), and the proof of Theorem 2.4 is complete.
## Appendix A On the integral kernel for the Dirichlet resolvent
In this appendix we collect some results regarding the integral kernel \(\mathsf{R}^{\lambda}_{\mathrm{D}}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\) associated to the Dirichlet resolvent \(\mathsf{R}^{\lambda}_{\mathrm{D}}\coloneqq(\mathsf{H}_{\mathrm{D}}+\lambda)^{-1}\) (\(\lambda>0\)). We recall that throughout the paper we refer to the decomposition (2.2), namely,
\[\mathsf{R}^{\lambda}_{\mathrm{D}}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)} =\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}+\mathsf{ g}^{\lambda}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\,,\]
where \(\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\) is the resolvent kernel associated to the free Laplacian in \(\mathbb{R}^{6}\) and, for fixed \(\mathbf{X}^{\prime}\in\Omega_{\alpha}\), \(\mathsf{g}^{\lambda}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\) is the solution of the elliptic problem (2.3), _i.e._,
\[\left\{\begin{array}{ll}(-\Delta_{\mathbf{X}}+\lambda)\mathsf{g}^{\lambda} \big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}=0&\text{for}\ \ \mathbf{X}\in\Omega_{\alpha}\,,\\ \mathsf{g}^{\lambda}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}=-\,\mathsf{R} ^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}&\text{for}\ \ \mathbf{X}\in\partial\Omega_{\alpha}\,,\\ \mathsf{g}^{\lambda}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\ \to\ 0&\text{for}\ \ |\mathbf{X}|\ \to+\infty\,.\end{array}\right.\]
We first remark that by elementary spectral arguments it follows that
\[\mathsf{R}^{\lambda}_{\mathrm{D}}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)} =\mathsf{R}^{\lambda}_{\mathrm{D}}\big{(}\mathbf{X}^{\prime},\mathbf{X}\big{)}\,, \qquad\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}= \mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X}^{\prime},\mathbf{X}\big{)}\,,\] (A.1)
which entails, in turn,
\[\mathsf{g}^{\lambda}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}=\mathsf{g}^{ \lambda}\big{(}\mathbf{X}^{\prime},\mathbf{X}\big{)}\,.\]
Regarding \(\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\), a well known computation yields
\[\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}=\frac{1} {(2\pi)^{6}}\int_{\mathbb{R}^{6}}\mathrm{d}\mathbf{K}\,\frac{\mathrm{e}^{ \mathrm{i}\mathbf{K}\cdot(\mathbf{X}-\mathbf{X}^{\prime})}}{|\mathbf{K}|^{2} +\lambda}=\frac{\lambda}{(2\pi)^{3}}\,\frac{\mathrm{K}_{2}\big{(}\sqrt{\lambda }\,|\mathbf{X}-\mathbf{X}^{\prime}|\big{)}}{|\mathbf{X}-\mathbf{X}^{\prime}|^ {2}}\,,\] (A.2)
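As a consistency check, since \(\mathsf{K}_{2}(t)=\frac{2}{t^{2}}+O(1)\) for \(t\,\to\,0^{+}\) [26, §10.30(i)], in the limit \(\lambda\,\to\,0^{+}\) the kernel (A.2) reduces to
\[\mathsf{R}^{\lambda}_{0}\big(\mathbf{X},\mathbf{X}'\big)\;\to\;\frac{1}{4\pi^{3}\,|\mathbf{X}-\mathbf{X}'|^{4}}\,,\]
which is the fundamental solution \(\frac{1}{(n-2)\,\omega_{n-1}}\,|\mathbf{X}-\mathbf{X}'|^{2-n}\) of the Laplacian in dimension \(n=6\) (with \(\omega_{5}=\pi^{3}\) the measure of \(\mathbb{S}^{5}\)).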
where \(\mathsf{K}_{2}\) is the modified Bessel function of the second kind, _a.k.a._ the Macdonald function. Then, using a noteworthy summation theorem for Bessel functions [14, p. 940, Eq. 8.532.1], for \(|\mathbf{X}|\neq|\mathbf{X}'|\) we obtain3
Footnote 3: The identity (A.3) holds, in principle, only for \(|\mathbf{X}|<|\mathbf{X}^{\prime}|\). Yet, it can be readily extended to the whole set \(|\mathbf{X}|\neq|\mathbf{X}^{\prime}|\) using the basic symmetry relation for \(\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}\) in (A.1).
\[\mathsf{R}^{\lambda}_{0}\big{(}\mathbf{X},\mathbf{X}^{\prime}\big{)}=\frac{1}{2\pi^{3}}\sum_{\ell=0}^{\infty}\,(\ell+2)\,\mathsf{C}^{2}_{\ell}\bigg{(}\frac{\mathbf{X}\cdot\mathbf{X}^{\prime}}{|\mathbf{X}|\,|\mathbf{X}^{\prime}|}\bigg{)}\,\frac{\mathrm{I}_{\ell+2}\big{(}\sqrt{\lambda}\,|\mathbf{X}|\big{)}}{|\mathbf{X}|^{2}}\,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,|\mathbf{X}^{\prime}|\big{)}}{|\mathbf{X}^{\prime}|^{2}}\,,\] (A.3)
where \(\mathsf{C}^{2}_{\ell}\) are the Gegenbauer (ultraspherical) polynomials defined by the identity [14, SS8.930]
\[\frac{1}{(1-2\mathrm{s}\mathrm{u}+\mathrm{u}^{2})^{2}}=\sum_{\ell=0}^{\infty} \,\mathsf{C}^{2}_{\ell}(\mathsf{s})\,\mathsf{u}^{\ell}\,,\qquad\text{for}\ \ \mathsf{s}\in[-1,1],\ \mathsf{u}\in(-1,1)\,.\] (A.4)
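The expansion (A.3) is straightforward to verify numerically against the closed form (A.2). The following minimal sketch does so in Python with NumPy/SciPy (an assumed toolchain; the paper prescribes none), for arbitrary test values of \(\lambda\) and of \(|\mathbf{X}|<|\mathbf{X}^{\prime}|\):

```python
import numpy as np
from scipy.special import iv, kv, eval_gegenbauer

lam = 2.0                     # spectral parameter lambda > 0
r1, r2, c = 0.8, 1.5, 0.3     # |X| < |X'| and cosine of the angle between X, X'

# Closed form (A.2): R_0 = lambda/(2 pi)^3 * K_2(sqrt(lambda)|X-X'|) / |X-X'|^2
d = np.sqrt(r1**2 + r2**2 - 2*r1*r2*c)
closed = lam / (2*np.pi)**3 * kv(2, np.sqrt(lam)*d) / d**2

# Truncated expansion (A.3); note the prefactor 1/(2 pi^3), not 1/(2 pi)^3
series = sum((l + 2) * eval_gegenbauer(l, 2, c)
             * iv(l + 2, np.sqrt(lam)*r1) / r1**2
             * kv(l + 2, np.sqrt(lam)*r2) / r2**2
             for l in range(60)) / (2*np.pi**3)

print(closed, series)         # the two values agree to high accuracy
```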
In the last part of this appendix we derive a series representation for the remainder function \(\mathsf{g}^{\lambda}(\mathbf{X},\mathbf{X}^{\prime})\). Without loss of generality, we shall henceforth assume \(\mathbf{X}^{\prime}=(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) to lie on the \(6^{\mathrm{th}}\) axis, namely \(\mathbf{X}^{\prime}=\mathsf{y}^{\prime}_{3}\,\mathbf{e}_{6}\) with \(\mathbf{e}_{6}=(0,0,0,0,0,1)\). To simplify the notation, in the sequel we drop the dependence on \(\mathbf{X}^{\prime}\) and put
\[\mathsf{g}^{\lambda}(\mathbf{X})\equiv\mathsf{g}^{\lambda}(\mathbf{X},\mathbf{X} ^{\prime})\,.\] (A.5)
To proceed, we refer to the classical representation of the Laplace operator in hyper-spherical coordinates [2]. More precisely, let us introduce the set of coordinates \((\mathsf{r},\boldsymbol{\omega})\in\mathbb{R}_{+}\times\mathbb{S}^{5}\) and recall that
\[-\Delta_{\mathbf{X}}=-\,\frac{1}{\mathsf{r}^{5}}\,\partial_{\mathsf{r}}\big{(} \mathsf{r}^{5}\,\partial_{\mathsf{r}}\cdot\big{)}-\Delta_{\mathbb{S}^{5}}\,,\] (A.6)
where \(\Delta_{\mathbb{S}^{5}}\) is the Laplace-Beltrami operator on \(\mathbb{S}^{5}\). The latter operator is essentially self-adjoint in \(\mathrm{L}^{2}(\mathbb{S}^{5})\) and has pure point spectrum consisting of degenerate eigenvalues \(\sigma(-\Delta_{\mathbb{S}^{5}})=\{\ell(\ell+4)\,|\,\ell=0,1,2,\,\dots\}\). Correspondingly, a complete orthonormal set of eigenfunctions is given by the hyper-spherical harmonics \(\mathcal{Y}_{\ell,\mathbf{m}}(\boldsymbol{\omega})\)
with \(\ell\in\mathbb{N}_{0}\) and \(\mathbf{m}=(m_{0},\ldots,m_{4})\in\mathbb{Z}^{5}\) such that \(\ell=m_{0}\geqslant m_{1}\geqslant m_{2}\geqslant m_{3}\geqslant|m_{4}|\geqslant 0\). Taking this into account, we make the following ansatz for the generic solution of the differential equation \((-\Delta_{\mathbf{X}}+\lambda)\,g^{\lambda}=0\):
\[g^{\lambda}(r,\boldsymbol{\omega})\equiv g^{\lambda}\big{(}\mathbf{X}(r, \boldsymbol{\omega})\big{)}=\sum_{\ell,m}\mathcal{R}^{\lambda}_{\ell,m}(r)\, \mathcal{Y}_{\ell,m}(\boldsymbol{\omega})\,.\]
Using the basic identity (A.6), we obtain an ODE for \(\mathcal{R}^{\lambda}_{\ell,m}\). The solutions are of the form
\[\mathcal{R}^{\lambda}_{\ell,m}(r)=\alpha^{(\mathrm{I})}_{\ell,m}\,\frac{\mathrm{I}_{\ell+2}\big{(}\sqrt{\lambda}\,r\big{)}}{r^{2}}+\alpha^{(\mathrm{K})}_{\ell,m}\,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,r\big{)}}{r^{2}}\qquad\text{for some constants }\alpha^{(\mathrm{I})}_{\ell,m},\alpha^{(\mathrm{K})}_{\ell,m}\in\mathbb{R}\,.\]
Considering the asymptotic behavior of the Bessel functions \(\mathrm{I}_{\nu},\mathrm{K}_{\nu}\) with large arguments [26, SS10.30(ii)], it is necessary to fix \(\alpha^{(\mathrm{I})}_{\ell,m}=0\) to fulfill the condition \(g^{\lambda}\,\to\,0\) for \(r\,\to\,+\infty\). Furthermore, let us point out that the solution \(g^{\lambda}\) has to be invariant under rotations around the fixed vector \(\mathbf{X}^{\prime}\). Keeping in mind that we chose \(\mathbf{X}^{\prime}\) to lie on the \(6^{\text{th}}\) axis, this means that we have to fix \(\alpha^{(\mathrm{K})}_{\ell,m}=0\) for all \(\mathbf{m}\neq(\ell,0,0,0,0)\) (\(\ell\in\mathbb{N}_{0}\)). The previous arguments, together with a sum rule for the hyper-spherical harmonics \(\mathcal{Y}_{\ell,m}\)[2, p. 1372, Eq. 66], entail
\[g^{\lambda}(r,\boldsymbol{\omega})=\sum_{\ell\,=\,0}^{+\infty}\alpha_{\ell}\,C^{2}_{\ell}(\boldsymbol{\omega}\cdot\mathbf{e}_{6})\,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,r\big{)}}{r^{2}}\,,\]
or, equivalently,
\[g^{\lambda}(\mathbf{X})=\sum_{\ell\,=\,0}^{+\infty}\alpha_{\ell}\,C^{2}_{\ell }\left(\frac{\mathbf{X}\cdot\mathbf{X}^{\prime}}{|\mathbf{X}|\,|\mathbf{X}^{ \prime}|}\right)\,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,|\mathbf{X} |\big{)}}{|\mathbf{X}|^{2}}\,.\] (A.7)
Here, \(C^{2}_{\ell}\) are the Gegenbauer polynomials defined by (A.4) and \((\alpha_{\ell})_{\ell\,=\,0,1,2,\ldots}\subset\mathbb{R}\) are suitable coefficients. We now fix these coefficients so as to fulfill the non-homogeneous Dirichlet boundary condition in the second line of (2.3). In view of the identity (A.3), the said boundary condition for \(|\mathbf{X}|=a\) becomes
\[\sum_{\ell\,=\,0}^{+\infty}\alpha_{\ell}\,C^{2}_{\ell}\,\left(\frac{\mathbf{X }\cdot\mathbf{X}^{\prime}}{|\mathbf{X}|\,|\mathbf{X}^{\prime}|}\right)\frac{ \mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,a\big{)}}{a^{2}}=-\,\frac{1}{2\pi^{ 3}}\sum_{\ell\,=\,0}^{\infty}(\ell+2)\,C^{2}_{\ell}\bigg{(}\frac{\mathbf{X} \cdot\mathbf{X}^{\prime}}{|\mathbf{X}|\,|\mathbf{X}^{\prime}|}\bigg{)}\,\frac{ \mathrm{I}_{\ell+2}\big{(}\sqrt{\lambda}\,a\big{)}}{a^{2}}\,\frac{\mathrm{K}_{ \ell+2}\big{(}\sqrt{\lambda}\,|\mathbf{X}^{\prime}|\big{)}}{|\mathbf{X}^{ \prime}|^{2}}\,.\]
Upon varying \(\mathbf{X}\in\partial\Omega_{a}\), this implies
\[\alpha_{\ell}=-\,\frac{1}{2\pi^{3}}\,(\ell+2)\,\frac{\mathrm{I}_{\ell+2}\big{(} \sqrt{\lambda}\,a\big{)}}{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,a\big{)}} \,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,|\mathbf{X}^{\prime}|\big{)} }{|\mathbf{X}^{\prime}|^{2}}\,,\]
which, together with Eq. (A.7), ultimately yields
\[g^{\lambda}\big{(}\mathbf{X};\mathbf{X}^{\prime}\big{)}=-\,\frac{1}{2\pi^{3}} \sum_{\ell\,=\,0}^{\infty}\,(\ell+2)\,C^{2}_{\ell}\bigg{(}\frac{\mathbf{X} \cdot\mathbf{X}^{\prime}}{|\mathbf{X}|\,|\mathbf{X}^{\prime}|}\bigg{)}\,\frac{ \mathrm{I}_{\ell+2}\big{(}\sqrt{\lambda}\,a\big{)}}{\mathrm{K}_{\ell+2}\big{(} \sqrt{\lambda}\,a\big{)}}\,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{\lambda}\,| \mathbf{X}|\big{)}}{|\mathbf{X}|^{2}}\,\frac{\mathrm{K}_{\ell+2}\big{(}\sqrt{ \lambda}\,|\mathbf{X}^{\prime}|\big{)}}{|\mathbf{X}^{\prime}|^{2}}\,.\] (A.8)
Let us mention the following asymptotic expansions, for any fixed \(s\in[-1,1]\) and \(t>0\)[26, p. 256, Eqs. 10.41.1-2 and p.450, Eq. 18.14.4, together with p.136, Eq. 5.2.5 and p.140, Eq. 5.11.3]:
\[\mathrm{I}_{\nu}(t)\lesssim\frac{1}{\sqrt{2\pi\nu}}\left(\frac{e\,t}{2\nu}\right)^{\!\nu}\,,\qquad\mathrm{K}_{\nu}(t)\lesssim\sqrt{\frac{\pi}{2\nu}}\left(\frac{e\,t}{2\nu}\right)^{\!-\nu}\,,\qquad\mathrm{C}_{\nu}^{2}(s)\lesssim\nu^{3},\qquad\quad\text{for }\,\nu\,\to\,\infty\,.\]
Taking these into account it is easy to see that, for any fixed \(\mathbf{X},\mathbf{X}^{\prime}\in\Omega_{a}\), the series in Eq. (A.8) behaves as
\[\sum_{\ell\,=\,1}^{\infty}\,\ell^{3}\,\bigg{(}\frac{a}{|\mathbf{X}|}\bigg{)}^{\!\ell}\,\bigg{(}\frac{a}{|\mathbf{X}^{\prime}|}\bigg{)}^{\!\ell}\,,\]
which suffices to infer that (A.8) makes sense as a pointwise convergent series.
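The stated decay can also be illustrated numerically. In the sketch below (Python/SciPy assumed) the points are taken collinear, \(\mathbf{X}\cdot\mathbf{X}^{\prime}=|\mathbf{X}|\,|\mathbf{X}^{\prime}|\), so that the Gegenbauer factor stays sign-definite; the ratio of successive terms of (A.8) then approaches the geometric rate \(a^{2}/(|\mathbf{X}|\,|\mathbf{X}^{\prime}|)\):

```python
import numpy as np
from scipy.special import iv, kv, eval_gegenbauer

lam, a = 1.0, 1.0
r1, r2, c = 1.4, 1.9, 1.0          # |X|, |X'| > a; c = 1 means collinear X, X'

def term(l):
    # l-th term of the series (A.8), up to the overall -1/(2 pi^3) factor
    return ((l + 2) * eval_gegenbauer(l, 2, c)
            * iv(l + 2, np.sqrt(lam)*a) / kv(l + 2, np.sqrt(lam)*a)
            * kv(l + 2, np.sqrt(lam)*r1) / r1**2
            * kv(l + 2, np.sqrt(lam)*r2) / r2**2)

for l in (10, 20, 40):
    # successive-term ratio -> a^2/(|X||X'|), the geometric rate quoted above
    print(l, term(l + 1) / term(l), a**2 / (r1*r2))
```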
## Appendix B Derivation of \(\mu_{n}\) and \(\psi_{n}\)
Let \(\psi\) be the solution of the boundary value problem (5.2), (5.4), (5.5). If we define the function \(\zeta(r,\rho)=r\,\rho\,\psi(r,\rho)\), then the corresponding problem for \(\zeta\) reads
\[-\frac{\partial^{2}\zeta}{\partial r^{2}}-\frac{\partial^{2}\zeta}{\partial\rho^{2}}+\mu\,\zeta=0\,,\]

where the equation holds on the domain of (5.2), and \(\zeta\) satisfies the boundary conditions induced by (5.4) and (5.5) through the substitution \(\zeta=r\,\rho\,\psi\).
2304.00302 | Interference induced anisotropy in a two-dimensional dark state optical
lattice | We describe a two-dimensional optical lattice for ultracold atoms with
spatial features below the diffraction limit created by a bichromatic optical
standing wave. At every point in space these fields couple the internal atomic
states in a three-level Lambda coupling configuration. Adiabatically following
the local wavefunction of the resulting dark state yields a spatially uniform
Born-Oppenheimer potential augmented by geometric scalar and vector potentials
appearing due to spatially rapid changes of the wavefunction. Depending on
system parameters, we find that the geometric scalar potential can interpolate
from a 2D analogue of the Kronig-Penney lattice, to an array of tubes with a
zig-zag shaped barrier. The geometric vector potential induces a spatially
periodic effective magnetic field (the Berry's curvature) that can be tuned to
cause destructive interference between neighboring tubes, thereby decoupling
them at a critical point in parameter space. We numerically investigate the
energy spectrum including decay from the excited state, and find that the
adiabatic approximation is sound for strong coupling strengths, leading to
negligible loss in the dark state manifold. Furthermore, the spectrum is
well-described by a non-Hermitian tight binding model with on-site losses, and
hopping characterized by both loss and, surprisingly, gain. | Edvinas Gvozdiovas, Ian B. Spielman, Gediminas Juzeliūnas | 2023-04-01T12:02:25Z | http://arxiv.org/abs/2304.00302v1 | # Interference induced anisotropy in a two-dimensional dark state optical lattice
###### Abstract
We describe a two-dimensional optical lattice for ultracold atoms with spatial features below the diffraction limit created by a bichromatic optical standing wave. At every point in space these fields couple the internal atomic states in a three-level Lambda coupling configuration. Adiabatically following the local wavefunction of the resulting dark state yields a spatially uniform Born-Oppenheimer potential augmented by geometric scalar and vector potentials appearing due to spatially rapid changes of the wavefunction. Depending on system parameters, we find that the geometric scalar potential can interpolate from a 2D analogue of the Kronig-Penney lattice, to an array of tubes with a zig-zag shaped barrier. The geometric vector potential induces a spatially periodic effective magnetic field (the Berry's curvature) that can be tuned to cause destructive interference between neighboring tubes, thereby decoupling them at a critical point in parameter space. We numerically investigate the energy spectrum including decay from the excited state, and find that the adiabatic approximation is sound for strong coupling strengths, leading to negligible loss in the dark state manifold. Furthermore, the spectrum is well-described by a non-Hermitian tight binding model with on-site losses, and hopping characterized by both loss and, surprisingly, gain.
Realizing long-lived strongly correlated quantum matter with ultracold atoms in optical lattices is an ongoing challenge. Despite the now decades old realization of the superfluid to Mott insulator transition in 1D, 2D and 3D [1, 2, 3], there has been little progress in realizing strongly correlated systems such as fractional quantum Hall states. In both cases, interactions are enhanced by reducing the contribution of the kinetic energy: inhibiting tunneling in a deep optical lattice in the case of a Mott insulator, or quenching the kinetic energy with a magnetic field in the case of fractional quantum Hall states. The former case simply localizes particles to lattice sites, producing an uncorrelated insulator. Here we describe a new technique for creating nearly flat bands, even in the presence of strong tunneling, using Aharonov-Bohm like quantum interference between sites.
We consider a 2D extension to existing 1D dark state optical lattices studied theoretically [4, 5, 6, 7, 8], and realized experimentally [9, 10], enabling interference phenomena that are not possible in 1D. As indicated in Figs. 1(a,b), our lattice is created from a pair of orthogonal standing waves and a transverse running wave coupling three internal atomic states in a Lambda configuration. The local dark state of this scheme has zero energy and no excited state contribution. Atomic motion introduces geometric scalar and vector potentials [11, 12, 13], as well as non-adiabatic mixing to the excited state. The geometric potentials are maximal at the nodes of the optical standing wave where the atomic dark state changes rapidly. For atoms adiabatically following the dark state in 1D, this gave rise to a Kronig-Penney like lattice with barriers far narrower than the optical wavelength [4, 5, 9]. For specific parameters we find the natural 2D analog of this lattice consisting of square tiles spaced by narrow barriers. However, generically the geometric scalar and vector potentials--the latter quantified by the Berry curvature--can form a lattice of Dirac \(\delta\)-function like needles, or can take on a serpentine appearance, creating an array of undulating tubes. Unexpectedly, we observe that these tubes abruptly decouple at critical points in parameter space where Aharonov-Bohm interference from the geometric vector potential inhibits tunneling. This produces nearly completely flat bands transverse to the tubes with barriers that would otherwise allow substantial tunneling. The band flattening is analogous to the formation of dispersionless Landau levels with the application of a uniform magnetic field.
Figure 1(b) schematically illustrates our proposed experimental geometry. A laser beam traveling along \(\mathbf{e}_{z}\) drives one arm of a \(\Lambda\)-scheme that is then completed by a second arm consisting of four mutually interfering laser beams in the \(\mathbf{e}_{x}\)-\(\mathbf{e}_{y}\) plane. This geometry adds two additional degrees of freedom as compared to the existing 1D dark-state lattices: the relative intensity between the in-plane lasers as well as their relative phase (controlled by displacing a retro-reflection mirror). Changing the relative intensity converts linear barriers into serpentine ones. Tuning the relative phase morphs the lattice from a 2D Kronig-Penney like lattice of linear barriers to one with needle-like potential maxima. In addition, the phase difference breaks time reversal symmetry, introducing a non-zero Berry curvature.
Our manuscript is organized as follows. We introduce the basic formulation of our 2D \(\Lambda\)-lattice in Sec. I, and identify the associated symmetries in Sec. II. Section III describes our numerical method, and presents our main
results. Lastly, in Sec. IV we conclude with a discussion and outlook.
## I Formulation
### Hamiltonian for the 2D Lambda scheme
We consider ultracold atoms subject to a 2D atom-light interaction
\[\hat{V}(\mathbf{r})=-\hbar\left(\Delta+\frac{i\Gamma}{2}\right)\left|e\right\rangle\left\langle e\right|+\hbar\left[\Omega_{1}(\mathbf{r})\left|e\right\rangle\left\langle 1\right|+\Omega_{2}(\mathbf{r})\left|e\right\rangle\left\langle 2\right|+\mathrm{H.c.}\right] \tag{1}\]
describing the \(\Lambda\)-type coupling scheme [11; 12; 13] shown in Fig. 1(a). Here the atomic ground states \(\left|1\right\rangle\) and \(\left|2\right\rangle\) are coupled with strength \(\Omega_{j}(\mathbf{r})\) to the excited state \(\left|e\right\rangle\) with detuning \(\Delta\)[14]. The excited state has a spontaneous decay rate \(\Gamma\). Altogether this gives the atomic Hamiltonian
\[\hat{H}=\frac{\hat{\mathbf{p}}^{2}}{2M}+\hat{V}(\mathbf{r}) \tag{2}\]
in terms of the position \(\mathbf{r}=(x,y)\), momentum \(\hat{\mathbf{p}}=-i\hbar\mathbf{\nabla}\), and atomic mass \(M\). The above Hamiltonian is non-Hermitian due to the imaginary contribution \(i\Gamma/2\) arising from time-irreversible decay from the excited state \(\left|e\right\rangle\) in Eq. (1). In Sec. III.2 we numerically demonstrate that losses due to \(i\Gamma/2\) are minimal in the so-called dark state and thus Hermitian dynamics are maintained. Additionally, even the Hermitian contribution to (1) can break time reversal symmetry when any of the \(\Omega_{j}\) coefficients are complex.
The Hamiltonian acts in the space of state-vectors
\[\left|\psi\left(\mathbf{r}\right)\right\rangle=\sum_{j=1,2,e}\psi_{j}\left(\mathbf{r} \right)\left|j\right\rangle\,, \tag{3}\]
containing the atomic internal states \(\left|j\right\rangle\) and the associated wave-functions \(\psi_{j}\left(\mathbf{r}\right)\) for atomic center of mass motion. The corresponding full abstract state vector would be given by \(\left|\psi\right\rangle=\int d\mathbf{r}\left|\psi\left(\mathbf{r}\right)\right\rangle \otimes\left|\mathbf{r}\right\rangle\), with \(\left|\psi\left(\mathbf{r}\right)\right\rangle=\langle\mathbf{r}|\psi\rangle\).
### New basis with dark state
We now re-express \(\left|\psi\left(\mathbf{r}\right)\right\rangle\) in a basis containing a long-lived dark state in which geometric potentials with sub-wavelength features can emerge. A dark state is a (generally position-dependent) superposition of atomic ground states for which \(\hat{V}(\mathbf{r})\left|D\right\rangle=0\). Therefore, in such a basis, \(\hat{V}(\mathbf{r})\) contributes no potential energy, no coupling terms, and no spontaneous decay for \(\left|D\right\rangle\). This allows the two coupling arms in Fig. 1(a) to be driven on resonance with \(\left|e\right\rangle\) without loss from \(\left|D\right\rangle\).
Here we consider orthogonal dark
\[\left|D(\mathbf{r})\right\rangle=\frac{1}{\Omega}\left[\left.\Omega_{2}(\mathbf{r}) \left|1\right\rangle-\Omega_{1}(\mathbf{r})\left|2\right\rangle\right.\right] \tag{4}\]
and bright
\[\left|B(\mathbf{r})\right\rangle=\frac{1}{\Omega}\left[\left.\Omega_{1}(\mathbf{r}) \left|1\right\rangle+\Omega_{2}(\mathbf{r})\left|2\right\rangle\right.\right] \tag{5}\]
state superpositions, where \(\Omega=\sqrt{\left|\Omega_{1}\right|^{2}+\left|\Omega_{2}\right|^{2}}\) is an averaged coupling strength. Unlike \(\left|D(\mathbf{r})\right\rangle\), the bright state couples to the excited state \(\left|e\right\rangle\).
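The defining property \(\hat{V}(\mathbf{r})\left|D\right\rangle=0\) can be checked numerically at any fixed point. A minimal sketch (Python/NumPy assumed; \(\hbar=1\)), using the \(\Lambda\) coupling operator of Eq. (1) with arbitrary test couplings:

```python
import numpy as np

# Test values at one point r: Omega_1 real (plane wave evaluated at z = 0),
# Omega_2 complex as in Eq. (12); Delta and Gamma are arbitrary here.
O1, O2 = 0.9, -0.4 + 1.1j
Delta, Gamma = 0.0, 10.0
Om = np.sqrt(abs(O1)**2 + abs(O2)**2)

e, s1, s2 = np.eye(3, dtype=complex)        # basis order |e>, |1>, |2>

# Lambda coupling operator of Eq. (1), hbar = 1
V = O1 * np.outer(e, s1.conj()) + O2 * np.outer(e, s2.conj())
V = V + V.conj().T                          # add the H.c. terms
V += -(Delta + 1j * Gamma / 2) * np.outer(e, e.conj())

D = (O2 * s1 - O1 * s2) / Om                # dark state, Eq. (4)
print(np.allclose(V @ D, 0))                # True: V|D> = 0
```

The cancellation holds for arbitrary complex \(\Omega_{1,2}\), which is why the dark state is insensitive to both the detuning \(\Delta\) and the decay term \(i\Gamma/2\).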
In the basis of dark, bright and excited states, the state vector is
\[\left|\psi\left(\mathbf{r}\right)\right\rangle=\psi_{\text{D}}\left(\mathbf{r}\right) \left|D(\mathbf{r})\right\rangle+\psi_{\text{B}}\left(\mathbf{r}\right)\left|B(\mathbf{r} )\right\rangle+\psi_{\text{E}}\left(\mathbf{r}\right)\left|e\right\rangle\,, \tag{6}\]
where \(\psi_{\text{D}}\left(\mathbf{r}\right)\), \(\psi_{\text{B}}\left(\mathbf{r}\right)\) and \(\psi_{\text{E}}\left(\mathbf{r}\right)\) are wave-functions for the atomic center of motion in the corresponding internal states.
The atom-light coupling operator \(\hat{V}(\mathbf{r})\) is diagonalized by the trio of dressed states \(\left|D\right\rangle\) and \(\left|\pm\right\rangle\), where \(\left|\pm\right\rangle\) are superpositions of \(\left|B\right\rangle\) and \(\left|e\right\rangle\) only. When these states depend on position, they are not eigenstates of the full Hamiltonian \(\hat{H}\) due to the kinetic energy \(\hat{\mathbf{p}}^{2}/(2M)\); this leads to geometric potentials for the projected dynamics in each dressed state [13; 15]. In the present case, geometric potentials are introduced by the spatially varying coupling strengths \(\Omega_{1,2}(\mathbf{r})\).
Figure 1: a), Lambda(\(\Lambda\)) coupling scheme. Two atomic ground states \(\left|1\right\rangle\) and \(\left|2\right\rangle\) are laser-coupled with detuning \(\Delta\) to an excited state \(\left|e\right\rangle\) (with spontaneous decay rate \(\Gamma\)) with strengths \(\Omega_{1,2}(\mathbf{r})\). b), Schematic. \(\Omega_{1}(\mathbf{r})\) is a plane wave traveling along \(\mathbf{e}_{z}\) and \(\Omega_{2}(\mathbf{r})\) consists of two orthogonal standing waves created by the pictured interfering beams. A movable mirror imparts a controllable phase shift on the retro-reflected beam traveling along \(\mathbf{e}_{x}-\mathbf{e}_{y}\). c), Geometric scalar \(U\) and d), Berry curvature \(\mathbf{B}\) for motion in the dark state computed for \(\epsilon=0.4\), \(\epsilon_{c}=1\) and \(\chi=\pi/4\).
### Effective potentials for adiabatic dark state
When the total Rabi frequency \(\Omega\) at every point in space greatly exceeds the characteristic energy of the atomic center of mass motion, the atoms will adiabatically follow their initial dressed state with negligible transitions to the other dressed states. For dark-state atoms, the state vector (6) can be approximated as
\[\ket{\psi\left(\mathbf{r}\right)}\approx\psi_{\rm D}\left(\mathbf{r}\right)\ket{D(\mathbf{r} )}\,. \tag{7}\]
The validity of this approximation for 1D dark state lattices has been extensively studied [4; 5; 6]. We correspondingly arrive at the 2D adiabatic Hamiltonian in the dark state manifold [12; 13]
\[\hat{H}_{\rm D}=\frac{1}{2M}(-i\hbar\mathbf{\nabla}-\mathbf{A}_{\rm D})^{2}+U_{\rm D }\,, \tag{8}\]
where \(U_{\rm D}\) and \(\mathbf{A}_{\rm D}\) are the geometric scalar and vector potentials. Because our focus is on the dark state manifold, we suppress the subscript D in what follows. The scalar potential
\[U_{\rm D}\equiv U(\mathbf{r})=\frac{\hbar^{2}}{2M}\frac{\left(\mathbf{\nabla}\xi^{*} \right)\cdot\left(\mathbf{\nabla}\xi\right)}{\left(1+\left|\xi\right|^{2}\right)^{ 2}} \tag{9}\]
is plotted in Fig. 1(c) and Fig. 2. Here we introduce \(\xi(\mathbf{r})\equiv\Omega_{2}(\mathbf{r})/\Omega_{1}(\mathbf{r})\), the complex valued ratio of coupling strengths in Eq. (4). The geometric vector potential
\[\mathbf{A}_{\rm D}\equiv\mathbf{A}(\mathbf{r})=i\hbar\frac{\xi^{*}\mathbf{\nabla}\xi- \xi\mathbf{\nabla}\xi^{*}}{2\left(1+\left|\xi\right|^{2}\right)} \tag{10}\]
is non-zero only when \(\xi(\mathbf{r})\) has an imaginary component. Lastly, the geometric magnetic field shown in Fig. 1(d) is the curl of the vector potential
\[\mathbf{B}(\mathbf{r})=\mathbf{\nabla}\times\mathbf{A}=i\hbar\frac{\left(\mathbf{\nabla} \xi^{*}\right)\times\left(\mathbf{\nabla}\xi\right)}{\left(1+\left|\xi\right|^{2} \right)^{2}}\,. \tag{11}\]
We confirm the adiabatic assumption for our 2D lattice in Sec. III.2 by comparing numerical results from the full non-Hermitian Hamiltonian (2) to the adiabatic approximation (Fig. 4).
### Sub-wavelength effective potentials
We now describe a configuration of couplings \(\Omega_{1,2}(\mathbf{r})\) shown in Fig. 1(b) which yields rapid changes in the dark state wavefunction (4), parameterized by \(\mathbf{\nabla}\xi\) [entering Eqs. (9)-(11)], resulting in 2D geometric potentials with features below the optical diffraction limit. The first laser field \(\Omega_{1}(\mathbf{r})=\Omega_{p}\,\exp(ik_{\rm R}z)\) coupling \(\ket{1}\) to \(\ket{e}\) is a plane wave traveling along \(\mathbf{e}_{z}\) with amplitude \(\Omega_{p}\) and wavevector \(k_{\rm R}=2\pi/a\). We consider atoms that are tightly confined in the \(z=0\) plane, such that \(\exp(ik_{\rm R}z)\approx 1\). The second coupling field \(\Omega_{2}(\mathbf{r})\) results from a crossed pair of standing waves with amplitudes \(\Omega_{c}^{(\pm)}\) in the \(\mathbf{e}_{x}\)-\(\mathbf{e}_{y}\) plane [16; 17; 18]; a controllable path-length-difference \(d\) for the retro-reflected field along \((\mathbf{e}_{x}-\mathbf{e}_{y})/\sqrt{2}\) introduces a phase \(\chi=d/a\) which breaks time-reversal symmetry and allows for non-zero Berry curvature.
After making the rotating wave approximation (RWA) and phase shifting \(\ket{2}\) by \(e^{-i\chi/2}\), the Rabi frequencies of the coupling fields are
\[\Omega_{1}(\mathbf{r}) =\Omega_{p}\,, \tag{12}\] \[\Omega_{2}(\mathbf{r}) =\sum_{\pm}\pm\Omega_{c}^{(\pm)}e^{\mp i\chi/2}\cos(k_{\rm R}x\pm k _{\rm R}y)\,.\]
Inserting (12) into (9)-(11), the explicit forms of the geometric potentials for this choice of coupling fields are
\[\frac{U(\mathbf{r})}{E_{\rm R}} =\frac{\beta_{+}^{2}+\epsilon_{c}^{2}\beta_{-}^{2}}{\alpha^{2}}2 \epsilon^{2}(1+\epsilon_{c}^{2})\,,\] \[\frac{\mathbf{A}(\mathbf{r})}{\hbar k_{\rm R}} =\frac{\sin\left(2k_{\rm R}y\right)\mathbf{e}_{x}+\sin\left(2k_{ \rm R}x\right)\mathbf{e}_{y}}{\alpha}\epsilon_{c}\sin\chi\,, \tag{13}\] \[\frac{B_{z}(\mathbf{r})}{\hbar k_{\rm R}^{2}} =\frac{\cos\left(2k_{\rm R}x\right)-\cos\left(2k_{\rm R}y\right)} {\alpha^{2}}2\epsilon^{2}(1+\epsilon_{c}^{2})\epsilon_{c}\sin\chi\,,\]
where \(E_{\rm R}=\hbar^{2}k_{\rm R}^{2}/(2M)\) is the single photon recoil energy, and \(\mathbf{B}(\mathbf{r})=B_{z}(\mathbf{r})\,\mathbf{e}_{z}\) implying that the magnetic field is orthogonal to the \(\mathbf{e}_{x}\)-\(\mathbf{e}_{y}\) plane. Here we defined a factor
\[\alpha(\mathbf{r})=\epsilon^{2}(1+\epsilon_{c}^{2})+\eta_{+}^{2}+\epsilon_{c}^{2} \eta_{-}^{2}-2\epsilon_{c}\eta_{+}\eta_{-}\cos\chi\]
present in (13), and
\[\eta_{\pm}(\mathbf{r})=\cos\left(k_{\rm R}x\pm k_{\rm R}y\right),\qquad\beta_{\pm}( \mathbf{r})=\sin\left(k_{\rm R}x\pm k_{\rm R}y\right),\]
as well as the ratios of the laser field amplitudes
\[\epsilon=\frac{\Omega_{p}}{\sqrt{\Omega_{c}^{(+)^{2}}+\Omega_{c}^{(-)^{2}}}} \qquad\text{and}\qquad\epsilon_{c}=\frac{\Omega_{c}^{(-)}}{\Omega_{c}^{(+)}}. \tag{14}\]
The ratios \(\epsilon\) and \(\epsilon_{c}\) determine how rapidly the internal structure of the dark state changes near the zeros of \(\Omega_{2}(\mathbf{r})\), allowing control of both the height and spatial extent of the effective potentials in Eqs. (13).
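As a consistency check, the closed forms (13) can be compared with a direct finite-difference evaluation of Eqs. (9) and (11) in terms of \(\xi=\Omega_{2}/\Omega_{1}\). A sketch (Python/NumPy assumed; \(\hbar=M=k_{\rm R}=1\), so \(E_{\rm R}=1/2\)) at a single arbitrary point:

```python
import numpy as np

kR = 1.0
eps, eps_c, chi = 0.4, 1.0, np.pi/4        # parameters of Fig. 1(c,d)

# Amplitudes consistent with Eq. (14): sqrt(Ocp^2 + Ocm^2) = 1, Op = eps
Ocp = 1.0 / np.sqrt(1 + eps_c**2)
Ocm = eps_c * Ocp
Op = eps

def xi(x, y):                               # xi = Omega_2/Omega_1 from Eq. (12)
    return (Ocp*np.exp(-1j*chi/2)*np.cos(kR*(x + y))
            - Ocm*np.exp(+1j*chi/2)*np.cos(kR*(x - y))) / Op

x, y, h = 0.3, 1.1, 1e-5                    # sample point, finite-difference step
gx = (xi(x + h, y) - xi(x - h, y)) / (2*h)  # d(xi)/dx
gy = (xi(x, y + h) - xi(x, y - h)) / (2*h)  # d(xi)/dy
den = 1 + abs(xi(x, y))**2

U = 0.5 * (abs(gx)**2 + abs(gy)**2) / den**2            # Eq. (9)
Bz = 1j * (np.conj(gx)*gy - np.conj(gy)*gx) / den**2    # Eq. (11), z component

# Closed forms from Eq. (13), with E_R = 1/2 and hbar*kR^2 = 1 in these units
bp, bm = np.sin(kR*(x + y)), np.sin(kR*(x - y))
et_p, et_m = np.cos(kR*(x + y)), np.cos(kR*(x - y))
alpha = eps**2*(1 + eps_c**2) + et_p**2 + eps_c**2*et_m**2 \
        - 2*eps_c*et_p*et_m*np.cos(chi)
U13 = 0.5 * (bp**2 + eps_c**2*bm**2) / alpha**2 * 2*eps**2*(1 + eps_c**2)
B13 = (np.cos(2*kR*x) - np.cos(2*kR*y)) / alpha**2 \
      * 2*eps**2*(1 + eps_c**2) * eps_c*np.sin(chi)
print(U, U13, Bz.real, B13)                 # each pair agrees
```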
The scalar potential \(U\) is plotted in Fig. 2 for different values of \(\chi\) and \(\epsilon_{c}\) (\(B_{z}\), not shown, is graphically very similar to \(U\)). The peak values of the scalar and magnetic fields
\[U_{\rm max}=\frac{2E_{\rm R}}{\epsilon^{2}}\qquad\text{and}\qquad B_{\rm max}=\frac{4\hbar k_{\rm R}^{2}\epsilon_{c}\sin\chi}{\epsilon^{2}(1+\epsilon_{c}^{2})}\]
are proportional to \(2/\epsilon^{2}\), and thus diverge as \(\epsilon\to 0\). Additionally, \(B_{\rm max}\) depends on \(\epsilon_{c}\) and \(\chi\), reaching a maximum value with \(\epsilon_{c}=1\) and \(\chi=\pi/2\). We also numerically computed the full width at half maximum of the maxima of \(U\) and \(\mathbf{B}\) along their thinnest direction (as seen in Fig. 2(d), this direction has no particular association with \(\mathbf{e}_{x}\) or \(\mathbf{e}_{y}\)). We find that when \(\epsilon\ll 1\), i.e., \(\Omega_{2}\gg\Omega_{1}\)
the lattice has tall sub-wavelength barriers that can be further tuned by adjusting \(\epsilon_{c}\) and \(\chi\).
The ratio \(\epsilon_{c}\) determines the degree of serpentine bending in the geometric potential. This leads to an effective 1D to 2D transition; for example (with \(\chi=0\)), the potential transitions from a brick-like structure (with holes at the crossing points) at \(\epsilon_{c}=1\) [Fig. 2(a)], to an array of modulated walls [\(\epsilon_{c}=0.5\) in Fig. 2(d)], finally arriving at straight 1D walls [\(\epsilon_{c}=0\)]. Additionally, the lattices shown in Fig. 2(d)-(f) can be rotated by 90 degrees by replacing \(\epsilon_{c}\to 1/\epsilon_{c}\).
When \(\chi=\pi/2\) and \(\epsilon_{c}=1\) [Fig. 2(c)], the magnetic field and scalar potential reduce to a 2D array of needle-like peaks. Moreover, with \(\epsilon\to 0\) the 2D integral of \(U\) over the peak converges to \(E_{\rm R}a^{2}/2\pi=\hbar^{2}\pi/M\). As such, even for \(\epsilon\to 0\) the surface integral of the scalar potential does not diverge, leading to a 2D array of Dirac \(\delta\) function potentials--a 2D Dirac comb--with strength \(E_{\rm R}a^{2}/2\pi\). The same applies to the magnetic field \({\bf B}\) with strength \(2\pi\hbar\).
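This limit is easy to probe numerically by integrating the scalar potential of Eq. (13) over a patch containing a single peak. A sketch (Python/NumPy assumed; \(a=1\) and energies in units of \(E_{\rm R}\)):

```python
import numpy as np

a = 1.0
kR = 2*np.pi/a
eps, eps_c, chi = 0.03, 1.0, np.pi/2       # needle regime of Fig. 2(c)
ER = 1.0                                   # energies in units of E_R

def U(x, y):                               # scalar potential from Eq. (13)
    bp, bm = np.sin(kR*(x + y)), np.sin(kR*(x - y))
    ep, em = np.cos(kR*(x + y)), np.cos(kR*(x - y))
    al = (eps**2*(1 + eps_c**2) + ep**2 + eps_c**2*em**2
          - 2*eps_c*ep*em*np.cos(chi))
    return ER*(bp**2 + eps_c**2*bm**2)*2*eps**2*(1 + eps_c**2)/al**2

# integrate over a patch centered on the peak at (x, y) = (0, a/4)
n = 1601
x = np.linspace(-a/8, a/8, n)
y = np.linspace(a/8, 3*a/8, n)
X, Y = np.meshgrid(x, y)
dA = (x[1] - x[0]) * (y[1] - y[0])
print(U(X, Y).sum()*dA, a**2/(2*np.pi))    # -> E_R a^2/(2 pi) as eps -> 0
```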
Just as in the 1D case, intensity imbalances between the different arms of the \(\Omega_{2}({\bf r})\) field ultimately limit the minimum width of the barriers [9]. Since our primary focus is on interference effects rather than minimizing the barrier widths, this is not a significant consideration in this work.
In the next Section we consider the symmetries of the atom-light coupling which impose requirements on the eigensolutions of both the full and dark state Hamiltonians. These symmetries will be later utilized in the numerical treatment to unfold the energy bands.
## II Symmetries of the Hamiltonian
Including the couplings \(\Omega_{1,2}\) in Eq. (12), the full Hamiltonian (2) is invariant with respect to spatial shifts along \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\) by the lattice constant \(a\), i.e., \(\hat{H}(x+a,y)=\hat{H}(x,y+a)=\hat{H}(x,y)\), so that
\[\left[\hat{H},\exp\left(-\frac{i\,\mathbf{a}_{l}\cdot\hat{\bf p}}{\hbar}\right) \right]=0\,,\quad\text{with}\quad l=1,2\,, \tag{15}\]
with elementary unit vectors
\[\mathbf{\mathrm{a}}_{1}=a\,\mathbf{e}_{x},\qquad\quad\text{and}\qquad\quad\mathbf{ \mathrm{a}}_{2}=a\,\mathbf{e}_{y}\,. \tag{16}\]
Interestingly, the dark state geometric potentials are symmetric with regards to translations by \(a/2\), as evident in Figs. 1(c,d) and Fig. 2. By contrast, the full Hamiltonian (2) does not obey this symmetry. The couplings \(\Omega_{1}\) and \(\Omega_{2}\) are symmetric and anti-symmetric, respectively, with the \(a/2\) spatial shifts
\[\begin{split}\Omega_{1}(x+a/2,y)&=\Omega_{1}(x,y+a /2)=\Omega_{1}(x,y)\,,\\ \Omega_{2}(x+a/2,y)&=\Omega_{2}(x,y+a/2)=-\Omega_{2}( x,y)\,.\end{split} \tag{17}\]
Thus the Hamiltonian \(\hat{H}\) commutes with two combined shift operators
\[\hat{T}_{\mathbf{\mathrm{a}}_{l}/2}=\hat{U}\exp\left(-\frac{i\,\mathbf{\mathrm{a}}_{l} \cdot\hat{\bf p}}{2\hbar}\right)\,,\quad\text{where}\quad l=1,2\,, \tag{18}\]
and
\[\hat{U}=\left|2\right\rangle\left\langle 2\right|-\left|e\right\rangle \left\langle e\right|-\left|1\right\rangle\left\langle 1\right|,\quad\text{ with}\quad\hat{U}^{2}=\hat{I}\,. \tag{19}\]
The operator (18) combines a spatial translation by \(\mathbf{\mathrm{a}}_{l}/2\) with a \(\pi\) phase-flip of the states \(\left|e\right\rangle\) and \(\left|1\right\rangle\). Thus the
square of the combined operator \(\hat{T}_{\mathbf{a}_{l}/2}^{2}=e^{-i\,\mathbf{a}_{l}\cdot\hat{\mathbf{p}}/\hbar}\) reduces to a state-independent spatial shift by \(a\). The Hamiltonian \(\hat{H}\) and the combined shift operator \(\hat{T}_{\mathbf{a}_{l}/2}\) therefore share a set of eigenstates following the Bloch ansatz
\[\left|\psi_{s}^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle=e^{i\boldsymbol {q}\cdot\boldsymbol{r}}\left|g_{s}^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle\,, \tag{20}\]
with
\[\hat{H}\left|\psi_{s}^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle=E_{s}( \boldsymbol{q})\left|\psi_{s}^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle\,, \tag{21}\]
and
\[\hat{T}_{\mathbf{a}_{l}/2}\left|\psi_{s}^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle=e^{i\boldsymbol{q}\cdot\mathbf{a}_{l}/2}\left|\psi_{s}^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle\,, \tag{22}\]
with eigenenergy \(E_{s}(\boldsymbol{q})\), crystal momentum \(\boldsymbol{q}\), and dark state band index \(s=1,2,3,...\). In what follows we focus on the lowest band with \(s=1\) and therefore omit the band index [19].
The periodic part of the Bloch solution (20) satisfies
\[\left|g^{(\boldsymbol{q})}(\boldsymbol{r}+\mathbf{a}_{l}/2)\right\rangle=\hat{U}\left|g^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle \tag{23}\]
for a spatial shift of a half of the lattice constant. Expanding in terms of atomic internal states gives
\[\left|g^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle=\sum_{j=e,1,2}g_{j}^{ (\boldsymbol{q})}\left(\boldsymbol{r}\right)\left|j\right\rangle\,, \tag{24}\]
subject to the conditions
\[g_{j}^{(\boldsymbol{q})}(\boldsymbol{r}+\mathbf{a}_{l}/2)=-g_{j}^{(\boldsymbol {q})}(\boldsymbol{r})\quad\text{for }j=e,1\,, \tag{25}\]
and
\[g_{j}^{(\boldsymbol{q})}(\boldsymbol{r}+\mathbf{a}_{l}/2)=g_{j}^{(\boldsymbol {q})}(\boldsymbol{r})\quad\text{for }j=2\,. \tag{26}\]
The Bloch ansatz given by Eq. (20) is characterized by a 2D crystal momentum \(\boldsymbol{q}=q_{x}\mathbf{e}_{x}+q_{y}\mathbf{e}_{y}\) covering an extended Brillouin zone (BZ) with \(q_{x,y}\in[-k_{\text{R}},k_{\text{R}})\), a four fold increase in area compared to the BZ of a square lattice with period \(a\). Correspondingly, the area of the unit cell is reduced by a factor of four [20].
We note that when \(\chi=\pi/2\), the Hamiltonian supports an additional symmetry with respect to the spatial shifts by \(\mathbf{a}_{1}/4\pm\mathbf{a}_{2}/4\), as can be seen in Fig. 2(c,f). In this particular case the BZ can be further unfolded into a rhombus.
For \(\chi=0\) (and neglecting the decay rate \(\Gamma\)), the Hamiltonian \(\hat{H}\) obeys time reversal symmetry. In that case a simultaneous complex conjugation and inversion of the quasi-momentum leaves the eigenvalue equation (21) unchanged, giving
\[E^{(-\boldsymbol{q})}=E^{(\boldsymbol{q})}\quad\text{and}\quad g_{j}^{(- \boldsymbol{q})}(\boldsymbol{r})=\left[g_{j}^{(\boldsymbol{q})}(\boldsymbol{r} )\right]^{*}\,. \tag{27}\]
We numerically confirmed that this condition is well maintained for atomic dynamics in the dark state manifold where atomic decay is suppressed. Note that even for \(\chi\neq 0\) the condition \(E^{(-\boldsymbol{q})}=E^{(\boldsymbol{q})}\) holds because complex conjugation does not change \(\left|\Omega_{2}\right|\) and thus does not alter the energy spectra plotted in Figs. 3, 4.
## III Numerical results
We now describe our numerical method for obtaining the energy spectra and present our findings [21].
### Numerical method
The band structure is most easily solved by factoring out the plane wave component \(e^{i\boldsymbol{q}\cdot\boldsymbol{r}}\) in the Bloch eigenfunction (20)
\[\hat{H}^{(\boldsymbol{q})}\left|g^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle =E^{(\boldsymbol{q})}\left|g^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle\,, \tag{28}\]
transforming the Hamiltonian from Eq. (21) to
\[\hat{H}^{(\boldsymbol{q})}(\boldsymbol{r})=\frac{1}{2M}\left(-i\hbar \boldsymbol{\nabla}+\hbar\boldsymbol{q}\right)^{2}+\hat{V}(\boldsymbol{r})\,. \tag{29}\]
We obtain the energy spectrum numerically using the Fourier representation of the eigen-value equation (28) and of the periodic Bloch functions
\[\left|g^{(\boldsymbol{q})}(\boldsymbol{r})\right\rangle=\sum_{n_{x},n_{y}=-N}^ {N}e^{ik_{R}(xn_{x}+yn_{y})}\left|g^{(\boldsymbol{q})}(n_{x},\,n_{y})\right\rangle\,, \tag{30}\]
with \((2N+1)^{2}\) Fourier components. The resulting right-eigenvalue matrix problem
\[E^{(\boldsymbol{q})}\,g_{n_{x}n_{y}j}^{(\boldsymbol{q})}=\sum_{n_{x}^{\prime},n_{y}^{\prime},j^{\prime}}\left[H^{(\boldsymbol{q})}\right]_{n_{x}n_{y}j}^{n_{x}^{\prime}n_{y}^{\prime}j^{\prime}}\,g_{n_{x}^{\prime}n_{y}^{\prime}j^{\prime}}^{(\boldsymbol{q})} \tag{31}\]
is encoded with the combined set of indices \((n_{x},n_{y},j)\) including both the Fourier components \((n_{x},\,n_{y})\) and the atomic internal states \(j\). The Hamiltonian-matrix \([H^{(\boldsymbol{q})}]\) is sparsely populated with \(3^{2}(2N+1)^{4}\) elements and a typical filling ratio \(\lesssim 10^{-5}\) for \(N\approx 100\). We use shift inversion to amplify solutions near the bottom of the dark state manifold using libraries optimized for sparse matrix diagonalization [22; 23]. Section II implies that certain Fourier components of the periodic Bloch function \(g_{j}^{(\boldsymbol{q})}(n_{x},\,n_{y})\) must be zero to unfold the BZ; we strictly enforce this condition by zeroing out some of matrix elements as described in Appendix A.
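A scaled-down sketch of this procedure is shown below (Python/SciPy assumed; \(\hbar=M=k_{\rm R}=1\), so \(E_{\rm R}=1/2\)). It assembles the sparse Fourier-space matrix of Eq. (29) for the three internal states with the \(\Lambda\) coupling of Eq. (1), and applies shift-invert diagonalization near the bottom of the dark manifold; the cutoff is far smaller than in the text, and the parity filtering of Appendix A that unfolds the BZ is omitted for brevity:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

N = 12                                  # Fourier cutoff (N ~ 250 in the text)
eps, eps_c, chi = 0.1, 1.0, 0.0
Op = 2000 * 0.5                         # hbar*Omega_p = 2000 E_R, E_R = 1/2
Ocp = Op/eps/np.sqrt(1 + eps_c**2)      # from Eq. (14)
Ocm = eps_c * Ocp
Delta, Gamma = 0.0, 1000 * 0.5          # hbar*Gamma = 1000 E_R

n = np.arange(-N, N + 1); L = len(n)
qx = qy = 0.1                           # crystal momentum in units of kR

kx = (n[:, None] + 0*n[None, :] + qx).ravel()   # plane-wave momenta plus q
ky = (0*n[:, None] + n[None, :] + qy).ravel()
T = sp.diags(0.5*(kx**2 + ky**2))       # kinetic term of Eq. (29)

def shift(dx, dy):
    # multiplication by exp(i kR (dx*x + dy*y)): raises Fourier indices,
    # with components beyond the cutoff simply truncated
    return sp.kron(sp.eye(L, k=-dx), sp.eye(L, k=-dy))

I2 = sp.eye(L*L)
# Omega_2(r) of Eq. (12), with each cosine split into two exponentials
W2 = (Ocp*np.exp(-1j*chi/2)*(shift(1, 1) + shift(-1, -1))
      - Ocm*np.exp(+1j*chi/2)*(shift(1, -1) + shift(-1, 1))) / 2

Z = sp.csr_matrix((L*L, L*L))
# basis order (|e>, |1>, |2>) x Fourier components
H = sp.bmat([[T - (Delta + 1j*Gamma/2)*I2, Op*I2, W2],
             [Op*I2,                       T,     Z ],
             [W2.conj().T,                 Z,     T ]]).tocsc()

# shift-invert around E ~ 0, the bottom of the dark-state manifold
vals = sla.eigs(H, k=4, sigma=0.0, return_eigenvectors=False)
print(np.sort(vals.real))               # imaginary parts encode the losses
```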
Diagonalization of the adiabatic dark state Hamiltonian (8) is much more challenging numerically due to many non-zero Fourier components associated with the effective potentials \(U\) and \(\mathbf{A}\). Although the adiabatic dark state Hamiltonian-matrix \([H_{D}^{(\boldsymbol{q})}]\) is 9 times smaller with \((2N+1)^{4}\) elements, for \(N\approx 100\) it has a filling ratio of \(\lesssim 0.04\), making it significantly more dense. By removing Fourier components with negligible amplitudes, we reduce the filling ratio to \(\lesssim 0.01\).
### Energy dispersions
The real and imaginary parts of the energy dispersion describing the lowest Bloch band in the dark manifold
are plotted in Fig. 3 for four combinations of \(\epsilon_{c}\) and \(\chi\) (we avoid \(\chi=\pi/2\), corresponding to Figs. 2(c,f), which results in a gapless energy dispersion). The real part is qualitatively different for each combination of parameters in Fig. 3: in (a) the dispersion is reminiscent of that of a 2D square lattice; in (b) the curvature near \(\mathbf{q=0}\) has become anisotropic and tiny local minima have appeared at the corners of the BZ; in (c) the dispersion at \(\mathbf{q=0}\) has become a saddle point and the energies at the corners of the BZ continue to fall; the trend is completed in (d) where the dispersion has a global maximum at \(\mathbf{q=0}\).
The imaginary part (blue contours) in Fig. 3 results from a small admixture of the excited state and is everywhere negative; it contains no contribution from the gauge field \(\mathbf{A}\), and thus quantifies only anti-Hermitian losses. From the perspective of the dark state adiabatic potentials, this admixture results from non-adiabatic coupling to the bright states. However, despite the large value of \(\hbar\Gamma/E_{\mathrm{R}}=1000\) used in Fig. 3, we observe a relatively small population transfer into the excited state: for our parameters the imaginary energy can be as low as \(8.6\times 10^{-6}E_{\mathrm{R}}\) at the center of the BZ [Fig. 3(b)]. For a wide range of \(\epsilon_{c}\) and \(\chi\), the imaginary part of the energy averaged over the BZ is \(\approx-0.01E_{\mathrm{R}}\). The excited state occupation probability (and therefore losses) is further reduced at blue-detuning \(\Delta>0\), by reducing sharpness in the potential peaks with a larger value of \(\epsilon\), or by increasing the \(\Omega\)'s, as observed for 1D dark state lattices [4; 5; 6; 7; 8; 9].
Figure 4 compares the real part of the ground dark band dispersion computed using the full Hamiltonian [Eq. (2)] in (a,c), with that computed using the adiabatic approximation [Eq. (8)] in (b,d). In each case we computed the band structure for \(\chi=0\), \(\epsilon_{c}=1\) and \(\chi=1.4\) rad, \(\epsilon_{c}=0.09\) to highlight regimes where the adiabatic approximation is at its best [in (a,b)] and worst [in (c,d)], respectively. We find that these dispersions are visually indistinguishable even in the presence of needle-like barriers [Fig. 2(c)]. Quantitatively, the largest discrepancy is at the edge of the BZ, where losses (absent in the adiabatic approximation) are maximal. Conversely, states with crystal momentum \(\mathbf{q}\approx\mathbf{0}\) retain near-perfect adiabaticity even in the worst case scenario [Fig. 4(c,d)]. The regions of validity of the adiabatic approximation are analogous to those in 1D systems [5; 6; 7; 8; 9]: \(\Delta=0\); large \(\Omega\)'s, non-infinitesimal \(\epsilon\), and small, but non-zero \(\Gamma\) to avoid bright-dark resonances [4], all tend to reduce leakage from the dark state. The influence on adiabaticity of \(\chi\) and \(\epsilon_{c}\) is non-trivial and the parameters used in (c) reflect the global maximum in the \(\chi\) and \(\epsilon_{c}\) parameter space.
Generally, the relationship between losses and the parameters \(\epsilon\), \(\chi\) and \(\epsilon_{c}\) is complicated: all of these parameters have a significant influence on the non-adiabatic corrections. Overall, losses averaged over the BZ are minimal when \(\epsilon_{c}=1\) and \(\chi=0\); in this limit the effective magnetic field vanishes, leaving only the scalar potential which takes the form of an array of 2D cages with gaps as shown in Fig. 2(a) with the resulting energy dispersion
Figure 3: Real (Hermitian) and imaginary (anti-Hermitian) parts of the energy dispersion (represented by a non-linear colormap and blue contour lines, respectively) describing the lowest dark Bloch band. The parameters for (a,b,c,d) match that of Figs. 2(a,d,e,b). Other parameters are \(\epsilon=0.1\), \(\Delta=0\), \(\hbar\Omega_{p}=2000\,E_{\mathrm{R}}\), \(\hbar\Gamma=1000\,E_{\mathrm{R}}\) and \(N=250\). The maximum absolute value of the anti-Hermitian part of the energy is \(\approx 0.022E_{\mathrm{R}}\) in part c) at the corners of the BZ; the minimum is \(\approx 8.6\cdot 10^{-6}E_{\mathrm{R}}\) in part b) for \(\mathbf{q=0}\); the average over parts a)–d) is \(\approx 0.01E_{\mathrm{R}}\).
in Fig. 3(a) and Fig. 4(a,b). This scenario also yields a highly flat ground band with a large band gap (\(\approx 3E_{\rm R}\) with \(\epsilon\ll 1\)). In this limit, the dark state Hamiltonian (8) can effectively be approximated as a sum of two orthogonal 1D potentials with sub-wavelength barriers:
\[\hat{H}_{D}(\mathbf{r})\approx\hat{H}_{D}^{\rm(sep)}(x)+\hat{H}_{D}^{\rm(sep)}(y)\,, \tag{32}\]
giving energy bands
\[E(\mathbf{q})=E^{\rm(sep)}(q_{x})+E^{\rm(sep)}(q_{y})\,. \tag{33}\]
The 1D Hamiltonian
\[\hat{H}_{D}^{\rm(sep)}(x)=\frac{\hat{p}_{x}^{2}}{2M}+U_{D}^{\rm(sep)}(x) \tag{34}\]
contains a Kronig-Penney like potential
\[U_{D}^{\rm(sep)}(x)=\frac{E_{\rm R}\epsilon_{\rm sep}^{2}\,\cos^{2}\left(k_{ \rm R}x\right)}{\left[\epsilon_{\rm sep}^{2}+\sin^{2}\left(k_{\rm R}x\right) \right]^{2}}\,, \tag{35}\]
which appears in 1D analogues of our setup [4; 9], with \(\Omega_{2}(x)=\Omega_{c}\sin\left(k_{\rm R}x\right)\) and \(\epsilon_{\rm sep}=\Omega_{p}/\Omega_{c}\). The real part of the energy scales approximately as \(s_{x}^{2}+s_{y}^{2}\), with positive integers \(s_{x}\) and \(s_{y}\). For a sufficiently deep lattice with \(\epsilon\lesssim 0.2\), we numerically confirmed that the separable (32) and non-separable (8) Hamiltonians give similar energy dispersions when \(\epsilon\approx\epsilon_{\rm sep}\). Such an effective scenario depicts an array of gapless 2D cages with a constant wall height equal to half of the maximum height of the true scalar potential (13). Finally, we note that the full Hamiltonian (2) cannot be treated this way due to its internal structure.
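For completeness, the separable limit can be reproduced by a one-dimensional plane-wave diagonalization of (34)-(35), whose lowest band yields the 2D dispersion through Eq. (33). A sketch (Python/NumPy assumed; \(\hbar=M=k_{\rm R}=1\), so \(E_{\rm R}=1/2\); the Fourier components of the sub-wavelength potential are obtained by FFT):

```python
import numpy as np

kR = 1.0
eps = 0.1                                   # eps_sep = Omega_p / Omega_c
L = np.pi / kR                              # period of U_D^(sep), i.e. a/2
Ng = 512
x = L * np.arange(Ng) / Ng
U = 0.5 * eps**2 * np.cos(kR*x)**2 / (eps**2 + np.sin(kR*x)**2)**2  # Eq. (35)
Ug = np.fft.fft(U) / Ng                     # Fourier components U_G, G = 2*kR*m

Nc = 50                                     # plane-wave cutoff
m = np.arange(-Nc, Nc + 1)

def band(q):
    """Lowest band of the 1D Hamiltonian (34) at crystal momentum q."""
    H = np.diag(0.5 * (q + 2*kR*m)**2).astype(complex)
    H += Ug[(m[:, None] - m[None, :]) % Ng]  # potential couples G and G'
    H = (H + H.conj().T) / 2                 # symmetrize away FFT round-off
    return np.linalg.eigvalsh(H)[0]

qx, qy = 0.3 * kR, -0.7 * kR
print(band(qx) + band(qy))                   # 2D energy via Eq. (33)
```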
### Non-Hermitian tight binding model
We express the band structure, such as shown in Fig. 3, as a Fourier transform [24; 25; 26]:
\[E^{(\mathbf{q})}=\sum_{m_{x}=-\infty}^{\infty}\,\sum_{m_{y}=-\infty}^{\infty}J_{m_ {x},\,m_{y}}\exp\!\left(-i\mathbf{q}\,\cdot\,\delta\mathbf{R}_{m_{x},\,m_{y}} \right), \tag{36}\]
where \(J_{\mathbf{m}}\), describing hopping with range \(\mathbf{m}\equiv(m_{x},m_{y})\), has both real and imaginary parts. The element \(J_{0,0}\) is the on-site energy. These complex tight binding parameters describe conventional tunneling, and as is well established in photonic systems, can incorporate both gain and loss [27; 28; 29; 30; 31; 32]. We obtain the tight binding parameters from the Fourier transform
\[J_{m_{x},\,m_{y}}=k_{\rm R}^{-2}\int_{\rm BZ}E^{(\mathbf{q})}\exp\!\left(i\mathbf{q} \,\cdot\,\delta\mathbf{R}_{m_{x},\,m_{y}}\right)\mathrm{d}\mathbf{q} \tag{37}\]
of the numerically obtained band structure \(E^{(\mathbf{q})}\). For any value of \(\chi\) and \(\epsilon_{c}\), we find that \(\mathrm{Re}\,J_{m_{x},\,m_{y}}\) fully describes the band structure of the Hermitian part of the Hamiltonian, and therefore \(\mathrm{Im}\,J_{m_{x},\,m_{y}}\) fully accounts for the anti-Hermitian contribution of \(i\Gamma/2\).
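A sketch of this extraction is given below (Python/NumPy assumed). It takes \(\delta\mathbf{R}_{m_{x},m_{y}}=(m_{x}\mathbf{e}_{x}+m_{y}\mathbf{e}_{y})\,a/2\), matching the \(a/2\) period of the geometric potentials (an assumption, since \(\delta\mathbf{R}\) is not spelled out above), and uses a toy single-harmonic band with a known complex \(J_{1,0}\) in place of the computed dispersion; the BZ average supplies the normalization that inverts Eq. (36) on the grid:

```python
import numpy as np

a = 1.0
kR = 2*np.pi/a
Nq = 64
q = -kR + 2*kR*np.arange(Nq)/Nq            # uniform grid over the extended BZ
QX, QY = np.meshgrid(q, q, indexing='ij')

# Toy complex band with a known NN amplitude (hypothetical test value);
# in practice E would be the numerically computed lowest dark band.
J10 = -0.05 - 0.002j
E = J10*(np.exp(-1j*QX*a/2) + np.exp(1j*QX*a/2)
         + np.exp(-1j*QY*a/2) + np.exp(1j*QY*a/2))

def J(mx, my):
    # discrete version of Eq. (37) with dR = (mx, my) a/2 (assumed)
    w = np.exp(1j*(QX*mx + QY*my)*a/2)
    return (E*w).mean()

print(J(1, 0))   # recovers J10 exactly on this grid
print(J(0, 0))   # on-site term, zero for this toy band
```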
We begin by commenting on the impact of the available physical parameters \(\epsilon\), \(\epsilon_{c}\), \(\chi\). Starting with the square lattice scenario depicted in Fig. 2(a), such that \(\epsilon_{c}=1\) and \(\chi=0\); \(\epsilon\) determines the lattice depth, simultaneously modifying all of the hopping parameters while moving between shallow (\(\epsilon\to 1\)) and deep lattice (\(\epsilon\to 0\)) regimes. Next, tuning \(\chi\to\pi/2\) delocalizes atoms as the scalar potential walls shrink to point-like barriers as shown in Fig. 2(c); this can be counteracted by moving away from \(\epsilon_{c}=1\) towards \(\epsilon_{c}\to 0\) or \(\epsilon_{c}\to\infty\), restoring the longitudinal extent of barriers as they approach 1D walls, as can be seen by comparing Figs. 2(b,e). Alternatively, the spatial extent of barriers can be restored by increasing \(\epsilon\) slightly, consequently reducing their sharpness.
Figure 5: a), Hopping directions in a periodic array of tubes. The left(right) half illustrates odd(even) order tunneling between lattice sites. b), Hermitian (real) part of hopping amplitudes \(J_{m_{x},m_{y}}\) corresponding to the directions in part a) and describing the ground state in the dark manifold vs \(\epsilon_{c}\) with \(\chi=0.2\) rad. Dashed(solid) lines mark negative(positive) values. c), Real energy dispersion for the thin vertical line in b) with energy represented by the colormap in Fig. 3. d), Combinations of \(\epsilon_{c}\) and \(\chi\) for which the real part of \(J_{0,1}\) vanishes. The orange point coincides with c) and the orange line marks \(\chi=0.2\) rad used in b)–c). Other parameters for b)–d) are \(\epsilon=0.1\), \(\Delta=0\), \(\hbar\Omega_{p}=2000\,E_{\rm R}\), \(\hbar\Gamma=1000\,E_{\rm R}\) and \(N=260\).
The dependence of the real part of the hopping parameters on \(\epsilon_{c}\) is shown in Fig. 5(b) (with \(\epsilon=0.1\) and \(\chi=0.2\)), and the corresponding imaginary part is plotted in Fig. 6. Many of the hopping parameters are identical due to the symmetries discussed in Sec. II--leading to \(J_{m_{x},m_{y}}=J_{m_{y},m_{x}}=J_{-m_{x},-m_{y}}=J_{-m_{y},-m_{x}}\)--and are thus omitted (this includes the non-Hermitian part). Furthermore, when \(\epsilon_{c}=1\) the lattice becomes symmetrical with respect to \(x\) and \(y\) leading to \(J_{\pm m_{x},\pm m_{y}}=J_{\pm m_{y},\pm m_{x}}\).
### Band flattening
Figure 5(b) shows our main finding: in a narrow region of \(\epsilon_{c}\) (\(\epsilon_{c}\approx 0.498\), thin vertical line) the real part of the nearest-neighbor (NN) hopping \(J_{1,0}\) vanishes, and the remaining odd hopping terms such as \(J_{2,-1}\), \(J_{2,1}\), \(J_{3,-2}\) approach zero. The remaining even-order tunneling processes become dominant, with a leading contribution from \(J_{1,-1}=J_{-1,1}\) describing diagonally oriented tunneling within tubes (magenta); the next leading contributions are \(J_{1,1}\) and \(J_{2,-2}\), giving coupling between next nearest neighboring tubes, and longer range tunneling within tubes, respectively. This effectively describes an array of nearly decoupled tubes represented by the highly anisotropic energy dispersion in Fig. 5(c). The odd terms \(J_{2,1}\) and \(J_{2,-1}\) couple neighboring tubes, but are weaker than \(J_{1,-1}\) by up to 4 orders of magnitude. As we describe in Sec. III.5, this results from Aharonov-Bohm like quantum interference from the geometric vector potential.
Similarly, one can observe the decoupling point by fixing \(\epsilon_{c}\) and tuning \(\chi\). In fact, for every \(\chi\gtrsim 0.05\pi\), two values of \(\epsilon_{c}\) give the decoupled tube scenario [Fig. 5(d)]; these are related by \(\epsilon_{c}\to 1/\epsilon_{c}\) and result from identical lattices rotated by 90 degrees. These two branches merge at \(\epsilon_{c}=1\) where the lattice is symmetric with respect to 90 degree rotations. A more detailed discussion on these issues is presented in Sec. III.5.
We also examined the energy band gap at the special points. For \(\chi=0.2\) rad, \(\epsilon=0.1\) and \(\epsilon_{c}\approx 0.498\), both the indirect and direct energy gaps are \(\approx 1.25E_{\rm R}\). They can be made wider (with an upper limit of \(3E_{\rm R}\)) while maintaining the weakly coupled tube scenario by approaching \(\epsilon_{c}\to 1\), and by reducing \(\chi\) and \(\epsilon\).
We now turn to the non-Hermitian part of the energy where our findings are no less interesting. The imaginary contribution to the on-site energy \({\rm Im}\,J_{0,0}\approx-0.01E_{\rm R}\) is always negative, describing on-site atom loss. However, the imaginary part of the energy is nearly zero for crystal momentum \(\mathbf{q}=\mathbf{0}\), implying that the sum of the imaginary tight binding parameters \(J_{m_{x},m_{y}}\) entering Eq. (36) is nearly zero. In Fig. 6 we demonstrate that this results from hopping matrix elements with imaginary components of both signs - an example of tight-binding gain-loss balance also observed in 1D dark state lattices with decay [8], which appears despite a strictly lossy Hamiltonian (2).
### Quantum interference
Here we qualitatively explain the suppression of intertube tunneling by considering trajectories linking neighboring tubes as sketched in Fig. 7(a,b). A quantitatively complete path integral description involving the sum over all paths is not needed to understand the basic origin of the suppression.
Figure 6: Anti-Hermitian contribution to the tight binding parameters \(J_{m_{x},m_{y}}\) shown in Fig. 5(a,b). Dashed(solid) lines mark negative(positive) values.
To this end, we consider simple ray-like paths connecting the centers of neighboring lattice sites that undergo Snell's law type refraction at the potential barriers (for this argument we do not consider the reduction in transmission amplitude due to reflections). We compute the phase difference
\[\phi_{B}=\frac{1}{\hbar}\left(\int_{C_{1}}\mathbf{A}\cdot\mathrm{d}\mathbf{r}-\int_{C_{2}}\mathbf{A}\cdot\mathrm{d}\mathbf{r}\right)=\frac{1}{\hbar}\oint_{C}\mathbf{A}\cdot\mathrm{d}\mathbf{r} \tag{38}\]
associated with paths \(C_{1}\) (red) and \(C_{2}\) (blue) that combine to encircle the tall barrier. As illustrated in Fig. 7(a,b), these together form a closed contour \(C\). The accumulated phase \(\phi_{B}\) is thus the line integral of \(\mathbf{A}\) along \(C\) (equal to the integral of \(\mathbf{B}\) within \(C\) by Stokes' theorem); when \(\phi_{B}=\pi+2\pi n\), for integer \(n\), these two paths destructively interfere, suppressing tunneling.
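The phase of Eq. (38) can be evaluated directly from the closed form of \(\mathbf{A}\) in Eq. (13). The sketch below (Python/NumPy assumed; \(\hbar=1\), \(a=1\)) integrates \(\mathbf{A}\) counter-clockwise around a square loop enclosing a single needle of \(\mathbf{B}\) in the regime of Fig. 2(c), recovering the flux of \(2\pi\hbar\) per needle quoted in Sec. I:

```python
import numpy as np

a = 1.0
kR = 2*np.pi/a
eps, eps_c, chi = 0.05, 1.0, np.pi/2      # needle regime of Fig. 2(c)

def A(x, y):                               # geometric vector potential, Eq. (13)
    et_p, et_m = np.cos(kR*(x + y)), np.cos(kR*(x - y))
    al = (eps**2*(1 + eps_c**2) + et_p**2 + eps_c**2*et_m**2
          - 2*eps_c*et_p*et_m*np.cos(chi))
    Ax = kR*np.sin(2*kR*y)*eps_c*np.sin(chi)/al
    Ay = kR*np.sin(2*kR*x)*eps_c*np.sin(chi)/al
    return Ax, Ay

# square loop of half-width a/8 around the peak at (0, a/4), traversed CCW
t = np.linspace(0, 1, 5001)[:-1]
w, x0, y0 = a/8, 0.0, a/4
xs = np.concatenate([x0 - w + 2*w*t, (x0 + w)*np.ones_like(t),
                     x0 + w - 2*w*t, (x0 - w)*np.ones_like(t)])
ys = np.concatenate([(y0 - w)*np.ones_like(t), y0 - w + 2*w*t,
                     (y0 + w)*np.ones_like(t), y0 + w - 2*w*t])
dx = np.roll(xs, -1) - xs
dy = np.roll(ys, -1) - ys
Ax, Ay = A(xs, ys)
phi = np.sum(Ax*dx + Ay*dy)                # line integral of Eq. (38), hbar = 1
print(phi/np.pi)                           # -> 2 (one flux quantum) as eps -> 0
```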
We investigated two families of contours.
* (A) Dashed: these cross the scalar potential barrier at its minimum [Fig. 7(a)] and fully enclose the magnetic field peak [Fig. 7(b)].
* (B) Solid: these paths are derived from the dashed contour by symmetrically moving the left and right corners along the scalar potential wall until \(\phi_{B}=\pi\).
Contour (A) was selected to minimize the potential energy cost of the path at the expense of increased length and kinetic energy owing to the larger corner angle. For the parameters used in Fig. 5(c), this contour results in \(\phi_{B}\approx 1.3\pi\): larger than needed for destructive interference. Indeed, Fig. 7(c) plots the area enclosed by these trajectories as a function of \(\epsilon_{c}\), and shows that the optimal trajectory (B) is always reduced in size. This indicates that the representative (i.e. saddle point) trajectory minimizes a combination of potential and kinetic energy. Furthermore, at \(\epsilon_{c}=1\) (the point where the lattice has 90 degree rotational symmetry) the trajectories' areas are maximized, and as shown in Fig. 7(d) the barrier height (blue) and width (red) pertaining to contour (B) are minimized. This corresponds to paths with the most extreme trade-off: minimal potential energy and maximal kinetic.
Our argument qualitatively describes first order tunneling such as \(J_{1,0}\). More generally, this description also explains suppression of only odd-order tunneling processes. As an example, consider the even-order hopping parameters \(J_{1,1}\) and \(J_{2,0}\). \(J_{1,1}\) tunneling is achieved by a single classical path that cuts through the scalar potential minima (thus without any option for interference effects). We can explain \(J_{2,0}\) in terms of a stacked pair of solid trajectories, but in this case \(\phi_{B}=2\pi\), leading to constructive interference. In general, even order tunneling terms are associated with even integer multiples of \(\pi\) (either constructive interference, or none at all) and odd-order trajectories have odd-integer multiples of \(\pi\) (destructive interference). Lastly, \(J_{1,-1}\) dominates because its path is completely unobstructed.
Moreover, we observe destructive interference of NN tunneling in the higher energy bands of the dark manifold for similar parameter values, supporting the generic applicability of our classical ray model.
## IV Discussion and Outlook
The 2D lattice featuring sub-wavelength structures considered here yields highly tunable geometric scalar and vector potentials with minimal spontaneous emission. The scalar potential can yield: a 2D square lattice with sub-wavelength barriers, an array of Delta function-like peaks (a 2D Dirac comb), or a lattice of interacting zigzag tubes. Furthermore, the band structure is greatly affected by the geometric vector potential where tunneling between tubes can be suppressed due to Aharonov-Bohm type destructive interference.
These lattices can be used to realize novel many body phases. When tunneling is suppressed in conventional deep lattices, the associated maximally localized Wannier orbitals are very strongly confined to individual lattice sites. In the present case, both intra- and inter-tube interactions are enhanced even at near-zero inter-tube tunneling, owing to the relatively shallow barriers and concomitantly extended Wannier orbitals.
From a broader perspective, this technique can create lattices with features well below the optical diffraction limit wherever the interfering laser beams in the \(\mathbf{e}_{x}\)-\(\mathbf{e}_{y}\) plane approach zero. Changing the number and intersection angles of these in-plane beams therefore allows for a range of lattice geometries, including quasi-crystalline. In addition, the dark state lattice discussed here can also be extended to disordered configurations by using an optical speckle field for \(\Omega_{2}\) in Fig. 1 rather than a standing wave potential. The resulting dark states feature disordered geometric potentials, including a disordered magnetic field. From our observation that the synthetic magnetic field can be used to destroy many of the hopping parameters, one can expect that a disordered magnetic field could create non-trivial tunneling paths. In the broader context of localization in 2D disordered systems [33], the localization properties of such a time-reversal symmetry breaking disorder potential is unclear [34].
###### Acknowledgements.
The authors thank E. Gutierrez, S. Subhankar and E. Benck for carefully reading the manuscript. This work was supported by the Research Council of Lithuania (Grant No. S-MIP-20-36). IBS acknowledges support by the National Institute of Standards and Technology, and the National Science Foundation through the Quantum Leap Challenge Institute for Robust Quantum Simulation (Grant No. OMA-2120757). GJ and IBS conceptualized the work; EG carried out all numerical simulations and analytical derivations, and
created all the figures. All authors contributed equally to writing the manuscript.
## Appendix A Unfolding the BZ
Here we explain our numerical recipe for obtaining the Hamiltonian-matrix \([H^{(\mathbf{q})}]\) that describes the unfolded BZ. Using the symmetries discussed in Sec. II, we modify the Hamiltonian-matrix by zeroing out some of the matrix elements.
The conditions for non-zero matrix elements are determined from Eqs. (19), (25) and (26). We first define \(a/2\) symmetry constants \(\mathcal{M}_{j}\) for each internal state. After choosing \(\hat{U}\) according to Eq. (19), \(\mathcal{M}_{j}\) are given by:
\[\mathcal{M}_{j}=\begin{cases}0&\text{for }j=2\\ 1&\text{for }j=e,1\end{cases}\,. \tag{A1}\]
Thus \(\mathcal{M}_{j}\) is even for \(j=2\) and odd for \(j=e,1\) following Eqs. (25)-(26). It then follows, for example in the case of \(j=2\), that only the even Fourier components \(g_{2}^{(\mathbf{q})}(2n_{x},2n_{y})\) describing \(g_{2}^{(\mathbf{q})}(\mathbf{r})\) can be non-zero, since the latter is an even function with regard to shifts by \(a/2\) in \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\), giving \(\mathcal{M}_{2}=0\). The opposite is true for \(j=e,1\). This can be written mathematically as
\[\begin{split}& g_{j}^{(\mathbf{q})}(n_{x},n_{y})\neq 0\,,\quad\text{if}\\ &(-1)^{\mathcal{M}_{j}+n_{x}}=1\quad\text{and}\quad(-1)^{\mathcal{M}_{j}+n_{y}}=1\,.\end{split} \tag{A2}\]
The non-zero Hamiltonian-matrix elements are then
\[\begin{split}&\left[H^{(\mathbf{q})}\right]_{n_{x}n_{y}j}^{n_{x}^{\prime}n_{y}^{\prime}j^{\prime}}\neq 0\,,\quad\text{if}\\ &(-1)^{\mathcal{M}_{j}+n_{x}}=1\quad\text{and}\quad(-1)^{\mathcal{M}_{j}+n_{y}}=1\quad\text{and}\\ &(-1)^{\mathcal{M}_{j^{\prime}}+n_{x}^{\prime}}=1\quad\text{and}\quad(-1)^{\mathcal{M}_{j^{\prime}}+n_{y}^{\prime}}=1\,.\end{split} \tag{32}\]
We thus arrive at a new Hamiltonian-matrix with eigensolutions characterized by the extended BZ \(q_{x,y}\in[-k_{\text{R}},k_{\text{R}})\). The Fourier-space eigenvectors (30) diagonalizing this matrix are not truncated: all of the \(3(2N+1)^{2}\) Fourier components (including the ones equal to zero) describe the solution, and so the numerically obtained eigenvectors for each \(\mathbf{q}\)-value describe a real space unit cell of area \(a^{2}\) (as though the BZ were not unfolded). We note that this method of zeroing out matrix elements is sub-optimal - a better solution would be to truncate the Fourier space, reducing the size of the Hamiltonian-matrix.
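To make this recipe concrete, the following minimal NumPy sketch (our own illustration; the array layout and function names are assumptions, not the authors' code) builds the parity masks of Eq. (31) and zeroes out the forbidden elements of a given block of the Hamiltonian-matrix per Eq. (32):

```python
import numpy as np

def allowed_components(N, M_j):
    """Boolean (2N+1) x (2N+1) mask over Fourier indices n_x, n_y in -N..N:
    True where (-1)**(M_j + n_x) = (-1)**(M_j + n_y) = 1, i.e. Eq. (31)."""
    ns = np.arange(-N, N + 1)
    ok = (M_j + ns) % 2 == 0      # (-1)**(M_j + n) = 1 iff M_j + n is even
    return np.outer(ok, ok)

def filter_block(H_block, N, M_jp, M_j):
    """Zero out entries of the (j', j) block of [H^(q)] violating Eq. (32).
    H_block has shape ((2N+1)**2, (2N+1)**2), rows flattened over (n_x', n_y')
    and columns over (n_x, n_y)."""
    rows = allowed_components(N, M_jp).ravel()
    cols = allowed_components(N, M_j).ravel()
    return np.where(np.outer(rows, cols), H_block, 0.0)

# symmetry constants of Eq. (30): even for j = 2, odd for j = e, 1
M = {"e": 1, "1": 1, "2": 0}
```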
The same filtering procedure is valid for the adiabatic Hamiltonian-matrix \([H_{D}^{(\mathbf{q})}]\). It is described by one internal state \(j=j_{D}\) - the dark state (4) - which is invariant with respect to the combined shift operator \(\hat{T}_{\mathbf{a}_{1}/2}\) defined in (18):
\[\hat{T}_{\mathbf{a}_{1}/2}\left|D(\mathbf{r})\right>=\left|D(\mathbf{r})\right>\,, \tag{33}\]
therefore one has \(\mathcal{M}_{D}=0\) (\(g_{D}^{(\mathbf{q})}\) is an even function, see also Fig. 1(c,d)).
|
2306.16319 | Statistics of Long-Range Force Fields in Random Environments: Beyond
Holtsmark | Since the times of Holtsmark (1919), statistics of fields in random
environments have been widely studied, for example in astrophysics, active
matter, and line-shape broadening. The power-law decay of the two-body
interaction, of the form $1/|r|^\delta$, and assuming spatial uniformity of the
medium particles exerting the forces, imply that the fields are fat-tailed
distributed, and in general are described by stable L\'evy distributions. With
this widely used framework, the variance of the field diverges, which is
non-physical, due to finite size cutoffs. We find a complementary statistical
law to the L\'evy-Holtsmark distribution describing the large fields in the
problem, which is related to the finite size of the tracer particle. We
discover bi-scaling, with a sharp statistical transition of the force moments
taking place when the order of the moment is $d/\delta$, where $d$ is the
dimension. The high-order moments, including the variance, are described by the
framework presented in this paper, which is expected to hold for many systems.
The new scaling solution found here is non-normalized similar to infinite
invariant densities found in dynamical systems. | Avraham Samama, Eli Barkai | 2023-06-28T15:51:05Z | http://arxiv.org/abs/2306.16319v1 | # Statistics of Long-Range Force Fields in Random Environments: Beyond Holtsmark
###### Abstract
Since the times of Holtsmark (1919), statistics of fields in random environments have been widely studied, for example in astrophysics, active matter, and line-shape broadening. The power-law decay of the two-body interaction, of the form \(1/|r|^{\delta}\), and assuming spatial uniformity of the medium particles exerting the forces, imply that the fields are fat-tailed distributed, and in general are described by stable Levy distributions. With this widely used framework, the variance of the field diverges, which is non-physical, due to finite size cutoffs. We find a complementary statistical law to the Levy-Holtsmark distribution describing the large fields in the problem, which is related to the finite size of the tracer particle. We discover bi-scaling, with a sharp statistical transition of the force moments taking place when the order of the moment is \(d/\delta\), where \(d\) is the dimension. The high-order moments, including the variance, are described by the framework presented in this paper, which is expected to hold for many systems. The new scaling solution found here is non-normalized, similar to infinite invariant densities found in dynamical systems.
## I Introduction
In 1919, Holtsmark considered the problem of the distribution of force fields in the context of the chaotic motion of charged particles in a plasma [1]. Similarly, Chandrasekhar and Von Neumann examined the distribution of gravitational forces in the universe [2]. The basic question studied was the following: in an infinite system/universe with uniformly distributed charges/masses, what is the distribution of forces projected along the \(z\)-axis, \(F_{z}\), acting on a tracer located at the origin [3]? This distribution peaks at \(F_{z}=0\), and its mean is zero due to symmetry; however, the interesting aspect of the solution is that the variance of the field diverges, which is argued to be unphysical (see below) [1; 2; 3; 4; 5; 6; 7; 8]. Mathematically, Holtsmark's problem is related to the generalized central limit theorem [9], _i.e._ Levy statistics, in which the distribution is fat-tailed [10; 11]. The full connection to Levy's stable laws, with truly vast applications [12; 13; 14; 15; 16], is only seen by extending the original works [1; 2] to include other force fields beyond Coulomb and Newton's gravitation law (see below). The statistical law discovered by Holtsmark and others is related to the power-law decay in space of forces acting between two bodies [3]. Hence, the applications of this basic model and its extensions are found in many fields, for example, plasma physics [17], astrophysics [2; 3; 18], swimming micro-organisms [19], glassy systems [20; 21], forces in systems composed of dipoles [6; 7], NMR [22], Olbers' paradox [18], and inhomogeneous line-shape broadening [23].
Notwithstanding previous works, here we introduce a statistical law complementary to those famous results. Strictly speaking, the diverging variance of the force field is unphysical, as the original theory neglects an important excluded-volume effect; namely, the size of the tracer is originally taken to be zero. Mathematically, one goal of this letter is to use tools from infinite ergodic theory [23; 24; 25; 26; 27; 28; 29; 30; 31] to find a complementary statistical law for the force distribution. At the center of infinite ergodic theory stands the infinite invariant density [32; 33], which is a non-normalized function, hence its name. We show how this function can be used to describe the statistics of the force field when the tracer size is finite and the density of bath particles is low. Importantly, in the field of active matter, this tool describes the largest forces in the problem, and these are crucial for current-day studies.
Recently, considerable work has been devoted to active transport, for example, self-propelled colloids and biological swimming microorganisms [34; 35]. The phenomenology of these systems is extremely rich, but one aspect of the problem is the "static force fields akin to Holtsmark distribution" [8; 16; 36; 19], which in turn control the dynamical features of the motion. The interaction is mediated by long-ranged hydro-dynamical dipole force fields [34], but it has a cutoff length scale \(a\), defined below, just like other realistic forces. This cutoff scale is of great importance, as the forces the tracer experiences cannot be arbitrarily large. Hence, as mentioned, the Holtsmark approach, which yields a diverging variance of the force, needs modification. While the standard treatment assumes mono-scaling, namely, that the statistics of the force field is determined by a single scale, which is the Holtsmark scale defined below, we will show how the field distribution exhibits bi-scaling [37; 38; 39], accompanied by a sharp statistical transition. This study is important for a vast number of systems [2; 3; 4; 5; 6; 8], and, with some modifications, for ones driven by long-range active forces [34; 40]. Simply put, the infinite density found in this letter describes the statistical properties of the largest forces in the problem, and these are important in the study of extreme events in many systems.
## II Model
Consider a rigid tracer of radius \(a\) centered in a sphere of volume \(V\to\infty\) in dimension \(d=2\) or \(d=3\). There are \(N\to\infty\) bath particles distributed uniformly at random inside the system, resulting in a finite overall density \(\rho=N/V\)[3; 4; 5; 13]. The bath particles are treated as size-less charges, dipoles, masses, etc., and they cannot overlap with the rigid body at the origin. This in turn implies that we are considering the low-density limit of the model, where spatial correlations in the bath are neglected. Each particle applies a force on the tracer that decays like a power-law with the distance. The Cartesian axes in \(d=3\) are denoted as \((x,y,z)\) and in \(d=2\), \((x,z)\), so the total force along the \(z\)-axis, denoted as \(F_{z}\), is obtained by adding all the \(z\) components of the forces applied on the tracer by all the \(N\) particles,
\[F_{z}=\sum_{i=1}^{N}(F_{i})_{z}=\sum_{i=1}^{N}\frac{C\cos(\theta_{i})}{|{\bf r} _{i}|^{\delta}}, \tag{1}\]
where \(C\) is a constant determined by the type of the particles. For the Coulomb force, \(\delta=2\) and \(C=q^{2}/4\pi\epsilon_{0}\epsilon_{r}\), where \(\epsilon_{r}\) is the relative dielectric constant of the medium and \(\epsilon_{0}\) is the vacuum permittivity. The opening angle from the \(z\)-axis and the distance from the tracer to particle \(i\) are denoted by \(\theta_{i}\) and \(r_{i}\), respectively, and the force-law exponent is denoted by \(\delta\).
The force \(F_{z}\) is clearly a random variable, as it is a sum of many random contributions. The basic question is: what is the force probability density function (PDF), \(P(F_{z},\xi)\), where we define \(\xi=(\rho^{1/d}a)^{\delta}\)? In particular we focus on the limit of small \(\xi\), and the new findings of our work deal mainly with the large forces, as mentioned in the introduction. An explanation of the simulation of the model is provided in Appendix A.
## III Characteristic function and moments
This problem contains two force scales: \(F_{c}=C/a^{\delta}\), which is the maximal force exerted on the tracer by a single bath particle (the subscript "\(c\)" stands for cutoff), as one can see from Eq. (1) by simply inserting \(r=a\) and \(\theta=0\); and \(F_{H}=\rho^{\delta/d}C\), which is the force scale studied by Holtsmark and others. The pair of force scales, and the connection of the problem to fat-tailed distributions with cutoffs, imply that we can use bi-scaling ideas [37; 41]; namely, we will find two limiting laws for the distribution of forces. The relationship between \(F_{H}\) and \(F_{c}\) is characterized by
\[\xi^{\alpha}=\left(\frac{F_{H}}{F_{c}}\right)^{\alpha}=\rho a^{d},\quad\text{where}\quad\alpha=\frac{d}{\delta}<2, \tag{2}\]
and our interest, as mentioned, is in the limit of small \(\xi\). Consider the characteristic function of \(F_{z}\), given by
\[\langle e^{ikF_{z}}\rangle=\left[\frac{\int_{V}dV\exp\left(ik\frac{C\cos( \theta)}{|{\bf r}|^{\delta}}\right)}{V}\right]^{N}, \tag{3}\]
where we used Eq. (1) and the fact that the bath particles are uniformly distributed in space. Since \(V\to\infty\) and \(N\to\infty\), we must interpret this integral.

Figure 1: \(F_{c}^{1+\alpha}P(F_{z},\xi)\) versus \(\tilde{F}=F_{z}/F_{c}\) obtained from numerical simulation (dotted line) and compared with the infinite density (solid line) and the inverse Fourier transform of the characteristic function (dashed line), found in Eq. (16) and Eq. (6), respectively, for \(\alpha=3/2\). Clearly the infinite density is indeed a complementary statistical law, indicating that Holtsmark’s law is only part of the story. In particular the infinite density has a cut-off at \(|\tilde{F}|=1\), which refers to the rare events. Thus, this law describes the large forces in the problem. For small \(\tilde{F}\), the solution is non-integrable at the origin in the limit \(F_{c}\to\infty\). We chose \(C=1\), such that \(F_{c}=1/a^{\delta}\), the density \(\rho=3/4\pi\), the number of particles \(N=8000\), and the dimension \(d=3\).

Therefore, we apply the following trick of adding and subtracting 1, so the characteristic function, also known as the Fourier transform of \(P(F_{z},\xi)\), denoted
\[\langle e^{ikF_{z}}\rangle=\int_{-\infty}^{\infty}P(F_{z},\xi)e^{ikF_{z}}dF_{z}, \tag{4}\]
is written as [22]
\[\langle e^{ikF_{z}}\rangle=\exp\left[-\rho\Omega_{d}\int_{0}^{\theta_{d}}f_{d}(\theta)d\theta\int_{a}^{\infty}\left(1-e^{ik\frac{C\cos(\theta)}{r^{\delta}}}\right)r^{d-1}dr\right], \tag{5}\]
where we used the mentioned excluded-volume effect. \(\Omega_{d}\) is the result of the integral over the azimuthal angle in dimension \(d\), such that \(\Omega_{2}=1\) and \(\Omega_{3}=2\pi\); \(\theta_{d}\) is the upper limit of the polar-angle integration, hence \(\theta_{2}=2\pi\) and \(\theta_{3}=\pi\); and \(f_{2}(\theta)=1\), \(f_{3}(\theta)=\sin(\theta)\).
### The infinite density for dimension three
We first study the \(d=3\) case. Using Eq. (5), the characteristic function for this case is
\[\langle e^{ikF_{z}}\rangle=\exp\left[-\frac{4\pi\xi^{\alpha}\left[{}_{1}F_{2}\left(-\frac{\alpha}{2};1-\frac{\alpha}{2},\frac{3}{2};-\frac{|F_{c}k|^{2}}{4}\right)-1\right]}{3}\right], \tag{6}\]
where \({}_{1}F_{2}\) is the generalized hypergeometric function [42]. The moments of \(F_{z}\) can be determined using the following equation:
\[\langle F_{z}^{2n}\rangle=(-1)^{n}\left.\frac{d^{2n}\langle e^{ikF_{z}} \rangle}{dk^{2n}}\right|_{k=0} \tag{7}\]
After performing straightforward calculations (refer to Appendix B for details), the variance can be expressed as:
\[\langle F_{z}^{2}\rangle=\frac{4\pi\alpha\xi^{\alpha}F_{c}^{2}}{9(2-\alpha)}, \tag{8}\]
Thus, the cutoff force scale, \(F_{c}\), governs the behavior of the variance. Similarly, the fourth moment is
\[\langle F_{z}^{4}\rangle=F_{c}^{4}\left(\frac{4\pi\alpha\xi^{\alpha}}{15(4- \alpha)}+3\left(\frac{4\pi\alpha\xi^{\alpha}}{9(2-\alpha)}\right)^{2}\right), \tag{9}\]
and for \(\xi\ll 1\), the leading term of the fourth moment is of order \(\xi^{\alpha}\). This is not a coincidence, since the leading term of the sixth moment,
\[\langle F_{z}^{6}\rangle=F_{c}^{6}\left(\frac{4\pi\alpha\xi^{\alpha}}{21(6- \alpha)}+\frac{16\pi^{2}\alpha^{2}\xi^{2\alpha}}{9(2-\alpha)(4-\alpha)}+15 \left(\frac{4\pi\alpha\xi^{\alpha}}{9(2-\alpha)}\right)^{3}\right), \tag{10}\]
has an asymptotic behavior of \(\xi^{\alpha}\) for \(\xi\ll 1\). The same can be done for the \(2n\)-th moment. Hence, for a positive integer \(n\), the \(2n\)-th moment for \(\xi\ll 1\) is
\[\langle F_{z}^{2n}\rangle\sim\frac{4\pi\alpha\xi^{\alpha}}{3(2n-\alpha)(2n+1)} F_{c}^{2n}. \tag{11}\]
As discussed in the introduction, the second moment \(\langle F_{z}^{2}\rangle\) diverges as \(F_{c}\rightarrow\infty\), but \(F_{c}\) is finite as long as \(a>0\), so the moments of the force field do not diverge. Odd moments are zero due to symmetry. Notice that the expression in Eq. (11) diverges if we analytically continue \(n\) and let it approach \(\alpha/2\) (\(n\rightarrow\alpha/2\)) from above, and if we set \(n=0\), we get a negative result, which violates the normalization condition.
## IV Infinite density
Our next goal is to find a function that generates the moments presented in Eq. (11). This function is called the infinite density. Using Eq. (11) and \([(2n-\alpha)(2n+1)]^{-1}=[(2n-\alpha)^{-1}-(2n+1)^{-1}]/(1+\alpha)\), we search for a non-negative symmetric function \(P_{A}(F_{z},\xi)\) such that
\[\langle F_{z}^{2n}\rangle_{A} = 2\int_{0}^{\infty}F_{z}^{2n}P_{A}(F_{z},\xi)dF_{z} \tag{12}\] \[= \frac{4\pi\alpha\xi^{\alpha}F_{c}^{2n}}{3(1+\alpha)}\left[\frac{1 }{2n-\alpha}-\frac{1}{2n+1}\right],\]
where "\(A\)" stands for asymptotic, since the approach is valid for \(\xi\ll 1\). Naively, \(P_{A}(F_{z},\xi)\) is a PDF since it gives the moments of the force field, but, as we show soon, this is simply wrong. Recall the Mellin transform [43] of a power-law function \(f(x)\) that satisfies \(f(x)=|x|^{b}\) for \(|x|<1\) and zero otherwise,
\[\{Mf\}(s)\equiv\int_{0}^{\infty}x^{s-1}f(x)dx=\frac{1}{s+b}. \tag{13}\]
Using Eq. (12), we see that in our case \(s=2n+1\). Therefore, by applying the inverse Mellin transform on Eq. (12), we find that
\[P_{A}(F_{z},\xi)=\begin{cases}\frac{2\pi\alpha F_{H}^{\alpha}}{3(1+\alpha)F_{c}^{1+\alpha}}\left[\left(\frac{|F_{z}|}{F_{c}}\right)^{-1-\alpha}-1\right],&|F_{z}|<F_{c}\\ 0,&|F_{z}|\geq F_{c}.\end{cases} \tag{14}\]
Clearly, this function is not normalizable since \(P_{A}(F_{z},\xi)\sim|F_{z}|^{-(1+\alpha)}\) for small force fields, hence it is not a PDF. It is easily verified with simple integration that Eq. (14) gives the moments found in Eqs. (11,12). Mathematically, we are interested in the limit where both \(F_{c}\to\infty\) and \(F_{z}\to\infty\), hence we denote \(\tilde{F}=F_{z}/F_{c}\). This corresponds to the large forces in the problem. We present the solution using a natural scale, namely, we define
\[\mathcal{I}_{\alpha}(\tilde{F})=F_{c}^{1+\alpha}P_{A}(F_{z},\xi) \tag{15}\]
and \(\mathcal{I}_{\alpha}(\tilde{F})\) is the infinite density of \(\tilde{F}\), where the name comes from the fact that \(\int_{-\infty}^{\infty}\mathcal{I}_{\alpha}(\tilde{F})d\tilde{F}=\infty\). Using Eq. (15), we find
\[\mathcal{I}_{\alpha}(\tilde{F})=\begin{cases}\frac{2\pi\alpha F_{H}^{\alpha}} {3(1+\alpha)}\left(\frac{1}{|\tilde{F}|^{1+\alpha}}-1\right)&|\tilde{F}|<1\\ 0&|\tilde{F}|\geq 1.\end{cases} \tag{16}\]
While we may use Eq. (16) to obtain force moments, the remaining question is how it can be measured, at least in principle. Clearly, \(\mathcal{I}_{\alpha}(\tilde{F})\sim\tilde{F}^{-(1+\alpha)}\) for small \(\tilde{F}\); the question is what is the physical meaning of this non-normalized solution? Namely, how is the infinite density related to the normalized probability density of the force field?
We realize that moments of the force field can be obtained not only from the infinite density but also from the probability density of the force itself, namely \(P(F_{z},\xi)\). From here, we reach the conclusion
\[\mathcal{I}_{\alpha}(\tilde{F})=\lim_{F_{c},F_{z}\to\infty}F_{c}^{1+\alpha}P(F _{z},\xi), \tag{17}\]
namely, the normalized density \(P(F_{z},\xi)\) is related to \(\mathcal{I}_{\alpha}(\tilde{F})\). It is important to emphasize that while Eq. (17) is an exact statement that holds in a limit, for finite \(F_{c}\) the theory holds as a valid approximation, as we now demonstrate.
In numerous areas of physics, the observed tracer is typically small and fulfills the condition \(\rho a^{d}\ll 1\), indicating a significantly low density [1; 2; 20]. This condition holds true, for instance, in a two-dimensional system, such as a disk, where a tiny tracer with a small radius \(a\) is positioned at its center. The tracer is encompassed by positively/negatively charged particles. Here we demonstrate this relation for the case studied by Holtsmark, where \(a\) is finite though small.
We simulated the random force field for the case \(\delta=2\) and \(d=3\) and obtained \(P(F_{z},\xi)\) (see details in Appendix A). Recall that \(\delta=2\) corresponds to gravitational or Coulomb-type force fields. The results (dotted line) are compared with the theory in Fig. 1. After re-scaling, using Eq. (17), we compare the numerical data to the exact result, namely, the inverse Fourier transform of Eq. (6) (black solid line), and to \(\mathcal{I}_{\alpha}(\tilde{F})\) found in Eq. (16) (dashed line). The figure demonstrates perfect agreement between the statistics of the simulated field and the non-normalized solution, \(\mathcal{I}_{\alpha}(\tilde{F})\). Notice that at \(\tilde{F}=1\) there is a cut-off, indicating that the largest total force is of the order of \(F_{c}\), which is equivalent to the largest force exerted by a single particle in the vicinity of the tracer [44; 45]. The next step is to consider another scaling solution of the problem, found when \(a=0\), corresponding to the original work of Holtsmark.
### Levy-Holtsmark statistics
The characteristic function, Eq. (6), for the case of \(a=0\) is given by
\[\tilde{P}(k,0)=\exp\left[-\mu_{d,\alpha}|F_{H}k|^{\alpha}\right], \tag{18}\]
where \(\mu_{d,\alpha}\) is a dimension-less constant given by
\[\mu_{d,\alpha}=\pi\begin{cases}\frac{\Gamma\left(1-\frac{\alpha}{2}\right)}{2 ^{\alpha}\Gamma\left(1+\frac{\alpha}{2}\right)}&d=2\\ \frac{4\cos\left(\frac{\alpha\pi}{2}\right)\Gamma\left(1-\alpha\right)}{3(1+ \alpha)}&d=3.\end{cases} \tag{19}\]
Eq. (18) is the Fourier transform of the well-known Levy stable distribution function, \(P(F_{z},0)=L_{\alpha}(F_{z})\), for \(0<\alpha<2\). Here \(L_{\alpha}(F_{z})=L_{\alpha}(-F_{z})\) from symmetry. From Eq. (18) we see that the force scale defined above Eq. (2), \(F_{H}\), determines the width of the distribution of the force field when \(a=0\), namely, \(\xi=0\). For \(d=3\) and \(\delta=2\), namely \(\alpha=3/2\), applying the inverse Fourier transform to Eq. (18) recovers the Holtsmark distribution, which is a special case of the Levy stable distribution. The following question arises: what is the connection between the Levy-Holtsmark distribution and the infinite density? This question is answered next.
### Relation of infinite density and Levy statistics
Applying an inverse Fourier transform \((\mathcal{F}^{-1})\) to the characteristic function yields the Levy distribution of \(F_{z}\), where \(P(F_{z},0)=L_{\alpha}(F_{z})=\mathcal{F}^{-1}\left[\exp\left(-\mu_{d,\alpha}|F_{H}k|^{\alpha}\right)\right]\). The function \(L_{\alpha}(F_{z})\) is tabulated in programs like _Mathematica_ and hence easy to plot. We now notice that the Levy density for large \(F_{z}\) matches the solution we found here, namely, the infinite density for small \(\tilde{F}\). Using the large \(F_{z}\) limit of \(L_{\alpha}(F_{z})\) and the small \(\tilde{F}\) limit of Eq. (16), we have
\[\begin{split} L_{\alpha}(F_{z})&\sim\frac{2\pi\alpha F _{H}^{\alpha}}{3(1+\alpha)}\frac{1}{|F_{z}|^{1+\alpha}},\\ P(F_{z},\xi)&\sim\frac{\mathcal{I}_{\alpha}(F_{z})} {F_{c}^{1+\alpha}}\sim\frac{2\pi\alpha F_{H}^{\alpha}}{3(1+\alpha)}\frac{1}{|F _{z}|^{1+\alpha}},\end{split} \tag{20}\]
hence, the two solutions match as they should. In other words, the Levy distribution accurately describes the center part of \(P(F_{z},\xi)\) in the limit of a small but finite \(a\), whereas our solution accurately describes the large \(F_{z}\) limit. As mentioned in the introduction, the study of large forces is crucial, and that regime is described by the infinite density found here.
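The tail matching in Eq. (20) is straightforward to verify numerically. Below is a minimal sketch using SciPy's stable-law implementation; the choices \(C=1\) and the evaluation point \(x=200\) are ours, made for illustration only.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import levy_stable

# d = 3 Holtsmark case: alpha = 3/2; with C = 1 the Holtsmark scale is F_H = rho**(1/alpha)
alpha, rho = 1.5, 3 / (4 * np.pi)
F_H = rho ** (1 / alpha)
mu = np.pi * 4 * np.cos(alpha * np.pi / 2) * gamma(1 - alpha) / (3 * (1 + alpha))  # Eq. (19)

# For beta = 0, scipy's stable law has characteristic function exp(-|scale*k|**alpha),
# so scale = mu**(1/alpha) * F_H reproduces Eq. (18).
levy = levy_stable(alpha, 0.0, scale=mu ** (1 / alpha) * F_H)

x = 200.0                                                                     # deep in the tail
tail = 2 * np.pi * alpha * F_H**alpha / (3 * (1 + alpha)) / x ** (1 + alpha)  # Eq. (20)
print(levy.pdf(x) / tail)   # ratio close to 1: the Levy tail matches the infinite density
```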
## V Sharp statistical transition
We now show how the moments of the force field exhibit bi-linear scaling, with a sharp transition found as the order of the moments is varied.
Consider the absolute value of the moment, denoted as \(\langle|F_{z}|^{q}\rangle\), where \(q\) takes any non-negative real value. Since the Levy/Holtsmark method of calculating the moments fails for \(q>\alpha\) (mathematically because the moments diverge in that regime, and physically since the assumption of a vanishing excluded volume, \(a=0\), cannot be used), we employ the infinite density. Hence, \(\langle|F_{z}|^{q}\rangle\) is given by the integral,
\[\langle|F_{z}|^{q}\rangle\sim\frac{1}{F_{c}^{1+\alpha}}\int_{-F_{c}}^{F_{c}}|F _{z}|^{q}\mathcal{I}_{\alpha}(F_{z})dF_{z},\quad q>\alpha. \tag{21}\]
Unlike Eq. (12), now \(q\) is not necessarily an integer. The moments for \(q<\alpha\) cannot be calculated from the infinite density, since the latter does not describe the small force fields well and fails at the normalization condition (the \(q=0\) case); hence they are found using the Levy distribution. The infinite density and the Levy distribution are complementary; each succeeds where the other fails. Hence, \(\langle|F_{z}|^{q}\rangle\) for \(q<\alpha\) is obtained by solving the integral,
\[\langle|F_{z}|^{q}\rangle=\int_{-\infty}^{\infty}|F_{z}|^{q}L_{\alpha}(F_{z}) dF_{z}. \tag{22}\]
The final solution for \(\langle|F_{z}|^{q}\rangle\) is
\[\begin{split}\langle|F_{z}|^{q}\rangle\sim\begin{cases}M_{q> \alpha}F_{H}^{\alpha}F_{c}^{q-\alpha}&q>\alpha\\ M_{q<\alpha}F_{H}^{q}&q<\alpha,\end{cases}\end{split} \tag{23}\]
where the amplitudes \(M_{q}\) are
\[\begin{cases}M_{q>\alpha}=\frac{4\pi}{\delta(q-\alpha)(q+1)}\\ M_{q<\alpha}=\frac{(\mu_{d,\alpha})^{\frac{q}{\alpha}}\Gamma\left(1-\frac{q}{\alpha}\right)}{\cos\left(\frac{\pi q}{2}\right)\Gamma(1-q)}.\end{cases} \tag{24}\]
There is a clear divergence of \(\langle|F_{z}|^{q}\rangle\) as \(q\to\alpha\) from above and below; thus the moments exhibit a transition, which is an indication of a transition between the statistical laws of weak fields (Levy-Holtsmark) and strong fields (infinite density).

Figure 2: Numerical results for the moment amplitudes \(M_{q}\) versus \(q\) (shapes). In the limit of large \(F_{c}\), these converge to the theoretical predictions of Eq. (24) and Eq. (38) (see Appendix C) for \(d=3,2\), respectively, and the amplitudes diverge as \(q\to\alpha\), as shown. In the top (bottom) graph \(\rho=3/4\pi\) (\(\rho=1/\pi\)), \(\alpha=3/2\) (\(\alpha=1\)), and \(d=3\) (\(d=2\)), respectively. In both graphs we chose \(C=1\), such that \(F_{c}=1/a^{\delta}\). The plots show how \(q>\alpha\) corresponds to the infinite-density scaling, while \(q<\alpha\) corresponds to the Holtsmark-Levy law.
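For completeness, a small helper evaluating the bi-scaling prediction of Eqs. (23)-(24) for \(d=3\) is sketched below, under our reading of the \(q/\alpha\) exponent in the low-order branch; note that \(q=1\) requires a separate limit, since \(\cos(\pi q/2)\Gamma(1-q)\) is a \(0\cdot\infty\) form there.

```python
import numpy as np
from scipy.special import gamma

def abs_moment(q, alpha, delta, F_H, F_c, mu):
    """Bi-scaling absolute moments <|F_z|**q> of Eqs. (23)-(24), away from q = alpha."""
    if q > alpha:
        M = 4 * np.pi / (delta * (q - alpha) * (q + 1))   # infinite-density branch
        return M * F_H**alpha * F_c ** (q - alpha)
    M = gamma(1 - q / alpha) * mu ** (q / alpha) / (np.cos(np.pi * q / 2) * gamma(1 - q))
    return M * F_H**q                                     # Levy-Holtsmark branch
```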
### The infinite density for dimension two
As in the three-dimensional case, we start with the calculation of the moments \(\langle F_{z}^{2n}\rangle\), where \(n\) is a non-negative integer. Using the characteristic function, found from Eq. (5),
\[\langle e^{ikF_{z}}\rangle=\exp\left[-\pi\xi^{\alpha}\left({}_{1}F_{2}\left[- \frac{\alpha}{2};1,1-\frac{\alpha}{2};-\frac{|F_{c}k|^{2}}{4}\right]-1\right) \right], \tag{25}\]
and Eq. (7), the \(2n\)-th moment for the dimensionless variable \(\tilde{F}=F_{z}/F_{c}\) is
\[\langle\tilde{F}^{2n}\rangle\sim\frac{\sqrt{\pi}\alpha\xi^{\alpha}}{2n-\alpha }\left(\frac{2\Gamma(\alpha)}{\Gamma(\frac{\alpha}{2})}+\underbrace{\frac{ \Gamma(n+\frac{1}{2})}{\Gamma(n+1)}-\frac{\Gamma\left(\frac{1+\alpha}{2} \right)}{\Gamma\left(1+\frac{\alpha}{2}\right)}}_{g_{\alpha}(n)}\right), \tag{26}\]
where we used \(\xi\ll 1\) as before. In Appendix C, we present the first three non-zero moments for \(d=2\). Our next goal is to find the infinite density that generates the moments in Eq. (26). For that aim, we use the Mellin transform, with Eq. (13) and the identity
\[g_{\alpha}(n)=\frac{\alpha-2n}{\sqrt{\pi}(1+\alpha)}\int_{-1}^{1}{}_{2}F_{1} \left[\frac{1}{2},\frac{1+\alpha}{2};\frac{3+\alpha}{2},\tilde{F}^{2}\right] \tilde{F}^{2n}d\tilde{F}, \tag{27}\]
where \(g_{\alpha}(n)\) is defined in Eq. (26). Therefore, the infinite density is
\[\mathcal{I}_{\alpha}(\tilde{F})=\] \[\begin{cases}\alpha F_{H}^{\alpha}\left[\frac{\sqrt{\pi}\Gamma \left(\frac{1+\alpha}{2}\right)}{\alpha\Gamma\left(\frac{\alpha}{2}\right)| \tilde{F}|^{1+\alpha}}-\frac{{}_{2}F_{1}\left[\frac{1}{2},\frac{1+\alpha}{2}; \frac{3+\alpha}{2},\tilde{F}^{2}\right]}{(1+\alpha)}\right]&|\tilde{F}|<1\\ 0&|\tilde{F}|\geq 1,\end{cases} \tag{28}\]
where \({}_{2}F_{1}\) is the Gaussian hypergeometric function [42]. Of course, one may insert Eq. (28) in Eq. (21), with \(q=2n\), and verify Eq. (26). The infinite densities in two and three dimensions are clearly different from each other, but both satisfy the relations in Eqs. (15, 17). Still, in both dimensions the infinite density has similar behavior for \(\tilde{F}\to 0\), _i.e._, \(\mathcal{I}_{\alpha}(\tilde{F})\sim\tilde{F}^{-(1+\alpha)}\), so this is a non-normalized function.
In Fig. 2, we compare the theoretical results for the amplitudes of the absolute-value force moments, obtained from Eq. (24) and Eq. (38) (see Appendix C), with the simulation for the Holtsmark (\(d=3\), \(\alpha=3/2\)) and Cauchy (\(d=2\), \(\alpha=1\)) cases, respectively. From this figure we see that for \(q>\alpha\) the moments are obtained from the infinite density function, and for \(q<\alpha\) from the Levy distribution. As \(F_{c}\) gets larger (\(a\) smaller), so does the peak at \(q=\alpha\), and the data converge to the theoretical result. The figures clearly illustrate that by studying different orders of moments, we reveal different scales of the problem, accompanied by a sharp transition found at \(q_{c}=\alpha\).
### Extension of this work
In many stochastic models, the noise is described by Levy statistics [46]. Levy noise cannot be realized in physical systems in its exact mathematical form; instead, semi-truncated Levy noise is used for various far-from-equilibrium systems, including active swimmer suspensions [16; 19; 35], actomyosin networks [47], and cultured cells [48]. Here we showed, using a static model, that the forces are indeed truncated, that this truncation is related to finite-size effects, namely to the radius \(a\), and, more importantly, that this cutoff is, at least in a static description, deeply related to the infinite density concept. Since the far tail of the distribution of the random force is important for the enhancement of active diffusion, our work may impact the whole field. The remaining challenge is to see how the statistical laws found here for a basic static model translate into a dynamical picture [34].
## VI Summary
To conclude, we found a non-trivial behavior of the moments of the force field for Holtsmark-like problems. A transition controlled by the order of the moments is observed at a critical value of \(q_{c}=\alpha=d/\delta\). The moments of order \(q>q_{c}\) are described by the cutoff force scale \(F_{c}\), which is determined by a single bath particle in the vicinity of the tracer, so \(\langle|F|^{q}\rangle\propto(F_{c})^{q-\alpha}\). In contrast, low-order moments \(q<q_{c}\) are given by the Holtsmark force scale. The amplitudes of these moments, \(M_{q}\), diverge in the vicinity of the transition point \(q_{c}\). Further, the low-order moments are determined by the Levy-Holtsmark law, while the higher-order moments, namely \(q>q_{c}\), are determined by the infinite density found here. The Levy-Holtsmark distribution and the infinite density are complementary scaling laws of the problem. The infinite density in Eqs. (28,16) describes the distribution of forces, \(F_{z}\), for large forces. These are important in many applications, as large forces can lead to violent effects. More mathematically, in the limit where both \(F_{c}\) and \(F_{z}\) are large, we get a limit theorem that is complementary to the well-known Holtsmark distribution. This is found using the small-density limit, where the assumptions of the model are valid. The PDF of forces, when properly re-scaled, yields the infinite density, and importantly, the latter describes the large forces in the problem (see Fig. 1). As such, the infinite density is an essential part of this problem, exactly like the well-known Levy-Holtsmark distribution.
###### Acknowledgements.
This work was supported by the Israel Science Foundation grant 1614/21.
## Appendix A Simulation of the model
Here we give an explanation of the simulation of the model. We scattered \(N\) particles uniformly in a \(d=2,3\) dimensional sphere with outer radius denoted as \(R\). The force each particle exerts on a rigid-body tracer of radius \(a\) located at the center is then measured (bath particles are excluded from the volume with radius \(a\ll R\)). By summing all the observed forces, one obtains the total force field applied on the rigid tracer, presented in Eq. (1). By repeating this process many times, the distribution of the total force, \(P(F_{z},\rho a^{d})\), is obtained. We also obtained the PDF by performing a numerical inverse Fourier transform of Eq. (5), using _Mathematica_. In both Fig. 1 and Fig. 2, \(R=20\), which is much greater than \(a\).
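A minimal Python sketch of this Monte Carlo procedure for \(d=3\) follows; the tracer radius \(a=0.5\) is an illustrative choice of ours, and the exclusion of the tracer volume is implemented by sampling radii directly from the shell \(a<r<R\), which is equivalent to rejection sampling for a uniform distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Fz(N, R, a, delta=2.0, C=1.0):
    """One realization of the total force F_z of Eq. (1) for d = 3."""
    d = 3
    # radii uniform in the shell a < r < R: p(r) ~ r**(d-1), sampled by inversion
    u = rng.random(N)
    r = (a**d + u * (R**d - a**d)) ** (1.0 / d)
    cos_theta = rng.uniform(-1.0, 1.0, N)   # isotropic directions in 3D
    return np.sum(C * cos_theta / r**delta)

# histogramming many realizations estimates P(F_z, rho * a**d)
samples = np.array([sample_Fz(N=8000, R=20.0, a=0.5) for _ in range(10_000)])
```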
## Appendix B Derivation of the force moments in three dimension
The exact results for the force moments in dimension \(d=3\) are given here. Since the odd moments vanish, we deal only with the even ones, starting with the variance. We expand the hypergeometric function presented in Eq. (6) as a Taylor series and find
\[\langle e^{ikF_{z}}\rangle=\exp\Biggl{(}\sum_{n=1}^{\infty}\frac{4\pi\alpha \xi^{\alpha}F_{c}^{2n}}{3(2n-\alpha)(2n+1)}\frac{(ik)^{2n}}{(2n)!}\Biggr{)}. \tag{29}\]
The expansion of Eq. (29) in a Taylor series,
\[\langle e^{ikF_{z}}\rangle=1+\sum_{m=1}^{\infty}\frac{1}{m!}\left(\sum_{n=1}^{ \infty}\frac{4\pi\alpha\xi^{\alpha}F_{c}^{2n}}{3(2n-\alpha)(2n+1)}\frac{(ik)^ {2n}}{(2n)!}\right)^{m}, \tag{30}\]
together with Eq. (7), gives all the exact moments. We provide only the first three non-zero exact moments in the paper.
## Appendix C Derivation of the force moments in two dimension
We start by expanding Eq. (25) in a Taylor series
\[\langle e^{ikF_{z}}\rangle=1+\sum_{m=1}^{\infty}\frac{1}{m!}\left(\sum_{n=1}^{ \infty}\frac{\alpha\sqrt{\pi}\xi^{\alpha}F_{c}^{2n}\Gamma(n+\frac{1}{2})}{(2n -\alpha)\Gamma(n+1)}\frac{(ik)^{2n}}{(2n)!}\right)^{m}. \tag{31}\]
Using Eqs. (7,31), the variance is
\[\langle F_{z}^{2}\rangle=\frac{\pi\alpha\xi^{\alpha}}{2(2-\alpha)}F_{c}^{2}. \tag{32}\]
We find the fourth moment, \(\langle F_{z}^{4}\rangle\), and the sixth moment, \(\langle F_{z}^{6}\rangle\), in the same way
\[\langle F_{z}^{4}\rangle=\left[\frac{3\pi\alpha\xi^{\alpha}}{8(4-\alpha)}+3 \left(\frac{\alpha\pi\xi^{\alpha}}{2(2-\alpha)}\right)^{2}\right]F_{c}^{4}, \tag{33}\]
\[\langle F_{z}^{6}\rangle=\] \[\left[\frac{5\pi\alpha\xi^{\alpha}}{16(6-\alpha)}+\frac{45\alpha ^{2}\pi^{2}\xi^{2\alpha}}{16(4-\alpha)(2-\alpha)}+15\left(\frac{\alpha\pi\xi^ {\alpha}}{2(2-\alpha)}\right)^{3}\right]F_{c}^{6}. \tag{34}\]
For \(\xi\ll 1\), the leading term in Eq. (33) is
\[\langle F_{z}^{4}\rangle\sim\frac{3\pi\alpha\xi^{\alpha}}{8(4-\alpha)}F_{c}^ {4}. \tag{35}\]
Similarly, the \(2n\)-th moment is
\[\langle F_{z}^{2n}\rangle=\frac{\alpha\sqrt{\pi}\xi^{\alpha}\Gamma\left(n+ \frac{1}{2}\right)}{(2n-\alpha)\Gamma(n+1)}F_{c}^{2n}+O(\xi^{2\alpha}). \tag{36}\]
Using the rescaled variable \(\tilde{F}=F_{z}/F_{c}\), we get
\[\langle\tilde{F}^{2n}\rangle=\frac{\alpha\sqrt{\pi}\xi^{\alpha}\Gamma\left(n+ \frac{1}{2}\right)}{(2n-\alpha)\Gamma(n+1)}+O(\xi^{2\alpha}), \tag{37}\]
which yields Eq. (26).
Our next goal is to find the moments of the absolute force value that are mentioned in Eq. (23), \(\langle|F_{z}|^{q}\rangle\). We do so by employing the infinite density presented in Eq. (28) for \(q>\alpha\) and \(L_{\alpha}(F_{z})\) for the moments \(q<\alpha\), using \(\langle|F_{z}|^{q}\rangle=\int_{-\infty}^{\infty}|F_{z}|^{q}L_{\alpha}(F_{z})dF_{z}\), which yields Eq. (23). The amplitudes, \(M_{q}\), for the case of \(d=2\) are
\[\begin{cases}M_{q>\alpha}=\frac{\alpha\sqrt{\pi}\,\Gamma\left(\frac{q+1}{2}\right)}{(q-\alpha)\Gamma\left(1+\frac{q}{2}\right)},\\ M_{q<\alpha}=\frac{\left(\mu_{d,\alpha}\right)^{\frac{q}{\alpha}}\Gamma\left(1-\frac{q}{\alpha}\right)}{\cos\left(\frac{\pi q}{2}\right)\Gamma\left(1-q\right)},\end{cases} \tag{38}\]
with \(\mu_{d,\alpha}\) defined in Eq. (19). Eq. (38) is used to describe how the finite size simulations converge to the asymptotic prediction in Fig. 2 presented in the letter.
|
2305.17557 | Fair Clustering via Hierarchical Fair-Dirichlet Process | The advent of ML-driven decision-making and policy formation has led to an
increasing focus on algorithmic fairness. As clustering is one of the most
commonly used unsupervised machine learning approaches, there has naturally
been a proliferation of literature on {\em fair clustering}. A popular notion
of fairness in clustering mandates the clusters to be {\em balanced}, i.e.,
each level of a protected attribute must be approximately equally represented
in each cluster. Building upon the original framework, this literature has
rapidly expanded in various aspects. In this article, we offer a novel
model-based formulation of fair clustering, complementing the existing
literature which is almost exclusively based on optimizing appropriate
objective functions. | Abhisek Chakraborty, Anirban Bhattacharya, Debdeep Pati | 2023-05-27T19:16:55Z | http://arxiv.org/abs/2305.17557v1 | # Fair Clustering via Hierarchical Fair-Dirichlet Process
###### Abstract
The advent of ML-driven decision-making and policy formation has led to an increasing focus on algorithmic fairness. As clustering is one of the most commonly used unsupervised machine learning approaches, there has naturally been a proliferation of literature on _fair clustering_. A popular notion of fairness in clustering mandates the clusters to be _balanced_, i.e., each level of a protected attribute must be approximately equally represented in each cluster. Building upon the original framework developed in Chierichetti et al. [16], this literature has rapidly expanded in various aspects. In this article, we offer a novel model-based formulation of fair clustering, complementing the existing literature which is almost exclusively based on optimizing appropriate objective functions. We first rigorously define a notion of fair clustering in the population level under a model mis-specified framework, with minimal assumptions on the data-generating mechanism. We then specify a Bayesian model equipped with a novel hierarchical prior specification to encode the notion of balance in resulting clusters, and whose posterior targets this population-level object. A carefully developed collapsed Gibbs sampler ensures efficient computation, with a key ingredient being a novel scheme for non-uniform sampling from the space of binary matrices with fixed margins, utilizing techniques from optimal transport towards constructing proposals. Impressive empirical success of the proposed methodology is demonstrated across varied numerical experiments, and benchmark data sets. Importantly, the benefits of our approach are not merely limited to the specific model we propose - thinking from a generative modeling perspective allows us to provide concrete guidelines for prior calibration that ensures desired distribution of balance _a-priori_, develop a concrete notion of optimal recovery in the fair clustering problem, and device schemes for principled performance evaluations of algorithms.
_Keywords--_ Balance, Disparate impact, Generative modeling, Kullback-Leibler projection, Fixed margin binary matrices, Optimal transport.
## 1 Introduction
Algorithmic decision-making is increasingly employed in critical aspects affecting human lives, e.g credit, employment, education, criminal justice; and hence fairness in machine learning [21, 34, 33] has emerged as a primary pillar of recent research focus. Discrimination
refers to unfavourable treatment of entities due to membership in certain demographic groups determined by factors referred to as _protected attributes_. Hence, the goal of group fairness is to design algorithms that make fair decisions devoid of discrimination due to membership in a particular level of a protected attribute. Noticeably, in the nascent stages of the quest for fairness in machine learning algorithms, much of the literature was targeted towards supervised learning techniques, and an obvious need was felt to consolidate the notions of fairness in the context of unsupervised learning problems, such as clustering. Chierichetti et al. [16] introduced the concept of _balance_ as a criterion for fair clustering, defining a clustering mechanism to be fair if the resulting clusters share a common ratio of data points representing individuals belonging to the different groups of the protected attribute; and discussed both the \(k\)-center and the \(k\)-median problems with two labels of the protected attribute, i.e., the _two-color_ case. Subsequently, a number of follow-up works have attempted to extend this to a _multi-color_ case [13], where the protected attribute has more than two labels. Esmaeili et al. [22] further generalized these by assuming imperfect knowledge of group membership through probabilistic assignments. Bera et al. [9] extended the scope by allowing the user to specify parameters that control the extent of fair representation, by considering a general \(\mathbb{L}_{p}\) norm of the clustering objective, and by considering the case where individuals can lie in multiple protected groups. At the same time, mechanisms for ensuring fairness in various other avatars of clustering, e.g., spectral clustering [27], correlation clustering [3], hierarchical clustering [2], have emerged. Other notions of fairness in the context of clustering have been looked at too, e.g., individual fairness [26, 30, 14] and proportional fairness [15]. Perhaps unsurprisingly, fairness in clustering has been studied in conjunction with other pressing aspects of modern machine learning as well, e.g., privacy [36], data summarization [25], robustness [8], to name a few. For a comprehensive review of the literature, interested readers are encouraged to browse this website on fair-clustering, and to refer to the recent review article Dwork et al. [21].
### Our Contributions
* **(1)** We take a novel model-based approach to tackle the problem of clustering under balance constraints. Thinking from a generative modeling perspective allows us to develop a concrete notion of _optimal recovery_ in this problem, and subsequently devise a scheme for principled _performance evaluation_ of algorithms.
* **(2)** We specify a hierarchical model equipped with a novel prior specification, termed the hierarchical fair Dirichlet process, to encode the notion of balance in the resulting clusters; provide concrete guidelines for prior calibration that ensure a desired distribution of _balance_ a-priori; and develop a collapsed Gibbs sampler that exploits a carefully designed gamma prior specification on the shared concentration parameter, ensuring rapid mixing and scalability.
* **(3)** The most prominent computational bottleneck in our approach involves sampling cluster membership indices subject to balance constraints, which in turn poses a difficult non-uniform sampling problem in the space of binary matrices with fixed margins. This is carefully navigated by proposing a novel weighted rectangular loop scheme equipped with an integer-valued optimal-transport based proposal mechanism. This scheme may be of independent interest in many applications in neurophysiology, sociology, psychometrics and ecology [12, 1, 41].
* non-isotropic covariances within clusters, mixed data type, censoring, missing values, variable selection [38], covariate adjustment, etc.
## 2 Bayesian Clustering with Fairness Constraints
### Probabilistic Framework
Let \(\mathbb{Z}_{\geqslant 0}\) (resp. \(\mathbb{R}_{\geqslant 0}\)) denote the set of non-negative integers (reals). For a positive integer \(t\), denote \([t]:=\{1,\ldots,t\}\), and let \(\Delta^{t-1}\subset\mathbb{R}^{t}\) denote the \((t-1)\)-dimensional probability simplex, i.e., \(\Delta^{t-1}=\{x\in\mathbb{R}_{\geqslant 0}^{t}\,:\,\langle 1_{t},x\rangle=1\}\). For two probability distributions \(p_{1},p_{2}\) with the same support, the Kullback-Leibler divergence from \(p_{1}\) to \(p_{2}\) is \(\operatorname{KL}(p_{1}\,||\,p_{2})=E_{p_{1}}\log(p_{1}/p_{2})\); and the symmetrised Kullback-Leibler divergence between \(p_{1}\) and \(p_{2}\) is \(\operatorname{KL}_{\text{sym}}(p_{1},p_{2})=\operatorname{KL}(p_{1}\,||\,p_{2})+\operatorname{KL}(p_{2}\,||\,p_{1})\). A \(t\)-dimensional random vector \(\boldsymbol{u}\) follows a Dirichlet distribution with concentration parameter \(\boldsymbol{\alpha}\in\mathbb{R}_{+}^{t}\), denoted \(\boldsymbol{u}\sim\operatorname{Dir}(\boldsymbol{\alpha})\), if \(u_{i}=v_{i}/\sum_{j=1}^{t}v_{j}\) with \(v_{i}\stackrel{{ ind}}{{\sim}}\text{Gamma}(\alpha_{i},1)\) for \(i\in[t]\).
Suppose we observe data \(\{(\boldsymbol{x}_{i},a_{i})\}_{i=1}^{N}\), where \(\boldsymbol{x}_{i}\) denotes the \(d\)-variate observation for the \(i\)-th data unit, and \(a_{i}\) the label of the protected attribute. For each \(a\), let \(\{\boldsymbol{x}_{i}^{(a)}\}_{i=1}^{N_{a}}\) denote the observations corresponding to the \(a\)-th level of the protected attribute, where \(N_{a}=\sum_{i=1}^{N}\boldsymbol{1}(a_{i}=a)\) and \(\sum_{a=1}^{r}N_{a}=N\). The goal of fair clustering is to assign the data points \(\{(\boldsymbol{x}_{i},a_{i})\}_{i=1}^{N}\) into clusters \(\boldsymbol{C}=(C_{1},\ldots,C_{K}),\,\dot{\bigcup}C_{k}=[N]\), respecting the notion of balance [16], presented next.
**Definition 1** (Balance, [16]).: _Given \(\{(\boldsymbol{x}_{i},a_{i})\in\mathcal{X}\times\mathcal{A},\ i=1\ldots,N\}\) such that \(a_{i}=a\) for \(i=\sum_{j=1}^{a-1}N_{j}+1,\ldots,\sum_{j=1}^{a}N_{j}\) where \(a\in[r]\) and \(N_{0}\) = 0, the balance in \(C_{k}\) is defined as \(\text{Balance}(C_{k})=\min_{1\leqslant j_{1}<j_{2}\leqslant r}\left\{|C_{kj_{1}}|/|C_{kj_{2}}|,|C_{kj_{2}}|/|C_{kj_{1}}|\right\}\) where \(|C_{kj}|\) denotes the number of observations in \(C_{k}\) with \(a=j\). The overall balance of the clustering is \(\text{Balance}(\boldsymbol{C})=\min_{k=1,\ldots,K}\text{Balance}(C_{k})\). The higher this measure is for a clustering configuration, the fairer is the clustering._
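For illustration, a small Python helper computing the balance of Definition 1 from label arrays could read as follows; this is a sketch of ours, and returning 0 when a protected group is entirely absent from some cluster is our convention for the degenerate ratio.

```python
import numpy as np

def balance(z, a, K, r):
    """Balance(C) of Definition 1 for cluster labels z in {0,...,K-1}
    and protected-attribute labels a in {0,...,r-1}."""
    z, a = np.asarray(z), np.asarray(a)
    overall = 1.0
    for k in range(K):
        counts = np.bincount(a[z == k], minlength=r).astype(float)
        if np.any(counts == 0):
            return 0.0                    # an unrepresented group: balance 0
        for j1 in range(r):
            for j2 in range(j1 + 1, r):
                ratio = min(counts[j1] / counts[j2], counts[j2] / counts[j1])
                overall = min(overall, ratio)
    return overall
```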
With the above definition of balance, Chierichetti et al. [16] introduced the concept of _fairlets_, which are minimal fair sets that approximately preserve the clustering objective of choice; and demonstrated that any fair clustering problem can be reduced to first finding a fairlet decomposition of the data via solving a _minimum cost flow_ problem, and then resorting to classical clustering algorithms, e.g., k-means, k-center. While the follow-up works [13, 22, 36, 25] greatly increased the scope of fair clustering from various aspects, a precise statistical framework is largely missing in the literature, which in turn makes it difficult to define the notion of the _true fair clustering configuration_ at the _population level_, to theoretically study optimal recovery/clustering consistency under fairness constraints, and to devise principled performance measures to compare algorithms. This genuinely calls for a concrete probabilistic treatment of clustering with balance constraints, and the following discussion precisely targets to achieve this.
We assume that \(\{(x_{i},a_{i})\}_{i=1}^{N}\) are independent copies of the random vector \((X,A)\in\mathcal{X}\times\mathcal{A}\), with \(\mathcal{X}\subset\mathbb{R}^{d}\) and \(\mathcal{A}=[r]\) without loss of generality. A weight vector \(\mathbf{\xi}\in\Delta^{r-1}\) records the population proportions of the different labels of the protected attribute, which is assumed to be completely known. The generative mechanism for \((X,A)\) is hypothesised to be governed by \(A\sim\mathcal{P}_{A}^{\star}\equiv\text{Multinomial}(1,\mathbf{\xi}),\;(X,Z)\mid A\sim\mathcal{P}_{X,Z|A}^{\star}\equiv\mathcal{P}_{X|Z,A}^{\star}\times\mathcal{P}_{Z|A}^{\star}\), where the probability measure \(\mathcal{P}_{X,A,Z}^{\star}\) is unknown and the random variable \(Z\) is the latent/unobserved clustering index. In practice, we shall only observe independent copies of \((X,A)\); and the goal is to learn the marginal generative mechanism \(\mathcal{P}_{Z}^{\star}\) of the clustering index \(Z\), such that the clustering is _balanced_. To consolidate the notion of _balance_ at the population level, we first introduce the space of joint probability measures of \((X,Z,A)\), denoted by \(\mathbb{P}\); and then define a subset \(\mathbb{P}^{(R)}=\left\{\mathcal{P}_{X,Z,A}\in\mathbb{P}\;:\;\mathcal{P}_{A|Z}=\mathcal{P}_{A}\right\}\subset\mathbb{P}\) that collects all possible generative models under which every label of the protected attribute is equally likely to appear within each cluster. However, aiming for exact balance in the clusters may not be desired in practice, and in order to provide additional flexibility in specifying the notion of balance, we introduce \(\mathbb{P}_{\varepsilon}^{(R)}=\left\{\mathcal{P}_{X,Z,A}\in\mathbb{P}\;:\;\text{KL}(\mathcal{P}_{A}\times\mathcal{P}_{Z}\mid\mid\mathcal{P}_{A,Z})\leq\varepsilon\right\}\). The user-defined quantity \(\varepsilon\geq 0\) controls the extent of departure from balance, and we say that \((A,Z)\) satisfies \(\varepsilon\)_-balance_ under \(\mathcal{P}_{X,Z,A}\) iff \(\text{KL}(\mathcal{P}_{A}\times\mathcal{P}_{Z}\mid\mid\mathcal{P}_{A,Z})\leq\varepsilon\). By definition, we have \(\mathbb{P}_{0}^{(R)}=\mathbb{P}^{(R)}\), and it refers to the class of generative mechanisms where strict balance is maintained. The quantity at the heart of our definition of balance, \(\text{KL}(\mathcal{P}_{A}\times\mathcal{P}_{Z}\mid\mid\mathcal{P}_{A,Z})\), is the _mutual information_[17] between \((A,Z)\), and is indeed a key pivot to test for departure of the joint distribution of \((A,Z)\) from the independence table [10, 4]. Notably, for a fixed \(\varepsilon\), the true generative model \(\mathcal{P}^{\star}\) may not belong to \(\mathbb{P}_{\varepsilon}^{(R)}\). However, since we wish to ensure that our clustering procedure is \(\varepsilon\)-_balanced_, the inferential goal constitutes finding the “best” approximation of the true generative model \(\mathcal{P}^{\star}\) within the restricted class \(\mathbb{P}_{\varepsilon}^{(R)}\). Readers may associate this framework with that of maximum likelihood estimation under model misspecification [42], or with variational inference [11], where we intend to find the “best” approximation of an unknown probability distribution within a well-behaved variational family. We appeal to the extensive literature on these topics in order to ensure a principled treatment of our inferential goal. To that end, two definitions are in order.
**Definition 2** (Pseudo-True Fair Clustering Distribution, PT-FCD).: _For fixed \(\varepsilon\geq 0\), the pseudo-true clustering distribution restricted to \(\mathbb{P}_{\varepsilon}^{(R)}\) is defined as,_
\[\mathcal{P}_{Z_{\text{PT-FCD}}}^{\varepsilon}=\text{argmin}_{\mathcal{P}_{A,Z, X}\in\mathbb{P}_{\varepsilon}^{(R)}}\text{KL}(\mathcal{P}_{Z|X,A}^{\star}\mid \mid\mathcal{P}_{Z|X,A}), \tag{2.1}\]
_where \(\mathcal{P}_{Z|X,A}\) denotes the conditional distribution of \(Z\mid X,A\) obtained on marginalization from a \(\mathcal{P}_{A,Z,X}\in\mathbb{P}_{\varepsilon}^{(R)}\). Further, the probability distribution of the number of unique elements induced from \(\mathcal{P}_{Z_{\text{PT-FCD}}}^{\varepsilon}\) in Definition 2.1 is referred to as the pseudo-true distribution of number of clusters._
Note \(\mathcal{P}_{Z_{\text{PT-FCD}}}^{\varepsilon}\) represents the _KL projection_ of the true \(\mathcal{P}_{Z|X,A}^{\star}\) within the \(\varepsilon\)-_balanced_ class
of generative models, and the minimization problem in (2.1) can be recast as
\[\max_{\mathcal{P}_{A,Z,X}\in\mathbb{P}}\bigl{[}\mathbb{E}_{\mathcal{P}_{Z|X,A}^{ \star}}(\log\mathcal{P}_{Z|X,A})-\lambda_{\varepsilon}\mathrm{KL}(\mathcal{P}_ {A}\times\mathcal{P}_{Z}\mid\mid\mathcal{P}_{A,Z})\bigr{]},\]
where \(\lambda_{\varepsilon}\geq 0\) is a Lagrange multiplier that depends on \(\varepsilon\). This dual formulation will be crucial in the proof of Theorem 1 below.
### Restricted Posterior Maximization
In this section, we develop the general template of a data-driven procedure that targets the optimal recovery of the population-level quantity \(\mathcal{P}_{Z_{\mathrm{PT-FCD}}}^{\varepsilon}\). To keep parity with sub-section 2.1, suppose \(\mathbb{Z}\) is the collection of all possible clustering configurations, and \(\mathbb{Z}^{(fair)}\) is the subset of \(\mathbb{Z}\) where _balance_ is maintained. That is, \(\mathbb{Z}^{(fair)}=\bigl\{\mathbf{z}\in\mathbb{Z}:\mathcal{C}_{\mathbf{z}}\text{ is balanced}\bigr\}\), where \(\mathcal{C}_{\mathbf{z}}\) is the clustering configuration induced by \(\mathbf{Z}=\mathbf{z}\). A related, relaxed notion of \(\mathbb{Z}^{(fair)}\) is obtained by defining \(\mathbb{Z}^{(fair)}(\varepsilon)=\bigl\{\mathbf{z}\in\mathbb{Z}:\mathrm{KL}(\mathcal{P}_{\mathbf{A}}\times\mathcal{P}_{\mathbf{Z}}\mid\mid\mathcal{P}_{\mathbf{A},\mathbf{Z}})\leq\varepsilon\bigr\}\), where \(\varepsilon\geq 0\) is a hyper-parameter, and we have \(\mathbb{Z}^{(fair)}(0)=\mathbb{Z}^{(fair)}\).
**Definition 3** (Maximum-a-Posteriori fair cluster, MAP-FC).: _The modal clustering configuration obtained from the conditional distribution of \(\mathbf{Z}\mid\mathbf{X},\mathbf{A}\) restricted to \(\mathbb{Z}^{(fair)}(\varepsilon)\), denoted by \(\mathcal{P}_{\mathbf{Z}|\mathbf{X},\mathbf{A}}^{(fair)}(\mathbf{z}\mid\mathbf{x},\mathbf{a})\equiv \mathcal{P}_{\mathbf{Z}|\mathbf{X},\mathbf{A}}(\mathbf{z}\mid\mathbf{x},\mathbf{a})\times\mathbf{1}_{ \mathbb{Z}^{(fair)}(\varepsilon)}(\mathbf{z})\), is termed as the Maximum-a-Posteriori fair clustering configuration, i.e,_
\[\mathbf{z}_{\mathrm{MAP-FC}}^{\varepsilon}=\mathrm{argmax}_{\mathbf{z}\in\mathbb{Z}^{( fair)}(\varepsilon)}\mathcal{P}_{\mathbf{Z}|\mathbf{X},\mathbf{A}}(\mathbf{z}\mid\mathbf{x},\mathbf{a}). \tag{2.2}\]
The maximization problem in (2.2) can be expressed equivalently as \(\mathrm{argmax}_{\mathbf{z}\in\mathbb{Z}}\,\mathcal{P}_{\mathbf{Z}|\mathbf{X},\mathbf{A}}(\mathbf{z}\mid\mathbf{x},\mathbf{a})-\lambda_{\varepsilon}\mathrm{KL}(\mathcal{P}_{\mathbf{A}}\times\mathcal{P}_{\mathbf{Z}}\mid\mid\mathcal{P}_{\mathbf{A},\mathbf{Z}})\), where \(\lambda_{\varepsilon}\) is the Lagrange multiplier. Further, the number of unique elements in \(\mathbf{z}_{\mathrm{MAP-FC}}^{\varepsilon}\) in Definition 2.2 is referred to as the _Maximum-a-Posteriori number of clusters_. In principle, we can target \(\mathbf{z}_{\mathrm{MAP-FC}}^{\varepsilon}\) via a two-stage post-processing scheme, where we first fit a flexible Bayesian model and obtain samples from the unconstrained posterior of \(\mathbf{Z}\mid\mathbf{X}\); then, as a post-processing step, we sort the posterior samples according to their posterior probability and report the clustering configuration with the highest posterior probability that satisfies the balance constraint. However, such an approach is clearly computationally prohibitive, and we provide a novel workaround by proposing an HFDP prior in Section 3 that encodes the notion of approximate balance by shrinking towards \(\mathbb{Z}^{(fair)}(\varepsilon)\), with the degree of shrinkage controlled by hyperparameters which can be appropriately calibrated. This entirely avoids the need for the aforesaid post-processing, as the posterior samples automatically encode fairness with high probability.
Note that, \(\mathbf{X}\mid\mathbf{A}\) assumes a mixture distribution of the form \(\mathbf{X}\mid\mathbf{A}\sim\sum_{\mathbf{z}}\zeta_{\mathbf{a},\mathbf{z}}\mathcal{P}_{\mathbf{X}|\mathbf{ Z},\mathbf{A}}\), where \(\mathcal{P}_{\mathbf{Z}|\mathbf{A}}(\mathbf{Z}=\mathbf{z}\mid\mathbf{A}=\mathbf{a})=\zeta_{\mathbf{a},\mathbf{z}}\) and \(\sum_{\mathbf{z}}\zeta_{\mathbf{a},\mathbf{z}}=1\)\(\forall\)\(\mathbf{a}\in[r]^{N}\). Then, with a slight abuse of notation, we have \(\mathcal{P}_{\mathbf{Z}|\mathbf{X},\mathbf{A}}^{(fair)}(\mathbf{z}\mid\mathbf{x},\mathbf{a})=[\zeta_{ \mathbf{a},\mathbf{z}}\mathcal{P}_{\mathbf{X}|\mathbf{Z},\mathbf{A}}(\mathbf{x}\mid\mathbf{z},\mathbf{a}) \mathcal{P}_{\mathbf{A}}(\mathbf{a})\times\mathbf{1}_{\mathbb{Z}^{(fair)}(\varepsilon)}( \mathbf{z})]/\mathcal{G}_{N}\), where \(\mathcal{G}_{N}=\sum_{\mathbf{z}}[\zeta_{\mathbf{a},\mathbf{z}}\mathcal{P}_{\mathbf{X}|\mathbf{Z},\mathbf{A}}(\mathbf{x}\mid\mathbf{z},\mathbf{a})\mathcal{P}_{\mathbf{A}}(\mathbf{a})]\times\mathbf{1 }_{\mathbb{Z}^{(fair)}(\varepsilon)}(\mathbf{z})\). Given a cluster configuration \(\hat{\mathbf{z}}\), define
\[\mathrm{Fair-Score}(\hat{\mathbf{z}}):=\log\bigl[\zeta_{\mathbf{a},\hat{\mathbf{z}}}\,\mathcal{P}_{\mathbf{X}|\mathbf{Z},\mathbf{A}}(\mathbf{x}\mid\hat{\mathbf{z}},\mathbf{a})\,\mathcal{P}_{\mathbf{A}}(\mathbf{a})\times\mathbf{1}_{\mathbb{Z}^{(fair)}(\varepsilon)}(\hat{\mathbf{z}})\bigr] \tag{2.3}\]
Clearly, Fair-Score as defined above is maximized at \(\mathbf{z}_{\mathrm{MAP-FC}}^{\varepsilon}\). An empirical version, replacing population quantities by data-driven estimates, can be used to compare different
clustering methods. The Fair-Score thus serves as a criterion for a principled assessment of clustering performance, as it strikes a balance between posterior maximization and maintaining fairness. For a flavor, refer to Figure S3, which illustrates that the modal clustering obtained from the HFDP procedure proposed in Section 3 achieves a higher fair-score compared to existing methods [16, 13, 22].
In summary, Definition 2.1 introduces a notion of \(\varepsilon\)-balanced clustering at the population level under a model mis-specified framework, with minimal assumptions on the data-generating mechanism. In parallel, having observed samples \(\{(\mathbf{x}_{i},a_{i})\}_{i=1}^{N}\), Definition 2.2 discusses a restricted posterior maximization framework that targets the population-level object, of which the HFDP introduced in Section 3 is a specific example. To reconcile the notions of clustering with balance constraints at the population level (2.1) and its sample counterpart (2.2), an asymptotic equivalence result follows. To that end, let us define a map \(\mathcal{R}:[K]^{N}\to[0,1]^{K}\) that takes a clustering configuration \(\mathbf{z}\in[K]^{N}\) as input and outputs the relative proportion of cluster indices \((1/N)[\sum_{i=1}^{N}\mathbf{1}(z_{i}=1),\ldots,\sum_{i=1}^{N}\mathbf{1}(z_{i}=K)]^{\mathrm{T}}\). The marginal posterior of \((\mathbf{Z}\mid\mathbf{X},\mathbf{A})\), along with the reparameterization \(\mathbf{z}\to\mathcal{R}(\mathbf{z})\), yields the marginal posterior of \((\mathcal{R}(\mathbf{Z})\mid\mathbf{X},\mathbf{A})\), denoted by \(P_{\mathcal{R}(\mathbf{Z})\mid\mathbf{X},\mathbf{A}}\).
**Theorem 1** (An equivalence result).: _In Definitions 2.1 and 2.2, for any fixed \(\varepsilon\geq 0\), under clustering consistency conditions,_
\[\lim_{N\to\infty}\left[\frac{P_{\mathcal{R}(\mathbf{Z})\mid\mathbf{X},\mathbf{A}}( \mathcal{R}(\mathbf{z}_{\mathrm{MAP-FC}}^{\varepsilon})\mid\mathbf{x},\mathbf{a})}{\sum_{ \mathbf{z}}P_{\mathcal{R}(\mathbf{Z})\mid\mathbf{X},\mathbf{A}}(\mathcal{R}(\mathbf{z})\mid\mathbf{x}, \mathbf{a})\mathbf{1}_{\mathbb{Z}^{(fair)}(\varepsilon)}(\mathbf{z})}\right]=\mathcal{ P}_{Z_{\mathrm{PT-FCD}}}^{\varepsilon}.\]
We defer the proof to the supplementary material, and discuss the key takeaways from Theorem 1. Firstly, the equivalence result demonstrates that, given data \(\{(\mathbf{x}_{i},a_{i})\}_{i=1}^{N}\), we can carefully construct probability models such that the resulting modal \(\varepsilon\)-balanced clustering configuration accurately recovers the Kullback–Leibler projection of the true population clustering distribution within the \(\varepsilon\)-balanced class, as the sample size diverges to \(\infty\). Secondly, on the practical front, Theorem 1 justifies the utility of the quantity in (2.3) as a principled measure for clustering performance comparison.
## 3 Model and Prior Specification
Let \(\mathcal{Z}_{N,K}=\{\mathbf{z}=(z_{1},\ldots,z_{N}):z_{i}\in[K]\text{ for all }i\in[N]\}\) denote the space of all clustering configurations of \(N\) observations into \(K\) clusters. Any \(\mathbf{m}=(m_{1},\ldots,m_{K})\in\mathbb{Z}_{\geq 0}^{K}\) such that \(\sum_{k=1}^{K}m_{k}=N\) will be called a _cluster occupancy vector_. Given such \(\mathbf{m}\), let \(\mathcal{Z}_{N,K,\mathbf{m}}=\{\mathbf{z}\in\mathcal{Z}_{N,K}\::\sum_{i=1}^{N}\mathbf{1}(z_{i}=k)=m_{k},\ k\in[K]\}\) denote all clustering configurations with cluster occupancy vector \(\mathbf{m}\). Note that \(\mathcal{Z}_{N,K,\mathbf{m}}\) can be uniquely characterised by the space of \(N\times K\) binary cluster membership matrices with fixed column-sum \(\mathbf{m}\) and row-sum \(\mathbf{1}_{N}\). This is crucially exploited for posterior sampling; see around equation (S.2.1). Next, define a function \(\mathrm{rd}:\mathbb{N}\times\Delta^{t-1}\to\mathbb{Z}_{\geq 0}^{t}\), so that for a positive integer \(n\in\mathbb{N}\) and a probability vector \(\mathbf{u}\in\Delta^{t-1}\), \(\mathbf{v}=\mathrm{rd}(n,\mathbf{u})\) is given by \(v_{i}=\mathrm{round}(nu_{i})\) for \(i\in[\![t-1]\!]\), where "round" denotes rounding to the nearest integer, and \(v_{t}=n-\sum_{i=1}^{t-1}v_{i}\geq 0\). Clearly, \(\langle\mathbf{1}_{t},\mathrm{rd}(n,\mathbf{u})\rangle=n\) for any \(\mathbf{u}\). We use this \(\mathrm{rd}(\cdot)\) function to create a novel prior on cluster occupancy vectors below.
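As a concrete illustration, the \(\mathrm{rd}(\cdot)\) map can be written in a few lines of NumPy; the function name and the example values are ours.

```
import numpy as np

def rd(n, u):
    # round n*u_i to the nearest integer for i = 1, ..., t-1, and assign
    # the remainder to the last coordinate so the output always sums to n
    u = np.asarray(u, dtype=float)
    v = np.rint(n * u[:-1]).astype(int)
    return np.append(v, n - v.sum())

# e.g. rd(10, [0.26, 0.33, 0.41]) -> array([3, 3, 4]), which sums to 10
```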
We now detail pieces of a hierarchical model to carry out Bayesian clustering with fairness constraints. The lowest level hyperparameters of our model are \((K,g,b)\), where \(K\) is an upper bound on the number of clusters, and \(g,b\) are positive parameters. Given these hyperparameters, we sample a global weight vector \(\mathbf{\beta}\in\Delta^{K-1}\) and a concentration parameter \(\alpha_{0}\in(0,\infty)\) as in (3.1) below. Next, given \(\alpha_{0}\) and \(\mathbf{\beta}\), we independently sample a weight vector \(\mathbf{w}^{(a)}\) corresponding to each level of the attribute \(a\in[r]\) from \(\operatorname{Dir}(\alpha_{0}\,\mathbf{\beta})\).
\[\mathbf{w}^{(a)}\,|\,\alpha_{0},\mathbf{\beta}\overset{ind.}{\sim} \operatorname{Dir}(\alpha_{0}\,\mathbf{\beta}),\,a\in[r];\;\mathbf{\beta}\,|\,g\sim \operatorname{Dir}(g/K,\ldots,g/K);\;\alpha_{0}\,|\,g,b\ \sim\ \operatorname{Gamma}(g,\;b). \tag{3.1}\]
The concentration parameter \(\alpha_{0}\) dictates how tightly the \(\{\mathbf{w}^{(a)}\}_{a=1}^{r}\) concentrate around \(\mathbf{\beta}\), and is critical in enabling a notion of balance in our model-prior specification. Figure 1 provides a visual recipe for calibrating the prior on \(\alpha_{0}\) with respect to the lowest level hyperparameters \((g,b)\). In particular, given \((g,b)\) and \(K=2\), we obtain prior draws of \((\mathbf{w}^{(1)},\mathbf{w}^{(2)})\) according to the equation block (3.1), and present the a-priori distribution of \(\operatorname{KL}(\mathbf{w}^{(1)}\;||\;\mathbf{w}^{(2)})\) and of the balance between \((\mathbf{w}^{(1)},\mathbf{w}^{(2)})\).
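The calibration in Figure 1 can be reproduced with a short simulation such as the sketch below, which draws \((\mathbf{\beta},\alpha_{0},\mathbf{w}^{(1)},\mathbf{w}^{(2)})\) from (3.1) and summarises the a-priori spread of \(\operatorname{KL}(\mathbf{w}^{(1)}\,||\,\mathbf{w}^{(2)})\); we assume the \(\operatorname{Gamma}(g,b)\) prior is shape-rate parameterised, and the balance summary of Definition 2.1 is omitted for brevity.

```
import numpy as np

def kl(p, q, floor=1e-12):
    p, q = np.clip(p, floor, None), np.clip(q, floor, None)
    return float(np.sum(p * np.log(p / q)))

def prior_kl_draws(g, b, K=2, n_draws=2000, seed=0):
    # forward-simulate (3.1) and record KL(w^(1) || w^(2)) for each draw;
    # vary (g, b) until this a-priori distribution looks acceptable
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_draws):
        beta = rng.dirichlet([g / K] * K)
        alpha0 = rng.gamma(shape=g, scale=1.0 / b)  # Gamma(g, rate b) assumed
        w1 = rng.dirichlet(alpha0 * beta + 1e-9)    # jitter guards tiny alphas
        w2 = rng.dirichlet(alpha0 * beta + 1e-9)
        out.append(kl(w1, w2))
    return np.array(out)
```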
Having drawn \(\mathbf{w}^{(a)}\) for each label of the attribute, we draw the cluster configuration \(\mathbf{z}^{(a)}\) for the \(a\)th sub-population in the following hierarchical manner to maintain strict control over the cluster sizes,
\[\mathbf{z}^{(a)}\;|\;\mathbf{m}^{(a)}\overset{ind.}{\sim}\operatorname{Unif}(\mathcal{ Z}_{N_{a},K,\mathbf{m}^{(a)}}),\quad\mathbf{m}^{(a)}=\operatorname{rd}\bigl{(}N^{(a)}, \mathbf{w}^{(a)}\bigr{)},\quad a\in[r], \tag{3.2}\]
where we first set the cluster occupancy vector \(\mathbf{m}^{(a)}=\operatorname{rd}\bigl{(}N^{(a)},\mathbf{w}^{(a)}\bigr{)}\) for each \(a\in[r]\), and then draw \(\mathbf{z}^{(a)}\) uniformly from \(\mathcal{Z}_{N_{a},K,\mathbf{m}^{(a)}}\). This layer in our model is critically different from a usual specification in a hierarchical Dirichlet process prior [39], and is necessary in our endeavour to embed the idea of balance in the resulting clusters. We call the induced prior on \(\mathbf{m}^{(a)}\) implied by the hierarchy \(\mathbf{m}^{(a)}=\operatorname{rd}\bigl{(}N^{(a)},\mathbf{w}^{(a)}\bigr{)},w^{(a)} \sim\operatorname{Dir}(\alpha_{0}\,\mathbf{\beta})\) a _lifted Dirichlet_ prior with parameters \(N^{(a)},\alpha_{0},\beta\). Compared to the more standard Dirichlet-Multinomial prior, where \(\mathbf{m}^{(a)}\) is additionally sampled from a Multinomial distribution with total count \(N^{(a)}\) and probability vector \(\mathbf{w}^{(a)}\sim\operatorname{Dir}(\alpha_{0}\,\mathbf{\beta})\), the randomness in a lifted Dirichlet prior is entirely controlled by \(\mathbf{w}^{(a)}\), enabling tighter control on the cluster sizes across \(a\). For large \(N^{(a)}\), a lifted Dirichlet distribution provides a close approximation to a Dirichlet-Multinomial due
Figure 1: _The first two plots, respectively, present the distribution of \(\operatorname{KL}(\mathbf{w}^{(1)}\;||\;\mathbf{w}^{(2)})\) and of the balance between \((\mathbf{w}^{(1)},\mathbf{w}^{(2)})\), for varying \(b\) with fixed \(g\). The final two plots present the same quantities, now with varying \(g\) and fixed \(b\). In practice, we vary \((b,g)\) until we obtain the desired a-priori distribution of balance, and thus this provides a visual recipe for prior calibration._
to concentration of measure of Multinomials [35]. In Figure 2, we offer a visual illustration of this phenomenon in the \(K=2\) case, where we set \(\gamma_{k}=\alpha_{0}\beta_{k}\) for \(k=1,2\), and plot a heatmap of symmetrized KL distances between lifted Beta and Beta-Binomial distributions against \((\gamma_{1},\gamma_{2})\) for four different choices of \(N\). Clearly, as \(N\) increases, the approximation uniformly improves across values of the Beta hyperparameters \(\gamma_{1}\) and \(\gamma_{2}\).
The final part of our hierarchical model constitutes specifying component-wise distributions:
\[\begin{split}&\mathbf{x}_{i}^{(a)}\mid z_{i}^{(a)}=k,\mathbf{\phi}_{k}^{(a) }\stackrel{{\text{ind.}}}{{\sim}}f_{obs}(\cdot\mid\mathbf{\phi}_{k} ^{(a)}),\quad i\in[N_{a}],\ a\in[r]\\ &\mathbf{\phi}_{k}^{(a)}\mid\mathbf{\phi}^{(a)}\stackrel{{ \text{ind.}}}{{\sim}}f_{pop}(\cdot\mid\mathbf{\phi}^{(a)}),\quad k\in[K],\ a \in[r],\quad\mathbf{\phi}^{(a)}\stackrel{{ i.i.d}}{{\sim}}f_{atom}, \quad a\in[r],\end{split} \tag{3.3}\]
where \(\mathbf{\phi}_{k}^{(a)}\) are parameters specific to the \(k\)-th cluster and \(a\)-th attribute, drawn independently from a common population. Equations (3.1) - (3.3) complete the specification of our hierarchical model, which we call a _Hierarchical Fair-Dirichlet process_ (HFDP). For concreteness, we focus on a Gaussian generative model with a conjugate NIW prior on the mean-covariance in (3.3), i.e., we set \(f_{obs}(\cdot\mid\mathbf{\phi}_{k}^{(a)})=\mathrm{N}_{d}\big{(}\mathbf{\mu}_{k}^{(a)},\ \Sigma_{k}^{(a)}\big{)}\) where \(\phi_{k}^{(a)}=(\mathbf{\mu}_{k}^{(a)},\ \Sigma_{k}^{(a)})\); and \(f_{pop}(\cdot\mid\mathbf{\phi}^{(a)})=\mathrm{NIW}(\mathbf{\mu}_{0}^{(a)},\ \lambda_{0}^{(a)},\ \Lambda_{0}^{(a)},\ \nu_{0}^{(a)})\), where \(\phi^{(a)}=(\mathbf{\mu}_{0}^{(a)},\ \lambda_{0}^{(a)},\ \Lambda_{0}^{(a)},\ \nu_{0}^{(a)})\) with \(\mathbf{\mu}^{\star},\ \Lambda^{\star}\) suitably fixed. Finally, we collect \(\mathbf{\mu}^{(a)}=\{\mathbf{\mu}_{1}^{(a)},\ldots,\mathbf{\mu}_{K}^{(a)}\}\) and \(\Sigma^{(a)}=\{\Sigma_{1}^{(a)},\ldots,\Sigma_{K}^{(a)}\}\) for \(a\in[r]\). Under the above model and prior specification, our goal is to learn the clustering indices \(\{\mathbf{z}^{(a)},a\in[r]\}\) respecting the notion of approximate balance, and to avoid the need for statistically inefficient posterior sample sanitation, such as the scheme described in the text leading up to Theorem 1 in Section 2.1. Extensions to non-Gaussian likelihoods are straightforward.
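To fix ideas, the following sketch forward-simulates the full hierarchy (3.1)-(3.3) for Gaussian components, reusing the `rd` helper from the earlier sketch; drawing the component means i.i.d. from \(\mathrm{N}(\mathbf{0},9I_{d})\) with unit-variance components is our simplification of the NIW population draw.

```
import numpy as np

def sample_hfdp(N_a, K, g, b, d=2, seed=0):
    # N_a: list of sub-population sizes, one per attribute level
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet([g / K] * K)
    alpha0 = rng.gamma(shape=g, scale=1.0 / b)
    X, Z, A = [], [], []
    for a, n in enumerate(N_a):
        w = rng.dirichlet(alpha0 * beta + 1e-9)
        m = rd(n, w)                                     # occupancy vector (3.2)
        z = rng.permutation(np.repeat(np.arange(K), m))  # uniform on Z_{n,K,m}
        mu = rng.normal(0.0, 3.0, size=(K, d))           # simplified f_pop draw
        X.append(mu[z] + rng.normal(size=(n, d)))        # unit-variance components
        Z.append(z)
        A.append(np.full(n, a))
    return np.vstack(X), np.concatenate(Z), np.concatenate(A)
```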
## 4 Posterior Computation
Sampling from the posterior obtained from (3.1)-(3.3) encounters challenges due to (a) the lack of conjugacy of the shared weight vector \(\mathbf{\beta}\) and \(\alpha_{0}\) and (b) sampling the cluster occupancy vectors \(\mathbf{m}^{(a)}\) in large discrete spaces with combinatorial complexity. The bottleneck (a) is alleviated by noting that the Gamma prior on \(\alpha_{0}\) allows us to break the dependence on the shared mixing proportions \(\mathbf{\beta}\). This permits independent updates of certain log-concave random variables in a block, adapting a recently developed blocked Gibbs sampler for hierarchical Dirichlet processes [19]. The second bottleneck is carefully circumnavigated via a
Figure 2: _Heatmap of the symmetrized KL distance between a lifted Beta distribution and a Beta-Binomial distribution against the Beta parameters \(\gamma_{1}\) and \(\gamma_{2}\). As \(N\) increases, the symmetrized KL uniformly gets smaller, implying the distributions get closer over a wide range of parameter values._
novel scheme for non-uniform sampling from the space of binary matrices with fixed margins, utilizing techniques from integer-valued optimal transport to construct proposals. Starting with the joint distribution of all parameters described in equations (3.1) - (3.3), we analytically integrate out the cluster specific parameters \(\{\mathbf{\phi}_{k},\ k\in[K]\}\) and the shared concentration parameter \(\alpha_{0}\), wherever possible, to ensure improved mixing. The details of the derivation of the sampler are deferred to the supplementary materials, and the key computational bottlenecks and features of the algorithm are presented in the sequel.
Letting \(\theta\mid-\) denote the full conditional distribution of a parameter \(\theta\) given the other parameters and the data, a collapsed Gibbs sampler (or an MC-EM algorithm) cycles through the following steps. First, we sample from \([\alpha_{0}\mid-]\) via Metropolis-within-Gibbs, sample from \([\mathbf{\beta}\mid-]\) adapting [19] to our present setup, and then sample \([\mathbf{w}^{(a)}\mid-]\) from independent Dirichlet distributions. The marginal conditional of the clustering indices \([\mathbf{z}^{(a)}\mid-],\ a\in[r]\), integrating out population parameters, is
\[[\mathbf{z}^{(a)}\mid-]\propto\ \prod_{k=1}^{K}\frac{\Gamma_{d}(\nu_{k}^{(a)}/2)\ (\lambda_{0}^{(a)})^{d/2}\ |\Lambda_{0}^{(a)}|^{\nu_{0}^{(a)}/2}}{\Gamma_{d}(\nu_{0}^{(a)}/2)\ (\lambda_{k}^{(a)})^{d/2}\ |\Lambda_{k}^{(a)}|^{\nu_{k}^{(a)}/2}},\quad \mathbf{z}^{(a)}\in\mathcal{Z}_{N_{a},K,\mathbf{m}^{(a)}}\quad a\in[r]. \tag{4.1}\]
Sampling from (4.1) poses a difficult combinatorial problem and is the most substantial computational bottleneck in our algorithm. We circumnavigate this issue by recasting the problem as a non-uniform sampling task from the space of binary matrices with fixed margins [31, 12, 1], and propose a novel sampling scheme equipped with an MH proposal based on integer-valued optimal transport [40].
### Optimization
As an intermediate step, we first develop a computationally convenient MC-EM algorithm [28], where instead of sampling from \([\mathbf{z}^{(a)}\mid-],\ a\in[r]\), we update the chain with the posterior mode of \([\mathbf{z}^{(a)}\mid-],\ a\in[r]\). To compute this posterior mode, we go through the following steps. **(i)** First, for \(a\in[r]\), we calculate the current component specific means and variances \(\mathbf{\mu}_{k,\star}^{(a)},\Sigma_{k,\star}^{(a)}\) for all \(a\in[r],k\in[K]\). **(ii)** Next, we define the \(N_{a}\times K\) cost matrix \(\mathrm{L}^{(a)}=((l_{ik}))=\bigl(\bigl(-\log\mathrm{N}_{d}(\mathbf{x}_{i}^{(a)}\mid\mathbf{\mu}_{k,\star}^{(a)},\Sigma_{k,\star}^{(a)})\bigr)\bigr)\), column sum vector \(\mathbf{c}^{(a)}=\mathbf{m}^{(a)}\), and row sum vector \(\mathbf{r}^{(a)}=\mathbf{1}_{N_{a}}\), where \(\mathbf{1}_{N_{a}}\) is a vector of \(N_{a}\) ones. **(iii)** Next, given the two vectors \(\mathbf{r}^{(a)},\mathbf{c}^{(a)}\), we define the polytope of \(N_{a}\times K\) binary cluster membership matrices \(\mathrm{U}(\mathbf{r}^{(a)},\mathbf{c}^{(a)}):=\{\mathrm{B}\mid\mathrm{B}\mathbf{1}_{K}=\mathbf{r}^{(a)};\ \mathrm{B}^{\mathrm{T}}\mathbf{1}_{N_{a}}=\mathbf{c}^{(a)}\}\), and solve the constrained binary optimal transport problem [40] \(\mathrm{B}^{(a)}=\mathrm{argmin}_{\mathrm{B}\in\mathrm{U}(\mathbf{r}^{(a)},\ \mathbf{c}^{(a)})}\langle\mathrm{B},\mathrm{L}^{(a)}\rangle\), where \(\langle\mathrm{B},\mathrm{L}^{(a)}\rangle=\mathrm{tr}(\mathrm{B}^{\mathrm{T}}\mathrm{L}^{(a)})\). Finally, we obtain the modal clustering indices \(\mathbf{z}^{(a)}\) from the binary cluster membership matrices \(\mathrm{B}^{(a)},a\in[r]\), completing the MC-EM algorithm.
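Because the margins are integral, step **(iii)** can be solved exactly by replicating column \(k\) of the cost matrix \(m_{k}\) times and running a standard linear assignment solver on the resulting square matrix; the sketch below uses SciPy for this, which is our implementation choice rather than the integer optimal transport solver of [40].

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def modal_assignment(L, m):
    # L: N x K cost matrix with l_ik = -log N_d(x_i | mu_k, Sigma_k);
    # m: cluster occupancy vector with sum(m) = N
    cols = np.repeat(np.arange(len(m)), m)   # cluster id of each replicated column
    rows, assigned = linear_sum_assignment(L[:, cols])  # exact N x N assignment
    z = np.empty(L.shape[0], dtype=int)
    z[rows] = cols[assigned]                 # map matched columns back to clusters
    return z
```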
### Non-uniform Sampling of Fixed Margin Binary Matrices
We now complete the details of our full Gibbs sampler, which replaces the previous optimization step with sampling. The uniform distribution on binary matrices with fixed margins arises in many applications in neurophysiology, sociology, psychometrics, and ecology. Both exact [31] and approximate uniform sampling methods from the space of binary matrices with
fixed margins are now well-established; the approximate sampling algorithms are based on checkerboard swaps [12], curveball moves [1], or rectangular loop moves [41]. However, (4.1) requires sampling according to a non-uniform probability distribution defined by a weight matrix. To that end, we adapt [41] to introduce rectangular loop moves in the non-uniform case to improve mixing. We explain this adaptation below.
Given fixed row sums \(\mathbf{r}=(r_{1},\ldots,r_{u})^{\mathrm{T}}\) and column sums \(\mathbf{c}=(c_{1},\ldots,c_{v})^{\mathrm{T}}\), we denote the collection of all \(u\times v\) binary matrices with these margins by \(\mathrm{U}(\mathbf{r},\mathbf{c})\). Further, we denote by \(\Omega=(\omega_{ij})\), \(\omega_{ij}\in[0,\infty)\), a non-negative weight matrix representing the relative probability of observing a count of \(1\) at the \((i,j)\)-th cell. The likelihood associated with an observed binary matrix \(H\in\mathrm{U}(\mathbf{r},\mathbf{c})\) is then \(\mathrm{P}(H)=(1/\kappa)\prod_{i,j}\omega_{ij}^{h_{ij}},\ \kappa=\sum_{H\in\mathrm{U}(\mathbf{r},\mathbf{c})}\ \prod_{i,j}\omega_{ij}^{h_{ij}}\). Let \(\mathrm{U}^{\prime}(\mathbf{r},\mathbf{c})=\{H\in\mathrm{U}(\mathbf{r},\mathbf{c}):\ P(H)>0\}\) denote the subset of matrices in \(\mathrm{U}(\mathbf{r},\mathbf{c})\) with positive probability. Then, for \(H_{1},H_{2}\in\mathrm{U}^{\prime}(\mathbf{r},\mathbf{c})\), the relative probability of the two observed matrices is \(\mathrm{P}(H_{1})/\mathrm{P}(H_{2})=\prod_{\{i,j:h_{1,ij}=1,h_{2,ij}=0\}}\ \omega_{ij}^{h_{1,ij}}/\prod_{\{i,j:h_{1,ij}=0,h_{2,ij}=1\}}\ \omega_{ij}^{h_{2,ij}}\). With these notations, we are ready to introduce a _Weighted Rectangular Loop Algorithm_ (W-RLA) for non-uniform sampling from the space of fixed-margin binary matrices \(H\in\mathrm{U}(\mathbf{r},\mathbf{c})\), given the weight matrix \(\Omega\). To that end, let us first record that the identity matrix of order \(2\), and the \(2\times 2\) matrix with all zero diagonal entries and all one off-diagonal entries, are referred to as _checkerboard_ matrices.
W-RLA is described as follows.

* **(1)** At iteration \(t=0\), provide an initial binary matrix \(A_{0}\) that respects the margin constraints, and the total number of iterations \(T\).
* **(2)** Increment \(t\to t+1\), and _(a)_ choose one row and one column \((r_{1},c_{1})\) uniformly at random. _(b) If_ \(A_{t-1}(r_{1},c_{1})=1\), choose a column \(c_{2}\) at random among all the \(0\) entries in row \(r_{1}\), and a row \(r_{2}\) at random among all the \(1\) entries in column \(c_{2}\); _else_ choose a row \(r_{2}\) at random among all the \(1\) entries in column \(c_{1}\), and a column \(c_{2}\) at random among all the \(0\) entries in row \(r_{2}\).
* **(3.1)** _If_ the sub-matrix extracted from \(r_{1},r_{2},c_{1},c_{2}\) is a checkerboard unit, obtain \(B_{t}\) from \(A_{t-1}\) by swapping the checkerboard; calculate \(p_{t}=\mathrm{P}(B_{t})/[\mathrm{P}(B_{t})+\mathrm{P}(A_{t-1})]\); draw \(r_{t}\sim\mathrm{Bernoulli}(p_{t})\); if \(r_{t}=1\), set \(A_{t}=B_{t}\), else set \(A_{t}=A_{t-1}\).
* **(3.2)** _If_ the sub-matrix extracted from \(r_{1},r_{2},c_{1},c_{2}\) is _not_ a checkerboard unit, set \(A_{t}=A_{t-1}\).

This completes the description of W-RLA. Theorem 2 ensures the validity of the scheme, i.e., it demonstrates that the Markov chain converges to the correct stationary distribution as the number of iterations \(T\to\infty\). The proof of Theorem 2 is deferred to the supplementary material.
**Theorem 2**.: _Given a weight matrix \(\Omega\) with \(\omega_{ij}>0\), and fixed row-sum and column-sum vectors \((\mathbf{r},\mathbf{c})\), the Weighted Rectangular Loop Algorithm (W-RLA) generates a Markov chain with stationary distribution given by_
\[\text{P}(H)=(1/\kappa)\prod_{i,j}\omega_{ij}^{h_{ij}},\ \kappa=\sum_{H\in \text{U}(\mathbf{r},\mathbf{c})}\ \prod_{i,j}\omega_{ij}^{h_{ij}}.\]
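For concreteness, one iteration of W-RLA can be sketched as follows; `logW` holds \(\log\omega_{ij}\), and the early returns on empty candidate sets are our guard for degenerate rows or columns.

```
import numpy as np

def wrla_step(A, logW, rng):
    # one rectangular-loop move on a binary matrix A with fixed margins
    u, v = A.shape
    r1, c1 = rng.integers(u), rng.integers(v)
    if A[r1, c1] == 1:
        zeros = np.flatnonzero(A[r1] == 0)
        if zeros.size == 0:
            return A
        c2 = rng.choice(zeros)
        ones = np.flatnonzero(A[:, c2] == 1)
        if ones.size == 0:
            return A
        r2 = rng.choice(ones)
    else:
        ones = np.flatnonzero(A[:, c1] == 1)
        if ones.size == 0:
            return A
        r2 = rng.choice(ones)
        zeros = np.flatnonzero(A[r2] == 0)
        if zeros.size == 0:
            return A
        c2 = rng.choice(zeros)
    sub = A[np.ix_([r1, r2], [c1, c2])]
    is_checkerboard = (sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0]
                       and sub[0, 0] != sub[0, 1])
    if is_checkerboard:
        B = A.copy()
        B[np.ix_([r1, r2], [c1, c2])] = 1 - sub  # swap the checkerboard
        # log P(B) - log P(A) only involves the four flipped cells
        log_ratio = ((1 - 2 * sub) * logW[np.ix_([r1, r2], [c1, c2])]).sum()
        if rng.random() < 1.0 / (1.0 + np.exp(-log_ratio)):  # P(B)/(P(B)+P(A))
            return B
    return A
```

Iterating `wrla_step` for \(T\) steps from any feasible initial matrix yields the chain whose stationary distribution is characterised in Theorem 2.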
In our blocked Gibbs sampler, the final step involves sampling binary cluster membership matrices, given a weight matrix \(\Omega\) determined by \(\mathrm{L}^{(a)}=((l_{ik}))=\bigl(\bigl(-\log\mathrm{N}_{d}(\mathbf{x}_{i}^{(a)}\mid\mathbf{\mu}_{k,\star}^{(a)},\Sigma_{k,\star}^{(a)})\bigr)\bigr)\), column sum vector \(\mathbf{c}^{(a)}=\mathbf{m}^{(a)}\), and row sum vector \(\mathbf{r}^{(a)}=\mathbf{1}_{N_{a}}\), all of which are fixed conditional on the remaining model parameters. In order to sample from the full conditional of \([\mathbf{z}^{(a)}\mid-],a\in[r]\), we first obtain a proposal for the cluster membership matrix \(\mathrm{B}_{\mathrm{prop}}^{(a)}\) by solving the constrained
binary optimal transport [40] problem \(\mathrm{B}^{(a)}_{\mathrm{prop}}=\mathrm{argmin}_{\mathrm{B}\in\mathrm{U}(\boldsymbol{r}^{(a)},\,\boldsymbol{c}^{(a)})}\langle\mathrm{B},\mathrm{L}^{(a)}\rangle\) as in the optimization scheme, and then mutate the proposal via the rectangular loop scheme to get \(\mathrm{B}^{(a)}_{\mathrm{rec-loop}}\). Finally, we set the updated \(\boldsymbol{z}^{(a)},a\in[r]\) to the cluster indicators corresponding to \(\mathrm{B}^{(a)}_{\mathrm{rec-loop}}\) with probability \(\mathrm{P}(\mathrm{B}^{(a)}_{\mathrm{rec-loop}})/[\mathrm{P}(\mathrm{B}^{(a)}_{\mathrm{rec-loop}})+\mathrm{P}(\mathrm{B}^{(a)}_{\mathrm{prop}})]\), and to the cluster indicators corresponding to \(\mathrm{B}^{(a)}_{\mathrm{prop}}\) otherwise.
## 5 Experiments
### Simulation study 1 (Two/Multi-color Case)
We generate data from the model in (3.1)-(3.3) with fixed \((b,g)\) that yields a small \(\alpha_{0}\), which in turn yields low balance in the clusters. We fix the true number of clusters \(K_{true}=2\) and the attribute specific sample sizes \(N^{(1)}=N^{(2)}=200\). We consider the following cases:
* **(A.1) Well-specified case (multivariate normal components).** In the first cluster, the individuals with \(a=1\) are generated from \(N_{2}(\mu_{11},S)\) and individuals with \(a=2\) are generated from \(N_{2}(\mu_{21},S)\). In the second cluster, individuals with \(a=1\) are generated from \(N_{2}(\mu_{12},S)\) and individuals with \(a=2\) are generated from \(N_{2}(\mu_{22},S)\), where \(\mu_{11}=(4,4)^{\prime},\ \mu_{21}=(2,2)^{\prime},\ \mu_{12}=(10,10)^{\prime},\ \mu_{22}=(8,8)^{\prime}\), and \(S=3\times[\rho 11^{\mathrm{T}}+(1-\rho)I_{2}]\) with \(\rho=0.3\). A data-generation sketch for this case appears after the list.
* **(A.2) Mis-specified case 1 (multivariate t components).** We follow the same scheme as before, except for simulating from multivariate \(t\)-distributions with centers \((\mu_{11},\mu_{21},\mu_{12},\mu_{22})\), scale \(S\), and degrees of freedom 4.
* **(A.3) Mis-specified case 2 (multivariate skew normal components).** We follow the same scheme as before, except for simulating from multivariate skew normal distributions [7] with centers \((\mu_{11},\mu_{21},\mu_{12},\mu_{22})\), scale \(S\), and the skewness parameter
Figure 3: _For one instance in the simulation set up (A.1), the left and middle panels present the clustering configuration via HFDP and fair k-means [16], respectively. The differences between the two configurations are marked with black "*". Non-optimal exchange of points in fair k-means leads to a substantial decrease in fair-score. The right panel presents the clustering configuration obtained via randomly flipping \(10\%\) of the MAP of the HFDP clustering indices, to demonstrate that the fair-score is low for nonsensical clustering configurations._
\(\alpha=(1,1)^{\mathrm{T}}\). In Figure 4, fair-clustering with fairlets [16] (termed K-Means) is compared with HFDP via repeated simulations in both well-specified and mis-specified set ups. Across all the numerical experiments, the MAP of the HFDP fares substantially better in terms of the fair-score (2.3). Further, HFDP estimates the number of clusters automatically, whereas the number of clusters for the methods in [16, 13] is set deterministically at the value estimated by the MAP of the HFDP.
* **B. Multiple colors.** We generate data from the model in (3.1)-(3.3) with number of colors \(r=4\), sample sizes \(N^{(1)}=N^{(2)}=N^{(3)}=N^{(4)}=200\), true number of clusters \(K_{true}=2\), and large \(b\). In the first cluster, the individuals with \(a=1\) are generated from \(N_{2}((4,4)^{\prime},S)\), the individuals with \(a=2\) from \(N_{2}((2,2)^{\prime},S)\), the individuals with \(a=3\) from \(N_{2}((0,0)^{\prime},S)\), and the individuals with \(a=4\) from \(N_{2}((-2,-2)^{\prime},S)\). In the second cluster, individuals with \(a=1\) are generated from \(N_{2}((10,10)^{\prime},S)\), individuals with \(a=2\) from \(N_{2}((8,8)^{\prime},S)\), individuals with \(a=3\) from \(N_{2}((6,6)^{\prime},S)\), and individuals with \(a=4\) from \(N_{2}((4,4)^{\prime},S)\), where \(S=3\times[\rho 11^{\mathrm{T}}+(1-\rho)I_{2}]\) with \(\rho=0.3\). Finally, we carry out the clustering via the algorithm in Section 4, fixing the values of the hyper-parameters \(g,\mu_{p0},\lambda_{p0},\nu_{p0},\Lambda_{p0}\) such that these parameters do not impact our analysis. In Figure 4, the method in Böhm et al. [13] (termed K-Means) is compared with HFDP via repeated simulations. Across all the numerical experiments, the MAP of the HFDP fares substantially better in terms of the fair-score (2.3), and it automatically estimates the number of clusters.
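For reproducibility, the sketch below generates one replicate of case **(A.1)**; splitting each attribute's 200 points equally across the two clusters is our simplification, whereas in the actual study the occupancies arise from the hierarchy (3.1)-(3.2).

```
import numpy as np

def simulate_A1(n_per_attr=200, rho=0.3, seed=0):
    rng = np.random.default_rng(seed)
    S = 3 * (rho * np.ones((2, 2)) + (1 - rho) * np.eye(2))
    centers = {(1, 1): (4, 4), (2, 1): (2, 2),      # keys: (attribute a, cluster k)
               (1, 2): (10, 10), (2, 2): (8, 8)}
    X, A, Z = [], [], []
    for a in (1, 2):
        for k in (1, 2):
            mu = np.asarray(centers[(a, k)], dtype=float)
            X.append(rng.multivariate_normal(mu, S, size=n_per_attr // 2))
            A.append(np.full(n_per_attr // 2, a))
            Z.append(np.full(n_per_attr // 2, k))
    return np.vstack(X), np.concatenate(A), np.concatenate(Z)
```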
### Simulation study 2 (Imperfect Knowledge of Protected Attribute)
Much of the prior work on fair clustering assumes complete knowledge of group membership. Esmaeili et al. [22] assumed imperfect knowledge of group membership through probabilistic assignments, i.e., for individual \(i\in[\![N]\!]\), the protected label \(a_{i}=a\) with known probability \(p_{i}^{(a)}\) and \(\sum_{a=1}^{r}p_{i}^{(a)}=1,\ i\in[\![N]\!]\). The goal is to balance the _expected color_ in each cluster. Our fully probabilistic set up enables us to seamlessly achieve this by augmenting every cycle of our computational scheme with an additional sampling step, described below:
Figure 4: _Two/Multi-color case. The MAP of the HFDP fares better compared to fair-clustering with fairlets [16] (termed as K-Means) in two color case, and the method in Böhm et al. [13] (termed as K-Means) in multi-color case, in terms of the fair-score (2.3); and automatically estimates the number of clusters._
At every cycle, we redraw \(a_{i}=a\) with probability \(p_{i}^{(a)},\ a\in[r],\ i\in[N]\). To study numerical efficacy, we first generate data exactly as in _simulation 1, case A.1_; we then retain the label of \(A\) with probability \(p_{acc}\in\{0.7,0.8,0.9\}\) for each of the individuals, and swap it with probability \(1-p_{acc}\). In Figure 5, we compare HFDP with probabilistic fair clustering [22] (termed K-means) via repeated simulations with varying \(p_{acc}\in\{0.7,0.8,0.9\}\). Across all the numerical experiments, the MAP of the HFDP fares substantially better in terms of the fair-score (2.3).
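The extra step amounts to redrawing each protected label from its known probability vector once per sweep; a minimal vectorised sketch is below, where `P` stacks \((p_{i}^{(1)},\ldots,p_{i}^{(r)})\) row-wise and labels are coded \(0,\ldots,r-1\) (our conventions).

```
import numpy as np

def resample_attributes(P, rng):
    # draw a_i = a with probability P[i, a] via inverse-CDF sampling
    cum = np.cumsum(P, axis=1)           # row-wise cumulative probabilities
    u = rng.random((P.shape[0], 1))
    return (u < cum).argmax(axis=1)      # first column where u falls below cum
```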
### Benchmark Datasets
We compare the performance of HFDP against existing methodologies on popular benchmark datasets from the UCI repository [20], considered before in the literature [16, 13, 22]:
* **(1) Diabetes Data.** We chose numeric attributes such as age and time in hospital to represent points in the Euclidean space, and gender as the sensitive dimension. We sub-sampled 1000 individuals from the dataset, with the proportion of the two genders equal to \((0.474,0.526)\), i.e., the target balance is \(0.90\). HFDP estimates the modal number of clusters to be \(5\), with corresponding balance \(0.898\) and fair-score (2.3) equal to \(-114.74\). Fair clustering with fairlets with fixed \(K=5\) yields a fair-score (2.3) of \(-651.84\).
* **(2) Portuguese Banking Data.** We chose numeric attributes such as age, balance, and duration to represent points in the Euclidean space, and we aim to form clusters balanced between married and unmarried clients. We sub-sampled the dataset to 1000 records, with the proportion of married and unmarried clients equal to \((0.626,0.374)\), i.e., the target balance is \(0.60\). HFDP estimates the modal number of clusters to be \(5\), with corresponding balance \(0.593\) and fair-score (2.3) equal to \(-176.85\). Fair clustering with fairlets with fixed \(K=5\) yields a fair-score (2.3) of \(-639.12\).
* **(3) Credit Card Data.** We chose numeric attributes such as age and credit limit to represent points in the Euclidean space, and marital status (married, unmarried, others) as the sensitive dimension. We sub-sampled 600 individuals from the dataset such that the target balance is \(0.99\). HFDP estimates the modal number of clusters to be \(3\), with corresponding balance \(0.99\) and fair-score (2.3) equal to \(-86.19\). The algorithm in Böhm et al. [13] with fixed \(K=3\) yields a fair-score (2.3) of \(-271.09\).
Figure 5: _Imperfect knowledge of protected attribute. The MAP of the HFDP fares better compared to probabilistic fair clustering [22] (termed as K-means), in terms of the fair-score (2.3) in repeated simulations with varying \(p_{acc}\)._ |
2306.01665 | SourceP: Detecting Ponzi Schemes on Ethereum with Source Code | As blockchain technology becomes more and more popular, a typical financial
scam, the Ponzi scheme, has also emerged in the blockchain platform Ethereum.
This Ponzi scheme deployed through smart contracts, also known as the smart
Ponzi scheme, has caused a lot of economic losses and negative impacts.
Existing methods for detecting smart Ponzi schemes on Ethereum mainly rely on
bytecode features, opcode features, account features, and transaction behavior
features of smart contracts, which are unable to truly characterize the
behavioral features of Ponzi schemes, and thus generally perform poorly in
terms of detection accuracy and false alarm rates. In this paper, we propose
SourceP, a method to detect smart Ponzi schemes on the Ethereum platform using
pre-trained models and data flow, which only requires using the source code of
smart contracts as features. SourceP reduces the difficulty of data acquisition
and feature extraction of existing detection methods. Specifically, we first
convert the source code of a smart contract into a data flow graph and then
introduce a pre-trained model based on learning code representations to build a
classification model to identify Ponzi schemes in smart contracts. The
experimental results show that SourceP achieves 87.2% recall and 90.7% F-score
for detecting smart Ponzi schemes within Ethereum's smart contract dataset,
outperforming state-of-the-art methods in terms of performance and
sustainability. We also demonstrate through additional experiments that
pre-trained models and data flow play an important contribution to SourceP, as
well as proving that SourceP has a good generalization ability. | Pengcheng Lu, Liang Cai, Keting Yin | 2023-06-02T16:40:42Z | http://arxiv.org/abs/2306.01665v8 | # SourceP: Detecting Ponzi Schemes on Ethereum with Source Code
###### Abstract
As blockchain technology becomes more and more popular, a typical financial scam, the Ponzi scheme, has also emerged in the blockchain platform Ethereum. This Ponzi scheme deployed through smart contracts, also known as the smart Ponzi scheme, has caused a lot of economic losses and negative impacts. Existing methods for detecting smart Ponzi schemes on Ethereum mainly rely on bytecode features, opcode features, account features, and transaction behavior features of smart contracts, and the performance of identifying schemes is insufficient. In this paper, we propose SourceP, a method to detect smart Ponzi schemes on the Ethereum platform using pre-trained models and data flow, which only requires using the source code of smart contracts as features to explore the possibility of detecting smart Ponzi schemes from another direction. SourceP reduces the difficulty of data acquisition and feature extraction of existing detection methods while increasing the interpretability of the model. Specifically, we first convert the source code of a smart contract into a data flow graph and then introduce a pre-trained model based on learning code representations to build a classification model to identify Ponzi schemes in smart contracts. The experimental results show that SourceP achieves 87.2% recall and 90.7% F-score for detecting smart Ponzi schemes within Ethereum's smart contract dataset, outperforming state-of-the-art methods in terms of performance and sustainability. We also demonstrate through additional experiments that pre-trained models and data flow play an important contribution to SourceP, as well as proving that SourceP has a good generalization ability.
blockchain, Ethereum, smart contract, Ponzi scheme, pre-trained model
## I Introduction
Blockchain technology is rapidly evolving as an open ledger of recorded transactions maintained in a distributed network of mutually untrusted peers (i.e., peer-to-peer network), where each peer verifies transactions through a consensus protocol [1, 2, 3, 4]. Blockchain has now developed applications in areas such as the Internet of Things [5], voting systems [6], authentication [7], ledgers [3], disaster relief [8], healthcare [9], and edge computing [10], which have attracted significant interest from industry and academia [11]. Also, blockchain has become a common infrastructure for the emerging metaverse and Web3, and DeFi, DAO, and Non-Fungible Tokens developed from blockchain technology are becoming more and more popular [12].
Ethereum [13] is currently the most popular public blockchain platform with smart contract functionality [14]. Ethereum handles peer-to-peer contracts through the cryptocurrency Ether (ETH) and can store data, run smart contracts, and share information globally. First proposed by Nick Szabo in the mid-1990s [15], smart contracts contain contractual terms that are automatically executed when predefined conditions are met, and they are stored, replicated, and updated in a distributed blockchain. The combination of blockchain technology and smart contracts makes the dream of a "peer-to-peer marketplace" come true, which means that there will be no third-party intervention in transactions between buyers and suppliers via smart contracts in a blockchain marketplace [16].
Smart contracts on the Ethereum platform are Turing-complete, and through Turing-complete smart contracts, users can not only trade in cryptocurrency but also perform arbitrary actions on the blockchain [17]. Because of this, decentralized applications (DApps) [18, 19] consisting of smart contracts are currently growing rapidly, such as CryptoKitties [20] and IDEX [21]. Yet at the same time, smart contracts have given a new opportunity for the spread of Ponzi schemes [22].
Born 150 years ago, the Ponzi scheme is a classic financial fraud that lures new users in with the promise of high profits; its core mechanism is to compensate earlier investors and creators with the investments of new investors, while creating no value itself [23]. Once no new investors join, the Ponzi scheme quickly collapses: new investors who have not been compensated lose their money, while the scheme's creators and early participants collect illegal income. Smart contracts allow the creators of Ponzi schemes to remain anonymous, while their immutability makes them extremely difficult to change after being deployed to the blockchain; criminals exploit these features to deploy scams on smart contract platforms like Ethereum [24]. Such Ponzi schemes deployed on smart contracts are called smart Ponzi schemes. Ponzitracker [25] identified close to $3 billion worth of Ponzi schemes in 2022, many related to blockchain-based cryptocurrencies. The recent smart Ponzi scheme on the Forsage website involved more than $300 million [26]. The harm of such financial fraud is serious, and reducing smart Ponzi schemes is an urgent task. Meanwhile, smart contracts are the building blocks of DApps, and if individual smart contracts in a DApp are Ponzi schemes, then the DApp may also be dangerous. Therefore, detecting whether smart contracts deployed on Ethereum are smart Ponzi schemes and flagging them is important to prevent financial fraud and maintain the healthy development of DApps and blockchain platforms.
To the best of our knowledge, the existing methods for
detecting smart Ponzi schemes fall into three main categories: the first category uses the bytecode or opcodes of smart contracts as features to train classifiers or perform static analysis [22, 27]; the second category uses the transaction behavior features of smart contracts [28, 29]; the third category uses both opcode features and account features to train the model [4, 30, 31]. However, these three categories of methods face some limitations. First, bytecode and opcode features lack interpretability, and adding or removing some opcodes in a smart contract is an easy way to circumvent detection. Second, it is difficult to accurately locate the fraudster on the anonymous Ethereum platform, which makes collecting the fraudster's transaction behavior features and account features quite challenging. Third, static analysis methods suffer from recognition lag and require human intervention, increasing the cost of identification. Finally, in the context of rapid updates to smart contracts, the sustainability and performance of currently available methods for detecting new smart Ponzi schemes have decreased significantly.
**Our Method.** To address these challenges, we propose a smart Ponzi scheme detection approach based on pre-trained models and data flow called SourceP. Pre-trained models in current natural language processing (NLP) techniques have facilitated many tasks [32, 33, 34], thus, it is reasonable to explore the great potential of pre-trained models in smart Ponzi scheme detection. Since smart contracts on Ethereum are typically written in Solidity [35, 36], SourceP will detect Ponzi schemes on smart contracts written in the Solidity language.
SourceP embodies a key innovation point, namely the use of only the source code of smart contracts as a feature. Figure 1 and Figure 2 show the significant differences between existing traditional methods and our method in detecting smart Ponzi schemes through flowcharts. As can be seen from the comparison of the two figures, our approach omits obtaining the bytecode, transaction, and account information of the smart contract, which reduces the amount of data required while avoiding the various problems associated with too many features. The specific implementation steps of SourceP can be seen in Figure 4 and Figure 5. First, the smart contract source code is converted into a data flow, then the source code information is fed into the pre-trained model together with the data flow representing variable dependencies, and finally, the identification results of the smart contract Ponzi scheme are output and evaluated. Ultimately, we find that SourceP outperforms previous methods in smart Ponzi scheme detection.
The main contributions of this paper are as follows:

* We propose SourceP, an approach for detecting Ponzi schemes in smart contracts. To the best of our knowledge, SourceP is the first method to detect smart Ponzi schemes using only the source code of smart contracts as features. It introduces a pre-trained model for learning code representations and uses the code semantic information provided by the data flow converted from the source code for detection.
* We conducted extensive experiments to demonstrate that SourceP outperforms the state-of-the-art in detecting smart Ponzi schemes in terms of performance and sustainability under the same conditions, and has a good generalization ability.
* We have released the dataset and source code1 of this study for other researchers to replicate our methods and evaluation, to facilitate future research in this direction.
Footnote 1: [https://github.com/Lpower/SourceP](https://github.com/Lpower/SourceP)
## II Background
### _Ethereum and Smart contract_
Ethereum is the first open-source public blockchain platform that supports advanced and custom smart contracts with the help of a Turing-complete virtual machine called Ethereum Virtual Machine (EVM) [37, 38]. EVM is the environment in which smart contracts run, and each node in the Ethereum network runs an implementation of EVM and executes the same instructions. Smart contracts are written in languages such as Solidity and Serpent [39], and then the smart contract code is compiled into EVM bytecode and deployed on the blockchain for execution. Once the smart contract is deployed on the blockchain, the corresponding bytecode and creation transactions are permanently stored on the blockchain. Ethereum is now becoming the most popular platform for smart contract development and can be used to design various types of decentralized applications (DApps) that can be used for digital rights management, crowdfunding, and gambling [14].
### _Ponzi schemes on smart contract_
The "Ponzi scheme" is a fraudulent investment scam that promises high rates of return with little risk to the investor. Ponzi schemes create returns for early investors by acquiring new investors. It is similar to a pyramid scheme in that it is based on using new investors' money to pay off early investors. Both Ponzi schemes and pyramid schemes eventually bottom out when the influx of new investors dries up and there is not enough money to turn around. Then such scams simply fall apart and all those who have not yet recouped their investments never get them back. Smart contracts provide a perfect breeding ground for the modern fraudster. Whereas traditional financial fraudsters need to worry about the law, third-party institutions, and their public image, smart contracts don't have the same problems.
Ponzi schemes on the blockchain have proliferated. Vasek et al. [40] analyzed the supply and demand of bitcoin-based Ponzi schemes and identified 1780 different Ponzi schemes. Chainalysis [41] investigated cryptocurrency crimes from 2017 to 2019 and found that 92% of cryptocurrency scams were Ponzi schemes. Ponzi schemes on Ethereum are packaged as investment projects or gambling games that promise huge returns to those who invest. Some large-scale Ponzi schemes even build sophisticated websites and run aggressive marketing campaigns to attract investors [26, 42]. It is difficult for investors with little knowledge of blockchain to distinguish the true nature of smart contracts that are disguised as high-yield investment schemes [27].
Bartoletti et al. [24] found that from August 2015 to May 2017, 191 smart Ponzi schemes active on Ethereum had collected nearly $500,000 from more than 2,000 distinct users. They analyzed 184 real smart Ponzi schemes on Ethereum in detail and classified smart Ponzi schemes into four types: chain-shaped, tree-shaped, waterfall, and handover. Most are chain-shaped, accounting for 82% of the total, while the other types account for only 2% of the total, and the remaining schemes are unclassified other types.
### _Smart Ponzi Scheme Case_
Figure 3 shows an example of a smart Ponzi scheme: a chain-shaped smart Ponzi source code fragment written by Roan [43]. Each time a new investor participates in this smart Ponzi scheme, the \(join()\) function is called, which adds the new user and their investment amount to the list of investors and transfers 10% of the investment amount to the owner of the smart contract. Then, in first come, first served order, if the total amount in the pool is already more than twice the amount invested by the first investor who has not yet been repaid, that investor is compensated with twice their invested amount in Ether.
Roan said of the Ponzi scheme, "That's a complete lie, but enough to sucker someone in." [44]
```
function join() external payable {
    // identifiers other than join() are illustrative reconstructions
    users.push(User(msg.sender, msg.value));
    total += msg.value;
    owner.transfer(msg.value / 10); // 10% of each investment to the owner
    // first come, first served: repay the earliest unpaid investor 2x
    while (total >= users[paid].amount * 2) {
        users[paid].addr.transfer(users[paid].amount * 2);
        total -= users[paid].amount * 2;
        paid++;
    }
}
```
code semantic information for code understanding. Moreover, the data flow has a simpler structure compared to the AST, which makes it more efficient when used in models. Therefore, SourceP uses the data flow graph as the input to the model. How to extract the data flow from the source code is described in Section III-A.
### _Pre-trained model_
Pre-trained models can benefit a variety of downstream tasks by storing knowledge in huge parameters and fine-tuning them on specific tasks; the rich knowledge implicit in huge parameters has been extensively demonstrated through experimental validation and empirical analysis [49]. Pre-trained models such as BERT [50], ELMo [51], GPT [52], and XLNet [53] have been successful on various tasks. At the same time, some pre-trained models for learning code representations have also emerged, such as CodeBERT [54], CuBERT [55], GPT-C [56], and Code-GPT [57]. SourceP uses GraphCodeBERT [45], a pre-trained model for learning code representations trained on the CodeSearchNet [58] dataset, as the main part of the model. The specific method of the pre-trained model used by SourceP is described in Section III.
## III Our method
**Method overview.** Our method is divided into two main phases: 1) the input normalization phase, which converts the source code of smart contracts into AST and DFG; 2) the smart Ponzi scheme detection phase, which feeds the source code and DFG into a pre-trained model and outputs the final detection results. In the following, the first two subsections describe the details of each phase in detail. The next subsection deals with the functions used to incorporate the DFG into the Transformer. The last subsection is a detailed description of the three pre-training tasks of the pre-trained model.
### _Data Flow Graph Generation_
**Source code to AST.** First, we have the source code \(SC=\{sc_{1},sc_{2},\dots,sc_{n}\}\), which we convert into abstract syntax trees (ASTs). Here we use a tool called tree-sitter [59], which can parse a source code file into an AST. However, tree-sitter does not provide official Solidity language support, so we use tree-sitter-solidity [60] to convert Solidity source code to an AST. This tool contains a grammar for tree-sitter that takes major inspiration and some structures from tree-sitter-javascript [61]. The AST encodes the syntax information of the code, and the leaf nodes of the tree are used to identify the sequence of variables, denoted as \(Var=\{v_{1},v_{2},...,v_{n}\}\).
**AST to DFG.** The data flow is a graph, so we can think of each variable in the AST as a node. The edge connecting two nodes is denoted as \(\varepsilon=\langle v_{i},v_{j}\rangle\), which means that the value of the \(j\)-th variable comes from the \(i\)-th variable. The set of all directed edges in the graph is denoted as \(Edge=\{\varepsilon_{1},\varepsilon_{2},\dots,\varepsilon_{n}\}\). So the final data flow graph is represented as \(\mathcal{G}(SC)=(Var,Edge)\), this is the DFG we have constructed to represent the dependencies between variables of the source code. Figure 4 shows the process of converting the smart Ponzi scheme source code to AST and DFG.
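To make this concrete, the sketch below shows one way to parse Solidity source with the py-tree-sitter bindings and collect the identifier leaves forming \(Var\); the shared-library path is a placeholder, and the grammar-loading API differs across py-tree-sitter versions, so this is illustrative rather than an exact pipeline.

```
from tree_sitter import Language, Parser

# assumes the Solidity grammar was built beforehand, e.g. via
# Language.build_library('build/langs.so', ['vendor/tree-sitter-solidity'])
SOLIDITY = Language('build/langs.so', 'solidity')
parser = Parser()
parser.set_language(SOLIDITY)

def identifier_leaves(source: bytes):
    # walk the AST and collect identifier leaves (the Var sequence)
    tree = parser.parse(source)
    found, stack = [], [tree.root_node]
    while stack:
        node = stack.pop()
        if node.type == 'identifier':
            text = source[node.start_byte:node.end_byte].decode()
            found.append((text, node.start_point))
        stack.extend(reversed(node.children))
    return found
```

Each edge \(\langle v_{i},v_{j}\rangle\) is then added whenever the value of occurrence \(v_{j}\) comes from occurrence \(v_{i}\), yielding the \(Edge\) set of \(\mathcal{G}(SC)\).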
### _Model Structure_
In this section, we describe the model structure of SourceP in detail. Our method primarily follows GraphCodeBERT [45]; hence the model architecture follows BERT [50], with the multi-layer bidirectional Transformer [62] as the backbone of the model. Figure 5 shows the structure of the whole model.
The input to the model is the data flow graph converted from the source code; since the detection task is different, the paired comments in GraphCodeBERT's input do not need to be considered. We modified the input by following Peculiar's [63] method. The difference is that Peculiar uses a crucial data flow graph (CDFG) that retains only the data flow of key nodes as input to the model, whereas we still choose to use the full DFG as input. This is because Peculiar is designed to detect reentrancy vulnerabilities in smart contracts and only cares about the dependencies of functions related to reentrancy vulnerabilities, so the CDFG can remove redundant data flow information. But the case of smart Ponzi schemes is more complex: a scheme may lurk in any function or variable, so it is necessary to use the full DFG as the input to the model.
So the input to the model is a sequence \(X=\{[CLS],SC,[SEP],Var\}\), where \([CLS]\) is a special classification token and \([SEP]\) is a special token used to separate the two different data types. The remaining two segments of \(X\) are \(SC=\{sc_{1},sc_{2},...,sc_{n}\}\), the collection of source code tokens, and \(Var=\{v_{1},v_{2},...,v_{n}\}\), the set of variables of the data flow graph \(\mathcal{G}(SC)=(Var,Edge)\).

The sequence \(X\) is then converted to the input vector \(W^{0}\). For each token in \(X\), its input vector is constructed by summing the corresponding token and position embeddings. We use a special position embedding for all variables in \(X\) to indicate that these variables are nodes in the data flow. The model then uses the input vector \(W^{0}\) to contextualize the representation through \(N\) transformer layers, \(W^{n}=transformer_{n}(W^{n-1}),n\in[1,N]\); in our model, \(N=12\). The construction of each transformer layer is the same: \(U^{n}\) is obtained by first applying a multi-headed self-attention operation [62] and then a Layer Normalization operation; a feed-forward layer and another Layer Normalization operation are then applied to \(U^{n}\). This way we get the output \(W^{n}\) of the \(n\)-th layer from the input \(W^{n-1}\). The calculation for each transformer layer is as follows.
\[U^{n}=LayN(MulA(W^{n-1})+W^{n-1}) \tag{1}\]
\[W^{n}=LayN(FeeF(U^{n})+U^{n}) \tag{2}\]
Where \(LayN\) means a Layer Normalization operation, \(MulA\) means a multi-headed self-attention mechanism and \(FeeF\) is a two-layer feed-forward network. In the \(n\)-th transformer layer, the multi-headed self-attention operation computes the \(\hat{U}^{n}\).
\[Q_{i}=W^{n-1}P_{i}^{Q},\,K_{i}=W^{n-1}P_{i}^{K},\,V_{i}=W^{n-1}P_{i}^{V} \tag{3}\]
\[head_{i}=Softmax(\frac{{Q_{i}{K_{i}}^{T}}}{\sqrt{d_{k}}}+M)V_{i} \tag{4}\]
\[\hat{U}^{n}=[head_{1};...;head_{m}]{P_{n}}^{O} \tag{5}\]
Where the output \(W^{n-1}\in\mathbb{R}^{|X|\times d_{k}}\) of the previous layer is linearly projected onto a triplet of queries, keys, and values using the parameters \(P_{i}^{Q}\),\(P_{i}^{K}\),\(P_{i}^{V}\in\mathbb{R}^{d_{k}\times d_{k}}\) and \(d_{k}\) is the dimension of a head. \(M\in\mathbb{R}^{|I|\times|I|}\) is a mask matrix, \(M_{ij}\) is 0 if the \(i\)-th token is allowed to participate in the \(j\)-th token, otherwise it is -\(\infty\). And \(P_{n}^{O}\in\mathbb{R}^{d_{k}\times d_{k}}\) is the model parameters.
The model finally outputs the predicted label \(\hat{y}\) through a linear classifier and the \(Softmax\) function.
\[\hat{y}=\textit{Softmax}(\hat{U}^{n}) \tag{6}\]
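A minimal PyTorch sketch of one such layer is given below; the width of 768 with 12 heads matches the BERT-base backbone, and the GELU feed-forward activation is an assumption on our part.

```
import torch.nn as nn

class GraphGuidedLayer(nn.Module):
    # one transformer layer implementing equations (1)-(5); `mask` is the
    # additive graph-guided attention mask M with entries 0 or -inf
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, w, mask):
        u, _ = self.attn(w, w, w, attn_mask=mask)  # masked multi-head attention
        u = self.ln1(u + w)                        # equation (1)
        return self.ln2(self.ff(u) + u)            # equation (2)
```

Stacking \(N=12\) such layers and feeding the final representation through a linear classifier yields the prediction in (6).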
### _Graph-guided masked attention_
In order to incorporate the structure of the data flow graph into the Transformer, GraphCodeBERT [45] proposed this function. The masked attention function prevents a query \(q_{j}\) from attending to a key \(k_{i}\) by changing the attention score \(q_{j}^{T}k_{i}\) to -\(\infty\), so that the attention weight becomes 0 after the softmax function. To represent dependencies between variables, a node-query \(q_{v_{i}}\) is allowed to attend to a node-key \(k_{v_{j}}\) if there is a direct edge from node \(v_{j}\) to node \(v_{i}\), i.e., \(\langle v_{j},v_{i}\rangle\in Edge\), or if \(i=j\); otherwise, attention is masked by adding -\(\infty\) to the attention score. To represent the relationship between source code tokens and data flow nodes, we first define a set \(Edge^{\prime}\) where \(\langle v_{i},sc_{j}\rangle/\langle sc_{j},v_{i}\rangle\in Edge^{\prime}\) if the variable \(v_{i}\) is identified from the source code token \(sc_{j}\). Then, we allow node \(q_{v_{i}}\) and code \(k_{sc_{j}}\) to attend to each other if and only if \(\langle v_{i},sc_{j}\rangle/\langle sc_{j},v_{i}\rangle\in Edge^{\prime}\). We use the following graph-guided masked attention matrix as the mask matrix \(M\) in (4).
\[M_{ij}=\begin{cases}0&\text{if}\ \ q_{i}\in\{[CLS],[SEP]\}\\ &\text{or}\ q_{i},k_{j}\in P\cup SC\\ &\text{or}\ \langle q_{i},k_{j}\rangle\in Edge\cup Edge^{\prime}\\ -\infty&\text{otherwise}\end{cases} \tag{7}\]
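The sketch below builds \(M\) for an input ordered as \(\{[CLS],SC,[SEP],Var\}\); since (7) leaves the handling of special tokens as keys implicit, letting them attend and be attended freely here is our assumption.

```
import numpy as np

def graph_guided_mask(n_code, n_nodes, edges, code_links):
    # edges: (j, i) pairs of Edge (value of v_i comes from v_j);
    # code_links: (node, code-token) pairs of Edge'
    L = 2 + n_code + n_nodes                 # [CLS] + code + [SEP] + nodes
    M = np.full((L, L), -np.inf)
    sep = 1 + n_code
    M[0, :] = M[sep, :] = 0.0                # special tokens attend everything
    code = slice(1, sep)
    M[code, code] = 0.0                      # code tokens attend each other
    M[code, 0] = M[code, sep] = 0.0          # ... and the special tokens
    off = sep + 1                            # position of the first node
    for j, i in edges:
        M[off + i, off + j] = 0.0            # v_i may attend its source v_j
    for v in range(n_nodes):
        M[off + v, off + v] = 0.0            # self edges (i = j)
    for v, t in code_links:                  # Edge': node <-> its code token
        M[off + v, 1 + t] = 0.0
        M[1 + t, off + v] = 0.0
    return M
```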
### _Pre-Training Tasks_
**Masked Language Modeling.** This task follows Devlin et al. [50] in applying a Masked Language Modeling (MLM) pre-training task. In particular, we randomly sample 15% of the tokens from the source code and paired annotations. We replace them with a [MASK] token 80% of the time, replace them with random tokens 10% of the time, and leave them unchanged 10% of the time. The goal of MLM is to predict
Fig. 4: The phase of input normalization, i.e., the process of converting source code into data flow. (a) source code fragment of smart Ponzi scheme; (b) conversion of source code into abstract syntax tree (AST); (c) identification of variable sequences in abstract syntax tree (AST); (d) generation of data flow graph by variable sequences.
the original tokens of these sampled tokens, which is effective in previous work [50, 54, 64]. In particular, if the source code context is not sufficient to infer the masked code tokens, the model can make use of the annotation context, encouraging the model to unify the natural language and programming language representations.
**Data Flow Edges Prediction.** The purpose of this task is to learn representations from the data flow. The motivation is to encourage the model to learn structure-aware representations that encode "where the value comes from" relationships in order to better understand the code. In particular, we randomly extract 20% of the nodes \(Var_{s}\) in the data flow, mask the direct edges connecting these extracted nodes by adding -\(\infty\) to the masking matrix, and then predict these masked edges \(Edge_{\textit{mask}}\). Formally, the pre-training objective of the task is calculated as (8), where \(Edge_{x}=Var_{s}\times Var\cup Var\times Var_{s}\) is the set of candidates for edge prediction, and \(\delta(e_{ij}\in Edge_{mask})\) is 1 if \(\langle v_{i},v_{j}\rangle\in Edge_{mask}\) and 0 otherwise. The probability \(p_{e_{ij}}\) of the existence of an edge from the \(i\)-th node to the \(j\)-th node is calculated as the \(Sigmoid\) of the dot product of the representations of the two nodes in the model. To balance the positive-negative ratio of the examples, we sample the same number of negative and positive examples of \(Edge_{\textit{mask}}\).
\[\begin{split} loss_{EdgePred}=-\sum_{e_{ij}\in Edge_{x}}[ \delta(e_{ij}\in Edge_{mask})logp_{e_{ij}}\\ +(1-\delta(e_{ij}\in Edge_{mask}))log(1-p_{e_{ij}})]\end{split} \tag{8}\]
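A compact PyTorch sketch of this objective is shown below; \(p_{e_{ij}}\) is the sigmoid of the dot product of the two node representations as described above, and the balanced negative sampling is left outside the function for brevity.

```
import torch
import torch.nn.functional as F

def edge_pred_loss(H, candidates, masked):
    # H: (n_nodes, d) node representations; candidates: list of (i, j) pairs
    # from Edge_c; masked: set of pairs actually masked (the positives)
    idx_i = torch.tensor([i for i, _ in candidates])
    idx_j = torch.tensor([j for _, j in candidates])
    logits = (H[idx_i] * H[idx_j]).sum(dim=-1)           # <h_i, h_j>
    labels = torch.tensor([float(e in masked) for e in candidates])
    return F.binary_cross_entropy_with_logits(logits, labels)
```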
**Node Alignment.** The purpose of this task is to align the representation between source code and data flow, which is similar to data flow edge prediction. Instead of predicting the edges between nodes, we predict the edges between code tokens and nodes. The motivation is to encourage the model to align variables and source code to the data flow.
## IV Experiments
To evaluate our method, we designed experiments to answer the following research questions (RQs):
* RQ1: How well does SourceP perform in detecting smart Ponzi schemes by relying only on source code features, and how do its precision, recall, and F-score compare to current state-of-the-art methods?
* RQ2: How sustainable is SourceP in detecting smart Ponzi schemes?
* RQ3: How do SourceP's pre-training tasks and data flow affect detection results?
* RQ4: How well does SourceP generalize in detecting smart Ponzi schemes?
### _Experiments Setting_
**Datasets.** We used the Ponzi Contract Datasets provided by the XBlock platform [4], a dataset that crawls 6,498 smart contracts on Etherscan [65]; the contracts were classified by manually reading their source code, following the methods used in previous studies [24, 30], and 318 smart contracts were manually marked as Ponzi smart contracts while the rest were marked as non-Ponzi smart contracts. We retained the source code in the dataset, along with the corresponding idx serial number and the label: a label value of 1 means the contract is a smart Ponzi scheme, and 0 means it is not.
**Evaluation metrics.** For the experiments in the paper, we will use common precision, recall, and F-score to evaluate the performance of the model.
**Parameter settings.** In the fine-tuning step, we set the code length to 256, the data flow length to 64, the training batch size to 1, the evaluation batch size to 32, and the learning rate to 2e-5, and we used the Adam optimizer to update the model parameters. Depending on the experiment, the number of epochs was set to 3, 5, or 10. Because the dataset has few positive samples in some tasks, we lowered the classification threshold from 0.5 to 0.15 or 0.003 depending on the task, which makes the model more inclined to predict as positive the samples that would otherwise be judged negative.
**Implementation details.** All the experiments were run on a computer with two Intel(R) Xeon(R) Silver 4314 CPUs at 2.4GHz, one GPU at NVIDIA A40 or two GPUs at NVIDIA A30, and 256GB of Memory.
Fig. 5: SourceP’s model structure.
### _Results Summary_
**RQ1: Performance comparison with state-of-the-art methods.** We compare the detection performance of SourceP with existing state-of-the-art methods on the same dataset under the same division, following the comparison method of Zheng et al. [4]. Specifically, all contracts are ranked according to the block height at the time of smart contract creation; the training set consists of the 1st to the 250th smart Ponzi schemes and the non-Ponzi smart contracts in between, and the test set consists of the 251st to the last (341st) smart Ponzi scheme and the remaining non-Ponzi smart contracts. Thus, the training set has a total of 5990 smart contracts, while the test set has 508 smart contracts. Such a division, compared to random division, better represents the model's ability to detect emerging new smart Ponzi schemes when it only has data on earlier smart Ponzi schemes. The compared models include Ridge-NC [4]: a ridge classifier trained with N-gram count features; SVM-NC [4]: an SVM model trained with N-gram count features; XGBoost-TF-IDF [4]: an XGBoost model trained with TF-IDF features; MulCas [4]: a multi-view cascade combinatorial model; and SadPonzi [22]: a semantic-aware system for detecting smart Ponzi schemes. The first three approaches use features extracted from the opcodes of a smart contract, MulCas incorporates developer features on top of that, while SadPonzi detects Ponzi schemes based on the bytecode of a smart contract. The comparison results of the different methods are listed in Table I. As can be seen from the results, SourceP shows the best performance on all three metrics. In particular, SourceP gains a 21.3% recall improvement and a 12.9% F-score improvement over the state-of-the-art method while also improving precision. Since the ratio of positive and negative samples is about 1:20, it is reasonable that the model prefers to classify the minority samples as the majority, resulting in a relatively higher precision score than recall score.
**RQ2: Sustainability of the model compared to other state-of-the-art methods.** Although SourceP achieves excellent performance in detecting the latest smart Ponzi schemes, a problem known as model aging has attracted widespread attention [66, 67]. In particular, there is a big difference between early smart Ponzi schemes and the latest ones [4, 24]. To verify the sustainability of SourceP, in this experiment we divide the dataset into six parts (P0 to P5) by the block height at which the Ponzi schemes were created, following the method of Zheng et al. [4]: every 50 smart Ponzi schemes form one part, so the 50 smart Ponzi schemes with the lowest block heights and the non-Ponzi contracts among them are P0, the smart Ponzi schemes ranked 51 to 100 and the non-Ponzi contracts among them are P1, and so on for P2, P3, and P4; P5 consists of the 251st to the last (314th) smart Ponzi scheme and the non-Ponzi contracts among them. Predicting P5 is exactly the experiment of RQ1. The detection task is to predict the next part using all previous parts, e.g., P0 and P1 to predict P2, and P0, P1, and P2 to predict P3. Because the blockchain is immutable, a lower block height at creation means an earlier creation time, so this setup is equivalent to using earlier smart Ponzi schemes to predict future ones, which verifies the sustainability of SourceP. Our comparison models are SadPonzi [22] and MulCas [4], and the results are shown in Table II. SourceP obtains the highest precision and F-score on every part, achieves the highest recall on P3 and P5, and improves the F-score by as much as 39% on P3. Since new smart Ponzi schemes are deployed on top of ERC-20 token trading contracts, with the scheme's reward reflected in the increased value of the Ponzi tokens, SadPonzi and MulCas suffer performance degradation on this new type of smart Ponzi scheme. This experiment demonstrates that SourceP achieves the best sustainability in detecting smart Ponzi schemes.
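The rolling evaluation can be sketched as follows (`parts` stands for P0 through P5, and `fit`/`evaluate` are placeholders for training and scoring SourceP, not names from the original implementation):

```python
# Rolling evaluation: train on all earlier partitions, test on the next one,
# mirroring "use earlier Ponzi schemes to predict future ones".
def rolling_evaluation(parts, fit, evaluate):
    results = []
    for k in range(2, len(parts)):          # P0+P1 -> P2, P0+P1+P2 -> P3, ...
        train = [c for part in parts[:k] for c in part]
        model = fit(train)
        results.append(evaluate(model, parts[k]))
    return results
```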
**RQ3: Ablation experiments.** We conducted ablation experiments to explore the contributions of the pre-training tasks and the data flow to smart Ponzi scheme detection: we removed the two pre-training tasks and the data flow, respectively, and then reran the task of RQ1. The results are shown in Table III, where \(-w/o\) abbreviates "without". Removing the two pre-training tasks or the data flow degrades the performance of the model to different degrees, indicating that both contribute to its improvement.
**RQ4: Generalization ability of SourceP.** To verify the generalization ability of the model, we divide the dataset randomly for this experiment. To prevent overfitting, we also hold out a validation set separate from the training set, so the ratio of training, validation, and test sets is 7:1:2. Fan et al. [27] and Chen et al. [30, 31] also performed smart Ponzi scheme detection on randomly partitioned datasets; their algorithms use opcode features and account features of smart contracts and include SVM [68], LSTM [69], XGBoost [30], RF [31], and AI-SPSD [27]. Since none of these algorithms is directly open source, we reimplemented them as faithfully as possible from the details provided in their papers. We ran 10 experiments and report the average results in Table IV. Because we use a larger dataset, the detection performance of several of the earlier algorithms degrades. As the table shows, SourceP still obtains the best recall, precision, and F-score under a random division, proving that SourceP has good performance and generalization ability in detecting smart Ponzi schemes.
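A sketch of the 7:1:2 random split (placeholder data; in practice the contracts' source code and labels are used):

```python
from sklearn.model_selection import train_test_split

# Placeholder data standing in for (source code, label) pairs.
contracts = [f"contract_{i}" for i in range(100)]
labels = [1 if i % 10 == 0 else 0 for i in range(100)]

# First hold out 30%, then split that into 1/3 validation and 2/3 test,
# giving an overall 7:1:2 train/validation/test ratio.
train_x, rest_x, train_y, rest_y = train_test_split(
    contracts, labels, test_size=0.3, random_state=0)
valid_x, test_x, valid_y, test_y = train_test_split(
    rest_x, rest_y, test_size=2/3, random_state=0)
```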
## V Related Work
### _Ponzi Schemes on the Blockchain_
The Ponzi scheme is a classic financial fraud [23, 70]. With the development of the Internet, online "High-Yield Investment Programs" (HYIPs) became a typical form of Ponzi scheme [71]. As blockchain technology became increasingly popular, unscrupulous individuals began to deploy such HYIPs on the blockchain, and more and more Ponzi schemes and Internet scams emerged there [72]. Chen et al. [30, 31] first proposed machine learning-based identification of Ponzi schemes in Ethereum smart contracts, extracting features from user accounts and from the opcodes of smart contracts and then building a classification model to detect potential Ponzi schemes implemented as smart contracts. Fan et al. [27], building on CatBoost [73], propose the AI-SPSD model to promptly detect newly deployed smart Ponzi schemes at the runtime opcode level. Chen et al. [22] propose SadPonzi, a prototype system that identifies smart Ponzi schemes from Ethereum smart contract bytecode. Zheng et al. [4] propose a multi-view cascade model (MulCas) to identify Ponzi schemes. Ponzi schemes on blockchain platforms are not limited to Ethereum; several works have also focused on detecting Ponzi schemes in Bitcoin [40, 74, 75, 76].
### _Smart Contract Analysis_
Many studies have analyzed security vulnerabilities of smart contracts on Ethereum. For example, Oyente [77], Osiris [78], Mythril [79], Maian [80], and Manticore [81] use symbolic execution for vulnerability detection; Securify [82] and Zeus [83] use formal verification; Slither [84] and SmartCheck [35] use static analysis; and ContractFuzzer [85] and ReGuard [86] use fuzz testing. Thomas et al. [87] provide an empirical review of these automated analysis tools for smart contracts. Pinna et al. [88] conduct a comprehensive empirical study of smart contracts deployed on the Ethereum blockchain to provide an overview of their characteristics. In addition, there are empirical studies on the code smells of smart contracts [89], on the gas usage of Ethereum smart contracts [90, 91], and on code cloning behavior [92, 93].
### _Pre-Trained Models for Programming Languages_
The emergence of pre-trained models (PTMs) has brought NLP into a new era [94]. Models such as BERT [50] and GPT [52] have achieved great success and become milestones in the field of artificial intelligence [49]. Several works have explored the application of PTMs to programming languages. RoBERTa [95] pre-trains on a text corpus with a masked language model (MLM) objective, and RoBERTa (code) is its variant pre-trained on code only. CuBERT [55] is the first work to apply PTMs to source code. CodeBERT [54] pre-trains on code-text pairs with MLM and replaced token detection objectives and is a representative pre-trained model for multilingual code representation. The major difference between GraphCodeBERT [45] and CodeBERT is the inclusion of data flow information derived from the AST. UniXcoder [96] uses a masked attention matrix with prefix adapters to control the behavior of the model and enhances the code representation with cross-modal content such as ASTs and code comments. PLBART [97] applies BART [98] to programming languages, combining the advantages of BERT's bidirectional encoder and GPT's unidirectional left-to-right decoder. CodeT5 [99] is a unified pre-trained Transformer model that makes better use of code syntax information. Large-scale pre-trained models have also evolved rapidly for code tasks, such as AlphaCode [100], which uses an encoder-decoder architecture; Code-GPT [57], which uses a 12-layer Transformer decoder; and GPT-C [56], which is designed for code completion.
## VI Conclusion and Future Work
In this paper, we propose a method called SourceP to detect smart Ponzi schemes on Ethereum. To the best of our knowledge, this is the first detection method that uses only the source code of smart Ponzi schemes as features, and the first to detect smart Ponzi schemes based on pre-trained models and data flow. We experimentally demonstrate that SourceP achieves better performance and sustainability than existing state-of-the-art methods in detecting smart Ponzi schemes on Ethereum. We also design ablation experiments to examine the contributions of the pre-trained model and the data flow in SourceP, and we experimentally demonstrate that SourceP possesses good generalization ability. We explore the feasibility of using variable dependencies in source code to detect smart Ponzi schemes, avoiding some of the drawbacks of traditional detection methods, and we reveal the potential of pre-trained models for smart Ponzi scheme detection, releasing the dataset and source code we used to aid future research in this direction. We believe that detecting smart Ponzi schemes as soon as possible after they are deployed can effectively reduce financial losses from Ponzi schemes on Ethereum and maintain a healthy ecology of the blockchain community.
In future work, our first step is to expand the dataset: labeled smart Ponzi schemes are still scarce, and a larger dataset would substantially improve the model. As noted in this paper, new types of smart Ponzi schemes are harder to detect, so more data on them is urgently needed. Another direction is to build a dataset of Ponzi schemes whose business logic spans multiple smart contracts, so as to detect whether the combined logic constitutes a scheme. Exploring pre-trained models together with other smart contract features for Ponzi scheme detection is also viable. Given the strong performance of pre-trained models and data flow in smart Ponzi scheme detection, we will also explore applying the method to other blockchain security tasks to advance blockchain security technology.
|
2305.16891 | Generalization Guarantees of Gradient Descent for Multi-Layer Neural
Networks | Recently, significant progress has been made in understanding the
generalization of neural networks (NNs) trained by gradient descent (GD) using
the algorithmic stability approach. However, most of the existing research has
focused on one-hidden-layer NNs and has not addressed the impact of different
network scaling parameters. In this paper, we greatly extend the previous work
\cite{lei2022stability,richards2021stability} by conducting a comprehensive
stability and generalization analysis of GD for multi-layer NNs. For two-layer
NNs, our results are established under general network scaling parameters,
relaxing previous conditions. In the case of three-layer NNs, our technical
contribution lies in demonstrating its nearly co-coercive property by utilizing
a novel induction strategy that thoroughly explores the effects of
over-parameterization. As a direct application of our general findings, we
derive the excess risk rate of $O(1/\sqrt{n})$ for GD algorithms in both
two-layer and three-layer NNs. This sheds light on sufficient or necessary
conditions for under-parameterized and over-parameterized NNs trained by GD to
attain the desired risk rate of $O(1/\sqrt{n})$. Moreover, we demonstrate that
as the scaling parameter increases or the network complexity decreases, less
over-parameterization is required for GD to achieve the desired error rates.
Additionally, under a low-noise condition, we obtain a fast risk rate of
$O(1/n)$ for GD in both two-layer and three-layer NNs. | Puyu Wang, Yunwen Lei, Di Wang, Yiming Ying, Ding-Xuan Zhou | 2023-05-26T12:51:38Z | http://arxiv.org/abs/2305.16891v2 | # Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks
###### Abstract
Recently, significant progress has been made in understanding the generalization of neural networks (NNs) trained by gradient descent (GD) using the algorithmic stability approach. However, most of the existing research has focused on one-hidden-layer NNs and has not addressed the impact of different network scaling parameters. In this paper, we greatly extend the previous work [24, 35] by conducting a comprehensive stability and generalization analysis of GD for multi-layer NNs. For two-layer NNs, our results are established under general network scaling parameters, relaxing previous conditions. In the case of three-layer NNs, our technical contribution lies in demonstrating its nearly co-coercive property by utilizing a novel induction strategy that thoroughly explores the effects of over-parameterization. As a direct application of our general findings, we derive the excess risk rate of \(\mathcal{O}(1/\sqrt{n})\) for GD algorithms in both two-layer and three-layer NNs. This sheds light on sufficient or necessary conditions for under-parameterized and over-parameterized NNs trained by GD to attain the desired risk rate of \(\mathcal{O}(1/\sqrt{n})\). Moreover, we demonstrate that as the scaling parameter increases or the network complexity decreases, less over-parameterization is required for GD to achieve the desired error rates. Additionally, under a low-noise condition, we obtain a fast risk rate of \(\mathcal{O}(1/n)\) for GD in both two-layer and three-layer NNs.
## 1 Introduction
Deep neural networks (DNNs) trained by (stochastic) gradient descent (GD) have achieved great success in a wide spectrum of applications such as image recognition [22], speech recognition [19], machine translation [5], and reinforcement learning [37]. In practical applications, most of the deployed DNNs are over-parameterized, i.e., the number of parameters is far larger than the size of the training data. In [40], it was empirically demonstrated that over-parameterized NNs trained with SGD can generalize well to the test data while achieving a small training error. This has triggered a surge of theoretical studies on unveiling this generalization mastery of DNNs.
In particular, norm-based generalization bounds are established using the uniform convergence approach [6, 7, 17, 28, 33, 32]. However, this approach does not take the optimization algorithm and the data distribution into account. Another line of work is to take into account the structure of the data distribution and provide algorithm-dependent generalization bounds. In [9, 27], it is shown that SGD for over-parameterized two-layer NNs can achieve small generalization error under certain assumptions on the structure of the data. [1] studies the generalization of SGD for two-layer and three-layer NNs if there exists a true (unknown) NN with low error on the data distribution. The other important line of work is the neural tangent kernel (NTK)-type approach [3, 10, 13, 34] which shows that
the model trained by GD is well approximated by the tangent space near the initialization and the generalization analysis can be reduced to those of the convex case or kernel methods. However, most of them either require a very high over-parameterization or focus on special function classes.
Recently, the appealing work [35] provides an alternative approach in a kernel-free regime, using the concept of algorithmic stability [8, 18, 23]. Specifically, it uses the model-average stability [25] to derive generalization bounds of GD for two-layer over-parameterized NNs. [24] improves this result by deriving generalization bounds for both GD and SGD, and relaxing the over-parameterization requirement. [39] derives fast generalization bounds for GD under a separable distribution. However, the above studies only focus on two-layer NNs and a very specific network scaling.
**Contributions.** We study the stability and generalization of GD for both two-layer and three-layer NNs with generic network scaling factors. Our contributions are summarized as follows.
\(\bullet\) We establish excess risk bounds for GD on both two-layer and three-layer NNs with general network scaling parameters under relaxed over-parameterization conditions. As a direct application of our generalization results, we show that GD can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) when the network width \(m\) satisfies certain qualitative conditions related to the scaling parameter \(c\), the size of training data \(n\), and the network complexity measured by the norm of the minimizer of the population risk. Further, under a low-noise condition, our excess risk rate can be improved to \(\mathcal{O}(1/n)\) for both two-layer and three-layer NNs.
\(\bullet\) A crucial technical element in the stability analysis for NNs trained by GD is establishing the almost co-coercivity of the gradient operator. This property naturally holds true for two-layer NNs due to the empirical risks' monotonically decreasing nature which no longer remains valid for three-layer NNs. Our technical contribution lies in demonstrating that the nearly co-coercive property still holds valid throughout the trajectory of GD. To achieve this, we employ a novel induction strategy that fully explores the effects of over-parameterization (refer to Section 4 for further discussions). Furthermore, we are able to eliminate a critical assumption made in [24] regarding an inequality associated with the population risk of GD's iterates (see Remark 2 for additional details).
\(\bullet\) Our results characterize a quantitative condition in terms of the network complexity and scaling factor under which GD for two-layer and three-layer NNs can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) in under-parameterization and over-parameterization regimes. Our results shed light on sufficient or necessary conditions for under-parameterized and over-parameterized NNs trained by GD to achieve the risk rate \(\mathcal{O}(1/\sqrt{n}).\) In addition, our results show that the larger the scaling parameter or the simpler the network complexity is, the less over-parameterization is needed for GD to achieve the desired error rates for multi-layer NNs.
## 2 Problem Formulation
Let \(P\) be a probability measure defined on a sample space \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and \(\mathcal{Y}\subseteq\mathbb{R}\). Let \(S=\{z_{i}=(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) be a training dataset drawn from \(P\). One aims to build a prediction model \(f_{\mathbf{W}}:\mathcal{X}\mapsto\mathbb{R}\) parameterized by \(\mathbf{W}\) in some parameter space \(\mathcal{W}\) based on \(S\). The performance of \(f_{\mathbf{W}}\) can be measured by the population risk defined as \(L(f_{\mathbf{W}})=\frac{1}{2}\iint_{\mathcal{X}\times\mathcal{Y}}\big{(}f_{ \mathbf{W}}(\mathbf{x})-y\big{)}^{2}dP(\mathbf{x},y).\) The corresponding empirical risk is defined as
\[L_{S}(f_{\mathbf{W}})=\frac{1}{2n}\sum_{i=1}^{n}\big{(}f_{\mathbf{W}}(\mathbf{ x}_{i})-y_{i}\big{)}^{2}. \tag{1}\]
We denote by \(\ell(\mathbf{W};z)=\frac{1}{2}(f_{\mathbf{W}}(\mathbf{x})-y)^{2}\) the loss function of \(\mathbf{W}\) on a data point \(z=(\mathbf{x},y)\). The best possible model is the regression function \(f^{*}\) defined as \(f^{*}(\mathbf{x})=\mathbb{E}[y|\mathbf{x}]\), where \(\mathbb{E}[\cdot|\mathbf{x}]\) is the conditional expectation given \(\mathbf{x}\). In this paper, we consider the prediction model \(f_{\mathbf{W}}\) with a neural network structure. In particular, we are interested in two-layer and three-layer fully-connected NNs.
**Two-layer NNs.** A two-layer NN of width \(m>0\) and scaling parameter \(c\in[1/2,1]\) takes the form
\[f_{\mathbf{W}}(\mathbf{x})=\frac{1}{m^{c}}\sum_{k=1}^{m}a_{k}\sigma(\mathbf{w}_{ k}\mathbf{x}),\]
where \(\sigma:\mathbb{R}\mapsto\mathbb{R}\) is an activation function, \(\mathbf{W}=[\mathbf{w}_{1}^{\top},\dots,\mathbf{w}_{m}^{\top}]^{\top}\in \mathcal{W}\) with \(\mathcal{W}=\mathbb{R}^{m\times d}\) is the weight matrix of the first layer, and a fixed \(\mathbf{a}=[a_{1},\dots,a_{m}]\) with \(a_{k}\in\{-1,+1\}\) is the weight of the output layer. In the above formulation, \(\mathbf{w}_{k}\in\mathbb{R}^{1\times d}\) denotes the weight of the edge connecting the input to the \(k\)-th hidden node, and \(a_{k}\) denotes the weight of the edge connecting the \(k\)-th hidden node to the output node. The output of the network is scaled by a factor \(m^{-c}\) that is decreasing with the network width \(m\). Two popular choices of scaling are Neural Tangent Kernel (NTK) [2, 3, 4, 20, 16, 15] with \(c=1/2\) and mean field [13, 14, 29, 30] with \(c=1\). [36] studied two-layer NNs with \(c\in[1/2,1]\) and discussed the influence of the scaling by trading off it with the network complexity. We also focus on the scaling \(c\in[1/2,1]\) to ensure that meaningful generalization bounds can be obtained. We fix the output layer weight \(\mathbf{a}\) and only optimize the first layer weight \(\mathbf{W}\) in this setting.
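For concreteness, a minimal sketch of this forward pass (our own illustration, with tanh as a bounded, smooth activation satisfying Assumption 1 below):

```python
import numpy as np

def two_layer(W, a, x, c):
    """f_W(x) = m^{-c} * sum_k a_k * sigma(w_k x), with sigma = tanh."""
    m = W.shape[0]
    return float(a @ np.tanh(W @ x)) / m**c

# Example: m = 64 hidden units, d = 10 inputs, NTK-style scaling c = 1/2.
rng = np.random.default_rng(0)
m, d, c = 64, 10, 0.5
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)
x = rng.normal(size=d)
print(two_layer(W, a, x, c))
```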
**Three-layer NNs.** For a matrix \(\mathbf{W}\), let \(\mathbf{W}_{s:}\) and \(\mathbf{W}_{is}\) denote the \(s\)-th row and the \((i,s)\)-th entry of \(\mathbf{W}\). For an input \(\mathbf{x}\in\mathbb{R}^{d}\), a three-layer fully-connected NN with width \(m>0\) and scaling \(c\in(1/2,1]\) is
\[f_{\mathbf{W}}(\mathbf{x})=\frac{1}{m^{c}}\sum_{i=1}^{m}a_{i}\sigma\big{(} \frac{1}{m^{c}}\sum_{s=1}^{m}\mathbf{W}_{is}^{(2)}\sigma(\mathbf{W}_{s:}^{(1 )}\mathbf{x})\big{)},\]
where \(\sigma:\mathbb{R}\mapsto\mathbb{R}\) is an activation function, \(\mathbf{W}=[\mathbf{W}^{(1)},\mathbf{W}^{(2)}]\in\mathcal{W}=\mathbb{R}^{m \times(d+m)}\) is the weight matrix of the neural network, and \(\mathbf{a}=[a_{1},\dots,a_{m}]\) with \(a_{i}\in\{-1,+1\}\) is the fixed output layer weight. Here, \(\mathbf{W}^{(1)}\in\mathbb{R}^{m\times d}\) and \(\mathbf{W}^{(2)}\in\mathbb{R}^{m\times m}\) are the weights at the first and the second layer, respectively. We consider the setting of optimizing the weights in both the first and the second layers.
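Analogously, a sketch of the three-layer forward pass (again our own illustration with tanh), with both layers scaled by \(m^{-c}\) as in the definition:

```python
import numpy as np

def three_layer(W1, W2, a, x, c):
    """f_W(x) = m^{-c} sum_i a_i sigma( m^{-c} sum_s W2[i,s] sigma(W1[s,:] x) )."""
    m = W1.shape[0]
    h1 = np.tanh(W1 @ x)                 # first hidden layer, shape (m,)
    h2 = np.tanh((W2 @ h1) / m**c)       # second hidden layer, scaled by m^{-c}
    return float(a @ h2) / m**c

rng = np.random.default_rng(0)
m, d, c = 64, 10, 0.75
W1 = rng.normal(size=(m, d))
W2 = rng.normal(size=(m, m))
a = rng.choice([-1.0, 1.0], size=m)
print(three_layer(W1, W2, a, rng.normal(size=d), c))
```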
We consider the gradient descent to solve the minimization problem: \(\min_{\mathbf{W}}L_{S}(f_{\mathbf{W}}).\) For simplicity, let \(L(\mathbf{W})=L(f_{\mathbf{W}})\) and \(L_{S}(\mathbf{W})=L_{S}(f_{\mathbf{W}})\).
**Definition 1** (Gradient Descent).: Let \(\mathbf{W}_{0}\in\mathcal{W}\) be an initialization point, and \(\{\eta_{t}:t\in\mathbb{N}\}\) be a sequence of step sizes. Let \(\nabla\) denote the gradient operator. At iteration \(t\), the update rule of GD is
\[\mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta_{t}\nabla L_{S}(\mathbf{W}_{t}). \tag{2}\]
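A sketch of the update (2), with `grad_LS` standing for the full-batch gradient of the empirical risk (computed analytically or by automatic differentiation; the helper name is ours):

```python
def gradient_descent(W0, grad_LS, eta, T):
    """Run T full-batch GD steps: W_{t+1} = W_t - eta * grad L_S(W_t)."""
    W = W0.copy()
    trajectory = [W]
    for _ in range(T):
        W = W - eta * grad_LS(W)
        trajectory.append(W)
    return W, trajectory
```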
**Target of Analysis.** Let \(\mathbf{W}^{*}=\arg\min_{\mathbf{W}\in\mathcal{W}}L(\mathbf{W})\) where the minimizer is chosen to be the one enjoying the smallest norm. For a randomized algorithm \(\mathcal{A}\) to solve (1), let \(\mathcal{A}(S)\) be the output of \(\mathcal{A}\) applied to the dataset \(S\). The generalization performance of \(\mathcal{A}(S)\) is measured by its _excess population risk_, i.e., \(L(\mathcal{A}(S))-L(\mathbf{W}^{*})\). In this paper, we are interested in studying the excess population risk of models trained by GD for both two-layer and three-layer NNs.
We denote by \(\|\cdot\|_{2}\) the standard Euclidean norm of a vector or a matrix. For any \(\lambda>0\), let \(\mathbf{W}_{\lambda}^{*}=\arg\min_{\mathbf{W}}\big{\{}L(\mathbf{W})+\frac{ \lambda}{2}\|\mathbf{W}-\mathbf{W}_{0}\|_{2}^{2}\big{\}}\). We have the following error decomposition
\[\mathbb{E}[L(\mathcal{A}(S))]-L(\mathbf{W}^{*})= \mathbb{E}\big{[}L(\mathcal{A}(S))-L_{S}(\mathcal{A}(S))\big{]}+ \mathbb{E}\big{[}L_{S}(\mathcal{A}(S))-L_{S}(\mathbf{W}_{\lambda}^{*})-\frac{ \lambda}{2}\|\mathbf{W}_{\lambda}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big{]}\] \[+\big{[}L(\mathbf{W}_{\lambda}^{*})+\frac{\lambda}{2}\|\mathbf{W} _{\lambda}^{*}-\mathbf{W}_{0}\|_{2}^{2}-L(\mathbf{W}^{*})\big{]}, \tag{3}\]
where we use \(\mathbb{E}[L_{S}(\mathbf{W}_{\lambda}^{*})]=L(\mathbf{W}_{\lambda}^{*})\). The first term in (3) is called the _generalization error_, which can be controlled by stability analysis. The second term is the _optimization error_, which can be estimated by tools from optimization theory. The third term is the _approximation error_ which will be estimated by introducing an assumption on the complexity of the NN (Assumption 3 in Section 3.1).
We will use the on-average argument stability [25] to study the generalization error defined as follows.
**Definition 2** (On-average argument stability).: Let \(S=\{z_{i}\}_{i=1}^{n}\) and \(S^{\prime}=\{z^{\prime}_{i}\}_{i=1}^{n}\) be drawn i.i.d. from an unknown distribution \(P\). For any \(i\in[n]\), define \(S^{(i)}=\{z_{1},\ldots,z_{i-1},z^{\prime}_{i},z_{i+1},\ldots,z_{n}\}\) as the set formed from \(S\) by replacing the \(i\)-th element with \(z^{\prime}_{i}\). We say \(\mathcal{A}\) is on-average argument \(\epsilon\)-stable if
\[\mathbb{E}_{S,S^{\prime},\mathcal{A}}\Big{[}\frac{1}{n}\sum_{i=1}^{n}\| \mathcal{A}(S)-\mathcal{A}(S^{(i)})\|_{2}^{2}\Big{]}\leq\epsilon^{2}.\]
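Empirically, this quantity can be estimated by retraining on each perturbed dataset (a sketch; `train` is assumed to run GD from a fixed initialization and return the final weight matrix):

```python
import numpy as np

def on_average_argument_stability(S, S_prime, train):
    """Monte-Carlo estimate of Definition 2: average squared distance between
    the weights trained on S and on S^(i) (S with sample i replaced)."""
    W_S = train(S)
    dists = []
    for i in range(len(S)):
        S_i = S[:i] + [S_prime[i]] + S[i + 1:]
        dists.append(np.linalg.norm(W_S - train(S_i)) ** 2)
    return float(np.mean(dists))
```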
We say a function \(\ell:\mathcal{W}\times\mathcal{Z}\to\mathbb{R}\) is \(\rho\)-smooth if for any \(\mathbf{W},\mathbf{W}^{\prime}\in\mathcal{W}\) and \(z\in\mathcal{Z}\), there holds \(\ell(\mathbf{W};z)-\ell(\mathbf{W}^{\prime};z)\leq\langle\nabla\ell(\mathbf{ W}^{\prime};z),\mathbf{W}-\mathbf{W}^{\prime}\rangle+\frac{\rho}{2}\| \mathbf{W}-\mathbf{W}^{\prime}\|_{2}^{2}\). The following lemma establishes the connection [25] between the on-average argument stability and the generalization error.
**Lemma 1**.: _Let \(\mathcal{A}\) be an algorithm. If for any \(z\), the map \(\mathbf{W}\mapsto\ell(\mathbf{W};z)\) is \(\rho\)-smooth, then_
\[\mathbb{E}[L(\mathcal{A}(S))-L_{S}(\mathcal{A}(S))]\leq\frac{\rho}{2n}\sum_{i =1}^{n}\mathbb{E}[\|\mathcal{A}(S)-\mathcal{A}(S^{(i)})\|_{2}^{2}]+\big{(} \frac{2\rho\mathbb{E}[L_{S}(\mathcal{A}(S))]}{n}\sum_{i=1}^{n}\mathbb{E}[\| \mathcal{A}(S)-\mathcal{A}(S^{(i)})\|_{2}^{2}]\big{)}^{\frac{1}{2}}.\]
Our analysis requires the following standard assumptions on activation and loss functions [24, 35].
**Assumption 1** (Activation function).: _The activation function \(a\mapsto\sigma(a)\) is continuous and twice differentiable with \(|\sigma(a)|\leq B_{\sigma}\), \(|\sigma^{\prime}(a)|\leq B_{\sigma^{\prime}}\) and \(|\sigma^{\prime\prime}(a)|\leq B_{\sigma^{\prime\prime}}\), where \(B_{\sigma},B_{\sigma^{\prime}},B_{\sigma^{\prime\prime}}>0\)._
Both sigmoid and hyperbolic tangent activation functions satisfy Assumption 1[35].
**Assumption 2** (Inputs, labels and loss).: _There exist constants \(c_{\mathbf{x}},c_{y},c_{0}>0\) such that \(\|\mathbf{x}\|_{2}\leq c_{\mathbf{x}}\), \(|y|\leq c_{y}\) and \(\max\{\ell(\mathbf{0};z),\ell(\mathbf{W}_{0};z)\}\leq c_{0}\) for any \(\mathbf{x}\in\mathcal{X},y\in\mathcal{Y}\) and \(z\in\mathcal{Z}\)._
## 3 Main Results
In this section, we present our main results on the excess population risk of NNs trained by GD.
### Two-layer Neural Networks with Scaling Parameters
We denote \(B\asymp B^{\prime}\) if there exist some universal constants \(c_{1},c_{2}>0\) such that \(c_{1}B\geq B^{\prime}\geq c_{2}B\). We denote \(B\gtrsim B^{\prime}\) (\(B\lesssim B^{\prime}\)) if there exists a constant \(c>0\) such that \(B\geq cB^{\prime}\) (\(cB\leq B^{\prime}\)). Let \(\rho=c_{\mathbf{x}}^{2}\big(\frac{B_{\sigma^{\prime}}^{2}+B_{\sigma}B_{\sigma^{\prime\prime}}}{m^{2c-1}}+\frac{B_{\sigma^{\prime\prime}}c_{y}}{m^{c}}\big)\). The following theorem presents generalization error bounds of GD for two-layer NNs. The proof can be found in Appendix A.1.
**Theorem 2** (Generalization error).: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with \(\eta_{t}\equiv\eta\leq 1/(2\rho)\) based on \(S\). Assume_
\[m\gtrsim\big{(}(\eta T)^{2}(1+\eta\rho)\sqrt{\rho(\rho\eta T+2)}\big{/}n\big{)} ^{\frac{2}{4c-1}}+(\eta T)^{\frac{3}{4c-1}}+\big{(}\eta T\big{)}^{\frac{1}{c}}. \tag{4}\]
_Then, for any \(t\in[T]\), there holds_
\[\mathbb{E}[L(\mathbf{W}_{t})-L_{S}(\mathbf{W}_{t})]\leq\big{(}\frac{4e^{2}\eta ^{2}\rho^{2}t}{n^{2}}+\frac{4e\eta\rho}{n}\big{)}\sum_{j=0}^{t-1}\mathbb{E} \big{[}L_{S}(\mathbf{W}_{j})\big{]}.\]
**Remark 1**.: Theorem 2 establishes the first generalization results for general \(c\in[1/2,1]\), and shows that the requirement on \(m\) becomes weaker as \(c\) becomes larger. Indeed, with the typical choice \(\eta T\asymp n^{\frac{c}{6\mu+2c-3}}\) (as shown in Corollary 5 below), the requirement (4) on \(m\) becomes milder as \(c\) increases for any \(\mu\in[0,1]\), which implies that large scaling reduces the requirement on the network width. In particular, our assumption only requires \(m\gtrsim\eta T\) when \(c=1\). [24] provided a similar generalization bound under the assumption \(m\gtrsim(\eta T)^{5}/n^{2}+(\eta T)^{2}\), and our result with \(c=1/2\) is consistent with theirs.
The following theorem to be proved in Appendix A.2 develops the optimization error bounds of GD.
**Theorem 3** (Optimization error).: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with \(\eta_{t}\equiv\eta\leq 1/(2\rho)\) based on \(S\). Let \(\tilde{b}=c_{\mathbf{x}}^{2}B_{\sigma^{\prime}}\big(\frac{2B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}+\sqrt{2c_{0}}\big)\) and assume (4) holds. Assume_
\[m\gtrsim\big{(}\tilde{b}T(\sqrt{\eta T}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2} )\big{(}\frac{e^{2}\eta^{3}\rho^{2}T^{2}}{n^{2}}+\frac{en^{2}T\rho}{n}+1\big{)} \big{)}^{\frac{1}{c}}. \tag{5}\]
_Then we have_
\[\mathbb{E}\Big[L_{S}(\mathbf{W}_{T})-L_{S}(\mathbf{W}_{\frac{1}{\eta T}}^{*})-\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\Big]\leq\]
We can combine generalization and optimization error bounds together to derive our main result of GD for two-layer NNs. Without loss of generality, we assume \(\eta T\geq 1\).
**Theorem 4** (Excess population risk).: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with \(\eta_{t}\equiv\eta\leq 1/(2\rho)\). Assume (4) and (5) hold. For any \(c\in[1/2,1]\), if \(m\gtrsim(\eta T/n)^{\frac{1}{2c-1}}\) and \(m\gtrsim\big(\eta T(\sqrt{\eta T}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2})\big)^{1/c}\), then there holds_
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta Tm ^{1-2c}}{n}L(\mathbf{W}^{*})+\frac{\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{ \eta T}\Big{)}.\]
**Remark 2** (Comparison with [35, 24]).: Theorem 4 provides the first excess risk bounds of GD for two-layer NNs with general scaling parameter \(c\in[1/2,1]\), recovering the previous work [35, 24] with \(c=1/2\). Specifically, [35] derived the excess risk bound \(\mathcal{O}\big(\frac{\eta T}{n}L(\mathbf{W}^{*})+\frac{1}{\eta T}\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big)\) with \(m\gtrsim(\eta T)^{5}\); we relax this condition to \(m\gtrsim(\eta T)^{3}\) by providing a better estimate of the smallest eigenvalue of the Hessian of the empirical risk (see Lemma A.5). This bound was obtained in [24] under a crucial condition \(\mathbb{E}[L(\mathbf{W}_{s})]\geq L(\mathbf{W}_{\frac{1}{\eta T}}^{*})\) for any \(s\in[T]\), which is difficult to verify in practice. Here, we are able to remove this condition by using \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}_{s})\leq L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\leq\frac{1}{2\eta T}\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\), which follows from \(L(\mathbf{W}_{s})\geq L(\mathbf{W}^{*})\) for any \(s\in[T]\) and the definition of \(\mathbf{W}_{\frac{1}{\eta T}}^{*}\), when controlling the bound of GD iterates (see Lemma A.6 for details). Furthermore, our result in Theorem 4 implies, assuming that \(\mathbf{W}^{*}\) is fixed, that the larger the scaling parameter \(c\) is, the better the excess risk bound is. The reason could be that the smoothness parameter of the objective function along the GD trajectory becomes smaller for a larger \(c\) (see Lemma A.2 for details).
As a direct application of Theorem 4, we can derive the excess risk rates of GD for two-layer NNs by properly trade-offing the generalization, optimization and approximation errors. To this end, we introduce the following assumption on network complexity to control the approximation error.
**Assumption 3** ([36]).: _There is \(\mu\!\in\![0,1]\) and population risk minimizer \(\mathbf{W}^{*}\) with \(\|\mathbf{W}^{*}\|_{2}\!\leq\!m^{\frac{1}{2}-\mu}\)._
**Corollary 5**.: _Let assumptions in Theorem 4 hold and Assumption 3 hold. Suppose \(\frac{c}{3}+\mu>\frac{1}{2}\)._
1. _If_ \(c\in[1/2,3/4)\) _and_ \(c+\mu\geq 1\)_, we can choose_ \(m\asymp(\eta T)^{\frac{3}{2c}}\) _and_ \(\eta T\) _such that_ \(n^{\frac{c}{6\mu+2c-3}}\lesssim\eta T\lesssim n^{\frac{c}{3-4c}}\)_. If_ \(c\in[3/4,1]\)_, we can choose_ \(m\asymp(\eta T)^{\frac{3}{2c}}\) _and_ \(\eta T\) _such that_ \(\eta T\gtrsim n^{\frac{c}{6\mu+2c-3}}\)_. Then there holds_ \[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big(\frac{1}{\sqrt{n}}\big).\]
2. _Assume_ \(L(\mathbf{W}^{*})=0\)_. We can choose_ \(m\asymp(\eta T)^{\frac{3}{2c}}\) _and_ \(\eta T\) _such that_ \(\eta T\gtrsim n^{\frac{2c}{6\mu+2c-3}}\)_, and get_ \[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{n} \big{)}.\]
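As a quick sanity check of the exponents in part (a) (our own arithmetic under the stated choices, absorbing the initialization term into \(\|\mathbf{W}^{*}\|_{2}\)), substituting \(m\asymp(\eta T)^{\frac{3}{2c}}\) and \(\|\mathbf{W}^{*}\|_{2}^{2}\leq m^{1-2\mu}\) into the bound of Theorem 4 gives

\[\frac{\eta T\,m^{1-2c}}{n}\asymp\frac{(\eta T)^{\frac{3-4c}{2c}}}{n}\qquad\text{and}\qquad\frac{\|\mathbf{W}^{*}\|_{2}^{2}}{\eta T}\lesssim(\eta T)^{\frac{3-6\mu-2c}{2c}}.\]

The second term is \(\mathcal{O}(1/\sqrt{n})\) precisely when \(\eta T\gtrsim n^{\frac{c}{6\mu+2c-3}}\) (using \(\frac{c}{3}+\mu>\frac{1}{2}\)), while the first term is \(\mathcal{O}(1/\sqrt{n})\) for all \(\eta T\geq 1\) if \(c\geq 3/4\) and for \(\eta T\lesssim n^{\frac{c}{3-4c}}\) if \(c<3/4\); the two constraints are compatible exactly when \(c+\mu\geq 1\).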
**Interpretation of the Results:** Corollary 5 indicates a quantitative condition via the network complexity and scaling factor where GD for two-layer NNs can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) in under-parameterization and over-parameterization regimes. We use the left panel of Figure 1 to interpret the results.
Let us first explain the meanings of the regions and lines in the left panel of Figure 1. The _blue regions_ (with or without dots) correspond to the conditions \(\frac{c}{3}+\mu>\frac{1}{2}\) and \(c+\mu\geq 1\) in part (a) of Corollary 5, while our results do not hold in the _pink region_, which violates the conditions in part (a). Under the conditions \(\frac{c}{3}+\mu>\frac{1}{2}\) and \(c+\mu\geq 1\), the desired bound in part (a) can be achieved by choosing \(m\asymp(\eta T)^{\frac{3}{2c}}\) for any \(\eta T\) satisfying \(\eta T\gtrsim n^{\frac{c}{6\mu+2c-3}}\) if \(c\geq 3/4\) and \(n^{\frac{c}{3-4c}}\gtrsim\eta T\gtrsim n^{\frac{c}{6\mu+2c-3}}\) if \(c\in[1/2,3/4)\). This further implies that GD with a suitable number of iterations can achieve the error rate \(\mathcal{O}(1/\sqrt{n})\) for any width \(m\gtrsim n^{\frac{3}{2(6\mu+2c-3)}}\) if \(c\in[3/4,1]\), and for any width \(n^{\frac{3}{2(3-4c)}}\gtrsim m\gtrsim n^{\frac{3}{2(6\mu+2c-3)}}\) if \(c\in[1/2,3/4)\). This observation tells us **that the smallest width** for guaranteeing our results in part (a) is \(m\asymp n^{\frac{3}{2(6\mu+2c-3)}}\) for any \(c\in[1/2,1]\). The dotted line \(\mu=3/4-c/3\) in the figure corresponds to the setting \(m\asymp n\), i.e., \(\frac{3}{2(6\mu+2c-3)}=1\). Correspondingly, we use the _dotted blue region_ above the dotted line to indicate the under-parameterization region and _the blue region without dots_ below the dotted line for the over-parameterization region. With these explanations, we can interpret the left panel of Figure 1 as follows.
\(\bullet\) Firstly, from the figure, we know that, if values of \(c\) and \(\mu\) are located above the dotted line \(\mu\geq 3/4-c/3\), i.e., the blue region with dots, under-parameterization is _sufficient_ for GD to achieve the error rate \(\mathcal{O}(1/\sqrt{n})\). It implies that the sufficient condition for under-parameterized NNs trained by GD achieving the desired rate is \(\mu\geq 3/4-c/3\). The potential reason is that the population risk minimizer \(\mathbf{W}^{*}\) is well-behaved in terms of its norm being relatively small with \(\mu\) being relatively large there. In particular, when \(\mu>1/2\), \(\|\mathbf{W}^{*}\|_{2}\leq m^{1/2-\mu}\) tends to \(0\) as \(m\) tends to infinity. Hence, it is expected that under-parameterized NNs can learn this relatively simple \(\mathbf{W}^{*}\) well. However, it is worthy of mentioning that over-parameterization can also achieve the rate \(\mathcal{O}(1/\sqrt{n})\) since the dotted line only indicates the smallest width required for achieving such an error rate.
\(\bullet\) Secondly, from the figure, we see that if \(c\) and \(\mu\) belong to the blue region without dots, which lies between the solid lines and the dotted line, then over-parameterization is _necessary_ for achieving the error rate \(\mathcal{O}(1/\sqrt{n})\). This is because, in the blue region without dots, the condition \(m\gtrsim n^{\frac{3}{2(6\mu+2c-3)}}\) from part (a) of Corollary 5 always falls in the over-parameterization regime, i.e., \(m\gtrsim n\). Furthermore, from the above discussions, our theoretical results indicate that over-parameterization does bring benefit for GD to achieve
Figure 1: Scaling parameter \(c\) versus Network complexity parameter \(\mu\) for Part (a) in Corollary 5 (left) and Part (a) in Corollary 9 (right). _Blue Region without dots_: values of \(c\) and \(\mu\) where over-parameterization is necessary to achieve error bound \(\mathcal{O}(1/\sqrt{n})\). _Blue Region with dots_: values of \(c\) and \(\mu\) where under-parameterization is sufficient to achieve error bound \(\mathcal{O}(1/\sqrt{n})\). _Pink Region_: the desired bound cannot be guaranteed.
good generalization, in the sense that GD can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) when \(c\) and \(\mu\) are in the whole blue region (with or without dots), while under-parameterization can only do so in the blue region with dots, where the network complexity is relatively simple, i.e., \(\mu\) is relatively large.
\(\bullet\) Thirdly, our results do not hold for GD when the values of \(c\) and \(\mu\) are in the pink region of the figure. In particular, when \(\mu<1/6\), our bounds do not hold for any \(c\in[1/2,1]\). We suspect that this is an artifact of our analysis tools, and it remains an open question whether one can obtain a generalization error bound of \(\mathcal{O}(1/\sqrt{n})\) when \(\mu<1/6\). In addition, our results in Corollary 5 indicate that the requirement on \(m\) becomes weaker as \(c\) and \(\mu\) become larger, implying that networks with larger scaling and simpler network complexity need less over-parameterization for GD to achieve the desired error rates for two-layer NNs.
**Remark 3**.: In Lemma A.2, we show that \(f_{\mathbf{W}}\) is \(B_{\sigma^{\prime}}c_{\mathbf{x}}m^{\frac{1}{2}-c}\)-Lipschitz. Combining this result with Assumption 3, we know \(|f_{\mathbf{W}^{*}}(\mathbf{x})-f_{\mathbf{0}}(\mathbf{x})|^{2}=\mathcal{O}(m^{2(1-c-\mu)})\). In order for \(|f_{\mathbf{W}^{*}}(\mathbf{x})-f_{\mathbf{0}}(\mathbf{x})|^{2}\) not to vanish as \(m\) tends to infinity, one needs \(c+\mu\leq 1\). In Corollary 5, we also need \(\frac{c}{3}+\mu>\frac{1}{2}\) to ensure the excess risk bounds vanish. Combining these two conditions implies that \(c\) cannot be larger than \(3/4\). That is, for the range \(c\in(3/4,1]\), the conditions in Corollary 5 restrict the class of functions the networks can represent as \(m\) tends to infinity. However, we emphasize that even in the simplest case, where \(|f_{\mathbf{W}^{*}}(\mathbf{x})-f_{\mathbf{0}}(\mathbf{x})|^{2}\) tends to \(0\) as \(m\) tends to infinity, our results still imply that over-parameterization does bring benefit for GD to achieve the optimal excess risk rate \(\mathcal{O}(1/\sqrt{n})\). Besides, our corollary mainly discusses the conditions for achieving the excess risk rates \(\mathcal{O}(1/\sqrt{n})\) and \(\mathcal{O}(1/n)\); these conditions would be milder for slower excess risk rates, and the restriction on \(c\) correspondingly weaker. Furthermore, our main result (i.e., Theorem 4) does not rely on Assumption 3 and holds in any setting.
**Comparison with the Existing Work:** Part (b) in Corollary 5 shows fast rate \(\mathcal{O}(1/n)\) can be derived under a low-noise condition \(L(\mathbf{W}^{*})=0\) which is equivalent to the fact that there is a true network such that \(y=f_{\mathbf{W}^{*}}(\mathbf{x})\) almost surely. Similar to part (a), large \(c\) and large \(\mu\) also help weaken the requirement on the width \(m\) in this case. For a special case \(\mu=1/2\) and \(c=1/2\), [24] proved that GD for two-layer NNs achieves the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) with \(m\asymp n^{3/2}\) and \(\eta T\asymp\sqrt{n}\) in the general case, which is further improved to \(\mathcal{O}(1/n)\) with \(m\asymp n^{3}\) and \(\eta T\asymp n\) in a low-noise case. Corollary 5 recovers their results with the same conditions on \(m\) and \(\eta T\) for this setting.
[36] studied GD with weakly convex losses and showed that the excess population risk is controlled by \(\mathcal{O}\big(\frac{\eta TL^{2}}{n}+\frac{\|\mathbf{W}_{\epsilon}-\mathbf{W}_{0}\|_{2}^{2}}{\eta T}+\epsilon(\eta T+\|\mathbf{W}_{\epsilon}-\mathbf{W}_{0}\|_{2})\big)\) if \(2\eta\epsilon<1/T\), when the empirical risk is \(\epsilon\)-weakly convex and \(L\)-Lipschitz continuous, where \(\mathbf{W}_{\epsilon}=\arg\min_{\mathbf{W}}L_{S}(\mathbf{W})+\epsilon\|\mathbf{W}-\mathbf{W}_{0}\|_{2}^{2}\). If the approximation error is small enough, then the \(\mathcal{O}(1/\sqrt{n})\) bound can be achieved by choosing \(\eta T=\sqrt{n}\) if \(\|\mathbf{W}_{\epsilon}\|_{2}=\mathcal{O}(1)\). Indeed, their excess risk bound does not converge in the general case. Specifically, note that \(L_{S}(\mathbf{W}_{\epsilon})+\epsilon\|\mathbf{W}_{\epsilon}-\mathbf{W}_{0}\|_{2}^{2}\leq L_{S}(0)+\epsilon\|\mathbf{W}_{0}\|_{2}^{2}\); then \(\|\mathbf{W}_{\epsilon}-\mathbf{W}_{0}\|_{2}^{2}=\mathcal{O}(1/\epsilon)\). The simultaneous appearance of \(\frac{1}{\eta T\epsilon}\) and \(\eta T\epsilon\) causes the non-vanishing error bound. [36] also investigated the weak convexity of two-layer NNs with a smooth activation function. Under the assumption that the derivative of the loss function is uniformly bounded by a constant, they proved that the weak convexity parameter is controlled by \(\mathcal{O}(d/m^{c})\). We provide a dimension-independent weak convexity parameter, which further yields a dimension-independent excess risk rate \(\mathcal{O}(1/\sqrt{n})\). More discussion can be found in Appendix A.3.
### Three-layer Neural Networks with Scaling Parameters
Now, we present our results for three-layer NNs. Let \(\hat{\rho}=4B_{2}(1+2B_{1})\), where \(B_{1},B_{2}>0\) are constants depending on \(c_{\mathbf{x}},B_{\sigma^{\prime}},B_{\sigma^{\prime\prime}}\) and \(c_{0}\), whose specific forms are given in Appendix B.1. Let \(\mathcal{B}_{T}=\sqrt{\eta T}+\|\mathbf{W}_{0}\|_{2}\). We first present the generalization bounds for three-layer NNs.
**Theorem 6** (Generalization error).: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with
\(\eta_{t}\equiv\eta\leq 1/(8\hat{\rho})\) based on \(S\). Assume_
\[m \gtrsim(\eta T)^{4}+(\eta T)^{\frac{1}{4c-2}}+\left\|\mathbf{W}_{0} \right\|_{2}^{\frac{84}{6c-3}}+\left\|\mathbf{W}_{0}\right\|_{2}^{\frac{1}{6c- 3}}+\left((\eta T\mathcal{B}_{T})^{2}+\frac{(\eta T)^{\frac{7}{2}}\mathcal{B}_ {T}}{n}\right)^{\frac{1}{6c-\frac{1}{2}}}\] \[+\left((\eta T)^{\frac{3}{2}}\mathcal{B}_{T}^{2}+\frac{(\eta T)^ {3}\mathcal{B}_{T}}{n}\right)^{\frac{1}{4c-1}}+\left((\eta T)^{2}\mathcal{B}_ {T}+\frac{(\eta T)^{\frac{7}{2}}}{n}\right)^{\frac{1}{6c-1}}+\left((\eta T)^{ \frac{3}{2}}\mathcal{B}_{T}+\frac{(\eta T)^{3}}{n}\right)^{\frac{1}{4c-\frac{ 3}{2}}}. \tag{6}\]
_Then, for any \(t\in[T]\),_
\[\mathbb{E}[L(\mathbf{W}_{t})-L_{S}(\mathbf{W}_{t})]\leq\Big{(}\frac{4e^{2} \eta^{2}\hat{\rho}^{2}t}{n^{2}}+\frac{4e\eta\hat{\rho}}{n}\Big{)}\sum_{j=0}^{ t-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{j})\big{]}.\]
Similar to Theorem 2, Theorem 6 also implies that a larger scaling \(c\) relaxes the requirement on \(m\).
**Remark 4**.: Compared to two-layer NNs, the analysis of three-layer NNs is more challenging since we can only show that \(\lambda_{\max}\big(\nabla^{2}\ell(\mathbf{W};z)\big)\leq\rho_{\mathbf{W}}\), where \(\rho_{\mathbf{W}}\) depends on \(\|\mathbf{W}\|_{2}\); that is, the smoothness parameter of \(\ell\) relies on an upper bound of \(\mathbf{W}\), while that of two-layer NNs is uniformly bounded. As a result, three-layer NNs do not directly enjoy the almost co-coercivity, which is the key step in controlling the stability of GD. To handle this problem, we first establish a crude estimate \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\eta tm^{2c-1}\) for any \(c>1/2\) by an induction strategy. Using this estimate, we can further show that \(\rho_{\mathbf{W}}\leq\hat{\rho}\) with \(\hat{\rho}=\mathcal{O}(1)\) for any \(\mathbf{W}\) produced by GD iterates if \(m\) satisfies (6). Finally, assuming \(\eta\leq 1/(2\hat{\rho})\), we build the upper bound \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\sqrt{2c_{0}\eta t}\). However, for the case \(c=1/2\), we cannot get a similar bound due to the condition \(m^{2-4c}\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}^{2}=\mathcal{O}(m^{2c-1})\). Specifically, the upper bound of \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\) in this case contains a worse factor \(2^{t}\), which is not easy to control. Therefore, we only consider \(c\in(1/2,1]\) for three-layer NNs. The estimate of \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\) when \(c=1/2\) remains an open problem. The detailed proof of the theorem is given in Appendix B.1.
Let \(\hat{C}_{\mathbf{W}}=4B_{1}\big(m^{-3c}(\|\mathbf{W}\|_{2}^{2}+2c_{0}\eta T)+m^{\frac{1}{2}-2c}(\|\mathbf{W}\|_{2}+\sqrt{2c_{0}\eta T})\big)\) and \(\hat{B}_{\mathbf{W}}=\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-1/2}}\big(\sqrt{2c_{0}\eta T}+\|\mathbf{W}\|_{2}\big)+\frac{B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}}{m^{2c-1}}\big(2\sqrt{2c_{0}\eta T}+\|\mathbf{W}\|_{2}\big)+\sqrt{2c_{0}}\). The following theorem gives optimization error bounds for three-layer NNs. The proof is given in Appendix B.2.
**Theorem 7** (Optimization error).: _Let Assumptions 1, 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with \(\eta_{t}\equiv\eta\leq 1/(8\hat{\rho})\). Let \(\mathcal{C}_{T,n}=\eta T+\eta^{3}T^{2}/n^{2}\). Assume (6) and_
\[m \gtrsim \big{(}\mathcal{C}_{T,n}(\mathcal{B}_{T}+\|\mathbf{W}^{*}\|_{2})^ {4}\big{)}^{\frac{1}{5c-\frac{1}{2}}}+(\mathcal{C}_{T,n}(\mathcal{B}_{T}+\| \mathbf{W}^{*}\|_{2})^{3})^{\frac{1}{4c-1}}+(\mathcal{C}_{T,n}(\mathcal{B}_{T}+ \|\mathbf{W}^{*}\|_{2})^{2})^{\frac{1}{4c-\frac{3}{2}}}\] \[+(\mathcal{C}_{T,n}(\mathcal{B}_{T}+\|\mathbf{W}^{*}\|_{2}))^{ \frac{1}{2c-\frac{1}{2}}}. \tag{7}\]
_Then we have_
\[\mathbb{E}\big{[}L_{S}(\mathbf{W}_{T})]-L_{S}(\mathbf{W}_{\frac{1} {\eta T}}^{*})-\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_ {0}\|_{2}^{2}\big{]}\] \[\leq\hat{C}_{\mathbf{W}_{\frac{1}{\eta T}}^{*}}\hat{B}_{\mathbf{W }_{\frac{1}{\eta T}}^{*}}\big{(}\big{(}\frac{4e^{2}\eta^{3}T^{2}\hat{\rho}^{2}}{ n^{2}}+\frac{4e\eta^{2}T\hat{\rho}}{n}\big{)}\sum_{s=0}^{T-1}\mathbb{E}\big{[}L_{S}( \mathbf{W}_{s})\big{]}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2 }^{2}+\eta T\big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*}) \big{]}\big{)}.\]
Now, we develop excess risk bounds of GD for three-layer NNs by combining Theorem 6 and Theorem 7 together. The proof is given in Appendix B.3.
**Theorem 8** (Excess population risk).: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with \(\eta\leq 1/(8\hat{\rho})\). Assume (6) and (7) hold. For any \(c\in(1/2,1]\), if \(n\gtrsim\eta T\), then there holds_
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta T}{ n}L(\mathbf{W}^{*})+\frac{\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{\eta T}\Big{)}.\]
Finally, we establish excess risk bounds of GD for three-layer NNs by assuming Assumption 3 holds.
**Corollary 9**.: _Let assumptions in Theorem 8 and Assumption 3 hold. The following statements hold._
1. _Assume_ \(\mu\geq 1/2\)_. If_ \(c\in[9/16,1]\)_, we can choose_ \(m\asymp(\eta T)^{4}\) _and_ \(\eta T\) _such that_ \(n^{\frac{1}{2(8\mu-3)}}\lesssim\eta T\lesssim\sqrt{n}\)_. If_ \(c\in(1/2,9/16)\)_, we can choose_ \(m\asymp(\eta T)^{\frac{1}{4c-2}}\) _and_ \(\eta T\) _such that_ \(n^{\frac{2c-1}{2\mu+4c-3}}\lesssim\eta T\lesssim\sqrt{n}\)_. Then_ \[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big(\frac{1}{\sqrt{n}}\big).\]
2. _Assume_ \(L(\mathbf{W}^{*})=0\) _and_ \(\mu\geq 1/2\)_. If_ \(c\in[9/16,1]\)_, we can choose_ \(m\asymp(\eta T)^{4}\) _and_ \(\eta T\gtrsim n^{\frac{1}{8\mu-3}}\)_. If_ \(c\in(1/2,9/16)\)_, we can choose_ \(m\asymp(\eta T)^{\frac{1}{4c-2}}\) _and_ \(\eta T\) _such that_ \(\eta T\gtrsim n^{\frac{4c-2}{4c+2\mu-3}}\)_. Then_ \[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{n} \big{)}.\]
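As a quick check of the exponents in part (a) for \(c\in[9/16,1]\) (our own arithmetic under the stated choices, absorbing the initialization term into \(\|\mathbf{W}^{*}\|_{2}\)), substituting \(m\asymp(\eta T)^{4}\) and \(\|\mathbf{W}^{*}\|_{2}^{2}\leq m^{1-2\mu}\) into the bound of Theorem 8 gives

\[\frac{\eta T}{n}L(\mathbf{W}^{*})\lesssim\frac{\eta T}{n}\qquad\text{and}\qquad\frac{\|\mathbf{W}^{*}\|_{2}^{2}}{\eta T}\lesssim(\eta T)^{3-8\mu},\]

so the rate \(\mathcal{O}(1/\sqrt{n})\) requires \(\eta T\lesssim\sqrt{n}\) for the first term and \(\eta T\gtrsim n^{\frac{1}{2(8\mu-3)}}\) for the second (using \(\mu\geq 1/2\) so that \(8\mu-3>0\)); the case \(c\in(1/2,9/16)\) with \(m\asymp(\eta T)^{\frac{1}{4c-2}}\) is analogous.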
**Discussion of the Results:** Part (a) in Corollary 9 shows that GD for three-layer NNs can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) with \(m\asymp(\eta T)^{4}\) and \(n^{\frac{1}{2(8\mu-3)}}\lesssim\eta T\lesssim\sqrt{n}\) for the case \(c\in[9/16,1]\), and with \(m\asymp(\eta T)^{\frac{1}{4c-2}}\) and \(n^{\frac{2c-1}{2\mu+4c-3}}\lesssim\eta T\lesssim\sqrt{n}\) for the case \(c\in(1/2,9/16)\), respectively. Note that there is an additional assumption \(\mu\geq 1/2\) in part (a). Combining this assumption with \(\|\mathbf{W}^{*}\|_{2}\leq m^{\frac{1}{2}-\mu}\), we see that the norm of the population risk minimizer \(\mathbf{W}^{*}\) cannot grow with \(m\) (indeed \(\|\mathbf{W}^{*}\|_{2}\leq 1\) when \(\mu\geq 1/2\)). The potential reason is that we use a constant to bound the smoothness parameter in the analysis for three-layer NNs. Part (a) also indicates a quantitative condition in terms of \(m\) and \(c\) under which GD for three-layer NNs can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\) in under-parameterization and over-parameterization regimes, which is interpreted in the right panel of Figure 1. The results in part (a) tell us that the smallest width guaranteeing the desired bounds is \(m\asymp n^{\frac{2}{8\mu-3}}\) for \(c\in[9/16,1]\) and \(m\asymp n^{\frac{1}{2(2\mu+4c-3)}}\) for \(c\in(1/2,9/16)\). Similar to the left panel of Figure 1, the dotted lines \(\mu=\frac{5}{8}\) and \(\mu=\frac{7}{4}-2c\) in the right panel correspond to the setting \(m\asymp n\), i.e., \(\frac{2}{8\mu-3}=1\) and \(\frac{1}{2(2\mu+4c-3)}=1\). Hence, when \(c\) and \(\mu\) belong to the blue region with dots, under-parameterization is _sufficient_ to achieve the desired rate; when \(c\) and \(\mu\) are located in the blue region without dots, between the solid lines and the dotted line, over-parameterization is _necessary_ for GD to achieve the rate \(\mathcal{O}(1/\sqrt{n})\). Our results for three-layer NNs also imply that over-parameterization does bring benefit for GD to achieve good generalization, in the sense that GD can achieve the excess risk rate \(\mathcal{O}(1/\sqrt{n})\). Under a low-noise condition, part (b) shows that the excess risk rate can be improved to \(\mathcal{O}(1/n)\) with suitable choices of \(m\) and \(\eta T\). These results imply that the larger the scaling parameter is, the less over-parameterization is needed for GD to achieve the desired error rate, in both the general case and the low-noise case.
**Comparison with the Existing Work:** [36] studied the minimal eigenvalue of the empirical risk Hessian for a three-layer NN with a linear activation in the first layer for Lipschitz and convex losses (e.g., the logistic loss), while we focus on NNs with more general activation functions for the least squares loss; see Appendix B.4 for a more detailed discussion. [21] studied the generalization performance of over-parameterized three-layer NTK models with the absolute loss and ReLU activation, showing that the generalization error is of the order \(\mathcal{O}(1/\sqrt{n})\) when there are infinitely many neurons; they only trained the middle-layer weights of the networks. To the best of our knowledge, our work is the first study of the stability and generalization of GD that trains both the first and the second layers of the network in the kernel-free regime.
## 4 Main Idea of the Proof
In this section, we present the main ideas for proving the main results in Section 3.
**Two-layer Neural Networks.** (3) decomposes the excess population risk into three terms: generalization error, optimization error and approximation error. We estimate these three terms separately.
_Generalization error._ From Lemma 1 we know the generalization error can be upper bounded by the on-average argument stability of GD. Hence, it remains to figure out the on-average argument stability of GD. A key step in
our stability analysis is to show that the loss is strongly smooth and weakly convex, which can be obtained by the following results given in Lemma A.2:
\[\lambda_{\max}\big(\nabla^{2}\ell(\mathbf{W};z)\big)\leq\rho\quad\text{and}\quad\lambda_{\min}\big(\nabla^{2}\ell(\mathbf{W};z)\big)\geq-\Big(\frac{c_{\mathbf{x}}^{3}B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}}{m^{2c-\frac{1}{2}}}\|\mathbf{W}-\mathbf{W}_{0}\|_{2}+\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}\sqrt{2c_{0}}}{m^{c}}\Big).\]
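These eigenvalue bounds are easy to probe numerically; the following sketch (our own illustration, with tanh and a unit-norm input so that \(c_{\mathbf{x}}=1\)) compares a finite-difference Hessian of \(\ell(\mathbf{W};z)\) with the constant \(\rho\):

```python
import numpy as np

# Numerical probe of Lemma A.2 for a small two-layer tanh network:
# lambda_max of the loss Hessian should stay below rho, and lambda_min
# should be only mildly negative (weak convexity).
rng = np.random.default_rng(0)
m, d, c = 8, 3, 0.5
a = rng.choice([-1.0, 1.0], size=m)
x = rng.normal(size=d); x /= np.linalg.norm(x)   # c_x = 1
y = 0.5

def loss(w_flat):
    W = w_flat.reshape(m, d)
    f = float(a @ np.tanh(W @ x)) / m**c
    return 0.5 * (f - y) ** 2

def num_hessian(w, eps=1e-4):                    # central finite differences
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (loss(w + ei + ej) - loss(w + ei - ej)
                       - loss(w - ei + ej) + loss(w - ei - ej)) / (4 * eps**2)
    return (H + H.T) / 2

w = rng.normal(size=m * d)
eigs = np.linalg.eigvalsh(num_hessian(w))
# tanh: B_sigma = B_sigma' = 1, B_sigma'' = 4/(3*sqrt(3)); here c_y = |y|.
B2 = 4 / (3 * np.sqrt(3))
rho = (1 + B2) / m**(2 * c - 1) + B2 * abs(y) / m**c
print(f"lambda_max = {eigs.max():.4f}  vs  rho = {rho:.4f}")
print(f"lambda_min = {eigs.min():.4f}  (slightly negative)")
```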
## 5 Conclusion
We present stability and generalization analysis of GD for multi-layer NNs with generic network scaling factors. Under some qualitative conditions on the network scaling and the network complexity, we establish excess risk bounds of the order \(\mathcal{O}(1/\sqrt{n})\) for GD on both two-layer and three-layer NNs, which are improved to \(\mathcal{O}(1/n)\) with an additional low-noise condition. Our results describe a quantitative condition related to the scaling factor and the network complexity under which GD on two-layer and three-layer NNs can achieve the desired excess risk rate.
There remain several questions for further study. The first question is whether our analysis of GD for multi-layer NNs can be extended to SGD with less computation cost. The key challenge here is that the analysis for GD relies critically on the monotonicity of the objective functions along the optimization process, which does not hold for SGD. Second, our analysis for three-layer NNs does not hold for \(c=1/2\). It would be interesting to develop a result for this setting. Finally, the results in Corollaries 5 and 9 hold true for \(\mu\geq 1/6\) or \(\mu\geq 1/2\). It remains an open question to us whether we can get a generalization error bound \(\mathcal{O}(1/\sqrt{n})\) when \(\mu\) is small.
**Acknowledgement.** The work of Ding-Xuan Zhou was partially supported by the Laboratory for AI-Powered Financial Technologies under the InnoHK scheme. The corresponding author is Yiming Ying whose work was supported by NSF research grants (DMS-2110836, IIS-2103450, and IIS-2110546). Di Wang was supported in part by BAS/1/1689-01-01, URF/1/4663-01-01, FCC/1/1976-49-01 of KAUST, and a funding of the SDAIA-KAUST AI center.
**Appendix for "Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks"**
## Appendix A Proofs of Two-layer Neural Networks
### Proofs of Generalization Bounds
We first introduce the self-bounding property of smooth functions [38].
**Lemma A.1** (Self-bounding property).: _Suppose for all \(z\in\mathcal{Z}\), the function \(\mathbf{W}\mapsto\ell(\mathbf{W};z)\) is nonnegative and \(\rho\)-smooth. Then \(\|\nabla\ell(\mathbf{W};z)\|_{2}^{2}\leq 2\rho\ell(\mathbf{W};z)\)._
We work with vectorized quantities so \(\mathbf{W}\in\mathbb{R}^{md}\). Then \(\nabla f_{\mathbf{W}}(\mathbf{x})\in\mathbb{R}^{md}\) and \(\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\in\mathbb{R}^{md\times md}.\) Denote by \(\|\mathbf{W}\|_{op}\) the spectral norm of a matrix \(\mathbf{W}\). We first introduce the following lemma, which shows that the loss function is smooth and weakly convex.
**Lemma A.2** (Smoothness and Curvature).: _Suppose Assumptions 1 and 2 hold. Let \(\mathbf{W}_{0}\) be the initial point of GD. For any fixed \(\mathbf{W}\in\mathbb{R}^{m\times d}\) and any \(z\in\mathcal{Z}\), there holds_
\[\lambda_{\max}\!\big{(}\nabla^{2}\!\ell(\mathbf{W};z)\big{)}\! \leq\!\rho\text{ with }\rho\!=\!c_{\mathbf{x}}^{2}\Big{(}\frac{B_{\sigma^{\prime}}^{2}\!+\!B_{ \sigma}B_{\sigma^{\prime\prime}}}{m^{2c-1}}\!+\!\frac{B_{\sigma^{\prime\prime }}c_{y}}{m^{c}}\Big{)}.\] \[\lambda_{\min}\!\big{(}\nabla^{2}\!\ell(\mathbf{W};z)\big{)}\! \geq\!-\!\Big{(}\frac{c_{\mathbf{x}}^{3}B_{\sigma^{\prime}}B_{\sigma^{\prime \prime}}}{m^{2c-\frac{1}{2}}}\|\mathbf{W}\!-\!\mathbf{W}_{0}\|_{2}\!+\!\frac{c _{\mathbf{x}}^{2}B_{\sigma}\!\sqrt{2c_{0}}}{m^{c}}\Big{)}.\]
Proof.: Recall that \(f_{\mathbf{W}}(\mathbf{x})=\frac{1}{m^{c}}\sum_{k=1}^{m}a_{k}\sigma(\mathbf{w}_{k}\mathbf{x})\). Let \(\mathbf{v}=(\mathbf{v}_{1},\ldots,\mathbf{v}_{m})\in\mathbb{R}^{dm}\) with \(\mathbf{v}_{k}\in\mathbb{R}^{d}\) where \(k\in[m]\). According to Assumptions 1 and 2, and noting that \(|a_{k}|=1\), we obtain the following estimates:
\[\|\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\|_{op} =\max_{\mathbf{v}:\|\mathbf{v}\|_{2}\leq 1}\sum_{k=1}^{m}\frac{a_{k}}{ m^{c}}\langle\mathbf{v}_{k},\mathbf{x}\rangle^{2}\sigma^{\prime\prime}(\mathbf{w }_{k}\mathbf{x})\] \[\leq\frac{\|\mathbf{x}\|_{2}^{2}B_{\sigma^{\prime\prime}}}{m^{c}} \max_{\mathbf{v}:\|\mathbf{v}\|_{2}\leq 1}\sum_{k=1}^{m}\|\mathbf{v}_{k}\|_{2}^{2}\] \[\leq\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}},\] (A.1) \[\|\nabla f_{\mathbf{W}}(\mathbf{x})\|_{2}^{2} =\sum_{k=1}^{m}\big{\|}\frac{a_{k}}{m^{c}}\mathbf{x}\sigma^{\prime }(\mathbf{w}_{k}\mathbf{x})\big{\|}_{2}^{2}\leq\frac{B_{\sigma^{\prime}}^{2}c _{\mathbf{x}}^{2}}{m^{2c-1}}\] (A.2)
and
\[\big{|}f_{\mathbf{W}}(\mathbf{x})-y\big{|}\leq\big{|}f_{\mathbf{W}}(\mathbf{x })\big{|}+c_{y}\leq m^{1-c}B_{\sigma}+c_{y}.\] (A.3)
Note
\[\nabla^{2}\ell(\mathbf{W};z)=\nabla f_{\mathbf{W}}(\mathbf{x})\nabla f_{ \mathbf{W}}(\mathbf{x})^{\top}+\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\big{(}f_ {\mathbf{W}}(\mathbf{x})-y\big{)}.\] (A.4)
Then for any \(\mathbf{W}\in\mathbb{R}^{md}\), we can upper bound the maximum eigenvalue of the Hessian as
\[\lambda_{\max}(\nabla^{2}\ell(\mathbf{W};z)) \leq\|\nabla f_{\mathbf{W}}(\mathbf{x})\|_{2}^{2}+\|\nabla^{2}f_ {\mathbf{W}}(\mathbf{x})\|_{op}|f_{\mathbf{W}}(\mathbf{x})-y|\] \[\leq\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}^{2}}{m^{2c-1}}+ \frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\big{(}m^{1-c}B_{ \sigma}+c_{y}\big{)}\] \[=c_{\mathbf{x}}^{2}\Big{(}\frac{B_{\sigma^{\prime}}^{2}+B_{\sigma} B_{\sigma^{\prime\prime}}}{m^{2c-1}}+\frac{B_{\sigma^{\prime\prime}}c_{y}}{m^{c}} \Big{)},\]
which implies that the loss function is \(\rho\)-smooth with \(\rho=c_{\mathbf{x}}^{2}\big{(}\frac{B_{\sigma^{\prime}}^{2}+B_{\sigma}B_{\sigma^{ \prime\prime}}}{m^{2c-1}}+\frac{B_{\sigma^{\prime\prime}}c_{\mathbf{y}}}{m^{c}} \big{)}\).
For any \(\mathbf{W},\mathbf{W}^{\prime}\in\mathbb{R}^{md}\), from Assumption 1 we know
\[\big{|}f_{\mathbf{W}}(\mathbf{x})-f_{\mathbf{W}^{\prime}}(\mathbf{x})\big{|} \leq\frac{1}{m^{c}}\sum_{k=1}^{m}\big{|}\sigma(\mathbf{w}_{k}\mathbf{x})- \sigma(\mathbf{w}_{k}^{\prime}\mathbf{x})\big{|}\leq\frac{B_{\sigma^{\prime}}} {m^{c}}\sum_{k=1}^{m}\big{|}(\mathbf{w}_{k}-\mathbf{w}_{k}^{\prime})\mathbf{x }\big{|}\leq\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}\|\mathbf{W}- \mathbf{W}^{\prime}\|_{2}.\] (A.5)
Combining (A.4) with the fact that \(\nabla f_{\mathbf{W}}(\mathbf{x})\nabla f_{\mathbf{W}}(\mathbf{x})^{\top}\) is positive semi-definite together, we obtain
\[\lambda_{\min}(\nabla^{2}\ell(\mathbf{W};z)) \geq-\|\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\|_{op}\big{|}f_{ \mathbf{W}}(\mathbf{x})-y\big{|}\] \[\geq-\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}} \Big{(}\big{|}f_{\mathbf{W}}(\mathbf{x})-f_{\mathbf{W}_{0}}(\mathbf{x})\big{|} +\big{|}f_{\mathbf{W}_{0}}(\mathbf{x})-y\big{|}\Big{)}\] \[\geq-\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}} \Big{(}\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}\|\mathbf{W}- \mathbf{W}_{0}\|_{2}+\sqrt{2\ell(\mathbf{W}_{0};z)}\Big{)}\] \[\geq-\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}} \Big{(}\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}\|\mathbf{W}- \mathbf{W}_{0}\|_{2}+\sqrt{2c_{0}}\Big{)},\] (A.6)
where in the third inequality we used (A.5) with \(\mathbf{W}^{\prime}=\mathbf{W}_{0}\) and in the last inequality we used Assumption 2. The proof is completed.
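To complement the proof, the following sketch (our own; it assumes \(\sigma=\tanh\), unit-norm inputs so \(c_{\mathbf{x}}=1\), and explicit Hessian assembly) illustrates numerically that \(\lambda_{\max}(\nabla^{2}\ell)\) is small while \(\lambda_{\min}(\nabla^{2}\ell)\) is only mildly negative for the scaled two-layer network, in line with Lemma A.2.

```python
# Illustrative check of Lemma A.2 (ours): build the Hessian of the squared
# loss for f_W(x) = m^{-c} * sum_k a_k * tanh(w_k . x) and inspect its
# extreme eigenvalues.  sigma = tanh has bounded sigma, sigma', sigma''.
import numpy as np

m, d, c = 200, 5, 0.75
rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], size=m)
W = rng.normal(size=(m, d))
x = rng.normal(size=d)
x /= np.linalg.norm(x)  # so c_x = 1
y = 0.3

pre = W @ x                          # pre-activations, shape (m,)
t = np.tanh(pre)
f = (a * t).sum() / m ** c           # network output
s2 = -2.0 * t * (1.0 - t ** 2)       # sigma''(pre)

# gradient of f w.r.t. the vectorised weights, shape (m*d,)
g = ((a * (1.0 - t ** 2))[:, None] * x[None, :] / m ** c).reshape(-1)

# Hessian of the loss: grad f * grad f^T + (f - y) * Hessian of f, where the
# Hessian of f is block-diagonal with blocks a_k sigma''(pre_k) x x^T / m^c.
H = np.outer(g, g)
for k in range(m):
    H[k * d:(k + 1) * d, k * d:(k + 1) * d] += (f - y) * a[k] * s2[k] / m ** c * np.outer(x, x)

eigs = np.linalg.eigvalsh(H)
print(f"lambda_min = {eigs[0]:.3e}, lambda_max = {eigs[-1]:.3e}")
# lambda_max = O(m^{1-2c}) while |lambda_min| stays much smaller, matching
# the "smooth and only weakly non-convex" picture of Lemma A.2.
```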
To give an upper bound on the uniform stability, we need the following lemma, which shows how far the GD iterates can deviate from the initial point.
**Lemma A.3** ([35]).: _Suppose the loss is \(\rho\)-smooth and \(\eta\leq 1/(2\rho)\). Then for any \(t\geq 0\), \(i\in[n]\),_
\[\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2} \leq\sqrt{2\eta tL_{S}(\mathbf{W}_{0})},\] \[\|\mathbf{W}_{t}^{(i)}-\mathbf{W}_{0}\|_{2} \leq\sqrt{2\eta tL_{S^{(i)}}(\mathbf{W}_{0})}.\]
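For intuition, this drift bound can be tracked along an actual GD run; below is a hedged sketch (ours) on synthetic data with \(\sigma=\tanh\) and unit-norm inputs, asserting \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\sqrt{2\eta tL_{S}(\mathbf{W}_{0})}\) at every step.

```python
# Hedged sanity check (ours) of Lemma A.3: along GD on the empirical squared
# loss of a scaled two-layer tanh network, the iterate drift should satisfy
# ||W_t - W_0||_2 <= sqrt(2 * eta * t * L_S(W_0)) at every step t.
import numpy as np

def loss_and_grad(W, a, X, Y, c):
    m = W.shape[0]
    pre = X @ W.T                                # (n, m) pre-activations
    f = np.tanh(pre) @ a / m ** c                # (n,) network outputs
    r = f - Y
    # gradient of 0.5 * mean(r^2) w.r.t. W, back-propagated through tanh
    G = ((r[:, None] * (1 - np.tanh(pre) ** 2) * a[None, :]).T @ X) / (len(Y) * m ** c)
    return 0.5 * np.mean(r ** 2), G

rng = np.random.default_rng(2)
n, d, m, c, eta = 50, 5, 256, 0.75, 0.1
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # unit-norm inputs, c_x = 1
Y = rng.normal(size=n)
a = rng.choice([-1.0, 1.0], size=m)
W0 = rng.normal(size=(m, d))
W = W0.copy()
L0, _ = loss_and_grad(W0, a, X, Y, c)

for t in range(1, 201):
    _, G = loss_and_grad(W, a, X, Y, c)
    W -= eta * G
    assert np.linalg.norm(W - W0) <= np.sqrt(2 * eta * t * L0) + 1e-8
print("drift bound held for all 200 GD steps")
```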
The following lemma shows an almost co-coercivity of the gradient operator associated with shallow neural networks. For any \(i\in[n]\), define \(S^{(i)}=\{z_{1},\ldots,z_{i-1},z_{i}^{\prime},z_{i+1},\ldots,z_{n}\}\) as the set formed from \(S\) by replacing the \(i\)-th element with \(z_{i}^{\prime}\). For any \(\mathbf{W}\in\mathcal{W}\),
\[L_{S^{\setminus i}}(\mathbf{W})=L_{S}(\mathbf{W})-\frac{1}{n}\ell(\mathbf{W};z_{i})=L_{S^{(i)}}(\mathbf{W})-\frac{1}{n}\ell(\mathbf{W};z_{i}^{\prime}).\]
Let \(\{\mathbf{W}_{t}\}\) and \(\{\mathbf{W}_{t}^{(i)}\}\) be the sequence produced by GD based on \(S\) and \(S^{(i)}\), respectively.
**Lemma A.4** (Almost Co-coercivity of the Gradient Operator).: _Suppose the loss is \(\rho\)-smooth and \(\eta\leq 1/(2\rho)\). Then_
\[\langle\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)},\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\rangle\geq 2\eta\Big(1-\frac{\eta\rho}{2}\Big)\big\|\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\big\|_{2}^{2}\]
\[\qquad\qquad\qquad\qquad-\epsilon_{t}\big\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big(\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\big)\big\|_{2}^{2},\]
_where \(\epsilon_{t}=\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big{(} \frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}(1+\eta\rho)\big{\|} \mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}+\frac{c_{\mathbf{x}}B_{\sigma^ {\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\Big{)}\)._
Proof.: This lemma can be proved in a similar way to Lemma 5 in [35], except for the estimation of the eigenvalues of the Hessian matrix. Specifically, for \(\alpha\in[0,1]\), let \(\mathbf{W}(\alpha)=\alpha\mathbf{W}_{t}+(1-\alpha)\mathbf{W}_{t}^{(i)}-\alpha\eta\big(\nabla\ell(\mathbf{W}_{t};z)-\nabla\ell(\mathbf{W}_{t}^{(i)};z)\big)\). According to (A.1) and (A.4), for any \(\mathbf{W}\in\mathcal{W}\), we know
\[\lambda_{\min}\big(\nabla^{2}L_{S^{\setminus i}}(\mathbf{W})\big)\geq-\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big(\frac{1}{n}\sum_{j\in[n],j\neq i}|f_{\mathbf{W}}(\mathbf{x}_{j})-y_{j}|\Big).\]
Let \(\mathbf{W}=\mathbf{W}(\alpha)\). Note Lemma A.2 shows that the loss is \(\rho\)-smooth with \(\rho=c_{\mathbf{x}}^{2}\big{(}\frac{B_{\sigma^{\prime}}^{2}+B_{\sigma}B_{\sigma^ {\prime\prime}}}{m^{2c-1}}+\frac{B_{\sigma^{\prime\prime}}c_{\mathbf{y}}}{m^{c }}\big{)}\). Then from (A.5) and the smoothness of \(\ell\) we can get
\[\frac{1}{n}\sum_{j\in[n],j\neq i}|f_{\mathbf{W}(\alpha)}(\mathbf{ x}_{j})-y_{j}|\] \[\leq\frac{1}{n}\sum_{j\in[n],j\neq i}|(f_{\mathbf{W}(\alpha)}( \mathbf{x}_{j})-f_{\mathbf{W}_{t}^{(i)}}(\mathbf{x}_{j}))+(f_{\mathbf{W}_{t}^ {(i)}}(\mathbf{x}_{j})-f_{\mathbf{W}_{0}}(\mathbf{x}_{j}))+(f_{\mathbf{W}_{0} }(\mathbf{x}_{j})-y_{j})|\] \[\leq\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}\|\mathbf{ W}(\alpha)-\mathbf{W}_{t}^{(i)}\|_{2}+\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c -1/2}}\|\mathbf{W}_{t}^{(i)}-\mathbf{W}_{0}\|_{2}+\sqrt{2L_{S^{\setminus i}}( \mathbf{W}_{0})}\] \[\leq\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}\alpha}{m^{c-1/2}} \big{(}\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}+\eta\|\nabla\ell(\mathbf{W }_{t};z)-\nabla\ell(\mathbf{W}_{t}^{(i)};z)\|_{2}\big{)}+\frac{c_{\mathbf{x}}B_ {\sigma^{\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\] \[\leq\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}(1+\eta\rho)}{m^{c-1/ 2}}\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}+\frac{c_{\mathbf{x}}B_{\sigma^ {\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}},\]
where in the third inequality we used Lemma A.3 and \(\ell(\mathbf{W}_{0};z)\leq c_{0}\).
Combining the above two inequalities together, we get
\[\lambda_{\min}\big{(}\nabla^{2}L_{S^{\setminus i}}(\mathbf{W}(\alpha))\big{)} \geq-\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big{(}\frac{c_{ \mathbf{x}}B_{\sigma^{\prime}}(1+\eta\rho)}{m^{c-1/2}}\|\mathbf{W}_{t}- \mathbf{W}_{t}^{(i)}\|_{2}+\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}\sqrt{2\eta Tc _{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\Big{)}.\]
Similarly, letting \(\widetilde{\mathbf{W}}(\alpha)=\alpha\mathbf{W}_{t}^{(i)}+(1-\alpha)\mathbf{W}_{t}-\alpha\eta\big(\nabla\ell(\mathbf{W}_{t}^{(i)};z)-\nabla\ell(\mathbf{W}_{t};z)\big)\), we can prove that
\[\lambda_{\min}\big{(}\nabla^{2}L_{S^{\setminus i}}(\widetilde{\mathbf{W}}( \alpha))\big{)}\geq-\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}} \Big{(}\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}(1+\eta\rho)}{m^{c-1/2}}\| \mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}+\frac{c_{\mathbf{x}}B_{\sigma^{ \prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\Big{)}.\]
The remaining arguments are the same as in the proof of Lemma 5 in [35], so we omit them for brevity.
Based on the almost co-coercivity property of the gradient operator, we give the following uniform stability theorem.
**Theorem A.5** (Uniform Stability).: _Suppose Assumptions 1 and 2 hold. Let \(S,S^{(i)}\) be constructed in Definition 2. Let \(\{\mathbf{W}_{t}\}\) and \(\{\mathbf{W}_{t}^{(i)}\}\) be produced by (2) with \(\eta\leq 1/(2\rho)\) based on \(S\) and \(S^{(i)}\), respectively. Assume (4) holds. For any \(t\in[T]\), there holds_
\[\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}\leq\frac{2\eta eT\sqrt{2c_{0}\rho( \rho\eta T+2)}}{n}.\]
Proof.: Recall that
\[L_{S^{\setminus i}}(\mathbf{W})=L_{S}(\mathbf{W})-\frac{1}{n}\ell(\mathbf{W}; z_{i})=L_{S^{(i)}}(\mathbf{W})-\frac{1}{n}\ell(\mathbf{W};z_{i}^{\prime}).\]
Note \(\mathcal{W}=\mathbb{R}^{m\times d}\). Then by the update rule \(\mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta\nabla L_{S}(\mathbf{W}_{t})\), there holds
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2}\] \[=\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(}\nabla L_{S ^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i) })\big{)}-\frac{\eta}{n}\big{(}\nabla\ell(\mathbf{W}_{t};z_{i})-\nabla\ell( \mathbf{W}_{t}^{(i)};z_{i}^{\prime})\big{)}\big{\|}_{2}^{2}\] \[\leq(1+p)\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(} \nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{ t}^{(i)})\big{)}\big{\|}_{2}^{2}\] \[\quad+\frac{\eta^{2}(1+1/p)}{n^{2}}\big{\|}\nabla\ell(\mathbf{W}_ {t};z_{i})-\nabla\ell(\mathbf{W}_{t}^{(i)};z_{i}^{\prime})\big{\|}_{2}^{2}\] \[\leq(1+p)\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(} \nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{ t}^{(i)})\big{)}\big{\|}_{2}^{2}\] \[\quad+\frac{2\eta^{2}(1+1/p)}{n^{2}}\Big{(}\big{\|}\nabla\ell( \mathbf{W}_{t};z_{i})\big{\|}_{2}^{2}+\big{\|}\nabla\ell(\mathbf{W}_{t}^{(i)};z_{ i}^{\prime})\big{\|}_{2}^{2}\Big{)},\] (A.7)
where in the first inequality we used \((a+b)^{2}\leq(1+p)a^{2}+(1+1/p)b^{2}\).
According to Lemma A.4 we can get
\[\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(}\nabla L_{S ^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i )})\big{)}\big{\|}_{2}^{2}\] \[=\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}+\eta ^{2}\big{\|}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i }}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}\] \[\quad-2\eta\Big{\langle}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}, \nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}( \mathbf{W}_{t}^{(i)})\Big{\rangle}\] \[\leq\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}+ \eta^{2}\big{\|}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}\] \[\quad-4\eta^{2}\Big{(}1-\frac{\eta\rho}{2}\Big{)}\big{\|}\nabla L _{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t} ^{(i)})\big{\|}_{2}^{2}\] \[\quad+2\eta\epsilon_{t}\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i )}-\eta\big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t}^{(i)})\big{)}\big{\|}_{2}^{2},\]
where \(\epsilon_{t}=\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\big(\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}(1+\eta\rho)\big\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big\|_{2}+\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\big)\).
Rearranging the above inequality and noting that \(\eta\rho\leq 1/2\), we obtain
\[(1-2\eta\epsilon_{t})\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}- \eta\big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i }}(\mathbf{W}_{t}^{(i)})\big{)}\big{\|}_{2}^{2}\] \[\leq\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}+ \eta^{2}(2\eta\rho-3)\big{\|}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})- \nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}\leq\big{\|} \mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}.\]
We can choose \(m\) large enough to ensure \(2\eta\epsilon_{t}<1\) holds for any \(t\in[T]\). Indeed, \(2\eta\epsilon_{t}<1\) holds as long as condition (4) holds. We will discuss it at the end of the proof. Now, plugging the above inequality back into (A.7) yields
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2}\] \[\leq\frac{1+p}{1-2\eta\epsilon_{t}}\big{\|}\mathbf{W}_{t}- \mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}+\frac{2\eta^{2}(1+1/p)}{n^{2}}\Big{(} \big{\|}\nabla\ell(\mathbf{W}_{t};z_{i})\big{\|}_{2}^{2}+\big{\|}\nabla\ell( \mathbf{W}_{t}^{(i)};z_{i}^{\prime})\big{\|}_{2}^{2}\Big{)}.\] (A.8)
We can apply (A.8) recursively and derive
\[\big\|\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big\|_{2}^{2}\leq\frac{2\eta^{2}(1+1/p)}{n^{2}}\sum_{j=0}^{t}\Big(\big\|\nabla\ell(\mathbf{W}_{j};z_{i})\big\|_{2}^{2}+\big\|\nabla\ell(\mathbf{W}_{j}^{(i)};z_{i}^{\prime})\big\|_{2}^{2}\Big)\prod_{\tilde{j}=j+1}^{t}\frac{1+p}{1-2\eta\epsilon_{\tilde{j}}},\] (A.9)
where we used \(\mathbf{W}_{0}=\mathbf{W}_{0}^{(i)}\).
According to Lemma A.1 and Lemma A.3, we know
\[\|\nabla\ell(\mathbf{W}_{j};z)\|_{2}^{2} \leq 2\|\nabla\ell(\mathbf{W}_{j};z)-\nabla\ell(\mathbf{W}_{0};z) \|_{2}^{2}+2\|\nabla\ell(\mathbf{W}_{0};z)\|_{2}^{2}\] \[\leq 2\rho^{2}\|\mathbf{W}_{j}-\mathbf{W}_{0}\|_{2}^{2}+4\rho \ell(\mathbf{W}_{0};z)\leq 4\rho^{2}\eta jL_{S}(\mathbf{W}_{0})+4\rho\ell(\mathbf{W}_{0} ;z).\]
Similarly, we can show that
\[\|\nabla\ell(\mathbf{W}_{j}^{(i)};z)\|_{2}^{2}\leq 4\rho^{2}\eta jL_{S^{ (i)}}(\mathbf{W}_{0})+4\rho\ell(\mathbf{W}_{0};z).\]
Combining the above three inequalities together, we get
\[\left\|\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\right\|_{2}^{2}\] \[\leq\frac{8\rho\eta^{2}(1+1/p)}{n^{2}}\sum_{j=0}^{t}\Big{(}\rho \eta jL_{S}(\mathbf{W}_{0})+\ell(\mathbf{W}_{0};z_{i})+\rho\eta jL_{S^{(i)}}( \mathbf{W}_{0})+\ell(\mathbf{W}_{0};z_{i}^{\prime})\Big{)}\prod_{\tilde{j}=j+1 }^{t}\frac{1+p}{1-2\eta\epsilon_{\tilde{j}}}\] \[\leq\frac{8\rho\eta^{2}(1+1/p)}{n^{2}}\prod_{\tilde{j}=1}^{t} \frac{1+p}{1-2\eta\epsilon_{\tilde{j}}}\sum_{j=0}^{t}\Big{(}\rho\eta jL_{S}( \mathbf{W}_{0})+\ell(\mathbf{W}_{0};z_{i})+\rho\eta jL_{S^{(i)}}(\mathbf{W}_{ 0})+\ell(\mathbf{W}_{0};z_{i}^{\prime})\Big{)}\] \[=\frac{4\rho\eta^{2}(1+1/p)}{n^{2}}\prod_{\tilde{j}=1}^{t}\frac{1 +p}{1-2\eta\epsilon_{\tilde{j}}}\Big{(}\rho\eta t(t+1)\big{(}L_{S}(\mathbf{W }_{0})+L_{S^{(i)}}(\mathbf{W}_{0})\big{)}+2(t+1)\big{(}\ell(\mathbf{W}_{0};z_{ i})\] \[\quad+\ell(\mathbf{W}_{0};z_{i}^{\prime}))\Big{)}\] \[\leq\frac{8\rho\eta^{2}c_{0}(1+1/p)(1+t)\big{(}\rho\eta t+2)}{n^{ 2}}\prod_{\tilde{j}=1}^{t}\frac{1+p}{1-2\eta\epsilon_{\tilde{j}}},\]
where we used \(\ell(\mathbf{W}_{0};z)\leq c_{0}\) for any \(z\in\mathcal{Z}\). If we further choose \(p=1/t\), then there holds
\[\left\|\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\right\|_{2}^{2}\leq\frac{8\rho \eta^{2}c_{0}e(1+t)^{2}(\rho\eta t+2)}{n^{2}}\prod_{\tilde{j}=1}^{t}\frac{1}{1 -2\eta\epsilon_{\tilde{j}}},\] (A.10)
where we used \((1+1/t)^{t}\leq e\).
Now we prove by induction that
\[\left\|\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\right\|_{2}\leq\frac{2\eta eT \sqrt{2c_{0}\rho(\rho\eta T+2)}}{n}.\] (A.11)
The claimed bound (A.11) holds trivially for \(k=0\) since \(\mathbf{W}_{0}=\mathbf{W}_{0}^{(i)}\). Assume it holds for all \(k\leq t\), i.e., for all \(k\leq t\)
\[\left\|\mathbf{W}_{k}-\mathbf{W}_{k}^{(i)}\right\|_{2}\leq\frac{2\eta eT\sqrt{ 2c_{0}\rho(\rho\eta T+2)}}{n}.\] (A.12)
and we want to show it holds for \(k=t+1\leq T\). Recall that \(\epsilon_{k}=\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\big(\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}}{m^{c-1/2}}(1+\eta\rho)\big\|\mathbf{W}_{k}-\mathbf{W}_{k}^{(i)}\big\|_{2}+\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\big)\). From (A.12), for any \(\tilde{j}\leq t\), we know
\[\epsilon_{\tilde{j}}\leq\epsilon^{\prime}:=\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big(\frac{2\sqrt{2c_{0}\rho(\rho\eta T+2)}\,\eta eT(1+\eta\rho)c_{\mathbf{x}}B_{\sigma^{\prime}}}{nm^{c-1/2}}+\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\Big).\]
Putting the above inequality back into (A.10), we get
\[\left\|\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\right\|_{2}^{2}\leq\frac{8\rho \eta^{2}c_{0}e(1+t)^{2}(\rho\eta t+2)}{n^{2}}\Big{(}\frac{1}{1-2\eta\epsilon^ {\prime}}\Big{)}^{t}.\]
If \(m\) is large enough such that \(2\eta\epsilon^{\prime}\leq 1/(t+1)\), then we can show
\[\Big{(}\frac{1}{1-2\eta\epsilon^{\prime}}\Big{)}^{t}\leq\Big{(}\frac{1}{1-1/( t+1)}\Big{)}^{t}\leq e.\] (A.13)
Then there holds
\[\left\|\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\right\|_{2}\leq\frac{2\eta e(1+t )\sqrt{2\rho c_{0}(\rho\eta t+2)}}{n}\leq\frac{2\eta eT\sqrt{2\rho c_{0}(\rho \eta T+2)}}{n}.\] (A.14)
Now, we discuss the conditions on \(m\). Suppose \(m\) satisfies the following conditions
\[m\geq C_{1}\Big{(}\frac{(\eta T)^{2}(1+\eta\rho)\sqrt{\rho(\rho\eta T+2)}}{n} \Big{)}^{\frac{2}{4c-1}},m\geq C_{2}(\eta T)^{\frac{3}{4c-1}}\text{ and }m\geq C_{3}\Big{(}\eta T\Big{)}^{\frac{1}{c}},\]
where \(C_{1}=(8ec_{\mathbf{x}}^{2}B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}\sqrt{2c_{0}})^{\frac{2}{4c-1}}\), \(C_{2}=\big(4\sqrt{2c_{0}}c_{\mathbf{x}}^{2}B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}\big)^{\frac{3}{4c-1}}\) and \(C_{3}=(8\sqrt{2c_{0}}c_{\mathbf{x}}B_{\sigma^{\prime\prime}})^{1/c}\). Then it is easy to verify that
\[\frac{2\eta c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big(\frac{2\sqrt{2c_{0}\rho(\rho\eta T+2)}\,\eta eT(1+\eta\rho)c_{\mathbf{x}}B_{\sigma^{\prime}}}{nm^{c-1/2}}+\frac{c_{\mathbf{x}}B_{\sigma^{\prime}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\Big)\leq\frac{1}{T}\leq\frac{1}{1+t},\]
which ensures that \(2\eta\epsilon^{\prime}\leq 1/(t+1)\), and then (A.14) holds. The proof is completed.
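The stability bound of Theorem A.5 can also be observed empirically; the following simulation (our own construction, not from the paper) trains GD on \(S\) and on a neighbouring sample \(S^{(i)}\) and reports the resulting parameter gap, which the theorem predicts to scale as \(\mathcal{O}(\eta T^{3/2}/n)\).

```python
# Illustrative simulation (ours, not from the paper) of Theorem A.5: run GD
# on S and on a neighbouring sample S^(i) (one point replaced) and report
# ||W_T - W_T^(i)||_2, which the theorem bounds by O(eta * T^{3/2} / n).
import numpy as np

rng = np.random.default_rng(3)
n, d, m, c, eta, T = 100, 5, 512, 0.75, 0.1, 300
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.normal(size=n)
a = rng.choice([-1.0, 1.0], size=m)
W0 = rng.normal(size=(m, d))

def gd(X, Y, steps):
    W = W0.copy()
    for _ in range(steps):
        pre = X @ W.T
        r = np.tanh(pre) @ a / m ** c - Y
        G = ((r[:, None] * (1 - np.tanh(pre) ** 2) * a[None, :]).T @ X) / (len(Y) * m ** c)
        W -= eta * G
    return W

Xp, Yp = X.copy(), Y.copy()
v = rng.normal(size=d)
Xp[0], Yp[0] = v / np.linalg.norm(v), 0.0        # replace z_1 by a fresh z_1'
gap = np.linalg.norm(gd(X, Y, T) - gd(Xp, Yp, T))
print("parameter gap after T steps:", gap)       # small, and shrinking in n
```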
We can now combine Theorem A.5 with Lemma 1 to obtain the upper bound on the generalization error.
Proof of Theorem 2.: Combining Eq. (A.9) with \(p=1/t\) and Eq. (A.13) implies
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2} \leq\frac{2e^{2}\eta^{2}(1+t)}{n^{2}}\sum_{j=0}^{t}\Big{(}\big{\|} \nabla\ell(\mathbf{W}_{j};z_{i})\big{\|}_{2}^{2}+\big{\|}\nabla\ell(\mathbf{W }_{j}^{(i)};z_{i}^{\prime})\big{\|}_{2}^{2}\Big{)}\] \[\leq\frac{4e^{2}\eta^{2}\rho(1+t)}{n^{2}}\sum_{j=0}^{t}\Big{(} \ell(\mathbf{W}_{j};z_{i})+\ell(\mathbf{W}_{j}^{(i)};z_{i}^{\prime})\Big{)},\]
where in the last inequality we used self-bounding property of the smooth loss (Lemma A.1). Now, taking an average over \(i\in[n]\) and using \(\mathbb{E}\big{[}\ell(\mathbf{W}_{j};z_{i})\big{]}=\mathbb{E}\big{[}\ell( \mathbf{W}_{j}^{(i)};z_{i}^{\prime})\big{]}\), we have
\[\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{\|}\mathbf{W}_{t+1}- \mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2} \leq\frac{4e^{2}\eta^{2}\rho(1+t)}{n^{3}}\sum_{j=0}^{t}\Big{(} \sum_{i=1}^{n}\mathbb{E}\big{[}\ell(\mathbf{W}_{j};z_{i})\big{]}+\mathbb{E}[ \ell(\mathbf{W}_{j}^{(i)};z_{i}^{\prime})]\Big{)}\] \[=\frac{8e^{2}\eta^{2}\rho(1+t)}{n^{3}}\sum_{j=0}^{t}\sum_{i=1}^{n }\mathbb{E}\big{[}\ell(\mathbf{W}_{j};z_{i})\big{]}\] \[=\frac{8e^{2}\eta^{2}\rho(1+t)}{n^{2}}\sum_{j=0}^{t}\mathbb{E} \big{[}L_{S}(\mathbf{W}_{j})\big{]}.\]
Combining the above stability bounds with Lemma 1 together, we get
\[\mathbb{E}[L(\mathbf{W}_{t})-L_{S}(\mathbf{W}_{t})] \leq\frac{4e^{2}\eta^{2}\rho^{2}t}{n^{2}}\sum_{j=0}^{t-1}\mathbb{ E}\big{[}L_{S}(\mathbf{W}_{j})\big{]}+\Big{(}\frac{16e^{2}\eta^{2}\rho^{2}t \mathbb{E}[L_{S}(\mathbf{W}_{t})]}{n^{2}}\sum_{j=0}^{t-1}\mathbb{E}\big{[}L_{S} (\mathbf{W}_{j})\big{]}\Big{)}^{\frac{1}{2}}\] \[\leq\frac{4e^{2}\eta^{2}\rho^{2}t}{n^{2}}\sum_{j=0}^{t-1}\mathbb{ E}\big{[}L_{S}(\mathbf{W}_{j})\big{]}+\frac{4e\eta\rho}{n}\sum_{j=0}^{t-1} \mathbb{E}\big{[}L_{S}(\mathbf{W}_{j})\big{]},\]
where in the last inequality we used \(L_{S}(\mathbf{W}_{t})\leq\frac{1}{t}\sum_{j=0}^{t-1}L_{S}(\mathbf{W}_{j})\), which follows from the monotonicity of \(\{L_{S}(\mathbf{W}_{j})\}\) along GD [35]. The proof is completed.
### Proofs of Optimization Bounds
Before proving the optimization error bounds, we first introduce the following lemma on the boundedness of the GD iterates.
**Lemma A.6**.: _Suppose Assumptions 1 and 2 hold, and \(\eta\leq 1/(2\rho)\). Assume (4) and (5) hold. Then for any \(t\in[T]\), there holds_
\[1\vee\mathbb{E}[\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{t}\|_{2}^{2}]\leq\Big(\frac{8e^{2}\eta^{3}\rho^{2}t^{2}}{n^{2}}+\frac{8e\eta^{2}t\rho}{n}\Big)\sum_{s=0}^{t-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]+2\mathbb{E}\big[\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{0}\|_{2}^{2}\big]+2\eta T\big[L(\mathbf{W}^{*}_{\frac{1}{\eta T}})-L(\mathbf{W}^{*})\big].\]
Proof.: For any \(\mathbf{W},\widetilde{\mathbf{W}}\in\mathbb{R}^{md}\) and \(\alpha\in[0,1]\), define \(\mathbf{W}(\alpha):=\widetilde{\mathbf{W}}+\alpha(\mathbf{W}-\widetilde{ \mathbf{W}})\). Note that
\[\lambda_{\min}\big{(}\nabla^{2}L_{S}(\mathbf{W}(\alpha))\big{)}\] \[\geq-\max_{i}\{\|\nabla^{2}f(\mathbf{x}_{i})\|_{2}\}\Big{(}\frac {1}{n}\sum_{i=1}^{n}|f_{\mathbf{W}(\alpha)}(\mathbf{x}_{i})-y_{i}|\Big{)}\] \[\geq-\frac{c_{x}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\frac{1}{n} \Big{(}\sum_{i=1}^{n}\big{(}|f_{\mathbf{W}(\alpha)}(\mathbf{x}_{i})-f_{ \widetilde{\mathbf{W}}}(\mathbf{x}_{i})|+|f_{\widetilde{\mathbf{W}}}(\mathbf{ x}_{i})-f_{\mathbf{W}_{0}}(\mathbf{x}_{i})|+|f_{\mathbf{W}_{0}}(\mathbf{x}_{i})-y_{i}| \big{)}\Big{)}\] \[\geq-\frac{c_{x}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big{(} \frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}\|\mathbf{W}(\alpha)- \widetilde{\mathbf{W}}\|_{2}+\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/ 2}}\|\widetilde{\mathbf{W}}-\mathbf{W}_{0}\|_{2}+\sqrt{2L_{S}(\mathbf{W}_{0})} \Big{)}\] \[\geq-\frac{c_{x}^{2}B_{\sigma^{\prime\prime}}}{m^{c}}\Big{(} \frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}\|\mathbf{W}-\widetilde{ \mathbf{W}}\|_{2}+\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}\| \widetilde{\mathbf{W}}-\mathbf{W}_{0}\|_{2}+\sqrt{2c_{0}}\Big{)}.\]
Then for any \(t\in[T]\), let \(\widetilde{\mathbf{W}}=\mathbf{W}_{t}\), and define
\[g(\alpha):= L_{S}(\mathbf{W}(\alpha))+\frac{c_{\mathbf{x}}^{2}B_{\sigma^{ \prime\prime}}}{m^{c}}\frac{\alpha^{2}}{2}\Big{(}\frac{B_{\sigma^{\prime}}c_{ \mathbf{x}}}{m^{c-1/2}}\|\mathbf{W}-\mathbf{W}_{t}\|_{2}+\frac{B_{\sigma^{ \prime}}c_{\mathbf{x}}}{m^{c-1/2}}\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\sqrt {2c_{0}}\Big{)}\] \[\times(1\vee\mathbb{E}[\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2}]).\]
It is obvious that \(g^{\prime\prime}(\alpha)\geq 0\). Then \(g(\alpha)\) is convex in \(\alpha\in[0,1]\). Now, by convexity we know
\[g(1)-g(0) =L_{S}(\mathbf{W})+\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime \prime}}}{2m^{c}}\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}\| \mathbf{W}-\mathbf{W}_{t}\|_{2}+\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c- 1/2}}\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\sqrt{2c_{0}}\Big{)}\] \[\quad\times(1\vee\mathbb{E}[\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2} ])-L_{S}(\mathbf{W}_{t})\] \[\geq\langle\mathbf{W}-\mathbf{W}_{t},\nabla L_{S}(\mathbf{W}_{t })\rangle=g^{\prime}(0).\]
Rearranging the above inequality we get
\[L_{S}(\mathbf{W}_{t}) \leq L_{S}(\mathbf{W})+\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime \prime}}}{2m^{c}}\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}\| \mathbf{W}-\mathbf{W}_{t}\|_{2}+\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c- 1/2}}\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\sqrt{2c_{0}}\Big{)}\] \[\quad\times(1\vee\mathbb{E}[\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2} ])-\langle\mathbf{W}-\mathbf{W}_{t},\nabla L_{S}(\mathbf{W}_{t})\rangle.\] (A.15)
Combining (A.15) with the smoothness of the loss we can get
\[L_{S}(\mathbf{W}_{t+1}) \leq L_{S}(\mathbf{W}_{t})+\langle\nabla L_{S}(\mathbf{W}_{t}), \mathbf{W}_{t+1}-\mathbf{W}_{t}\rangle+\frac{\rho}{2}\|\mathbf{W}_{t+1}- \mathbf{W}_{t}\|_{2}^{2}\] \[\leq L_{S}(\mathbf{W}_{t})-\eta\langle\nabla L_{S}(\mathbf{W}_{t }),\nabla L_{S}(\mathbf{W}_{t})\rangle+\frac{\rho}{2}\|\mathbf{W}_{t+1}- \mathbf{W}_{t}\|_{2}^{2}\] \[\leq L_{S}(\mathbf{W}_{t})-\eta(1-\frac{\eta\rho}{2})\|\nabla L_{S }(\mathbf{W}_{t})\|_{2}^{2}\] \[\leq L_{S}(\mathbf{W}_{t})-\frac{\eta}{2}\|\nabla L_{S}(\mathbf{W} _{t})\|_{2}^{2}\] \[\leq L_{S}(\mathbf{W})+\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime \prime}}}{2m^{c}}\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}\| \mathbf{W}-\mathbf{W}_{t}\|_{2}+\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c- 1/2}}\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\sqrt{2c_{0}}\Big{)}\] \[\quad\times(1\vee\mathbb{E}[\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2} ])-\langle\mathbf{W}-\mathbf{W}_{t},\nabla L_{S}(\mathbf{W}_{t})\rangle-\frac{ \eta}{2}\|\nabla L_{S}(\mathbf{W}_{t})\|_{2}^{2},\]
where in the third inequality we used the update rule (2) and \(\eta\rho\leq 1\).
According to the equality \(2\langle x-y,x-z\rangle=\|x-y\|_{2}^{2}+\|x-z\|_{2}^{2}-\|y-z\|_{2}^{2}\), we know
\[-\langle\mathbf{W}-\mathbf{W}_{t},\nabla L_{S}(\mathbf{W}_{t}) \rangle-\frac{\eta}{2}\|\nabla L_{S}(\mathbf{W}_{t})\|_{2}^{2} =\frac{1}{\eta}\langle\mathbf{W}-\mathbf{W}_{t},\mathbf{W}_{t+1}- \mathbf{W}_{t}\rangle-\frac{1}{2\eta}\|\mathbf{W}_{t+1}-\mathbf{W}_{t}\|_{2}^{2}\] \[=\frac{1}{2\eta}\big{(}\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2}\!-\! \|\mathbf{W}_{t+1}-\mathbf{W}\|_{2}^{2}\big{)}.\]
Then there holds
\[L_{S}(\mathbf{W}_{t+1}) \leq L_{S}(\mathbf{W})+\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime \prime}}}{2m^{c}}\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}} \|\mathbf{W}-\mathbf{W}_{t}\|_{2}+\frac{B_{\sigma^{\prime}}c_{\mathbf{x}} \sqrt{2\eta Tc_{0}}}{m^{c-1/2}}+\sqrt{2c_{0}}\Big{)}\] \[\times(1\vee\mathbb{E}[\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2}])+ \frac{1}{2\eta}\Big{(}\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{2}-\|\mathbf{W}_{t+ 1}-\mathbf{W}\|_{2}^{2}\Big{)}.\] (A.16)
Taking \(\mathbf{W}=\mathbf{W}_{\frac{1}{\eta T}}^{*}\) in the above inequality, telescoping over the iterates, and combining with Theorem 2 implies
\[\frac{\mathbb{E}\big{[}\|\mathbf{W}_{t}-\mathbf{W}_{\frac{1}{\eta T }}^{*}\|_{2}^{2}\big{]}}{2\eta t}\] \[\leq\frac{1}{t}\sum_{s=0}^{t-1}\big{[}L(\mathbf{W}_{\frac{1}{ \eta T}}^{*})\!-\!\mathbb{E}[L(\mathbf{W}_{s})]\big{]}\!+\!\frac{\mathbb{E} \big{[}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{0}\|_{2}^{2}\big{]} }{2\eta t}\!+\!\Big{(}\frac{4e^{2}\eta^{2}\rho^{2}t}{n^{2}}\!+\!\frac{4e\eta \rho}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}\] \[\quad+\!\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{2m^{c }t}\!\sum_{s=0}^{t-1}\!\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c- 1/2}}\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{s}\|_{2}] \!+\!\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}} \!+\!\sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*} \!-\!\mathbf{W}_{s}\|_{2}^{2}])\] \[\leq L(\mathbf{W}_{\frac{1}{\eta T}}^{*})\!-\!L(\mathbf{W}^{*})\!+ \!\frac{\mathbb{E}\big{[}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{0 }\|_{2}^{2}\big{]}}{2\eta t}\!+\!\Big{(}\frac{4e^{2}\eta^{2}\rho^{2}t}{n^{2}} \!+\!\frac{4e\eta\rho}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}( \mathbf{W}_{s})\big{]}\] \[\quad+\!\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{2m^{c }t}\!\sum_{s=0}^{t}\!\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c- 1/2}}\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{s}\|_{2} ]\!+\!\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2} }\!+\!\sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{* }\!-\!\mathbf{W}_{s}\|_{2}^{2}]),\] (A.17)
where in the second inequality we used \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}_{s})\leq L(\mathbf{W}_{ \frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\) since \(L(\mathbf{W}_{s})\geq L(\mathbf{W}^{*})\) for any \(s\in[t-1]\).
On the other hand, using Lemma A.3 we can obtain
\[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}\leq\|\mathbf{W}_{ \frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{s}-\mathbf{W}_{0}\|_{ 2}\leq\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0} \|_{2}.\] (A.18)
Then we know
\[\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}} \mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{s}\|_{2}]\!+\! \frac{B_{\sigma^{\prime}}c_{\mathbf{x}}\sqrt{2\eta Tc_{0}}}{m^{c-1/2}}\!+\! \sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}- \mathbf{W}_{s}\|_{2}^{2}])\] \[\leq\Big{(}\frac{B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}} (2\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2} )+\sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}- \mathbf{W}_{s}\|_{2}^{2}]).\]
Plugging the above inequality back into (A.17) yields
\[\frac{\mathbb{E}\big[\|\mathbf{W}_{t}-\mathbf{W}_{\frac{1}{\eta T}}^{*}\|_{2}^{2}\big]}{2\eta t}\leq\frac{\mathbb{E}\big[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big]}{2\eta t}+\frac{\tilde{b}(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}t}\sum_{s=0}^{t}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}])\]
\[\quad+\Big(\frac{4e^{2}\eta^{2}\rho^{2}t}{n^{2}}+\frac{4e\eta\rho}{n}\Big)\sum_{s=0}^{t-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]+L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*}),\]

where \(\tilde{b}=\frac{c_{\mathbf{x}}^{2}B_{\sigma^{\prime\prime}}}{2}\big(\frac{2B_{\sigma^{\prime}}c_{\mathbf{x}}}{m^{c-1/2}}+\sqrt{2c_{0}}\big)\).

Multiplying both sides by \(2\eta t\) yields

\[\mathbb{E}\big[\|\mathbf{W}_{t}-\mathbf{W}_{\frac{1}{\eta T}}^{*}\|_{2}^{2}\big]\leq\mathbb{E}\big[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big]+\frac{2\tilde{b}\eta(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}}\sum_{s=0}^{t}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}])\]
\[\quad+\Big(\frac{8e^{2}\eta^{3}\rho^{2}t^{2}}{n^{2}}+\frac{8e\eta^{2}\rho t}{n}\Big)\sum_{s=0}^{t-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]+2\eta T\big[L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\big].\]

Let \(x=\max_{s\in[T]}\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}]\vee 1\). Then the above inequality implies

\[x\leq\mathbb{E}\big[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big]+\frac{2\tilde{b}\eta T(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}}x+\Big(\frac{8e^{2}\eta^{3}\rho^{2}t^{2}}{n^{2}}+\frac{8e\eta^{2}\rho t}{n}\Big)\sum_{s=0}^{t-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]+2\eta T\big[L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\big].\]

Without loss of generality, we assume \(\eta\leq 1\). Condition (5) implies \(m\geq\big(4\tilde{b}\eta T(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})\big)^{\frac{1}{c}}\), so that \(\frac{2\tilde{b}\eta T(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}}\leq\frac{1}{2}\). Hence

\[x\leq\mathbb{E}\big[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big]+\frac{x}{2}+\Big(\frac{8e^{2}\eta^{3}\rho^{2}t^{2}}{n^{2}}+\frac{8e\eta^{2}\rho t}{n}\Big)\sum_{s=0}^{t-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]+2\eta T\big[L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\big].\]
It then follows that
\[1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{9T}}^{*}-\mathbf{W}_{t}\|_{ 2}^{2}]\leq\Big{(}\frac{16e^{2}\eta^{3}\rho^{2}t^{2}}{n^{2}}+\frac{16e\eta^{2}t \rho}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}+ 2\mathbb{E}\big{[}\|\mathbf{W}_{\frac{1}{9T}}^{*}-\mathbf{W}_{0}\|_{2}^{2} \big{]}+2\eta T\big{[}L(\mathbf{W}_{\frac{1}{9T}}^{*})-L(\mathbf{W}^{*}) \big{]}.\]
This completes the proof.
Now, we can give the proof of Theorem 3.
Proof of Theorem 3.: Recall that \(\tilde{b}=\frac{c_{s}^{2}B_{s^{\prime\prime}}}{2}\big{(}\frac{2B_{s^{\prime}}c_ {\mathbf{s}}}{m^{c-1/2}}+\sqrt{2c_{0}}\big{)}\). Eq.(A.16) with \(\mathbf{W}=\mathbf{W}_{\frac{1}{9T}}^{*}\) implies
\[\frac{1}{T}\sum_{s=0}^{T-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})]\leq\mathbb{E}[L_{S}(\mathbf{W}_{\frac{1}{\eta T}}^{*})]+\frac{\tilde{b}(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}T}\sum_{s=0}^{T-1}1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}]+\frac{\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta T},\] (A.19)
where in the last inequality we used (A.18).
Further, since \(\{L_{S}(\mathbf{W}_{t})\}\) is monotonically decreasing, we know
\[\mathbb{E}[L_{S}(\mathbf{W}_{T})]\leq\mathbb{E}[L_{S}(\mathbf{W}_{ \frac{1}{\eta T}}^{*})]+\frac{\tilde{b}(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{ 1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}T}\sum_{s=0}^{T-1}1\vee\mathbb{E}[ \|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}]+\frac{\|\mathbf{ W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta T}.\]
Note that Lemma A.6 provides an upper bound on \(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}]\) for every \(s\in[T]\). Combining the above two inequalities together, we get

\[\mathbb{E}[L_{S}(\mathbf{W}_{T})]\leq\mathbb{E}[L_{S}(\mathbf{W}_{\frac{1}{\eta T}}^{*})]+\frac{2\tilde{b}(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}}\Big(\Big(\frac{4e^{2}\eta^{3}\rho^{2}T^{2}}{n^{2}}+\frac{4e\eta^{2}T\rho}{n}\Big)\sum_{s=0}^{T-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]\]
\[\qquad+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}+\eta T\big[L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\big]\Big)+\frac{\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta T}.\]
The theorem is proved.
**Lemma A.7**.: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) be produced by (2) with \(\eta\leq 1/(2\rho)\). Assume (4) and (5) hold. Then_
\[\sum_{s=0}^{T-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})]\leq 4TL(\mathbf{W}_{ \frac{1}{\eta T}}^{*})-2\eta TL(\mathbf{W}^{*})+\Big{(}\frac{2\tilde{b}T(\sqrt {2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c }}+\frac{1}{2\eta}\Big{)}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_ {2}^{2}.\]
Proof.: Multiplying \(T\) over both sides of (A.19) and using Lemma A.6 we get
\[\sum_{s=0}^{T-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})] \leq TL(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{\tilde{b}(\sqrt {2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{ c}}\sum_{s=0}^{T-1}1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}- \mathbf{W}_{s}\|_{2}^{2}]+\frac{\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W }_{0}\|_{2}^{2}}{2\eta}\] \[\leq TL(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{\tilde{b}T(\sqrt {2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{ c}}\Big{(}\big{(}\frac{16e^{2}\eta^{3}\rho^{2}T^{2}}{n^{2}}+\frac{16e\eta^{2}T \rho}{n}\big{)}\sum_{s=0}^{T-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}\] \[\quad+2\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^ {2}+2\eta T\big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*}) \big{]}\Big{)}+\frac{\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^ {2}}{2\eta}.\]
Since condition (5) implies \(m\geq\big(2\tilde{b}T(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})\big(\frac{16e^{2}\eta^{3}\rho^{2}T^{2}}{n^{2}}+\frac{16e\eta^{2}T\rho}{n}\big)\big)^{1/c}\) and \(m\geq\big(2\tilde{b}T(\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})\big)^{1/c}\), there holds
\[\sum_{s=0}^{T-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})]\leq 4TL(\mathbf{W}_{ \frac{1}{\eta T}}^{*})-2\eta TL(\mathbf{W}^{*})+\Big{(}\frac{2\tilde{b}T(\sqrt {2\eta Tc_{0}}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{ c}}+\frac{1}{2\eta}\Big{)}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2},\]
which completes the proof.
### Proofs of Excess Risk Bounds
Proof of Theorem 4.: According to Lemma A.7 and noting that \(\tilde{b}=\mathcal{O}(1)\), we know
\[\sum_{s=0}^{T-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})]= \mathcal{O}\Big{(}TL(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\Big{(}\frac{T( \sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c }}+\frac{1}{\eta}\Big{)}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2 }^{2}\Big{)}.\] (A.20)
The upper bound of the generalization error can be controlled by plugging (A.20) back into Theorem 2
\[\mathbb{E}[L(\mathbf{W}_{T})-L_{S}(\mathbf{W}_{T})]=\mathcal{O}\Big(\big(\frac{\eta^{2}\rho^{2}T^{2}}{n^{2}}+\frac{\eta\rho}{n}\big)\sum_{s=0}^{T-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})]\Big)\]
\[=\mathcal{O}\Big(\Big(\frac{\eta^{2}\rho^{2}T^{2}}{n^{2}}+\frac{\eta T\rho}{n}\Big)\Big(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\Big(\frac{\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}}{m^{c}}+\frac{1}{\eta T}\Big)\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\Big)\Big).\] (A.21)
The estimation of the optimization error is given by plugging (A.20) back into Theorem 3
\[\mathbb{E}\Big[L_{S}(\mathbf{W}_{T})-L_{S}(\mathbf{W}_{\frac{1}{\eta T}}^{*})-\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\Big]\]
\[=\mathcal{O}\Big(\frac{\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}}{m^{c}}\Big[\Big(\frac{\eta^{3}\rho^{2}T^{2}}{n^{2}}+\frac{\eta^{2}T\rho}{n}\Big)\sum_{s=0}^{T-1}\mathbb{E}\big[L_{S}(\mathbf{W}_{s})\big]+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}+\eta T\big[L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\big]\Big]\Big)\]
\[=\mathcal{O}\Big(\frac{\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}}{m^{c}}\Big(\frac{\eta^{3}\rho^{2}T^{2}}{n^{2}}+\frac{\eta^{2}T\rho}{n}\Big)\Big(\frac{T(\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})}{m^{c}}+\frac{1}{\eta}\Big)\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\]
\[\quad+\frac{\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}}{m^{c}}\Big(\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\Big)\Big),\] (A.22)
where we used \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\leq\frac{1}{2\eta T}\| \mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\).
Combining (A.21) and (A.22) together and noting that the approximation error \(\Lambda_{\frac{1}{\eta T}}=L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T }\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}-L(\mathbf{W}^{*})\) we get
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]\] \[= \Big{[}\mathbb{E}[L(\mathbf{W}_{T})-L_{S}(\mathbf{W}_{T})\Big{]} +\mathbb{E}\Big{[}L_{S}(\mathbf{W}_{T})-\big{(}L_{S}(\mathbf{W}_{\frac{1}{ \eta T}}^{*})+\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_ {0}\|_{2}^{2}\big{)}\Big{]}\] \[+\Big{[}L_{S}(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T }\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}-L(\mathbf{W}^{*}) \Big{]}\] \[= \mathcal{O}\Big{(}\frac{\eta T\rho}{n}\Big{(}\!\frac{\eta\rho}{n} \!+\!1\Big{)}\Big{(}1\!+\!\frac{\eta T(\sqrt{\eta T}\!+\!\|\mathbf{W}_{\frac{1} {\eta T}}^{*}\!-\!\mathbf{W}_{0}\|_{2})}{m^{c}}\Big{)}\Big{[}L(\mathbf{W}_{ \frac{1}{\eta T}}^{*})\!+\!\Big{(}\frac{1}{2\eta T}\!+\!\frac{\sqrt{\eta T}\!+\! \|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{0}\|_{2}}{m^{c}}\Big{)}\] \[\quad\times\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{0} \|_{2}^{2}\Big{]}+\frac{(\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}- \mathbf{W}_{0}\|_{2})}{m^{c}}\big{(}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}- \mathbf{W}_{0}\|_{2}^{2}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\big{)}+ \Lambda_{\frac{1}{\eta T}}\Big{)}.\]
Recall that \(\rho=\mathcal{O}(m^{1-2c})\). If \(\eta Tm^{1-2c}=\mathcal{O}(n)\) and \(\eta T(\sqrt{\eta T}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2})=\mathcal{O}(m^{c})\), then there holds \(\eta T\rho=\mathcal{O}(n)\) and \(\eta T(\sqrt{\eta T}+\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2})/m^{c}=\mathcal{O}(1)\). Then from the above bound we can get
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta T \rho}{n}\Big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T}\| \mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{0}\|_{2}^{2}\Big{]}+\frac{1} {\eta T}\big{(}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}\!-\!\mathbf{W}_{0}\|_{2}^{2}+\| \mathbf{W}^{*}\!-\!\mathbf{W}_{0}\|_{2}^{2}\big{)}+\Lambda_{\frac{1}{\eta T}} \Big{)}.\]
Combining the above bound with the facts \(\Lambda_{\frac{1}{\eta T}}\leq\frac{1}{2\eta T}\|\mathbf{W}^{*}\!-\!\mathbf{W}_{0}\|_{2}^ {2}\), \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{ \eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}=L(\mathbf{W}^{*})\!+\!\Lambda_{\frac{1}{ \eta T}}\) and \(\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}\leq\|\mathbf{W}^{*}- \mathbf{W}_{0}\|_{2}\) together we get
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta T \rho}{n}L(\mathbf{W}^{*})+\frac{1}{\eta T}\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^ {2}\Big{)}.\]
The proof is completed.
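The bound just proved trades a stability term that grows with \(\eta T\) against an optimization term that shrinks with \(\eta T\); the following toy computation (with placeholder constants of our own choosing) makes this trade-off visible.

```python
# Toy evaluation (ours, with placeholder constants) of the two terms in the
# bound of Theorem 4: stability ~ eta*T*rho/n * L(W*), optimization ~ D/(eta*T).
import numpy as np

n, rho, L_star, D = 10_000, 0.05, 0.1, 4.0       # assumed illustrative values
for etaT in np.logspace(0, 4, 9):
    bound = etaT * rho / n * L_star + D / etaT
    print(f"eta*T = {etaT:10.1f}  ->  bound ~ {bound:.4f}")
# the two terms balance at eta*T ~ sqrt(n * D / (rho * L_star))
```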
Proof of Corollary 5.: **Part (a). Case 1.** Without loss of generality, we treat \(\|\mathbf{W}_{0}\|_{2}\) as a constant. We first consider the case \(2c+6\mu-3>0\). To ensure conditions (4) and (5) and \(\eta T(\sqrt{\eta T}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2})=\mathcal{O}(m^{c})\) hold, we set \(m\asymp(\eta T)^{\frac{3}{2c}}\) for this case. Then according to Theorem 4 we know
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}(\eta T)^{ \frac{3-4c}{2c}}n^{-1}+(\eta T)^{\frac{3-6\mu-2c}{2c}}\Big{)}.\]
If \(c<3/4\), under the condition \(\eta T\lesssim n^{\frac{c}{3-4c}}\) and \(n^{\frac{c}{6\mu+2c-3}}\lesssim\eta T\), there holds \((\eta T)^{\frac{3-4c}{2c}}n^{-1}+(\eta T)^{\frac{3-6\mu-2c}{2c}}=\mathcal{O}( 1/\sqrt{n})\). To ensure the above-mentioned conditions hold simultaneously, we further require \(c+\mu\geq 1\) such that \(n^{\frac{c}{3-4c}}\gtrsim n^{\frac{c}{6\mu+2c-3}}\). Therefore, if \(c\in[1-\mu,3/4)\) and \(c+3\mu>3/2\), we can obtain
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{1}{ \sqrt{n}}\Big{)}\]
with \(m\asymp(\eta T)^{\frac{3}{2c}}\) and \(n^{\frac{c}{6\mu+2c-3}}\lesssim\eta T\lesssim n^{\frac{c}{3-4c}}\).
If \(c\geq 3/4\), for any \(\eta T\geq 1\) and \(n\geq 1\), there holds \((\eta T)^{\frac{3-4c}{2c}}n^{-1}=\mathcal{O}(1/\sqrt{n})\). Similar to before, if \(\eta T\gtrsim n^{\frac{c}{6\mu+2c-3}}\), there holds \((\eta T)^{\frac{3-6\mu-2c}{2c}}=\mathcal{O}(1/\sqrt{n})\). Then we can obtain the excess risk bound \(\mathcal{O}(1/\sqrt{n})\) with \(m\asymp(\eta T)^{\frac{3}{2c}}\) and \(\eta T\gtrsim n^{\frac{c}{6\mu+2c-3}}\).
**Case 2.** For the case \(2c+6\mu-3\leq 0\), we can choose \(m\asymp(\eta T)^{\frac{1}{c+\mu-1/2}}\) to ensure conditions (4) and (5) and \(\eta T(\sqrt{\eta T}+\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2})=\mathcal{O}(m^{c})\) hold. From Theorem 4 we know
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}(\eta T)^{ \frac{1-2c+2\mu}{2c+2\mu-1}}n^{-1}+(\eta T)^{\frac{3-6\mu-2c}{2c+2\mu-1}} \big{)}.\]
Note that \(3-6\mu-2c\geq 0\) and \(2c+2\mu-1>0\), so the term \((\eta T)^{\frac{3-6\mu-2c}{2c+2\mu-1}}\) does not vanish for any choice of \(\eta T\) in this case. The proof of Part (a) is completed.
**Part (b).** Now, we consider the low noise case, i.e., \(L(\mathbf{W}^{*})=0\). Theorem 4 with \(L(\mathbf{W}^{*})=0\) implies
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{m^{1 -2\mu}}{\eta T}\Big{)}.\]
Similar to part (a), we can set \(m\asymp(\eta T)^{\frac{3}{2c}}\) and \(\eta T\gtrsim n^{\frac{2c}{6\mu+2c-3}}\) and obtain
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{n} \big{)}.\]
We can check that this choice of \(m\) and \(\eta T\) satisfies conditions (4) and (5). The proof is completed.
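For concreteness, the admissible choices in part (a) can be packaged into a small helper; the sketch below encodes our reading of Case 1 with \(c\in[1-\mu,3/4)\) (the function name and the midpoint choice of \(\eta T\) are ours, for illustration only).

```python
# A small helper (ours) encoding part (a), Case 1 of Corollary 5: for given
# n, c, mu with c in [1 - mu, 3/4) and c + 3*mu > 3/2, it returns a width
# m ~ (eta*T)^{3/(2c)} and an admissible eta*T in the interval
# [n^{c/(6mu+2c-3)}, n^{c/(3-4c)}] yielding the O(1/sqrt(n)) excess risk rate.
def corollary5_schedule(n: int, c: float, mu: float):
    assert 1 - mu <= c < 0.75 and c + 3 * mu > 1.5, "outside the regime of part (a)"
    lo = n ** (c / (6 * mu + 2 * c - 3))   # lower end of admissible eta*T
    hi = n ** (c / (3 - 4 * c))            # upper end of admissible eta*T
    etaT = (lo * hi) ** 0.5                # any point in [lo, hi] works
    m = etaT ** (3 / (2 * c))              # width scaling from the corollary
    return etaT, m

print(corollary5_schedule(n=10_000, c=0.7, mu=0.4))
```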
**Remark A.1**.: Several works [11, 18, 26, 41, 31] studied the stability behavior of stochastic gradient methods for non-convex losses, which can be applied to two-layer networks. Specifically, to obtain meaningful stability bounds, [18] required a time-dependent step size \(\eta_{t}=1/t\), which is insufficient to obtain a good convergence rate for the optimization error. [11, 26, 41] established generalization bounds by introducing the Polyak-Lojasiewicz condition, which depends on a problem-dependent number; this number might be large in practice and then results in a worse generalization bound. [12] established a generalization bound of \(\mathcal{O}(1/\sqrt{n})\) using mean-field techniques [30] for two-layer infinitely wide NNs trained with noisy gradient descent. It is hard to provide a direct comparison with their results since the learning settings are different.
## Appendix B Proofs of Three-layer Neural Networks
### Proofs of Generalization Bounds
For a matrix \(\mathbf{W}\), let \(\mathbf{W}_{s;}\) and \(\mathbf{W}_{is}\) denote the \(s\)-th row and the \((i,s)\)-th entry of \(\mathbf{W}\), respectively.
**Lemma B.1** (Smoothness and Curvature).: _Suppose Assumptions 1 and 2 hold. For any fixed \(\mathbf{W}=(\mathbf{W}^{(1)},\mathbf{W}^{(2)})\in\mathbb{R}^{m\times d}\times \mathbb{R}^{m\times m}\) and any \(z\in\mathcal{Z}\), there holds_
\[\lambda_{\max}\big{(}\nabla^{2}\ell(\mathbf{W};z)\big{)}\leq\rho_{ \mathbf{W}}\text{ with }\] \[\rho_{\mathbf{W}}=\frac{B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2} }{m^{4c-1}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}} ^{2}B_{\sigma}^{2}}{m^{4c-2}}+C_{\mathbf{W}}\Big{(}\frac{B_{\sigma^{\prime}}B_ {\sigma}}{m^{2c-1}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}+\sqrt{2c_{0}}\Big{)},\] \[\lambda_{\min}\big{(}\nabla^{2}\ell(\mathbf{W};z)\big{)}\!\geq\! -C_{\mathbf{W}}\Big{(}\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{\|} \mathbf{W}^{(2)}\big{\|}_{2}+\sqrt{2c_{0}}\Big{)},\]
_where \(C_{\mathbf{W}}=\frac{B_{\sigma^{\prime}}^{2}B_{\sigma^{\prime\prime}}c_{ \mathbf{x}}^{2}}{m^{3c}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}^{2}+\Big{(}\frac {B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2}}{m^{2c-\frac{ 1}{2}}}+\frac{2B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}^{2}B_{\sigma}c_{ \mathbf{x}}}{m^{3c-\frac{1}{2}}}\Big{)}\|\mathbf{W}^{(2)}\|_{2}+\frac{B_{ \sigma^{\prime\prime}}B_{\sigma}^{2}}{m^{3c-\frac{1}{2}}}+\frac{2B_{\sigma^{ \prime}}^{2}c_{\mathbf{x}}^{2}}{m^{2c-\frac{1}{2}}}\)._
Proof.: Let \(A_{\mathbf{W}^{(1)}}=[\sigma(\mathbf{W}^{(1)}_{1;}\mathbf{x}),\ldots,\sigma(\mathbf{W}^{(1)}_{m;}\mathbf{x})]^{\top}\in\mathbb{R}^{m}\), and for a row vector \(\mathbf{w}\) write \(\tilde{\mathbf{w}}=\mathbf{w}^{\top}\). We first estimate the upper bound of \(\|\nabla f_{\mathbf{W}}(\mathbf{x})\|_{2}\). Note that for any \(k=1,\ldots,m\)
\[\nabla_{\tilde{\mathbf{w}}^{(1)}_{k}}f_{\mathbf{W}}(\mathbf{x})=\frac{\partial f _{\mathbf{W}}(\mathbf{x})}{\partial\tilde{\mathbf{w}}^{(1)}_{k}}=\frac{1}{m^{ 2c}}\sum_{i=1}^{m}a_{i}\sigma^{\prime}\Big{(}\frac{1}{m^{c}}\sum_{s=1}^{m} \mathbf{W}^{(2)}_{is}\sigma(\mathbf{W}^{(1)}_{s;}\mathbf{x})\Big{)}\mathbf{W} ^{(2)}_{ik}\sigma^{\prime}(\mathbf{W}^{(1)}_{k;}\mathbf{x})\mathbf{x}\]
and
\[\nabla_{\tilde{\mathbf{w}}^{(2)}_{k}}f_{\mathbf{W}}(\mathbf{x})=\frac{ \partial f_{\mathbf{W}}(\mathbf{x})}{\partial\tilde{\mathbf{w}}^{(2)}_{k}}= \frac{a_{k}}{m^{2c}}\sigma^{\prime}\Big{(}\frac{1}{m^{c}}\sum_{s=1}^{m} \mathbf{W}^{(2)}_{ks}\sigma(\mathbf{W}^{(1)}_{s;}\mathbf{x})\Big{)}A_{\mathbf{ W}^{(1)}}.\]
According to Assumptions 1 and 2, the upper bound of the gradient can be controlled as follows
\[\|\nabla f_{\mathbf{W}}(\mathbf{x})\|_{2}^{2} =\sum_{k=1}^{m}\Big{(}\big{\|}\nabla_{\tilde{\mathbf{w}}^{(1)}_{k }}f_{\mathbf{W}}(\mathbf{x})\big{\|}_{2}^{2}+\big{\|}\nabla_{\tilde{\mathbf{w} }^{(2)}_{k}}f_{\mathbf{W}}(\mathbf{x})\big{\|}_{2}^{2}\Big{)}\] \[\leq\frac{B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2}}{m^{4c-1}} \sum_{k=1}^{m}\sum_{i=1}^{m}\big{|}\mathbf{W}^{(2)}_{ik}\big{|}^{2}+\frac{B_{ \sigma^{\prime}}^{2}B_{\sigma}^{2}}{m^{4c-2}}\] \[\leq\frac{B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2}}{m^{4c-1}}\| \mathbf{W}^{(2)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}}^{2}B_{\sigma}^{2}}{m ^{4c-2}}.\] (B.1)
For any \(k,j\in[m]\), we know
\[\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\big(\partial\tilde{\mathbf{w}}^{(1)}_{k}\big)^{2}}=\frac{1}{m^{3c}}\sum_{i=1}^{m}a_{i}\sigma^{\prime\prime}\Big(\frac{1}{m^{c}}\sum_{s=1}^{m}\mathbf{W}^{(2)}_{is}\sigma(\mathbf{W}^{(1)}_{s;}\mathbf{x})\Big)\big(\mathbf{W}^{(2)}_{ik}\big)^{2}\big(\sigma^{\prime}(\mathbf{W}^{(1)}_{k;}\mathbf{x})\big)^{2}\mathbf{x}\mathbf{x}^{\top}\]
\[\quad+\frac{1}{m^{2c}}\sum_{i=1}^{m}a_{i}\sigma^{\prime}\Big(\frac{1}{m^{c}}\sum_{s=1}^{m}\mathbf{W}^{(2)}_{is}\sigma(\mathbf{W}^{(1)}_{s;}\mathbf{x})\Big)\mathbf{W}^{(2)}_{ik}\sigma^{\prime\prime}(\mathbf{W}^{(1)}_{k;}\mathbf{x})\mathbf{x}\mathbf{x}^{\top},\]
\[\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\big{(}\partial \tilde{\mathbf{w}}^{(2)}_{k}\big{)}^{2}}=\frac{a_{k}}{m^{3c}}\sigma^{\prime \prime}\Big{(}\frac{1}{m^{c}}\sum_{s=1}^{m}\mathbf{W}^{(2)}_{ks}\sigma(\mathbf{W} ^{(1)}_{s;}\mathbf{x})\Big{)}A_{\mathbf{W}^{(1)}}A_{\mathbf{W}^{(1)}}^{\top}\]
and
\[\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\partial\tilde{ \mathbf{w}}^{(1)}_{k}\partial\tilde{\mathbf{w}}^{(2)}_{j}}= \frac{1}{m^{3c}}a_{j}\sigma^{\prime\prime}\Big{(}\frac{1}{m^{c}} \sum_{s=1}^{m}\mathbf{W}^{(2)}_{js}\sigma(\mathbf{W}^{(1)}_{s;}\mathbf{x}) \Big{)}\mathbf{W}^{(2)}_{jk}\sigma^{\prime}(\mathbf{W}^{(1)}_{k;}\mathbf{x}) \mathbf{x}A_{\mathbf{W}^{(1)}}^{\top}\] \[+\frac{1}{m^{2c}}a_{j}\sigma^{\prime}\Big{(}\frac{1}{m^{c}}\sum_{s= 1}^{m}\mathbf{W}^{(2)}_{js}\sigma(\mathbf{W}^{(1)}_{s;}\mathbf{x})\Big{)} \sigma^{\prime}(\mathbf{W}^{(1)}_{k;}\mathbf{x})\mathbf{x}B_{k}\]
where \(B_{k}\in\mathbb{R}^{1\times m}\) denotes the vector whose \(k\)-th element is \(1\) and all other elements are \(0\). Let the vector \(\mathbf{u}\in\mathbb{R}^{md+m^{2}}\) have unit norm \(\|\mathbf{u}\|_{2}=1\) and be composed in a manner matching the parameter \(\mathbf{W}=(\mathbf{W}^{(1)},\mathbf{W}^{(2)})\), so that \(\mathbf{u}=(\mathbf{u}^{(1)},\mathbf{u}^{(2)})\), where \(\mathbf{u}^{(1)}\in\mathbb{R}^{m\times d}\) and \(\mathbf{u}^{(2)}\in\mathbb{R}^{m\times m}\) have been vectorised in a row-major manner with \(\mathbf{u}^{(1)}_{k}\in\mathbb{R}^{1\times d}\) and \(\mathbf{u}^{(2)}_{k}\in\mathbb{R}^{1\times m}\). Then
\[\mathbf{u}^{\top}\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\mathbf{u}=\sum_{k=1}^{m}\mathbf{u}^{(1)}_{k}\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\big(\partial\tilde{\mathbf{w}}^{(1)}_{k}\big)^{2}}\big(\mathbf{u}^{(1)}_{k}\big)^{\top}+\sum_{k=1}^{m}\mathbf{u}^{(2)}_{k}\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\big(\partial\tilde{\mathbf{w}}^{(2)}_{k}\big)^{2}}\big(\mathbf{u}^{(2)}_{k}\big)^{\top}+2\sum_{k=1}^{m}\sum_{j=1}^{m}\mathbf{u}^{(1)}_{k}\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\partial\tilde{\mathbf{w}}^{(1)}_{k}\partial\tilde{\mathbf{w}}^{(2)}_{j}}\big(\mathbf{u}^{(2)}_{j}\big)^{\top}.\] (B.2)
We estimate the above three terms separately. Let \(\mathbf{W}_{\cdot k}^{(2)}\) denote the \(k\)-th column of \(\mathbf{W}^{(2)}\).
\[\sum_{k=1}^{m}\mathbf{u}_{k}^{(1)}\frac{\partial^{2}f_{\mathbf{W}}( \mathbf{x})}{\left(\partial\tilde{\mathbf{w}}_{k}^{(1)}\right)^{2}}\big{(} \mathbf{u}_{k}^{(1)}\big{)}^{\top}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}^{2}}{m^{3c }}\sum_{k=1}^{m}\mathbf{u}_{k}^{(1)}\sum_{i=1}^{m}\big{(}\mathbf{W}_{ik}^{(2)} \big{)}^{2}\mathbf{x}\mathbf{x}^{\top}\big{(}\mathbf{u}_{k}^{(1)}\big{)}^{\top }+\frac{B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}}{m^{2c}}\sum_{k=1}^{m} \mathbf{u}_{k}^{(1)}\sum_{i=1}^{m}\mathbf{W}_{ik}^{(2)}\mathbf{x}\mathbf{x}^{ \top}\big{(}\mathbf{u}_{k}^{(1)}\big{)}^{\top}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}^{2}}{m^{3c }}\sum_{k=1}^{m}\sum_{i=1}^{m}\big{(}\mathbf{W}_{ik}^{(2)}\big{)}^{2}\big{\|} \mathbf{u}_{k}^{(1)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}}B_{\sigma^{ \prime\prime}}}{m^{2c}}\sqrt{\sum_{k=1}^{m}\big{(}\sum_{i=1}^{m}\mathbf{W}_{ ik}^{(2)}\big{)}^{2}}\sqrt{\sum_{k=1}^{m}\big{(}\mathbf{u}_{k}^{(1)}\mathbf{x} \mathbf{x}^{\top}\big{(}\mathbf{u}_{k}^{(1)}\big{)}^{\top}\big{)}^{2}}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}^{2}C_{ \mathbf{x}}^{2}}{m^{3c}}\sum_{k=1}^{m}\sum_{i=1}^{m}\big{(}\mathbf{W}_{ik}^{(2) }\big{)}^{2}\big{\|}\mathbf{u}_{k}^{(1)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{ \prime}}B_{\sigma^{\prime\prime}}}{m^{2c}}\sqrt{m\sum_{k=1}^{m}\sum_{i=1}^{m} \big{(}\mathbf{W}_{ik}^{(2)}\big{)}^{2}}\sum_{k=1}^{m}\big{(}\mathbf{u}_{k}^{ (1)}\mathbf{x}\mathbf{x}^{\top}\big{(}\mathbf{u}_{k}^{(1)}\big{)}^{\top}\big{)}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}^{2}C_{ \mathbf{x}}^{2}}{m^{3c}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}^{2}+\frac{B_{ \sigma^{\prime}}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2}}{m^{2c-\frac{1}{2} }}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2},\] (B.3)
where we used \(\big{(}\sum_{i=1}^{m}\mathbf{W}_{ik}^{(2)}\big{)}^{2}\leq m\sum_{i=1}^{m} \big{(}\mathbf{W}_{ik}^{(2)}\big{)}^{2}\) and \(\sum_{k=1}^{m}\big{(}\mathbf{u}_{k}^{(1)}\mathbf{x}\mathbf{x}^{\top}\big{(} \mathbf{u}_{k}^{(1)}\big{)}^{\top}\big{)}^{2}\leq\big{(}\sum_{k=1}^{m}\mathbf{ u}_{k}^{(1)}\mathbf{x}\mathbf{x}^{\top}\big{(}\mathbf{u}_{k}^{(1)}\big{)}^{\top} \big{)}^{2}\) in the third inequality, and the last inequality follows from \(\mathbf{u}_{k}^{(1)}\mathbf{x}\mathbf{x}^{\top}\big{(}\mathbf{u}_{k}^{(1)} \big{)}^{\top}=\big{(}\mathbf{u}_{k}^{(1)}\mathbf{x}\big{)}^{2}\leq c_{\mathbf{ x}}^{2}\|\mathbf{u}_{k}^{(1)}\|_{2}^{2}\) and \(\|\mathbf{u}_{k}^{(1)}\|_{2}^{2}\leq 1\).
For the second term in (B.2), we control it by
\[\sum_{k=1}^{m}\mathbf{u}_{k}^{(2)}\frac{\partial^{2}f_{\mathbf{W}}(\mathbf{x})}{\big(\partial\tilde{\mathbf{w}}_{k}^{(2)}\big)^{2}}\big(\mathbf{u}_{k}^{(2)}\big)^{\top}\leq\frac{B_{\sigma^{\prime\prime}}}{m^{3c}}\sum_{k=1}^{m}\big(\mathbf{u}_{k}^{(2)}A_{\mathbf{W}^{(1)}}\big)^{2}\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma}^{2}}{m^{3c-1}}\big\|\mathbf{u}^{(2)}\big\|_{2}^{2}\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma}^{2}}{m^{3c-1}}.\] (B.4)
Further, according to Cauchy-Schwarz inequality, we can get
\[\sum_{k=1}^{m}\sum_{j=1}^{m}\mathbf{u}_{k}^{(1)}\frac{\partial^{2}f _{\mathbf{W}}(\mathbf{x})}{\partial\tilde{\mathbf{w}}_{k}^{(1)}\partial\tilde{ \mathbf{w}}_{j}^{(2)}}\big{(}\mathbf{u}_{j}^{(2)}\big{)}^{\top}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}B_{\sigma}}{m ^{3c}}\sqrt{\sum_{j=1}^{m}\big{\|}\mathbf{u}_{j}^{(2)}\big{\|}_{2}^{2}}\sqrt{ \sum_{j=1}^{m}\big{\|}\mathbf{W}_{j;}^{(2)}\big{\|}_{2}^{2}\big{\|}\mathbf{u}^{( 1)}\mathbf{x}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c }}\sqrt{\sum_{k=1}^{m}\big{\|}\mathbf{u}_{k}^{(1)}\big{\|}_{2}^{2}}\sqrt{\sum_{ k=1}^{m}\big{\|}\mathbf{u}_{k}^{(2)}\big{\|}_{2}^{2}}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}B_{\sigma}}{m ^{3c}}\sqrt{m\sum_{j=1}^{m}\big{\|}\mathbf{u}_{j}^{(2)}\big{\|}_{2}^{2}}\sqrt{ \sum_{j=1}^{m}\big{\|}\mathbf{W}_{j;}^{(2)}\big{\|}_{2}^{2}\big{\|}\mathbf{u}^{( 1)}\mathbf{x}\big{\|}_{2}^{2}}+\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c }}\sqrt{\sum_{k=1}^{m}\big{\|}\mathbf{u}_{k}^{(1)}\big{\|}_{2}^{2}}\sqrt{m \sum_{k=1}^{m}\big{\|}\mathbf{u}_{k}^{(2)}\big{\|}_{2}^{2}}\] \[\leq\frac{B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}B_{\sigma}c_{ \mathbf{x}}}{m^{3c-\frac{1}{2}}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}+\frac{B_{ \sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-\frac{1}{2}}},\] (B.5)
where in the first equality we used \(\sum_{k=1}^{m}\mathbf{u}_{k}^{(1)}\mathbf{W}_{jk}^{(2)}\mathbf{x}=\mathbf{W}_{j;}^{(2)}\mathbf{u}^{(1)}\mathbf{x}\), and the second inequality follows from \(\big|\sum_{j=1}^{m}\mathbf{u}_{jk}^{(2)}\big|\leq\big\|\mathbf{u}_{\cdot k}^{(2)}\big\|_{1}\), where \(\mathbf{u}_{\cdot k}^{(2)}\) denotes the \(k\)-th column of \(\mathbf{u}^{(2)}\).
Plugging (B.3), (B.4) and (B.5) back into (B.2) we can get
\[\big\|\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\big\|_{op}\leq\frac{B_{\sigma^{\prime}}^{2}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2}}{m^{3c}}\big\|\mathbf{W}^{(2)}\big\|_{2}^{2}+\Big(\frac{B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2}}{m^{2c-\frac{1}{2}}}+\frac{2B_{\sigma^{\prime\prime}}B_{\sigma^{\prime}}^{2}B_{\sigma}c_{\mathbf{x}}}{m^{3c-\frac{1}{2}}}\Big)\|\mathbf{W}^{(2)}\|_{2}+\frac{B_{\sigma^{\prime\prime}}B_{\sigma}^{2}}{m^{3c-\frac{1}{2}}}+\frac{2B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}^{2}}{m^{2c-\frac{1}{2}}}=C_{\mathbf{W}}.\] (B.6)
For any \(\mathbf{W},\widetilde{\mathbf{W}}\), according to Assumptions 1 and 2 we can get
\[\big{|}f_{\mathbf{W}}(\mathbf{x})-f_{\widetilde{\mathbf{W}}}( \mathbf{x})\big{|}\] \[\leq\frac{1}{m^{c}}\sum_{k=1}^{m}\big{|}\sigma\Big{(}\frac{1}{m^{c }}\sum_{s=1}^{m}\mathbf{W}_{ks}^{(2)}\sigma(\mathbf{W}_{s}^{(1)}\mathbf{x}) \Big{)}-\sigma\Big{(}\frac{1}{m^{c}}\sum_{s=1}^{m}\widetilde{\mathbf{W}}_{ks}^ {(2)}\sigma(\widetilde{\mathbf{W}}_{s;}^{(1)}\mathbf{x})\Big{)}\big{|}\] \[\leq\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c}}\sum_{k=1}^{m} \sum_{s=1}^{m}\big{|}\mathbf{W}_{ks}^{(2)}\sigma(\mathbf{W}_{s;}^{(1)}\mathbf{ x})-\widetilde{\mathbf{W}}_{ks}^{(2)}\sigma(\mathbf{W}_{s;}^{(1)}\mathbf{x})+ \widetilde{\mathbf{W}}_{ks}^{(2)}\sigma(\mathbf{W}_{s;}^{(1)}\mathbf{x})- \widetilde{\mathbf{W}}_{ks}^{(2)}\sigma(\widetilde{\mathbf{W}}_{s;}^{(1)} \mathbf{x})\big{|}\] \[\leq\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c}}\sum_{k=1}^{m} \sum_{s=1}^{m}\big{|}\mathbf{W}_{ks}^{(2)}-\widetilde{\mathbf{W}}_{ks}^{(2)} \big{|}+\frac{B_{\sigma^{\prime}}^{2}}{m^{2c}}\sum_{k=1}^{m}\sum_{s=1}^{m}| \widetilde{\mathbf{W}}_{ks}^{(2)}\big{|}\cdot\big{|}\big{(}\mathbf{W}_{s;}^{( 1)}-\widetilde{\mathbf{W}}_{s;}^{(1)}\big{)}\mathbf{x}\big{|}\] \[\leq\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{\|} \mathbf{W}^{(2)}-\widetilde{\mathbf{W}}^{(2)}\big{\|}_{2}+\frac{B_{\sigma^{ \prime}}^{2}c_{\mathbf{x}}\|\widetilde{\mathbf{W}}^{(2)}\|_{\infty}}{m^{2c- \frac{3}{2}}}\big{\|}\mathbf{W}^{(1)}-\widetilde{\mathbf{W}}^{(1)}\big{\|}_{2}.\] (B.7)
Since
\[\nabla^{2}\ell(\mathbf{W};z)=\nabla f_{\mathbf{W}}(\mathbf{x})\nabla f_{ \mathbf{W}}(\mathbf{x})^{\top}+\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\big{(}f_ {\mathbf{W}}(\mathbf{x})-y\big{)}.\] (B.8)
Then for any \(\mathbf{W}\in\mathbb{R}^{md+m^{2}}\), we can upper bound the maximum eigenvalue of the Hessian by combining (B.1), (B.6) and (B.7) with \(\widetilde{\mathbf{W}}=\mathbf{0}\) together
\[\lambda_{\max}(\nabla^{2}\ell(\mathbf{W};z)) \leq\|\nabla f_{\mathbf{W}}(\mathbf{x})\|_{2}^{2}+\|\nabla^{2}f_ {\mathbf{W}}(\mathbf{x})\|_{op}|f_{\mathbf{W}}(\mathbf{x})-y|\] \[\leq\frac{B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2}}{m^{4c-1}}\big{\|} \mathbf{W}^{(2)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}}^{2}B_{\sigma}^{2}}{ m^{4c-2}}+C_{\mathbf{W}}\big{(}\big{|}f_{\mathbf{W}}(\mathbf{x})-f_{ \mathbf{0}}(\mathbf{x})\big{|}+\big{|}f_{\mathbf{0}}(\mathbf{x})-y\big{|}\big{)}\] \[\leq\frac{B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2}}{m^{4c-1}}\big{\|} \mathbf{W}^{(2)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}}^{2}B_{\sigma}^{2}}{ m^{4c-2}}+C_{\mathbf{W}}\Big{(}\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}} \big{\|}\mathbf{W}^{(2)}\big{\|}_{2}+\sqrt{2\ell(\mathbf{0};z)}\Big{)}\] \[\leq\frac{B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2}}{m^{4c-1}} \big{\|}\mathbf{W}^{(2)}\big{\|}_{2}^{2}+\frac{B_{\sigma^{\prime}}^{2}B_{ \sigma}^{2}}{m^{4c-2}}+C_{\mathbf{W}}\Big{(}\frac{B_{\sigma^{\prime}}B_{ \sigma}}{m^{2c-1}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}+\sqrt{2\ell_{0}}\Big{)}.\]
Note that \(\nabla f_{\mathbf{W}}(\mathbf{x})\nabla f_{\mathbf{W}}(\mathbf{x})^{\top}\) is positive semi-definite, then from (B.6), (B.8) and (B.7) with \(\widetilde{\mathbf{W}}=\mathbf{0}\) we can get
\[\lambda_{\min}(\nabla^{2}\ell(\mathbf{W};z)) \geq-\|\nabla^{2}f_{\mathbf{W}}(\mathbf{x})\|_{op}\big{|}f_{\mathbf{W}}(\mathbf{x})-y\big{|}\] \[\geq-C_{\mathbf{W}}\Big{(}\big{|}f_{\mathbf{W}}(\mathbf{x})-f_{\mathbf{0}}(\mathbf{x})\big{|}+\big{|}f_{\mathbf{0}}(\mathbf{x})-y\big{|}\Big{)}\] \[\geq-C_{\mathbf{W}}\Big{(}\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}+\sqrt{2\ell(\mathbf{0};z)}\Big{)}\] \[\geq-C_{\mathbf{W}}\Big{(}\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}+\sqrt{2\ell_{0}}\Big{)}.\]
The proof is completed.
Let \(B_{1}=\max\big{\{}B_{\sigma^{\prime}}^{2}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2},B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2},2B_{\sigma^{\prime\prime}}B_{\sigma}c_{\mathbf{x}},B_{\sigma^{\prime\prime}}B_{\sigma}^{2},2B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}\big{\}}\) and \(B_{2}=\max\big{\{}B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2},B_{\sigma^{\prime}}^{2}B_{\sigma}^{2},B_{\sigma^{\prime}}B_{\sigma},\sqrt{2c_{0}}\big{\}}\).
**Lemma B.2**.: _Suppose Assumptions 1 and 2 hold. Let \(\{\mathbf{W}_{t}\}\) and \(\{\mathbf{W}_{t}^{\prime}\}\) be produced by GD iterates with \(T\) iterations based on \(S\) and \(S^{\prime}\), respectively. Let \(\hat{\rho}=4B_{2}(1+2B_{1})\) and \(C>0\) be a constant. Assume \(\eta\leq 1/(2\hat{\rho})\) and (6) hold. Then for any \(c\in(1/2,1]\) and any \(t=0,\ldots,T\) there holds_
\[\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\sqrt{2c_{0}\eta t}\]
_and_
\[\|\nabla\ell(\mathbf{W}_{t};z)-\nabla\ell(\mathbf{W}_{t}^{\prime};z)\|_{2}\leq \hat{\rho}\|\mathbf{W}_{t}-\mathbf{W}_{t}^{\prime}\|_{2}.\]
Proof.: We first prove by induction that \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\eta tm^{2c-1}\). We then show that \(\rho_{\mathbf{W}}=\mathcal{O}(1)\) for any \(\mathbf{W}\) produced by the GD iterates if \(m\) satisfies (6). Finally, assuming \(\eta\leq 1/(2\hat{\rho})\), we prove that \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\sqrt{2c_{0}\eta t}\).
It's obvious that \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\eta tm^{2c-1}\) with \(t=0\) holds trivially. Assume \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\eta tm^{2c-1}\), according to the update rule (2) we know
\[\|\mathbf{W}_{t+1}-\mathbf{W}_{0}\|_{2}\] \[\leq\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\eta\|\nabla L_{S}(\mathbf{W}_{t})\|_{2}\] \[\leq\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\eta\max_{i\in[n]}\|\nabla f_{\mathbf{W}_{t}}(\mathbf{x}_{i})\|_{2}\big{|}f_{\mathbf{W}_{t}}(\mathbf{x}_{i})-y_{i}\big{|}\] \[\leq\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\eta\Big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-\frac{1}{2}}}(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\Big{)}\Big{(}\big{|}f_{\mathbf{W}_{t}}(\mathbf{x}_{i})-f_{\mathbf{0}}(\mathbf{x}_{i})\big{|}+\big{|}f_{\mathbf{0}}(\mathbf{x}_{i})-y_{i}\big{|}\Big{)}\] \[\leq\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\eta\Big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-\frac{1}{2}}}(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\Big{)}\Big{(}\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{0}\|_{2})+\sqrt{2c_{0}}\Big{)},\] (B.9)
where in the third inequality we used (B.1), the last inequality used (B.7) with \(\widetilde{\mathbf{W}}=\mathbf{0}\). If \(m\) is large enough such that
\[\Big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-\frac{ 1}{2}}}(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{0}\|_{2})+\frac{B_ {\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\Big{)}\Big{(}\frac{B_{\sigma^{\prime}} B_{\sigma}}{m^{2c-1}}\big{(}\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{0}\|_{2} \big{)}+\sqrt{2c_{0}}\Big{)}\] \[\leq m^{2c-1},\] (B.10)
then from (B.9) we know that \(\|\mathbf{W}_{t+1}-\mathbf{W}_{0}\|_{2}\leq\eta(t+1)m^{2c-1}\). The first part of the lemma can be proved. Now, we discuss the conditions on \(m\) such that (B.10) holds. Let \(x=\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}+\|\mathbf{W}_{0}\|_{2}\). To guarantee (B.10), it suffices that the following three inequalities hold
\[\frac{B_{\sigma^{\prime}}^{3}B_{\sigma}c_{\mathbf{x}}}{m^{4c-\frac{3}{2}}}x^{ 2}\leq\frac{m^{2c-1}}{3},\big{(}\frac{B_{\sigma^{\prime}}^{2}B_{\sigma}^{2}}{ m^{4c-2}}+\frac{\sqrt{2c_{0}}B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c- \frac{1}{2}}}\big{)}x\leq\frac{m^{2c-1}}{3},\frac{\sqrt{2c_{0}}B_{\sigma^{ \prime}}B_{\sigma}}{m^{2c-1}}\leq\frac{m^{2c-1}}{3}.\] (B.11)
It's easy to verify that (B.11) holds if \(m\gtrsim\max\{(\eta T)^{\frac{4}{4c-1}},(\eta T)^{\frac{1}{4c-2}},\|\mathbf{W} _{0}\|_{2}^{\frac{1}{3c-\frac{1}{4}}},\|\mathbf{W}_{0}\|_{2}^{\frac{1}{2c-3}} \}\), which can be ensured by (6). Hence, if \(c\in(1/2,1]\) and (6) holds, we have \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\leq\eta tm^{2c-1}\) for all \(t=0,1,\ldots,T\).
Recall that
\[B_{1}=\max\big{\{}B_{\sigma^{\prime}}^{2}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2},B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2},2B_{\sigma^{\prime\prime}}B_{\sigma}c_{\mathbf{x}},B_{\sigma^{\prime\prime}}B_{\sigma}^{2},2B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}\big{\}}\]
and
\[B_{2}=\max\big{\{}B_{\sigma^{\prime}}^{4}c_{\mathbf{x}}^{2},B_{\sigma^{\prime}}^{2}B_{\sigma}^{2},B_{\sigma^{\prime}}B_{\sigma},\sqrt{2c_{0}}\big{\}}.\]
Then from Lemma B.1 we know
\[C_{\mathbf{W}}\leq B_{1}\Big{(}m^{-3c}\big{\|}\mathbf{W}^{(2)}\big{\|}_{2}^{2}+ \big{(}m^{\frac{1}{2}-2c}+m^{\frac{1}{2}-3c}\big{)}\|\mathbf{W}^{(2)}\|_{2}+m^{ 1-3c}+m^{\frac{1}{2}-2c}\Big{)}\]
and
\[\rho_{\mathbf{W}}\leq B_{2}\Big{(}m^{1-4c}\big{\|}\mathbf{W}\big{\|}_{2}^{2}+m^ {2-4c}+C_{\mathbf{W}}\big{(}m^{1-2c}\big{\|}\mathbf{W}\big{\|}_{2}+1\big{)} \Big{)}.\]
Note that (6) implies \(m\gtrsim(\eta T)^{4}+\|\mathbf{W}_{0}\|_{2}^{\frac{4}{4c-3}}\). By using \(\|\mathbf{W}_{t}\|_{2}\leq\eta tm^{2c-1}+\|\mathbf{W}_{0}\|_{2}\) we can verify that
\[\rho_{\mathbf{W}_{t}}\leq 4B_{2}(1+2B_{1}):=\hat{\rho}\quad\text{ for any }\quad t=0,\ldots,T.\]
Hence, we know that \(\ell\) is \(\hat{\rho}\)-smooth when the parameter space is the trajectory of GD. Then for any \(t=0,\ldots,T\) and any \(\mathbf{W}_{t},\mathbf{W}_{t}^{\prime}\) produced by GD iterates, there holds
\[\|\nabla\ell(\mathbf{W}_{t};z)-\nabla\ell(\mathbf{W}_{t}^{\prime};z)\|_{2}\leq \hat{\rho}\|\mathbf{W}_{t}-\mathbf{W}_{t}^{\prime}\|_{2}.\] (B.12)
In addition, by the smoothness of \(\ell\) we can get for any \(j=0,\ldots,T-1\)
\[L_{S}(\mathbf{W}_{j+1})\leq L_{S}(\mathbf{W}_{j})-\eta\Big{(}1-\frac{\eta\hat{ \rho}}{2}\Big{)}\|\nabla L_{S}(\mathbf{W}_{j})\|_{2}^{2}.\]
Rearranging and summing over \(j\) yields
\[\eta\Big{(}1-\frac{\eta\hat{\rho}}{2}\Big{)}\sum_{j=0}^{t}\|\nabla L_{S}(\mathbf{ W}_{j})\|_{2}^{2}\leq\sum_{j=0}^{t}L_{S}(\mathbf{W}_{j})-L_{S}(\mathbf{W}_{j+1}) \leq L_{S}(\mathbf{W}_{0}).\]
Note that the update rule of GD (2) implies
\[\mathbf{W}_{t+1}=\mathbf{W}_{0}-\eta\sum_{j=0}^{t}\nabla L_{S}(\mathbf{W}_{j}).\]
Combining the above two equations together and noting that \(\eta\hat{\rho}\leq 1/2\), we obtain
\[\|\mathbf{W}_{t+1}-\mathbf{W}_{0}\|_{2}^{2}=\eta^{2}\big{\|}\sum_{j=0}^{t} \nabla L_{S}(\mathbf{W}_{j})\big{\|}_{2}^{2}\leq\eta^{2}t\sum_{j=0}^{t}\big{\|} \nabla L_{S}(\mathbf{W}_{j})\big{\|}_{2}^{2}\leq 2\eta tL_{S}(\mathbf{W}_{0}) \leq 2c_{0}\eta t.\]
The proof is completed.
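Although the argument above is purely analytic, the two conclusions of Lemma B.2 are easy to probe numerically. The following minimal sketch is not part of the paper: the width \(m\), scaling \(c\), step size \(\eta\), tanh activation, and synthetic data are all illustrative assumptions. It runs full-batch GD on the least squares loss for a scaled three-layer network \(f_{\mathbf{W}}(\mathbf{x})=m^{-c}\sum_{k}\sigma\big{(}m^{-c}\sum_{s}\mathbf{W}_{ks}^{(2)}\sigma(\mathbf{W}_{s;}^{(1)}\mathbf{x})\big{)}\) and compares \(\|\mathbf{W}_{t}-\mathbf{W}_{0}\|_{2}\) with \(\sqrt{2c_{0}\eta t}\), taking \(c_{0}=L_{S}(\mathbf{W}_{0})\):

```python
import numpy as np

# Numerical sanity check of the iterate bound ||W_t - W_0||_2 <= sqrt(2*c0*eta*t).
# All sizes and constants below are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
n, d, m, c, eta, T = 50, 5, 200, 0.75, 0.01, 500

X = rng.standard_normal((n, d)) / np.sqrt(d)   # inputs with ||x||_2 roughly 1
y = rng.standard_normal(n)                     # targets
W1 = rng.standard_normal((m, d))               # first layer (trained)
W2 = rng.standard_normal((m, m))               # second layer (trained)
W1_0, W2_0 = W1.copy(), W2.copy()

def forward(W1, W2, X):
    H1 = np.tanh(X @ W1.T)                     # sigma(W^(1) x), shape (n, m)
    H2 = np.tanh((H1 @ W2.T) * m ** (-c))      # sigma(m^{-c} W^(2) sigma(...)), (n, m)
    return H1, H2, m ** (-c) * H2.sum(axis=1)  # network outputs f_W(x_i)

c0 = 0.5 * np.mean((forward(W1, W2, X)[2] - y) ** 2)  # c0 = L_S(W_0)

for t in range(1, T + 1):
    H1, H2, f = forward(W1, W2, X)
    r = (f - y) / n                                    # residuals of the averaged loss
    G2 = (r[:, None] * (1 - H2 ** 2)) * m ** (-2 * c)  # backprop through output scaling
    gW2 = G2.T @ H1                                    # gradient w.r.t. W^(2)
    gW1 = ((G2 @ W2) * (1 - H1 ** 2)).T @ X            # gradient w.r.t. W^(1)
    W2 -= eta * gW2
    W1 -= eta * gW1
    if t % 100 == 0:
        dist = np.sqrt(np.sum((W1 - W1_0) ** 2) + np.sum((W2 - W2_0) ** 2))
        print(f"t={t}: ||W_t - W_0||_2 = {dist:.4f}, bound = {np.sqrt(2 * c0 * eta * t):.4f}")
```

For widths large enough to satisfy (6) the measured distance should stay below the bound; for small \(m\) the comparison merely illustrates the \(\sqrt{\eta t}\) scaling.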
The almost co-coercivity of the gradient operator for three-layer neural networks is given as follows. Recall that \(S^{(i)}=\{z_{1},\ldots,z_{i-1},z_{i}^{\prime},z_{i+1},\ldots,z_{n}\}\) is the set formed from \(S\) by replacing the \(i\)-th element with \(z_{i}^{\prime}\) and for any \(\mathbf{W}\),
\[L_{S^{\setminus i}}(\mathbf{W})=L_{S}(\mathbf{W})-\frac{1}{n}\ell(\mathbf{W};z_{i})=L_{S^{(i)}}(\mathbf{W})-\frac{1}{n}\ell(\mathbf{W};z_{i}^{\prime}).\]
Let \(\{\mathbf{W}_{t}\}\) and \(\{\mathbf{W}_{t}^{(i)}\}\) be the sequence produced by GD based on \(S\) and \(S^{(i)}\), respectively. Let \(C_{3,T}=4B_{1}\big{(}2c_{0}\eta Tm^{-3c}+\big{(}m^{\frac{1}{2}-2c}+m^{\frac{1} {2}-3c}\big{)}\sqrt{2c_{0}\eta T}+m^{1-3c}+m^{\frac{1}{2}-2c}\big{)}\).
**Lemma B.3**.: _Suppose \(\eta\leq 1/(8\hat{\rho})\) where \(\hat{\rho}=4B_{2}(1+2B_{1})\). Assume (6) holds. Then_
\[\langle\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)},\nabla L_{S^{(i)}}( \mathbf{W}_{t})-\nabla L_{S^{(i)}}(\mathbf{W}_{t}^{(i)})\rangle\geq 2\eta\Big{(}1-4 \eta\hat{\rho}\Big{)}\big{\|}\nabla L_{S^{(i)}}(\mathbf{W}_{t})-\nabla L_{S^{ (i)}}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}\] \[-\tilde{\epsilon}_{t}\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)} -\eta\big{(}\nabla L_{S^{(i)}}(\mathbf{W}_{t})-\nabla L_{S^{(i)}}(\mathbf{W}_ {t}^{(i)})\big{)}\big{\|}_{2}^{2},\]
_where \(\tilde{\epsilon}_{t}=C_{3,T}\big{(}B_{3}\big{(}\frac{\sqrt{\eta T}+\|\mathbf{W}_{0}\|_{2}}{m^{2c-\frac{1}{2}}}+m^{1-2c}\big{)}\big{(}(1+\eta\hat{\rho})\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}+2\big{(}\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2}\big{)}+\sqrt{2c_{0}}\big{)}\big{)}\)._
Proof.: For any \(\mathbf{W}\in\mathcal{B}\big{(}0,2(\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2} )\big{)}\), defining the following two functions
\[G_{1}(\mathbf{W})=L_{S^{(i)}}(\mathbf{W})-\langle\nabla L_{S^{ (i)}}(\mathbf{W}_{t}^{(i)}),\mathbf{W}\rangle,\qquad G_{2}(\mathbf{W})=L_{S^{ (i)}}(\mathbf{W})-\langle\nabla L_{S^{(i)}}(\mathbf{W}_{t}),\mathbf{W}\rangle.\]
Note that
\[\langle\mathbf{W}_{t}\!-\!\mathbf{W}_{t}^{(i)},\nabla L_{S^{ (i)}}(\mathbf{W}_{t})\!-\!\nabla L_{S^{(i)}}(\mathbf{W}_{t}^{(i)})\rangle\!= \!\big{(}G_{1}\big{(}\mathbf{W}_{t}\big{)}\!-\!G_{1}\big{(}\mathbf{W}_{t}^{(i )}\big{)}\big{)}\!+\!\big{(}G_{2}\big{(}\mathbf{W}_{t}^{(i)}\big{)}\!-\!G_{2} \big{(}\mathbf{W}_{t}\big{)}\big{)}.\] (B.13)
Hence, it is enough to lower bound \(G_{1}\big{(}\mathbf{W}_{t}\big{)}-G_{1}\big{(}\mathbf{W}_{t}^{(i)}\big{)}\) and \(G_{2}\big{(}\mathbf{W}_{t}^{(i)}\big{)}-G_{2}\big{(}\mathbf{W}_{t}\big{)}\).
Note that for any \(\mathbf{W}_{t}\) produced by the GD iterates, Lemma B.2 implies that \(\|\mathbf{W}_{t}\|_{2}\leq\sqrt{2c_{0}\eta t}+\|\mathbf{W}_{0}\|_{2}\). Then there holds
\[\big{\|}\mathbf{W}_{t}-\eta\nabla G_{1}(\mathbf{W}_{t})\big{\|}_{2} \leq\big{\|}\mathbf{W}_{t}\big{\|}_{2}+\eta\big{\|}\nabla L_{S^{ (i)}}(\mathbf{W}_{t})-\nabla L_{S^{(i)}}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}\] \[\leq\sqrt{2c_{0}\eta t}+\|\mathbf{W}_{0}\|_{2}+2\eta\hat{\rho} \big{(}\sqrt{2c_{0}\eta t}+\|\mathbf{W}_{0}\|_{2}\big{)}\] \[\leq 2\big{(}\sqrt{2c_{0}\eta t}+\|\mathbf{W}_{0}\|_{2}\big{)},\]
where in the last inequality we used \(\eta\hat{\rho}\leq 1/8\). Similarly, we can show that \(\big{\|}\mathbf{W}_{t}^{(i)}-\eta\nabla G_{2}(\mathbf{W}_{t}^{(i)})\big{\|}_{2} \leq 2(\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2})\). Hence, we know \(\mathbf{W}_{t}-\eta\nabla G_{1}(\mathbf{W}_{t})\in\mathcal{B}\big{(}0,2(\sqrt{2c _{0}\eta T}\!+\!\|\mathbf{W}_{0}\|_{2})\big{)}\) and \(\mathbf{W}_{t}^{(i)}-\eta\nabla G_{2}(\mathbf{W}_{t}^{(i)})\in\mathcal{B}\big{(}0, 2(\sqrt{2c_{0}\eta T}\!+\!\|\mathbf{W}_{0}\|_{2})\big{)}\).
\(\|\mathbf{W}_{0}\|_{2})\big{)}\). On the other hand, similar to Lemma B.2, we can show that \(G_{1}(\mathbf{W})\) and \(G_{2}(\mathbf{W})\) are \(8\hat{\rho}\)-smooth for any \(\mathbf{W}\in\mathcal{B}\big{(}0,2(\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2})\big{)}\). Combining the above results, we can get
\[G_{1}(\mathbf{W}_{t}-\eta\nabla G_{1}(\mathbf{W}_{t})) \leq G_{1}(\mathbf{W}_{t})-\eta\big{(}1-4\eta\hat{\rho}\big{)}\| \nabla G_{1}(\mathbf{W}_{t})\|_{2}^{2},\] (B.14) \[G_{2}(\mathbf{W}_{t}^{(i)}-\eta\nabla G_{2}(\mathbf{W}_{t}^{(i)}) ) \leq G_{2}(\mathbf{W}_{t}^{(i)})-\eta\big{(}1-4\eta\hat{\rho}\big{)}\| \nabla G_{2}(\mathbf{W}_{t}^{(i)})\|_{2}^{2}.\] (B.15)
If we can further show that
\[G_{1}(\mathbf{W}_{t}-\eta\nabla G_{1}(\mathbf{W}_{t}))\geq G_{1}(\mathbf{W}_ {t}^{(i)})-\frac{\tilde{\epsilon}_{t}}{2}\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i) }-\eta\nabla G_{1}(\mathbf{W}_{t})\|_{2}^{2},\] (B.16)
\[G_{2}(\mathbf{W}_{t}^{(i)}-\eta\nabla G_{2}(\mathbf{W}_{t}^{(i)}))\geq G_{2}( \mathbf{W}_{t})-\frac{\tilde{\epsilon}_{t}}{2}\|\mathbf{W}_{t}^{(i)}-\mathbf{ W}_{t}-\eta\nabla G_{2}(\mathbf{W}_{t})\|_{2}^{2},\] (B.17)
with \(\tilde{\epsilon}_{t}=C_{3,T}\big{(}B_{3}\big{(}\frac{\sqrt{\eta T}+\|\mathbf{ W}_{0}\|_{2}}{m^{2c-\frac{1}{2}}}+m^{1-2c}\big{)}\big{(}(1+\eta\hat{\rho})\| \mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}+2\big{(}\sqrt{2c_{0}\eta T}+\| \mathbf{W}_{0}\|_{2}\big{)}+\sqrt{2c_{0}}\big{)}\), where \(C_{3,T}=4B_{1}\big{(}2c_{0}\eta Tm^{-3c}+\big{(}m^{\frac{1}{2}-2c}+m^{\frac{1} {2}-3c}\big{)}\sqrt{2c_{0}\eta T}+m^{1-3c}+m^{\frac{1}{2}-2c}\big{)}\). Then combining (B.14), (B.15), (B.16) and (B.17) together yields
\[G_{1}(\mathbf{W}_{t})-G_{1}(\mathbf{W}_{t}^{(i)}) \geq\eta\big{(}1-4\eta\hat{\rho}\big{)}\|\nabla G_{1}(\mathbf{W}_ {t})\|_{2}^{2}-\frac{\tilde{\epsilon}_{t}}{2}\|\mathbf{W}_{t}-\mathbf{W}_{t}^ {(i)}-\eta\nabla G_{1}(\mathbf{W}_{t})\|_{2}^{2},\] \[G_{2}(\mathbf{W}_{t}^{(i)})-G_{2}(\mathbf{W}_{t}) \geq\eta\big{(}1-4\eta\hat{\rho}\big{)}\|\nabla G_{2}(\mathbf{W}_ {t})\|_{2}^{2}-\frac{\tilde{\epsilon}_{t}}{2}\|\mathbf{W}_{t}^{(i)}-\mathbf{W}_ {t}-\eta\nabla G_{2}(\mathbf{W}_{t})\|_{2}^{2}.\]
Plugging the above two inequalities back into (B.13) yields
\[\langle\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)},\nabla L_{S^{\setminus i }}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\rangle=G_ {1}(\mathbf{W}_{t})-G_{1}(\mathbf{W}_{t}^{(i)})+G_{2}(\mathbf{W}_{t}^{(i)})-G_ {2}(\mathbf{W}_{t})\] \[\geq 2\eta\big{(}1-4\eta\hat{\rho}\big{)}\big{\|}\nabla L_{S^{\setminus i }}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\big{\|}_{ 2}^{2}-\tilde{\epsilon}_{t}\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta \big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}( \mathbf{W}_{t}^{(i)})\big{)}\big{\|}_{2}^{2}.\]
The desired result has been proved.
Now, we give the proof of (B.16) and (B.17). For \(\alpha\in[0,1]\), let \(\mathbf{W}(\alpha)=\alpha\mathbf{W}_{t}+(1-\alpha)\mathbf{W}_{t}^{(i)}- \alpha\eta\big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t}^{(i)})\big{)}\). For any \(\alpha\in[0,1]\), it's obvious that \(\|\mathbf{W}(\alpha)\|_{2}\leq 2(\sqrt{2c_{0}\eta t}+\|\mathbf{W}_{0}\|_{2})\) by using Lemma B.2. Combining this observation with (B.8) we can obtain
\[\lambda_{\min}\big{(}\nabla^{2}L_{S^{\setminus i}}(\mathbf{W}(\alpha))\big{)}\] \[\geq-\max_{j\in[n]}\|\nabla^{2}f_{\mathbf{W}(\alpha)}(\mathbf{x}_{j})\|_{op}\big{|}f_{\mathbf{W}(\alpha)}(\mathbf{x}_{j})-y_{j}\big{|}\] \[\geq-C_{\mathbf{W}(\alpha)}\Big{(}\Big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-\frac{1}{2}}}\|\mathbf{W}(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\Big{)}\big{(}\|\mathbf{W}(\alpha)-\mathbf{W}_{t}^{(i)}\|_{2}+\|\mathbf{W}_{t}^{(i)}\|_{2}\big{)}+\sqrt{2c_{0}}\Big{)}\] \[\geq-C_{\mathbf{W}(\alpha)}\Big{(}B_{3}\big{(}\frac{\sqrt{\eta T}+\|\mathbf{W}_{0}\|_{2}}{m^{2c-\frac{1}{2}}}+m^{1-2c}\big{)}\big{(}\|\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\|_{2}+\eta\|\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\|_{2}+\|\mathbf{W}_{t}^{(i)}\|_{2}\big{)}+\sqrt{2c_{0}}\Big{)}\] \[\geq-\tilde{\epsilon}_{t},\] (B.18)
where the second inequality is due to (B.6), the third inequality is according to (B.1) and in the last inequality we used Lemma B.2. Similarly, let \(\widetilde{\mathbf{W}}(\alpha)=\alpha\mathbf{W}_{t}^{(i)}+(1-\alpha)\mathbf{W}_ {t}-\alpha\eta\big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)})-\nabla L_{S ^{\setminus i}}(\mathbf{W}_{t})\big{)}\), we can also control \(\lambda_{\min}\big{(}\nabla^{2}L_{S^{\setminus i}}(\widetilde{\mathbf{W}}( \alpha))\big{)}\) by \(-\tilde{\epsilon}_{t}\).
Let \(\Delta=\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(}\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{(i)}) \big{)}\big{\|}_{2}^{2}\). We define
\[g_{1}(\alpha)=G_{1}(\mathbf{W}(\alpha))+\frac{\tilde{\epsilon}_{t}\alpha^{2}}{2} \Delta,\qquad g_{2}(\alpha)=G_{2}(\widetilde{\mathbf{W}}(\alpha))+\frac{ \tilde{\epsilon}_{t}\alpha^{2}}{2}\Delta.\]
From (B.18) we know that \(g_{1}^{\prime\prime}(\alpha)\geq 0\) for any \(\alpha\in[0,1]\). Hence, \(g_{1}\) is convex on \([0,1]\). Then from the convexity of \(g_{1}\) we can get that
\[0=g_{1}^{\prime}(0)\leq g_{1}(1)-g_{1}(0)\leq G_{1}(\mathbf{W}_{t}-\eta \nabla G_{1}(\mathbf{W}_{t}))+\frac{\tilde{\epsilon}_{t}}{2}\Delta-G_{1}( \mathbf{W}_{t}^{(i)}),\]
which completes the proof of (B.16). We can also show \(g_{2}(\alpha)\) is convex on \([0,1]\) and prove (B.17) in a similar way. The proof is completed.
Based on Lemma B.1, Lemma B.2 and Lemma B.3 we can establish the following uniform stability bounds for three-layer neural networks.
**Theorem B.4** (Uniform Stability).: _Suppose Assumptions 1 and 2 hold. Let \(S\) and \(S^{(i)}\) be constructed in Definition 2. Let \(\{\mathbf{W}_{t}\}\) and \(\{\mathbf{W}_{t}^{(i)}\}\) be produced by \((2)\) with \(\eta\leq 1/(8\hat{\rho})\) based on \(S\) and \(S^{(i)}\), respectively. Assume \((6)\) holds. Then, for any \(t\in[T]\), there holds_
\[\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}\leq\frac{2e\eta T\sqrt{2c_{0}\hat{\rho}(\hat{\rho}\eta T+2)}}{n}.\]
Proof.: Similar to (A.7), by the update rule \(\mathbf{W}_{t+1}=\mathbf{W}_{t}-\eta\nabla L_{S}(\mathbf{W}_{t})\) we know
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2} \leq(1+p)\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(} \nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}( \mathbf{W}_{t}^{(i)})\big{)}\big{\|}_{2}^{2}\] \[\quad+\frac{2\eta^{2}(1+1/p)}{n^{2}}\big{(}\big{\|}\nabla\ell( \mathbf{W}_{t};z_{i})\big{\|}_{2}^{2}+\big{\|}\nabla\ell(\mathbf{W}_{t}^{(i)}; z_{i}^{\prime})\big{\|}_{2}^{2}\big{)}.\] (B.19)
From Lemma B.3 we know
\[\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta\big{(}\nabla L_ {S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}(\mathbf{W}_{t}^{ (i)})\big{)}\big{\|}_{2}^{2}\] \[=\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}+\eta ^{2}\big{\|}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i }}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}-2\eta\Big{\langle}\mathbf{W}_{t}- \mathbf{W}_{t}^{(i)},\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t}^{(i)})\Big{\rangle}\] \[\leq\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}+ \eta^{2}\big{\|}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}-4\eta^{2}\big{(}1-4\eta \hat{\rho}\big{)}\big{\|}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_ {S^{\setminus i}}(\mathbf{W}_{t}^{(i)})\big{\|}_{2}^{2}\] \[\quad+2\eta\tilde{\epsilon}_{t}\big{\|}\mathbf{W}_{t}-\mathbf{W} _{t}^{(i)}-\eta\big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{ \setminus i}}(\mathbf{W}_{t}^{(i)})\big{)}\big{\|}_{2}^{2}.\]
Note \(4\eta\hat{\rho}\leq 1/2\) implies \(1-4(1-4\eta\hat{\rho})<0\) and condition \((6)\) ensures that \(2\eta\tilde{\epsilon}_{t}<1\) for any \(t\in[T]\). Then from the above inequality we can get
\[(1-2\eta\tilde{\epsilon}_{t})\big{\|}\mathbf{W}_{t}-\mathbf{W}_{t}^{(i)}-\eta \big{(}\nabla L_{S^{\setminus i}}(\mathbf{W}_{t})-\nabla L_{S^{\setminus i}}( \mathbf{W}_{t}^{(i)})\big{)}\big{\|}_{2}^{2}\leq\big{\|}\mathbf{W}_{t}- \mathbf{W}_{t}^{(i)}\big{\|}_{2}^{2}.\] (B.20)
Now, plugging (B.20) back into (B.19) we have
\[\big{\|}\mathbf{W}_{t+1}\!-\!\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2}\!\leq\! \frac{1+p}{1\!-\!2\eta\tilde{\epsilon}_{t}}\big{\|}\mathbf{W}_{t}\!-\!\mathbf{W }_{t}^{(i)}\big{\|}_{2}^{2}\!+\!\frac{2\eta^{2}(1\!+\!1/p)}{n^{2}}\Big{(} \big{\|}\nabla\ell(\mathbf{W}_{t};z_{i})\big{\|}_{2}^{2}\!+\!\big{\|}\nabla \ell(\mathbf{W}_{t}^{(i)};z_{i}^{\prime})\big{\|}_{2}^{2}\Big{)}.\]
Applying the above inequality recursively and note that \(\mathbf{W}_{0}=\mathbf{W}_{0}^{(i)}\) we get
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2} \leq\frac{2\eta^{2}(1+1/p)}{n^{2}}\sum_{j=0}^{t}\big{(}\big{\|} \nabla\ell(\mathbf{W}_{j};z_{i})\big{\|}_{2}^{2}+\big{\|}\nabla\ell(\mathbf{W}_{ j}^{(i)};z_{i}^{\prime})\big{\|}_{2}^{2}\big{)}\!\!\prod_{\tilde{j}=j+1}^{t}\frac{1+p}{1-2 \eta\tilde{\epsilon}_{\tilde{j}}}\] \[\leq\frac{2\eta^{2}(1+1/p)(1+p)^{t}}{n^{2}(1-2\eta\tilde{\epsilon} _{\tilde{j}})^{t}}\sum_{j=0}^{t}\big{(}\big{\|}\nabla\ell(\mathbf{W}_{j};z_{i}) \big{\|}_{2}^{2}+\big{\|}\nabla\ell(\mathbf{W}_{j}^{(i)};z_{i}^{\prime})\big{\|}_{2 }^{2}\big{)}.\]
Let \(p=1/t\) and note that \((1+1/t)^{t}\leq e\), we have
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2}\leq\frac{2e\eta^{ 2}(1+t)}{n^{2}}\sum_{j=0}^{t}\big{(}\big{\|}\nabla\ell(\mathbf{W}_{j};z_{i}) \big{\|}_{2}^{2}+\big{\|}\nabla\ell(\mathbf{W}_{j}^{(i)};z_{i}^{\prime})\big{\|}_{2 }^{2}\big{)}\prod_{\tilde{j}=j+1}^{t}\frac{1}{1-2\eta\tilde{\epsilon}_{\tilde{j}}}.\] (B.21)
According to Lemma B.1 and Lemma A.1, Assumption 2 and noting that \(\|\mathbf{W}_{j}-\mathbf{W}_{0}\|_{2}\leq\sqrt{2c_{0}\eta j}\) for any \(j\leq t\), we know
\[\|\nabla\ell(\mathbf{W}_{j};z)\|_{2}^{2} \leq 2\|\nabla\ell(\mathbf{W}_{j};z)-\nabla\ell(\mathbf{W}_{0};z) \|_{2}^{2}+2\|\nabla\ell(\mathbf{W}_{0};z)\|_{2}^{2}\] \[\leq 2\hat{\rho}^{2}\|\mathbf{W}_{j}-\mathbf{W}_{0}\|_{2}^{2}+4 \hat{\rho}\ell(\mathbf{W}_{0};z)\leq 4c_{0}\hat{\rho}(\hat{\rho}\eta j+1).\]
Similarly, we have
\[\|\nabla\ell(\mathbf{W}_{j}^{(i)};z)\|_{2}^{2}\leq 4c_{0}\hat{\rho}( \hat{\rho}\eta j+1).\]
Combining the above three inequalities together, we get
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2}\leq\frac{8c_{0}e\eta^{2}(1+t)^{2}\hat{\rho}\big{(}\hat{\rho}\eta t+2\big{)}}{n^{2}}\prod_{\tilde{j}=1}^{t}\frac{1}{1-2\eta\tilde{\epsilon}_{\tilde{j}}}.\]
Similar to the proof of Theorem A.5, we can derive the following stability result by induction
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}\leq \frac{2e\eta T\sqrt{2c_{0}\hat{\rho}(\hat{\rho}\eta T+2)}}{n}.\]
Here, the condition \(\frac{1}{(1-2\eta\tilde{\epsilon}_{j})^{t}}\leq\left(\frac{1}{1-1/(t+1)}\right) ^{t}\leq e\) is ensured by condition (6), i.e.,
\[m\!\gtrsim \big{(}(\eta T\mathcal{B}_{T})^{2}+\frac{(\eta T)^{\frac{7}{2}} \mathcal{B}_{T}}{n}\big{)}^{\frac{1}{5c-\frac{1}{2}}}+\big{(}(\eta T)^{\frac{3 }{2}}\mathcal{B}_{T}^{2}+\frac{(\eta T)^{3}\mathcal{B}_{T}}{n}\big{)}^{\frac{1 }{4c-1}}\!+\big{(}(\eta T)^{2}\mathcal{B}_{T}+\frac{(\eta T)^{\frac{7}{2}}}{n }\big{)}^{\frac{1}{5c-1}}\] \[+\big{(}(\eta T)^{\frac{3}{2}}\mathcal{B}_{T}\!+\!\frac{(\eta T)^ {3}}{n}\big{)}^{\frac{1}{4c-\frac{3}{2}}}\]
with \(\mathcal{B}_{T}=\sqrt{\eta T}+\|\mathbf{W}_{0}\|_{2}\). The proof is the same as Theorem A.5, we omit it for simplicity.
We can combine Theorem B.4 and Lemma 1 together to get the upper bound of the generalization error.
Proof of Theorem 6.: The proof is similar to that of Theorem 2. From (B.21) we know that
\[\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2} \leq\frac{4e^{2}\eta^{2}\hat{\rho}(1+t)}{n^{2}}\sum_{j=0}^{t}\Big{(}\ell( \mathbf{W}_{j};z_{i})+\ell(\mathbf{W}_{j}^{(i)};z_{i}^{\prime})\Big{)},\]
where we used the self-bounding property of the smooth loss (Lemma A.1).
Then, taking an average over \(i\in[n]\) and noting that \(\mathbb{E}\big{[}\ell(\mathbf{W}_{j};z_{i})\big{]}=\mathbb{E}\big{[}\ell( \mathbf{W}_{j}^{(i)};z_{i}^{\prime})\big{]}\), we have
\[\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{\|}\mathbf{W}_{t+1}-\mathbf{W}_{t+1}^{(i)}\big{\|}_{2}^{2}\leq\frac{8e^{2}\eta^{2}\hat{\rho}(1+t)}{n^{2}}\sum_{j=0}^{t}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{j})\big{]}.\]
Combining the above stability bounds with Lemma 1 together and noting that \(L_{S}(\mathbf{W}_{t})\leq\frac{1}{t}\sum_{j=1}^{t-1}L_{S}(\mathbf{W}_{j})\)[35], the desired result is obtained.
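The stability mechanism behind this bound can also be probed empirically: run full-batch GD from a shared initialization on \(S\) and on a neighboring dataset \(S^{(i)}\) that differs in a single example, and track the parameter gap, which should grow at most on the order of \(\eta t/n\). The sketch below reuses the toy tanh setup from the sketch above; all sizes are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

# Empirical probe of argument stability for full-batch GD on neighboring datasets.
rng = np.random.default_rng(1)
n, d, m, c, eta, T = 50, 5, 200, 0.75, 0.01, 500

def grads(W1, W2, X, y):
    H1 = np.tanh(X @ W1.T)
    H2 = np.tanh((H1 @ W2.T) * m ** (-c))
    r = (m ** (-c) * H2.sum(axis=1) - y) / len(y)      # least squares residuals
    G2 = (r[:, None] * (1 - H2 ** 2)) * m ** (-2 * c)
    return ((G2 @ W2) * (1 - H1 ** 2)).T @ X, G2.T @ H1  # (grad W1, grad W2)

X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)
Xi, yi = X.copy(), y.copy()
Xi[0] = rng.standard_normal(d) / np.sqrt(d)            # S^{(i)}: replace z_1 by z_1'
yi[0] = rng.standard_normal()

W1 = rng.standard_normal((m, d)); W2 = rng.standard_normal((m, m))
A1, A2 = W1.copy(), W2.copy()                          # twin model trained on S^{(i)}
for t in range(1, T + 1):
    g1, g2 = grads(W1, W2, X, y)
    h1, h2 = grads(A1, A2, Xi, yi)
    W1 -= eta * g1; W2 -= eta * g2
    A1 -= eta * h1; A2 -= eta * h2
    if t % 100 == 0:
        gap = np.sqrt(np.sum((W1 - A1) ** 2) + np.sum((W2 - A2) ** 2))
        print(f"t={t}: ||W_t - W_t^(i)||_2 = {gap:.5f}, eta*t/n = {eta * t / n:.5f}")
```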
### Proofs of Optimization Bounds
To show optimization error bounds, we first introduce the following lemma on the bound of GD iterates.
**Lemma B.5**.: _Suppose Assumptions 1 and 2 hold, and \(\eta\leq 1/(8\hat{\rho})\). Assume (6) and (7) hold. Then for any \(t\in[T]\), there holds_
\[1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{t}\|_{2}^{2}]\leq\Big{(}\frac{16e^{2}\eta^{3}t^{2}\hat{\rho}^{2}}{n^{2}}+\frac{16e\eta^{2}t\hat{\rho}}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}+2\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}+2\eta T\big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\big{]}.\]
Proof.: For any \(\mathbf{W}\in\mathbb{R}^{md+m^{2}}\) and \(\alpha\in[0,1]\), define \(\mathbf{W}(\alpha):=\mathbf{W}_{t}+\alpha(\mathbf{W}-\mathbf{W}_{t})\). Similar to (B.18), according to Lemma B.1 we can show that
\[\lambda_{\min}\big{(}\nabla^{2}L_{S}(\mathbf{W}(\alpha))\big{)}\!\geq\!-C_{ \mathbf{W}(\alpha)}\Big{(}\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}} {m^{2c-1/2}}\|\mathbf{W}(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m ^{2c-1}}\big{)}\big{(}\|\mathbf{W}\!-\!\mathbf{W}_{t}\|_{2}+\|\mathbf{W}_{t} \|_{2}\big{)}\!+\!\sqrt{2c_{0}}\Big{)},\]
where \(C_{\mathbf{W}(\alpha)}=\frac{B_{\sigma^{\prime}}^{2}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2}}{m^{3c}}\big{\|}\mathbf{W}(\alpha)^{(2)}\big{\|}_{2}^{2}+\Big{(}\frac{B_{\sigma^{\prime}}B_{\sigma^{\prime\prime}}c_{\mathbf{x}}^{2}}{m^{2c-\frac{1}{2}}}+\frac{2B_{\sigma^{\prime\prime}}B_{\sigma}c_{\mathbf{x}}}{m^{3c-\frac{1}{2}}}\Big{)}\|\mathbf{W}(\alpha)^{(2)}\|_{2}+\frac{B_{\sigma^{\prime\prime}}B_{\sigma}^{2}}{m^{3c-1}}+\frac{2B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-\frac{1}{2}}}\), as in (B.6).
Let \(\hat{C}_{\mathbf{W}}=4B_{1}\big{(}m^{-3c}(\|\mathbf{W}\|_{2}^{2}+4c_{0}\eta T +2\|\mathbf{W}_{0}\|_{2}^{2})+m^{\frac{1}{2}-2c}(\|\mathbf{W}\|_{2}+\sqrt{2c_ {0}\eta T}+\|\mathbf{W}_{0}\|_{2})\big{)}\). According to Lemma B.2, we can verify that \(C_{\mathbf{W}(\alpha)}\leq\hat{C}_{\mathbf{W}}\) for any \(\alpha\in[0,1]\).
Now, let
\[g(\alpha)\!:=\!L_{S}(\mathbf{W}(\alpha))\!+\!\frac{\alpha^{2}\hat {C}_{\mathbf{W}}}{2}\Big{(}\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}} }{m^{2c-1/2}}\|\mathbf{W}(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}}B_{\sigma}}{ m^{2c-1}}\big{)}\big{(}\|\mathbf{W}\!-\!\mathbf{W}_{t}\|_{2}\!+\!\|\mathbf{W}_{t} \|_{2}\big{)}\!+\!\sqrt{2c_{0}}\Big{)}\] \[\qquad\times(1\vee\mathbb{E}[\|\mathbf{W}-\mathbf{W}_{t}\|_{2}^{ 2}]).\]
It is obvious that \(g(\alpha)\) is convex in \(\alpha\in[0,1]\). Similar to the proof of Lemma A.6, by convexity of \(g\) and smoothness of the loss we can show that
\[\frac{1}{t}\sum_{s=0}^{t-1}\mathbb{E}[L_{S}(\mathbf{W}_{s})]+\frac {\mathbb{E}\big{[}\|\mathbf{W}_{t}-\mathbf{W}\|_{2}^{2}\big{]}}{2\eta t}\] \[\leq\mathbb{E}[L_{S}(\mathbf{W})]+\frac{\mathbb{E}\big{[}\| \mathbf{W}\!-\!\mathbf{W}_{0}\|_{2}^{2}\big{]}}{2\eta t}+\frac{\hat{C}_{ \mathbf{W}}}{2t}\sum_{s=0}^{t-1}\Big{(}\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{ \mathbf{x}}}{m^{2c-1/2}}\|\mathbf{W}(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}} B_{\sigma}}{m^{2c-1}}\big{)}\big{(}\mathbb{E}[\|\mathbf{W}\!-\!\mathbf{W}_{s}\|_{2}]\] \[\quad+\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2}\big{)}+\sqrt{2c_ {0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}\!-\!\mathbf{W}_{s}\|_{2}^{2}]).\] (B.22)
Combining the above inequality with Theorem 6 and letting \(\mathbf{W}=\mathbf{W}_{\frac{1}{\eta T}}^{*}\), we have

\[\frac{\mathbb{E}\big{[}\|\mathbf{W}_{t}-\mathbf{W}_{\frac{1}{\eta T}}^{*}\|_{2}^{2}\big{]}}{2\eta t}\] \[\leq\frac{1}{t}\sum_{s=1}^{t-1}\big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-\mathbb{E}[L(\mathbf{W}_{s})]\big{]}+\frac{\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta t}+\Big{(}\frac{4e^{2}\eta^{2}t\hat{\rho}^{2}}{n^{2}}+\frac{4e\eta\hat{\rho}}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}\] \[\quad+\frac{1}{2t}\sum_{s=0}^{t-1}\hat{C}_{\mathbf{W}_{\frac{1}{\eta T}}^{*}}\Big{(}\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-1/2}}\|\mathbf{W}(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{)}\big{(}\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}]+\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2}\big{)}+\sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}])\] \[\leq L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})+\frac{\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta t}+\Big{(}\frac{4e^{2}\eta^{2}t\hat{\rho}^{2}}{n^{2}}+\frac{4e\eta\hat{\rho}}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}\] \[\quad+\frac{1}{2t}\sum_{s=0}^{t-1}\hat{C}_{\mathbf{W}_{\frac{1}{\eta T}}^{*}}\Big{(}\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-1/2}}\|\mathbf{W}(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{)}\big{(}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}+\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2}\big{)}+\sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{s}\|_{2}^{2}]),\] (B.23)

where in the second inequality we used \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-\mathbb{E}[L(\mathbf{W}_{s})]\leq L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\), which holds since \(L(\mathbf{W}^{*})\leq\mathbb{E}[L(\mathbf{W}_{s})]\).
According to Lemma B.2 we can get
\[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2}\leq\|{\bf W}_{\frac{1}{\eta T} }^{*}-{\bf W}_{0}\|_{2}+\|{\bf W}_{s}-{\bf W}_{0}\|_{2}\leq\|{\bf W}_{\frac{1}{ \eta T}}^{*}-{\bf W}_{0}\|_{2}+\sqrt{2c_{0}\eta T}.\] (B.24)
Then there holds
\[\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\bf x}}{m^{2c-1/2}}\|{\bf W }(\alpha)\|_{2}+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{)}\big{(}\| {\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2}+\sqrt{2c_{0}\eta T}+\|{\bf W}_ {0}\|_{2}\big{)}+\sqrt{2c_{0}}\] \[\leq\Big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\bf x}}{m^{2c-1/2}}(2 \sqrt{2c_{0}\eta T}+2\|{\bf W}_{0}\|_{2}+\|{\bf W}_{\frac{1}{\eta T}}^{*}-{ \bf W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\Big{)}(2 \sqrt{2c_{0}\eta T}+\|{\bf W}_{0}\|_{2}+\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W }_{0}\|_{2})+\sqrt{2c_{0}}.\] (B.25)
Plugging the above inequality back into (B.23) and multiplying both sides by \(2\eta t\) yields
\[\mathbb{E}\big{[}\|{\bf W}_{t}-{\bf W}_{\frac{1}{\eta T}}^{*}\|_{ 2}^{2}\big{]} \leq\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2}^{2}+\eta \hat{C}_{{\bf W}_{\frac{1}{\eta T}}^{*}}\Big{(}\Big{(}\frac{B_{\sigma^{ \prime}}^{2}c_{\bf x}}{m^{2c-1/2}}(2\sqrt{2c_{0}\eta T}+\|{\bf W}_{0}\|_{2}+\| {\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B _{\sigma}}{m^{2c-1}}\Big{)}\] \[\quad\times(2\sqrt{2\eta Tc_{0}}+\|{\bf W}_{0}\|_{2}+\|{\bf W}_{ \frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2})+\sqrt{2c_{0}}\Big{)}\sum_{s=0}^{t-1}( 1\vee\mathbb{E}[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2}^{2}])\] \[\quad+\Big{(}\frac{8e^{2}\eta^{3}t^{2}\hat{\rho}^{2}}{n^{2}}+ \frac{8e\eta^{2}t\hat{\rho}}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}( {\bf W}_{s})\big{]}+2\eta T\big{[}L({\bf W}_{\frac{1}{\eta T}}^{*})-L({\bf W} ^{*})\big{]}.\]
Let \(x=\max_{s\in[T]}\mathbb{E}[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2} ^{2}]\lor 1\). Then the above inequality implies
\[x \leq\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2}^{2}+\eta T \hat{C}_{{\bf W}_{\frac{1}{\eta T}}^{*}}\Big{(}\Big{(}\frac{B_{\sigma^{ \prime}}^{2}c_{\bf x}}{m^{2c-1/2}}(2\sqrt{2c_{0}\eta T}+\|{\bf W}_{0}\|_{2}+\| {\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B_ {\sigma}}{m^{2c-1}}\Big{)}\] \[\times(2\sqrt{2\eta Tc_{0}}+\|{\bf W}_{0}\|_{2}+\|{\bf W}_{\frac{1 }{\eta T}}^{*}-{\bf W}_{0}\|_{2})+\sqrt{2c_{0}}\Big{)}x+\Big{(}\frac{8e^{2} \eta^{3}t^{2}\hat{\rho}^{2}}{n^{2}}+\frac{8e\eta^{2}t\hat{\rho}}{n}\Big{)} \sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}({\bf W}_{s})\big{]}\] \[+2\eta T\big{[}L({\bf W}_{\frac{1}{\eta T}}^{*})-L({\bf W}^{*}) \big{]}.\]
Note that condition (7) implies that \(\eta T\hat{C}_{{\bf W}_{\frac{1}{\eta T}}^{*}}\big{(}(\frac{B_{\sigma^{\prime} }^{2}c_{\bf x}}{m^{2c-1/2}}(2\sqrt{2c_{0}\eta T}+\|{\bf W}_{0}\|_{2}+\|{\bf W}_{ \frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B_{\sigma}}{ m^{2c-1}})(2\sqrt{2\eta Tc_{0}}+\|{\bf W}_{0}\|_{2}+\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W }_{0}\|_{2})+\sqrt{2c_{0}}\big{)}\leq\frac{1}{2}\), then there holds
\[x\leq\Big{(}\frac{16e^{2}\eta^{3}t^{2}\hat{\rho}^{2}}{n^{2}}+ \frac{16e\eta^{2}t\hat{\rho}}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}( {\bf W}_{s})\big{]}+2\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2}^{2}+2 \eta T\big{[}L({\bf W}_{\frac{1}{\eta T}}^{*})-L({\bf W}^{*})\big{]}.\]
It then follows that
\[1\vee\mathbb{E}[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{t}\|_{2}^{2}]\leq \Big{(}\frac{16e^{2}\eta^{3}t^{2}\hat{\rho}^{2}}{n^{2}}+\frac{16e\eta^{2}t\hat{ \rho}}{n}\Big{)}\sum_{s=0}^{t-1}\mathbb{E}\big{[}L_{S}({\bf W}_{s})\big{]}+2\|{ \bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2}^{2}+2\eta T\big{[}L({\bf W}_{ \frac{1}{\eta T}}^{*})-L({\bf W}^{*})\big{]}.\]
This completes the proof.
Proof of Theorem 7.: Combining (B.25) and (B.22) with \({\bf W}={\bf W}_{\frac{1}{\eta T}}^{*}\) together yields
\[\frac{1}{T}\sum_{s=0}^{T-1}\mathbb{E}[L_{S}({\bf W}_{s})] \leq\mathbb{E}[L_{S}({\bf W}_{\frac{1}{\eta T}}^{*})]+\frac{\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2}^{2}}{2\eta T}+\frac{\hat{C}_{{\bf W}_{\frac{1}{\eta T}}^{*}}}{2T}\sum_{s=0}^{T-1}\Big{(}\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\bf x}}{m^{2c-1/2}}(2\sqrt{2c_{0}\eta T}+\|{\bf W}_{0}\|_{2}+\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2})\] \[\quad+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{)}\big{(}\mathbb{E}[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2}]+2\sqrt{2c_{0}\eta T}+\|{\bf W}_{0}\|_{2}\big{)}+\sqrt{2c_{0}}\Big{)}(1\vee\mathbb{E}[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2}^{2}])\] \[\leq\mathbb{E}[L_{S}({\bf W}_{\frac{1}{\eta T}}^{*})]+\frac{1}{2T}\hat{C}_{{\bf W}_{\frac{1}{\eta T}}^{*}}\,\hat{B}_{{\bf W}_{\frac{1}{\eta T}}^{*}}\sum_{s=0}^{T-1}1\vee\mathbb{E}[\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{s}\|_{2}^{2}]+\frac{\|{\bf W}_{\frac{1}{\eta T}}^{*}-{\bf W}_{0}\|_{2}^{2}}{2\eta T},\]
where \(\hat{B}_{\mathbf{W}^{*}_{\frac{1}{\eta T}}}=\big{(}\frac{B_{\sigma^{\prime}}^{2}c_{\mathbf{x}}}{m^{2c-1/2}}(2\sqrt{2c_{0}\eta T}+\|\mathbf{W}_{0}\|_{2}+\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{0}\|_{2})+\frac{B_{\sigma^{\prime}}B_{\sigma}}{m^{2c-1}}\big{)}(2\sqrt{2\eta Tc_{0}}+\|\mathbf{W}_{0}\|_{2}+\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{0}\|_{2})+\sqrt{2c_{0}}\) and in the last inequality we used (B.24).
By monotonically decreasing of \(\{L_{S}(\mathbf{W}_{t})\}\) and Lemma B.5, we further know
\[\mathbb{E}[L_{S}(\mathbf{W}_{T})] \leq\mathbb{E}[L_{S}(\mathbf{W}^{*}_{\frac{1}{\eta T}})]+\frac{\hat{C}_{\mathbf{W}^{*}_{\frac{1}{\eta T}}}\hat{B}_{\mathbf{W}^{*}_{\frac{1}{\eta T}}}}{2T}\sum_{s=0}^{T-1}1\vee\mathbb{E}[\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{s}\|_{2}^{2}]+\frac{\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta T}\] \[\leq\mathbb{E}[L_{S}(\mathbf{W}^{*}_{\frac{1}{\eta T}})]+\hat{C}_{\mathbf{W}^{*}_{\frac{1}{\eta T}}}\hat{B}_{\mathbf{W}^{*}_{\frac{1}{\eta T}}}\Big{(}\Big{(}\frac{8e^{2}\eta^{3}T^{2}\hat{\rho}^{2}}{n^{2}}+\frac{8e\eta^{2}T\hat{\rho}}{n}\Big{)}\sum_{s=0}^{T-1}\mathbb{E}\big{[}L_{S}(\mathbf{W}_{s})\big{]}+\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{0}\|_{2}^{2}+\eta T\big{[}L(\mathbf{W}^{*}_{\frac{1}{\eta T}})-L(\mathbf{W}^{*})\big{]}\Big{)}+\frac{\|\mathbf{W}^{*}_{\frac{1}{\eta T}}-\mathbf{W}_{0}\|_{2}^{2}}{2\eta T},\]
where we used condition (7) and \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})-L(\mathbf{W}^{*})\leq\frac{1}{2\eta T}\| \mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\).
Combining the above two inequalities with \(\Lambda_{\frac{1}{\eta T}}=L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T }\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}-L(\mathbf{W}^{*} )\leq\frac{1}{2\eta T}\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2}\) together we get
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]\] \[=\Big{[}\mathbb{E}[L(\mathbf{W}_{T})-L_{S}(\mathbf{W}_{T})\Big{]} +\mathbb{E}\Big{[}L_{S}(\mathbf{W}_{T})-\big{(}L_{S}(\mathbf{W}_{\frac{1}{ \eta T}}^{*})+\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_ {0}\|_{2}^{2}\big{)}\Big{]}\] \[\quad+\Big{[}L_{S}(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2 \eta T}\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}-L(\mathbf{ W}^{*})\Big{]}\] \[=\mathcal{O}\Big{(}\frac{\eta T}{n}\Big{(}\frac{\eta T}{n}+1\Big{)} \Big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T}\|\mathbf{W}_{ \frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\Big{]}+\frac{1}{\eta T}\big{(} \|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}+\|\mathbf{W}^{*} -\mathbf{W}_{0}\|_{2}^{2}\big{)}\Big{)}.\]
If \(n\gtrsim\eta T\), then
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta T }{n}\Big{[}L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T}\|\mathbf{W}_ {\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\Big{]}+\frac{1}{\eta T}\big{(} \|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}+\|\mathbf{W}^{*} -\mathbf{W}_{0}\|_{2}^{2}\big{)}\Big{)}.\]
Finally, note that \(L(\mathbf{W}_{\frac{1}{\eta T}}^{*})+\frac{1}{2\eta T}\|\mathbf{W}_{\frac{1}{ \eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}=L(\mathbf{W}^{*})+\Lambda_{\frac{1}{ \eta T}}\) and \(\|\mathbf{W}_{\frac{1}{\eta T}}^{*}-\mathbf{W}_{0}\|_{2}^{2}\leq\|\mathbf{W}^{* }-\mathbf{W}_{0}\|_{2}^{2}\), we have
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta T }{n}L(\mathbf{W}^{*})+\frac{1}{\eta T}\|\mathbf{W}^{*}-\mathbf{W}_{0}\|_{2}^{2 }\Big{)}.\]
The proof is completed.
Proof of Corollary 9.: **Part (a)**. We first consider the case \(c\in[9/16,1]\). To ensure conditions (6) and (7) hold, we choose \(m\asymp(\eta T)^{4}\) for this case. Then according to Theorem 8 and Assumption 3, there holds
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\Big{(}\frac{\eta T }{n}+(\eta T)^{3-8\mu}\Big{)}.\]
Here, the condition \(\mu\geq 1/2\) ensures that \(3-8\mu<0\). Hence, the bound will vanish as \(\eta T\) tends to \(0\). Further, if \(n^{\frac{1}{2(8\mu-3)}}\lesssim\eta T\lesssim\sqrt{n}\) (the existence of \(\eta T\) is ensured by \(\mu\geq 1/2\)), then there holds \(\eta T/n=\mathcal{O}(n^{-1/2})\) and \((\eta T)^{3-8\mu}=\mathcal{O}(n^{-1/2})\). That is
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{ \sqrt{n}}\big{)}.\]
For the case \(c\in(1/2,9/16)\), we choose \(m\asymp(\eta T)^{\frac{1}{4c-2}}\) and \(n^{\frac{2c-1}{8\mu-3}}\lesssim\eta T\lesssim\sqrt{n}\). From Theorem 8 and Assumption 3 we have
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{ \sqrt{n}}\big{)}.\]
The first part of the theorem is proved.
**Part (b).** For the case \(c\in[9/16,1]\), by choosing \(m\asymp(\eta T)^{4}\) and \(\eta T\gtrsim n^{\frac{1}{8\mu-3}}\), from Theorem 8 and Assumption 3 we have
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{n} \big{)}.\]
For the case \(c\in(1/2,9/16)\), by choosing \(m\asymp(\eta T)^{\frac{1}{4c-2}}\) and \(\eta T\gtrsim n^{\frac{4c-2}{4c+2\mu-3}}\), from Theorem 8 and Assumption 3 we have
\[\mathbb{E}[L(\mathbf{W}_{T})-L(\mathbf{W}^{*})]=\mathcal{O}\big{(}\frac{1}{n} \big{)}.\]
The proof is completed.
### More Discussion on Related Works
[36] derived a lower bound on the minimum eigenvalue of the Hessian for a three-layer NN in which the first-layer activation is linear and the second activation is smooth. They proved that the weak convexity parameter of the empirical risk scales with \(m^{\frac{1}{2}-2c}\) when optimizing the first and third layers of weights with Lipschitz and convex losses. In contrast, we train the first and the second layers of the network, with general smooth activation functions for both layers, and our result shows that the weak convexity parameter of the least squares loss also scales with \(m^{\frac{1}{2}-2c}\).
|
2306.09239 | Exploiting the Brain's Network Structure for Automatic Identification of
ADHD Subjects | Attention Deficit Hyperactivity Disorder (ADHD) is a common behavioral problem
affecting children. In this work, we investigate the automatic classification
of ADHD subjects using the resting state Functional Magnetic Resonance Imaging
(fMRI) sequences of the brain. We show that the brain can be modeled as a
functional network, and certain properties of the networks differ in ADHD
subjects from control subjects. We compute the pairwise correlation of brain
voxels' activity over the time frame of the experimental protocol which helps
to model the function of a brain as a network. Different network features are
computed for each of the voxels constructing the network. The concatenation of
the network features of all the voxels in a brain serves as the feature vector.
Feature vectors from a set of subjects are then used to train a PCA-LDA
(principal component analysis-linear discriminant analysis) based classifier.
We hypothesized that ADHD-related differences lie in some specific regions of
the brain and using features only from those regions is sufficient to
discriminate ADHD and control subjects. We propose a method to create a brain
mask that includes the useful regions only and demonstrate that using the
feature from the masked regions improves classification accuracy on the test
data set. We train our classifier with 776 subjects and test on 171 subjects
provided by The Neuro Bureau for the ADHD-200 challenge. We demonstrate the
utility of graph-motif features, specifically the maps that represent the
frequency of participation of voxels in network cycles of length 3. The best
classification performance (69.59%) is achieved using 3-cycle map features with
masking. Our proposed approach holds promise in being able to diagnose and
understand the disorder. | Soumyabrata Dey, Ravishankar Rao, Mubarak Shah | 2023-06-15T16:22:57Z | http://arxiv.org/abs/2306.09239v1 | # Exploiting the Brain's Network Structure for Automatic Identification of ADHD Subjects
###### Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a common behavioral problem affecting children. In this work, we investigate the automatic classification of ADHD subjects using the resting state Functional Magnetic Resonance Imaging (fMRI) sequences of the brain. We show that the brain can be modeled as a functional network, and certain properties of the networks differ in ADHD subjects from control subjects. We compute the pairwise correlation of brain voxels' activity over the time frame of the experimental protocol, which helps to model the function of a brain as a network. Different network features are computed for each of the voxels constructing the network. The concatenation of the network features of all the voxels in a brain serves as the feature vector. Feature vectors from a set of subjects are then used to train a PCA-LDA (principal component analysis-linear discriminant analysis) based classifier. We hypothesized that ADHD-related differences lie in some specific regions of the brain and that using features only from those regions is sufficient to discriminate ADHD and control subjects. We propose a method to create a brain mask that includes the useful regions only and demonstrate that using the features from the masked regions improves classification accuracy on the test data set. We train our classifier with \(776\) subjects and test on \(171\) subjects provided by The Neuro Bureau for the ADHD-\(200\) challenge. We demonstrate the utility of graph-motif features, specifically the maps that represent the frequency of participation of voxels in network cycles of length 3. The best classification performance (69.59%) is achieved using 3-cycle map features with masking. Our proposed approach holds promise in being able to diagnose and understand the disorder.
## 1 Introduction
Attention Deficit Hyperactivity Disorder (ADHD) is a common behavioral disorder affecting children. Approximately 3-5% of school-aged children are diagnosed with ADHD. Currently, no well-known biological measure exists to diagnose ADHD; instead, doctors rely on behavioral symptoms to identify it. To understand the cause of the disorder more fundamentally, researchers are using new structural and functional imaging tools such as MRI and fMRI. fMRI has been widely used to study the functioning of the brain. It provides high-quality visualization of spatio-temporal activity within a brain, which can be used to compare the functioning of normal brains against those with disorders.
fMRI has been used for many different functional studies of the brain. Some researchers have used task-related fMRI data, in which the test subjects perform conscious tasks in response to input stimuli.
Others have used resting-state brain fMRI data. The brain remains active even during rest, when it is not engaged in an attentive task. Raichle et al. [1] identified several brain areas such as the MPFC, PCC and precuneus that are active during rest. These areas form part of a functional network known as the resting-state network or default mode network (DMN) [2], [3]. The literature [4], [2], [3] tends to use interchangeably the concepts of resting-state brain networks and the DMN as defined by Raichle in [1]. We compare the brain regions that we have found in the current ADHD data set with the components of the DMN described by Raichle in [1]. It is believed that the DMN may be responsible for synchronizing all parts of the brain's activity; disruptions to this network may cause a number of complex brain disorders [5]. Researchers have studied neural substrates relevant to ADHD-related behaviors, such as attention lapses, and identified the DMN as a key area for better understanding the problem [6]. In this study we use resting-state brain fMRI data and hypothesize that the differences between ADHD-conditioned and control brains lie in the variation of the functional connections of the DMN.
Many studies have been performed to identify functional differences related to ADHD. Most of the approaches use group-level analysis to deduce the statistical differences between ADHD-conditioned and control groups. Structural MRI analysis suggests that there are abnormalities in ADHD brains, specifically in the frontal lobes, basal ganglia, parietal lobe, occipital lobe and cerebellum [7; 8; 9; 10]. In another set of studies, ADHD brains were analyzed using task-related fMRI data. Bush et al. [11] found significantly low activity in the anterior cingulate cortex when ADHD subjects were asked to perform the Counting Stroop during fMRI. Durston et al. [12] showed that ADHD-conditioned children have difficulty performing the go/no-go task and display decreased activity in the frontostriatal regions. Teicher et al. [13] demonstrated that boys with ADHD have higher T2 relaxation time in the putamen, which is directly connected to a child's capacity to sit still. A third set of work was done using resting-state brain fMRI to locate any abnormalities in the DMN. Castellanos et al. [14] performed Generalized Linear Model-based regression analysis on the whole brain with respect to three frontal foci of the DMN, and found weaker negatively correlated activity in the precuneus/anterior cingulate cortex in ADHD subjects. Tian et al. [15] found functional abnormalities in the dorsal anterior cingulate cortex; Cao et al. [16] showed decreased regional homogeneity in the frontal-striatal-cerebellar circuits, but increased regional homogeneity in the occipital cortex, among boys with ADHD. Zang et al. [17] verified decreased Amplitude of Low-Frequency Fluctuation (ALFF) in the right inferior frontal cortex, left sensorimotor cortex, bilateral cerebellum, and the vermis, as well as increased ALFF in the right anterior cingulate cortex, left sensorimotor cortex, and bilateral brainstem.
While group-level analysis can suggest statistical differences between two groups, it may not be that useful for clinical diagnosis at the individual level. There have been relatively few investigations of individual-level classification of ADHD subjects [18; 19; 20; 21]. One such study was performed by Zhu et al. [22], who used a PCA-LDA based classifier to separate ADHD and control subjects at the individual level. Unlike our network connectivity feature, which can connect all the synchronous regions of the whole brain, they used a regional homogeneity based feature for classification. Also, the experiments were performed on only \(20\) subjects, which is not conclusive.

Figure 1: Overview of our approach: Compute an \(N\times N\) correlation matrix (\(N\) is the number of voxels) using fMRI data; Compute the adjacency matrix by thresholding the low correlation values to generate a network; Compute network features such as node degree and cycle count for each node of the network; Generate a mask for the brain regions which are believed to be most effective for classification; Extract feature values within the generated brain mask and classify subjects using the PCA-LDA classifier.
Our algorithm exploits the topological differences between the functional networks of ADHD and control brains. The different steps of our approach are described in Figure 1. The input to our algorithm is the brain fMRI sequence of each subject. fMRI data can be viewed as a 4-D video such that the 3-D volume of the brain is divided into small voxels and imaged for a certain duration. The data can also be viewed as a time series of intensity values for each of the voxels. The correlation of these intensity time series can be an indication of how synchronous the activities of two voxels are, and higher correlation values suggest that two voxels are working in synchronization. A functional network structure is generated for the brain of each of the subjects under study by computing the correlations for all possible pairs of voxels and establishing a connection between any pair of voxels whose correlation value is sufficiently high. Different network features, such as degree maps, cycle maps and weight maps, are computed from the network to capture topological differences between ADHD and control subjects. We provide a detailed description of all the network features in later sections of the article. A brain mask is computed that includes only the regions with useful information for classifying ADHD and control subjects. For the rest of the article, we refer to this mask as a 'useful region mask'. The details of the useful region mask computation procedure are described in Section 2.2. Finally, the network features from the voxels within the useful region mask are extracted to train a PCA-LDA based classifier. We have tested the performance of each of the computed network features on the training data set from the Kennedy Krieger Institute, and selected two different kinds of network features, the degree map and the 3-cycle map, for the experiments on the full data set.
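For readers more familiar with code than with the pipeline diagram, the final classification stage admits a very compact description. The sketch below is a stand-in rather than the authors' implementation: the feature dimension, the number of retained principal components, and the synthetic data are placeholder assumptions; in the actual pipeline each row would be a subject's masked network-feature map.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder feature matrices: one row per subject, labels 0 = control, 1 = ADHD.
X_train = rng.standard_normal((776, 5000))
y_train = rng.integers(0, 2, size=776)
X_test = rng.standard_normal((171, 5000))

# PCA first compresses the very high-dimensional voxel features, then LDA learns
# a linear decision rule in the reduced space.
clf = make_pipeline(PCA(n_components=100), LinearDiscriminantAnalysis())
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
```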
In our work, we have performed experiments on a large, challenging data set which includes subjects from different races, age groups, and data-capturing sites. We propose a new approach for the automatic classification of ADHD subjects, and we believe that our work will be helpful to the medical imaging community.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Center** & **Sub Cut** & **Age (yrs.)** & **Male** & **Female** & **Control** & **Combined** & **Hyperactive** & **Inattentive** \\ \hline \multicolumn{10}{|c|}{**Training Data Set**} \\ \hline Kennedy Krieger Institute & \(83\) & \(8\)-\(13\) & \(46\) & \(37\) & \(61\) & \(16\) & \(1\) & \(5\) \\ Neuro Image Sample & \(48\) & \(11\)-\(22\) & \(31\) & \(17\) & \(23\) & \(18\) & \(6\) & \(1\) \\ New York University & \(222\) & \(7\)-\(18\) & \(145\) & \(77\) & \(99\) & \(77\) & \(2\) & \(44\) \\ Oregon Health \& Sci. Univ. & \(79\) & \(7\)-\(12\) & \(43\) & \(36\) & \(42\) & \(23\) & \(2\) & \(12\) \\ Peking University & \(152\) & \(8\)-\(17\) & \(102\) & \(50\) & \(93\) & \(22\) & \(0\) & \(37\) \\ University of Pittsburg & \(89\) & \(10\)-\(20\) & \(46\) & \(43\) & \(89\) & \(0\) & \(0\) & \(0\) \\ Wash. Uni. in St. Louis & \(61\) & \(7\)-\(22\) & \(33\) & \(28\) & \(61\) & \(0\) & \(0\) & \(0\) \\ \hline \multicolumn{10}{|c|}{**Test Data Set**} \\ \hline Kennedy Krieger Institute & \(11\) & \(8\)-\(12\) & \(10\) & \(1\) & \(8\) & \(3\) & \(0\) & \(0\) \\ Neuro Image Sample & \(25\) & \(13\)-\(26\) & \(12\) & \(13\) & \(14\) & \(11\) & \(0\) & \(0\) \\ New York University & \(41\) & \(7\)-\(17\) & \(28\) & \(13\) & \(12\) & \(22\) & \(0\) & \(7\) \\ Oregon Health \& Sci. Univ. & \(34\) & \(7\)-\(12\) & \(17\) & \(17\) & \(27\) & \(5\) & \(1\) & \(1\) \\ Peking University & \(51\) & \(8\)-\(15\) & \(32\) & \(19\) & \(27\) & \(9\) & \(1\) & \(14\) \\ University of Pittsburg & \(9\) & \(14\)-\(17\) & \(7\) & \(2\) & \(5\) & \(0\) & \(0\) & \(4\) \\ Brown University & \(26\) & \(8\)-\(18\) & \(9\) & \(17\) & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the data set released for ADHD-200 competition. A total of eight centers contributed to the data. The labels of the Brown University test set are not yet released.
Materials and Method
### Data
We use the data provided by the Neuro Bureau for the ADHD-200 competition, which consists of \(776\) training subjects and \(197\) test subjects. Eight different centers contributed to the compilation of the whole data set, which makes the data diverse as well as complex. Different phenotypic information, such as age, gender, handedness, and IQ, is also provided for each subject. See Table 1 for an overview of the data set. All research conducted by ADHD-200 data contributing sites was conducted with local IRB approval, and data were contributed in compliance with local IRB protocols. In compliance with HIPAA Privacy Rules, all data used for the experiments of this article are fully anonymized. The competition organizers made sure that the 18 patient identifiers were removed, as well as face information.
For all our experiments we have used preprocessed resting-state fMRI data registered in a \(4\times 4\times 4\) mm voxel resolution Montreal Neurological Institute (MNI) space, with nuisance variance removed, filtered using a bandpass filter (0.009 Hz \(<f<\) 0.08 Hz) and blurred with a \(6\)-mm FWHM Gaussian filter. All the fMRI scans are motion corrected to the first image of the time series. We have used a binary mask, provided with each of the subjects, to find the voxels inside the brain volume. All the fMRI data volumes are of size \(49\times 58\times 47\) voxels, but the number of samples across time varies depending on the center where the data was captured. Further information regarding the data and the preprocessing steps is provided in [23].
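For reference, the stated band-pass step can be reproduced on raw voxel time series with a few lines of SciPy; note that the preprocessed release already includes this filtering, and the repetition time `tr` below is an assumption since it varies by acquisition site.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ts, tr=2.0, low=0.009, high=0.08, order=2):
    """Zero-phase Butterworth band-pass over the last axis (time) of `ts`."""
    nyquist = 0.5 / tr                          # sampling rate is 1/tr Hz
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, ts, axis=-1)

ts = np.random.default_rng(0).standard_normal((1000, 180))  # 1000 voxels x 180 frames
filtered = bandpass(ts)
```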
Though no quality control is performed on the data, a quality score is provided with each image file of every subject. The voxel-wise z-scores are thresholded and summed over all the voxels to compute the quality score of an image file. Images with low scores are considered to be better. We have not considered the quality scores in our study.
### Method
Network motifs such as node degree distributions, cycles, etc., are analyzed in different disciplines of science to understand the systems being studied, and neuroscience is not an exception [24], [25], [26]. We use different graph-theoretic concepts for our study. We assume that the activity of a brain can be modeled as a functional network where the voxels are considered as the nodes, which are connected with each other based on the similarity of their activity over the time domain. In this article we use the terms voxel and node interchangeably. The time series of a node is represented in boldface notation. As the first step of the algorithm, we extract the time series for all the voxels and reorganize them as a separate 2-D matrix for each of the subjects in the data set; this is illustrated in the second step of Figure 1. Next, the correlation between all possible voxel pairs is computed. If a subject contains \(N\) voxels, a correlation matrix of size \(N\times N\) is constructed, where the \(i^{th}\) row of the matrix corresponds to the pairwise correlation values of the \(i^{th}\) voxel with all other voxels within the anatomical mask of the subject.
For any two voxels, if the time series are **u** and **v** respectively, the correlation can be computed as,
\[r=\frac{(T\sum_{i=1}^{T}u_{i}v_{i})-(\sum_{i=1}^{T}u_{i})(\sum_{i=1}^{T}v_{i} )}{\sqrt{[T\sum_{i=1}^{T}u_{i}^{2}-(\sum_{i=1}^{T}u_{i})^{2}][T\sum_{i=1}^{T} v_{i}^{2}-(\sum_{i=1}^{T}v_{i})^{2}]}}, \tag{1}\]
where \(T\) is the length of the time series, **u** = [\(u_{1},u_{2},...,u_{T}\)], **v** = [\(v_{1},v_{2},...,v_{T}\)].
We normalize all the time series between [\(-1\), \(1\)] before correlation computation. Next, we threshold all the values of the correlation matrix to get a binary map of zeros and ones. This binary map can be
considered as the adjacency matrix of a graph where the \(i^{th}\) voxel is connected to all the voxels for which non-zero values are present in the \(i^{th}\) row of the matrix. Note that we can consider two voxels to be connected by an edge when the correlation is highly positive, highly negative, or simply when the absolute value of the correlation is high. We have computed three different networks, considering high positive, high negative and high absolute correlation values, respectively.
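A minimal sketch of this graph construction, assuming `numpy` and the \(N\times T\) `time_series` matrix from the earlier listing; note that `np.corrcoef` computes exactly the Pearson coefficient of Eq. (1):

```python
import numpy as np

def build_adjacency(time_series, threshold=0.8, mode="positive"):
    """Threshold the voxel-pair correlation matrix into a binary graph."""
    ts = time_series.astype(float)
    # Normalize each time series to [-1, 1] (guarding constant rows).
    lo = ts.min(axis=1, keepdims=True)
    hi = ts.max(axis=1, keepdims=True)
    ts = 2 * (ts - lo) / np.where(hi > lo, hi - lo, 1.0) - 1

    corr = np.corrcoef(ts)        # N x N Pearson correlations, Eq. (1)
    np.fill_diagonal(corr, 0.0)   # no self-loops

    if mode == "positive":        # high positive correlations
        return corr > threshold
    if mode == "negative":        # high negative correlations
        return corr < -threshold
    return np.abs(corr) > threshold  # high absolute correlations
```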
#### 2.2.1 Network Feature Computation
Once the graphs are constructed for each subject of the data set, we compute different network features which can capture functional differences between the activity patterns of the brains of ADHD and control subjects. The feature values from all the voxels of a network constitute a feature map, such as the Degree Map, Cycle Map, etc. The network features computed are described below.
**Degree:** For each node in a network, the degree is the count of the other nodes it is connected to. In other words, the degree of a node is the number of edges attached to it.
**Varying Distance Degree:** Instead of considering the count of all the edges of a node as its degree, we group the edges based on their physical length and compute a separate degree for each of the groups. So, if we have \(n\) threshold values for edge length, say \(\{l_{1},l_{2},...,l_{n}\}\), we can compute \(n\) degrees, \(\{d_{1},d_{2},...,d_{n}\}\), of a node \(v\), where \(d_{i}\) is the count of all the edges connected to \(v\) with length between \(l_{i-1}\) and \(l_{i}\). Refer to Figure 2 for details. We use the Euclidean distance measure for the calculation of edge length. For the experiments, we have used threshold values of 20, 40, and 80 mm, where the average brain volume is approximately of size \(172\times 140\times 140\) mm. Hence, we get 4 degrees per node, which count edges of length 0-20, 20-40, 40-80 and greater than 80 mm, respectively. The thresholds are selected on an intuitive basis such that the different degrees capture local to global connectivity patterns. The average percentages of degrees from close to far range are found to be \(70.44\%\), \(16.54\%\), \(8.40\%\) and \(4.62\%\).
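The binning just described can be sketched as follows, reusing the adjacency matrix and in-mask voxel coordinates from the previous listings; voxel indices are scaled by the 4 mm voxel size before measuring Euclidean edge lengths (a full-brain run would need a blocked implementation, since the dense \(N\times N\) adjacency matrix is large).

```python
import numpy as np

def varying_distance_degrees(adj, coords, voxel_mm=4.0,
                             thresholds=(20.0, 40.0, 80.0, np.inf)):
    """Split each node's degree by physical edge length (mm bins)."""
    xyz = coords * voxel_mm                    # voxel indices -> mm
    ii, jj = np.nonzero(np.triu(adj, k=1))     # each undirected edge once
    lengths = np.linalg.norm(xyz[ii] - xyz[jj], axis=1)

    deg = np.zeros((adj.shape[0], len(thresholds)), dtype=int)
    lower = 0.0
    for b, upper in enumerate(thresholds):
        sel = (lengths > lower) & (lengths <= upper)
        np.add.at(deg[:, b], ii[sel], 1)       # an edge counts at both ends
        np.add.at(deg[:, b], jj[sel], 1)
        lower = upper
    return deg  # deg[v, b] = number of edges of v with length in bin b
```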
**L-cycle Count:** A path in a network is a sequence of distinct nodes which can be traversed in the given order using the connecting edges. A cycle, on the other hand, is a closed path in the network where the starting and ending node is the same and all other nodes are distinct. The L-cycle count of a node is the number of all possible distinct \(L\)-length cycles containing the node. Figure 2 illustrates this idea. The L-cycle count of a node is calculated by traversing all the \(L\)-length paths starting from the node and counting the paths which lead back to the starting node. The traversal can be performed using the breadth-first search algorithm. We have used different cycle lengths for our experiments.
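The traversal admits a direct sketch; for clarity the version below uses a recursive depth-first enumeration of the same paths (every cycle through `start` is found once in each direction, hence the final division by two).

```python
def l_cycle_count(adj_list, start, L):
    """Count distinct simple cycles of length L containing `start`.

    adj_list maps each node to the set of its neighbours.
    """
    def walk(node, depth, visited):
        if depth == L:  # path of L distinct nodes; close it if possible
            return 1 if start in adj_list[node] else 0
        count = 0
        for nxt in adj_list[node]:
            if nxt not in visited:
                visited.add(nxt)
                count += walk(nxt, depth + 1, visited)
                visited.remove(nxt)
        return count

    return walk(start, 1, {start}) // 2
```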
**Weight Sum:** Instead of constructing an adjacency matrix using a threshold on the correlation values, we assume every node is connected to all other nodes by weighted edges. The weight of the edge connecting a node pair is their correlation value. As the correlation values can be positive and negative, we can separately add up all the positive, negative and absolute edge weights of a node to get its sum of positive, negative and absolute weights.

Figure 2: **(A)** The degree of the node, highlighted in yellow, is the count of all the green nodes connected to it (i.e. \(8\)), while the varying distance degree is the counts of all the connected nodes in each of the bins defined by the three edge length thresholds (\(l_{1},l_{2},l_{3}\)) marked in blue. In this example the varying distance degrees of the yellow node are \(\{4,2,2\}\). **(B)** Shows all the distinct 3-cycles containing the node \(3\).
#### 2.2.2 PCA-LDA Classification
Once the computation of the network features is finished, we extract the features from all of the voxels within the useful region mask (the mask generation algorithm is described in the next subsection). Concatenating the feature values extracted from all the voxels generates one feature vector per subject. A PCA-LDA based classifier is trained separately on the sets of feature vectors computed for the different types of network features. Finally, the classifier is used for the automatic classification of the ADHD subjects.
It is expected that the characteristics of the computed networks are represented by their feature vectors. A feature vector of a network represents a point in the feature space, whose dimensionality equals the length of the vector. If the feature vectors of ADHD and control subjects are separable, then their corresponding points in the feature space should cluster in different locations. When a classifier is trained, it learns to partition the feature space in such a way that the feature vectors from each of the groups are ideally clustered in separate segments. Given the feature vector of a test example, the classifier can identify which segment of the feature space it belongs to and classify the test subject accordingly. Linear Discriminant Analysis (LDA) is a widely used data classification technique which maximizes the ratio of between-class variance to within-class variance to produce maximal separability. Mathematically, the objective is to maximize the following function:
\[J(w)=\frac{w^{T}S_{B}w}{w^{T}S_{W}w} \tag{2}\]
where \(S_{B}\) and \(S_{W}\) are the between-class and within-class scatter matrices, which can be formulated as follows:

\[S_{B}=(\mu^{(A)}-\mu^{(C)})(\mu^{(A)}-\mu^{(C)})^{T}, \tag{3}\]

\[S_{W}=\sum_{i=1}^{n_{A}}(x_{i}{}^{(A)}-\mu^{(A)})(x_{i}{}^{(A)}-\mu^{(A)})^{T}+\sum_{i=1}^{n_{C}}(x_{i}{}^{(C)}-\mu^{(C)})(x_{i}{}^{(C)}-\mu^{(C)})^{T}, \tag{4}\]
where \(n_{A}\) and \(n_{C}\) are the numbers of subjects, \(\mu^{(A)}\) and \(\mu^{(C)}\) are the mean feature vectors, and \(x_{i}{}^{(A)}\) and \(x_{i}{}^{(C)}\) are the \(i\)th feature vectors of the ADHD and control groups, respectively.
In many cases, the dimension of the feature space becomes so high that a proper partitioning of the space is difficult. For example, in our case, the dimension of the feature space is the number of voxels within the useful region mask, which is several thousand. Moreover, most of the dimensions do not contain any significant data variance. Principal Component Analysis (PCA) is a procedure to find a set of orthogonal directions, called principal components, along which the variance of the data is maximal. It then projects the data into the lower-dimensional subspace spanned by the principal components. The classifier can work efficiently on this subspace, which is significantly smaller in dimension than the original feature space. We use the first 40 and the first 100 principal components for the experiments on the KKI and full data sets, respectively, as they cover more than \(98\%\) of the data variance. We have included a plot of principal components vs. percent of data variance in the supplementary materials. Refer to [27] for details about PCA.
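In scikit-learn terms, the classifier described above is a two-stage pipeline; a minimal sketch (the number of retained components, 40 or 100 in our experiments, is a parameter):

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def make_pca_lda(n_components):
    """PCA projection followed by a two-class LDA classifier."""
    return make_pipeline(PCA(n_components=n_components),
                         LinearDiscriminantAnalysis())

# X: one concatenated feature vector per subject; y: ADHD/control labels.
# clf = make_pca_lda(40).fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```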
### Useful Region Mask
Different research studies have proposed several Regions of Interest (ROIs) for fMRI analysis. These ROIs vary in size and number: in some studies they are identified based on the anatomical structure of the brain, and in other studies they depend on the functional responsibility. Tzourio-Mazoyer et al. [28] identified ROIs based on similar functional responses in the brain. Craddock et al. [29] generated a homogeneous functional connectivity map from resting state fMRI data. Smith et al. [30] identified several co-varying functional subnetworks in the resting state brain. However, it is still unclear which ROIs are best for resting state functional connectivity analysis. It is also not known whether all the ROIs detected by one method are required for ADHD classification, or whether the use of a subset of ROIs is more efficient. To answer these questions, we use a novel method to identify the useful region mask for the classification of ADHD and control subjects. The algorithm for the useful region mask generation is as follows:
**step 1**: For each of the subjects used for the mask generation algorithm, we do the following:
* Divide the brain volume into small cube-shaped regions. Each of the regions is typically \(5\times 5\times 5\) voxels except the regions at the boundary of the brain volume.
* Select a random subset of the regions. We include each region in the subset with probability \(p\).
* Generate the degree map by extracting the degrees of the voxels within the selected subset of regions.
**step 2**: Train the PCA-LDA based classifier and calculate the detection accuracy on the test data set.
**step 3**: Perform steps 1 and 2 \(m\) times, each time generating a different random subset, calculating the detection accuracy and recording it.
**step 4**: Choose the random subsets corresponding to the top \(10\%\) of the detection accuracy as the candidates for generating the useful region mask. We count the occurrence of each of the regions in all of the candidate subsets and normalize the counts between 0 and 1 by dividing by the number of candidate subsets. This gives us the probability of inclusion of each of the regions in the mask.

Figure 3: **(A)** This part of the figure explains the useful region mask generation algorithm on a single brain slice. The figure is just a graphical example, not the real data. In the actual experiments brain volumes are used instead of slices and cube regions are used instead of square subdivision areas. **(a)** Divide the slice into square regions. **(b)** Select random subsets of square regions marked in dark green. **(c)** Select the subsets with the top \(10\%\) of detection rate. **(d)** Generate a probability map based on the regions' occurrence in the top \(10\%\) subsets. **(e)** Threshold the probability map to produce the useful region mask. **(B)** This part shows the flowchart for the mask generation algorithm.
**step 5**: Finally the useful region mask is generated using a threshold _th_ to prune the regions with low probability.
We experimentally verified that the highest detection rate is achieved when \(p\) is 0.40 and _th_ is 0.60; the experimental results are included in the supplementary materials. The value of \(m\) was set to 500, so that the number of iterations is large enough while remaining computationally feasible. Figure 3 (A) illustrates the proposed algorithm on a cartoon \(2\)-D slice of a brain, while Figure 3 (B) shows the flowchart of the mask generation algorithm. Note that other network features may also be used in the algorithm, but we simply use the degree map feature. We assume that the regions which are useful for identifying ADHD-conditioned brains should not vary depending on the feature used for the detection of the mask. We have also tested this idea by computing the useful region mask using the 3-cycle map feature, and found that the final detection rates are very similar (see the supplementary materials).
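The search in steps 1-4 amounts to the short randomized loop below; this is a condensed sketch in which `regions` is a list of voxel-index arrays (one per \(5\times 5\times 5\) cube) and `score_fn` is a placeholder for feature extraction followed by the PCA-LDA scoring.

```python
import numpy as np

def search_useful_regions(regions, score_fn, p=0.40, th=0.60, m=500):
    """Randomized search for informative regions (steps 1-5)."""
    rng = np.random.default_rng(0)
    trials = []
    for _ in range(m):
        # Step 1: include each region independently with probability p.
        chosen = [i for i in range(len(regions)) if rng.random() < p]
        # Steps 2-3: score the subset and record the result.
        trials.append((score_fn(chosen), chosen))

    # Step 4: keep the subsets in the top 10% of detection accuracy.
    trials.sort(key=lambda t: t[0], reverse=True)
    best = [chosen for _, chosen in trials[: max(1, m // 10)]]

    prob = np.zeros(len(regions))       # per-region inclusion probability
    for chosen in best:
        prob[chosen] += 1.0 / len(best)
    return prob >= th                   # step 5: prune low-probability regions
```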
## 3 Experiments and Results
First, we verified the performance of each of the network features computed on a subset of the training data. We used fMRI data of \(83\) subjects from the Kennedy Krieger Institute data set. Among the \(83\) subjects, the first \(44\) are used for training and the remaining \(39\) for testing. The performance of each of the features is computed with and without using the useful region mask. The mask is generated on the KKI training set, comprising the first 44 subjects of the KKI subset, using the algorithm described in 2.3. Each time a random subset of regions is selected, the classification performance is measured by leave-one-out cross-validation: we take \(43\) subjects for training and test on the remaining subject, repeat the process \(44\) times, testing each of the \(44\) subjects one at a time, and average the correct detection count. Figure 5 shows the computed mask on different slices of the brain. Table 2 lists the different clusters found in the useful region mask and the ROIs with which they overlap. To empirically select the correlation threshold to be used for our experiments, we varied it from \(0.4\) to \(0.8\) with an increment of \(0.1\) in every step. In each step, detection rates for the different network features are computed on the KKI test set of 39 subjects. The plots of correlation threshold vs. detection rate are shown in Figure 4. To generate the plot for the weight map, we compute the sum of the edge weights considering only the edges which have weights greater than the correlation threshold used within that step. Note that the detection rate for each feature is measured for positive, negative and absolute correlation values. However, the features computed from the positive correlation values have always outperformed the other two cases; hence, we have not reported the other two cases in the paper. Since for all the network features, other than the 4-cycle map, the best performance is consistently achieved when the correlation threshold is 0.80, we choose this value for all the experiments on the full data set.

Figure 4: The plots show how the detection rates for different network features change with the correlation threshold. **(A)** Degree map, positive correlations; **(B)** degree map, negative correlations; **(C)** degree map, absolute correlations; **(D)** varying distance degree map, positive correlations; **(E)**\(3\)-cycle map, positive correlations; **(F)**\(4\)-cycle map, positive correlations; **(G)** weight map, positive correlations.

Figure 5: The figure shows different slices to demonstrate the computed useful region mask. The masked regions are highlighted in orange and overlaid on the structural images of a sample subject.
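The leave-one-out protocol used for scoring during mask generation can be written with scikit-learn's `LeaveOneOut` splitter; a sketch assuming the `make_pca_lda` helper from the earlier listing:

```python
from sklearn.model_selection import LeaveOneOut

def loo_detection_rate(X, y, n_components=40):
    """Average correct-detection count over the held-out subjects."""
    hits = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = make_pca_lda(n_components).fit(X[train_idx], y[train_idx])
        hits += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return hits / len(y)
```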
Table 3 summarizes the best performance obtained for each of the network features and the corresponding correlation threshold values.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**ROIs** & **[x, y, z] centers in \(mm\)** & **size in \(mm^{3}\)** & \multicolumn{3}{c|}{**standard deviation in \(mm\)**} \\ \hline
 & & & x & y & z \\ \hline
Precuneus Cortex & [0, -66, 42] & 7872 & 5.4894 & 6.6435 & 10.3592 \\ \hline
Cingulate Gyrus & [0, -36, 52]; [0, 6, 42] & 13056 & 4.5593 & 11.3751 & 10.9128 \\ \hline
Temporal Pole & [56, 14, -18] & 5312 & 4.7728 & 5.5878 & 5.7664 \\ \hline
Superior Temporal Gyrus & [60, -18, -8]; [-60, -20, -4] & 3392; 6400 & 7.1938; 6.6817 & 9.4413; 11.6393 & 4.0790; 5.7075 \\ \hline
Inferior Temporal Gyrus & [54, -30, -20]; [-60, -48, -10] & 1856; 2816 & 7.6293; 5.4892 & 6.7262; 8.2390 & 8.2617; 5.3582 \\ \hline
Pre-central Gyrus & [-6, -22, 62] & 8000 & 16.7262 & 8.5099 & 5.2886 \\ \hline
Lingual Gyrus & [6, -64, 4] & 19072 & 12.5240 & 11.4946 & 5.8835 \\ \hline
Right Amygdala & [24, -2, -18] & 2176 & 9.6639 & 7.3186 & 7.1020 \\ \hline
\end{tabular}
\end{table}
Table 2: List of the clusters and their approximate centers, sizes and standard deviations found using the useful region mask algorithm. The coordinates are calculated on the HarvardOxford-cort-maxprob-thr0-1mm standard atlas provided with FSL \(4.1\). We list the ROIs of the Harvard-Oxford Cortical and Subcortical Structural Atlases for which more than \(50\%\) of the volumes are selected in the useful region mask. The atlas tool of FSLView is used for this purpose.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Feature** & **Correlation Threshold** & **Performance (\%)** & **Performance (\%)** \\
 & & **using useful region mask** & **without useful region mask** \\ \hline
Degree Map positive & \(0.80\) & \(76.92\) & \(69.23\) \\ \hline
Degree Map negative & \(0.80\) & \(71.79\) & \(69.23\) \\ \hline
Degree Map absolute & \(0.80\) & \(74.36\) & \(71.79\) \\ \hline
Varying Distance Degree Map & \(0.80\) & \(76.92\) & \(69.23\) \\ \hline
\(3\)-cycle map & \(0.80\) & \(74.36\) & \(71.79\) \\ \hline
\(4\)-cycle map & \(0.70\) & \(74.36\) & \(69.23\) \\ \hline
Weight Map positive & \(0.80\) & \(76.92\) & \(69.23\) \\ \hline
BOW time series histogram & - & \(69.23\) & \(66.67\) \\ \hline
BOW Degree Map histogram & \(0.80\) & \(69.23\) & \(66.67\) \\ \hline
BOW time series and Degree Map histogram & \(0.80\) & \(69.23\) & \(66.67\) \\ \hline
\end{tabular}
\end{table}
Table 3: Initial test results showing the performance of all the network features computed on the Kennedy Krieger Institute data set. The keywords positive, negative and absolute indicate that positive, negative and absolute correlation values are considered for network generation; if no keyword is mentioned, positive correlation values are used.
The performance reported in the table is the percentage of correct detections (control and ADHD) among the total number of test subjects. Note that for all the features, the performance without using the useful region mask is lower compared to when the mask is used; this demonstrates the utility of the voxel selection through the generated mask. In one of the recent studies, Solmaz et al. used Bag-of-Words features for the automatic classification of ADHD subjects [31]; we used their method for comparison with ours. For our experiments using the Bag-of-Words features, each subject is represented by a \(75\)-bin and a \(100\)-bin histogram when we use the raw time series and the degree map features, respectively. A third kind of experiment is performed by representing each of the subjects as a concatenation of the two types of histograms, resulting in a \(175\)-bin histogram. The details of the Bag-of-Words method are provided in the supplementary materials.
We perform thorough experiments on the full data set using the positive degree map and positive 3-cycle map features. We train our classifier on the full training data, which has \(776\) subjects from \(7\) different centers, and test on the \(171\) subjects from \(6\) centers released for the ADHD-\(200\) competition. Again, we compare the performance with and without the useful region mask. We reuse the same mask generated using the first 44 subjects of KKI; it is worth mentioning that the mask selects 6916 voxels from which features are extracted. The correct detection rate, specificity and sensitivity for each of the test centers and overall are reported in Table 4. Since the subject labels of the Brown University test set have not yet been released, we cannot compute the performance measures on that subset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
 & \multicolumn{3}{c|}{**Deg. Map (mask)**} & \multicolumn{3}{c|}{**Deg. Map (no mask)**} & \multicolumn{3}{c|}{**3-cycle Map (mask)**} & \multicolumn{3}{c|}{**3-cycle Map (no mask)**} \\ \hline
 & **Det. Rate** & **Spec.** & **Sens.** & **Det. Rate** & **Spec.** & **Sens.** & **Det. Rate** & **Spec.** & **Sens.** & **Det. Rate** & **Spec.** & **Sens.** \\ \hline
**KKI** & 72.72 & 1 & 0 & 72.72 & 1 & 0 & 72.72 & 1 & 0 & 72.72 & 1 & 0 \\ \hline
**Neuro Image** & 68 & .7857 & .5454 & 64 & .7143 & .5454 & 72 & .7857 & .6364 & 68 & .8572 & .4545 \\ \hline
**NYU** & 70.73 & .9167 & .6207 & 65.85 & .7500 & .6207 & 70.73 & .8333 & .6552 & 63.41 & .8333 & .5517 \\ \hline
**OHSU** & 70.59 & .7778 & .4286 & 64.70 & .7037 & .4286 & 73.52 & .8148 & .4286 & 70.59 & .7407 & .5714 \\ \hline
**Peking** & 64.71 & .8889 & .3750 & 60.78 & .8889 & .2917 & 62.74 & .9259 & .2917 & 56.86 & .9630 & .1250 \\ \hline
**Pittsburgh** & 77.78 & 1 & .5000 & 66.67 & .8000 & .5000 & 77.78 & 1 & .5000 & 66.67 & 1 & .2500 \\ \hline
**Overall** & 69.05 & .8602 & .4872 & 64.34 & .7957 & .4615 & 69.59 & .8710 & .4872 & 64.33 & .8710 & .3718 \\ \hline
\end{tabular}
\end{table}
Table 4: Detection rate, specificity and sensitivity of the classification experiments on the test data set released for the ADHD-200 competition. Performances with and without the useful region mask are compared for the degree map and 3-cycle map features.
Figure 6: The figure shows the average difference of degrees between the control group and the ADHD group for the voxels within the useful region mask. The average difference is calculated using the 83 subjects of the KKI training set. The dark red to white color map is used to represent a higher degree for control subjects, and the blue to green color map is used to show the opposite. The control group shows higher connectivity in the Cingulate Gyrus region on slices with Z coordinates \(10\) and \(15\), and in the Paracingulate Gyrus region on slices with Z coordinates \(19\) and \(23\).
## 4 Discussion
We have modeled the brain as a functional network which is expected to represent the interaction of the different active regions of the brain. We assumed that ADHD is a problem caused by a partial failure of the brain's communication network, and that affected subjects can be distinguished from control subjects using the topological differences of their respective functional networks. To verify this idea, we extracted different network features to train a PCA-LDA based automatic classifier. Figure 6 shows that the average degree map, computed for the ADHD and control subjects of the KKI data set, is able to capture some difference of connectivity in the Cingulate Gyrus and Paracingulate Gyrus regions of the brain. We also proposed that features from the whole brain are not required for the classification, but that some key areas hold the useful information. Our results show that the inclusion of features from the whole brain can negatively impact the classification accuracy. This led to a novel algorithm to compute the useful region mask, which helped to improve the classification performance.
The different network features computed are expected to capture different characteristics of the functional network. The degree map and the weight map capture how densely the nodes of the network are connected; this gives us a measure of how synchronously different regions of a brain are interacting. The varying distance degree map, on the other hand, can also reveal how the synchronous regions are distributed over the brain. While the degree map only captures pairwise interactions of voxels, it ignores higher-order interactions, such as among three voxels simultaneously. We know from brain anatomy that there are such multiply connected brain regions; hence, cycle maps offer a different perspective from which a given network may be viewed. The utility of using network motifs such as cycles to describe networks has been described in [25].
Figure 5 and Table 2 present the ROIs found through our adaptive labeling technique described in Section 2.3. The ROIs used in the classification include regions such as the cingulate and precuneus, which is consistent with the findings of Castellanos et al. [14]. The cingulate and precuneus regions are known to be part of the default-mode network [2]. Many regions in Table 2 have also been identified by Assaf et al. [32], such as the precuneus, temporal pole, superior temporal gyrus, and pre-central gyrus. Regions in Table 2 that are consistent with those reported by Uddin et al. [33] include the inferior temporal gyrus and lingual gyrus. Interestingly, Table 2 identifies the right amygdala, which did not show up in the analyses of Castellanos et al. [14], Assaf et al. [32], or Uddin et al. [33]. The limbic system is known to play a role in ADHD, and a study by Plessen et al. [34] reported disrupted connectivity between the amygdala and the OFC in children with ADHD. Hence the value of our technique is that it provides an independent and automatic source of hypotheses about the brain regions that are implicated in the diagnosis and classification of ADHD; in this sense, our technique for ROI identification can be considered a model-free method. Furthermore, our classifier is agnostic to any particular theory of ADHD, and works strictly on a machine-learning approach to separating ADHD patients from controls by utilizing labeled data. Hence the technique described in this paper is applicable to other types of brain disorders where one can create labeled data for the accompanying brain scans.
The curves in Figure 4 show that for all the network features, high performance is achieved when a correlation threshold of \(0.80\) is used to construct the network. In four out of seven cases the performance is the highest at this threshold, in two other cases it is among the highest, and in one case it is slightly lower than the highest. The results are not surprising, since they indicate that the difference in connection structure of highly correlated voxels matters the most for classification.
Considering the results in Table 4, we observe that in 5 out of 6 data sets, the 3-cycle map with voxel selection gives the best detection rate. Only on one data set, the Peking data set, does the 3-cycle map with voxel selection give marginally worse performance than the degree map with voxel selection. To the best of our knowledge, this is the first time that the utility of cycle-related features has been demonstrated in the fMRI imaging literature. The study in [26] showed that cycle-related features are useful in discriminating biological networks from man-made networks, but did not investigate various types of fMRI-derived networks.
We note that calculating cycle-related features is more computationally intensive than calculating the degree map, and the computation increases exponentially with the cycle length. The use of GPUs can reduce the cost of computation, as earlier studies with fMRI images have shown [35]. If standardized libraries for cycle computation become available on GPU platforms, they will promote the use of such features in fMRI research.
The use of the degree map provides a good compromise between classification performance and computational cost: it is easy to compute, and provides classification performance that is only marginally worse than that of the 3-cycle map in most cases. One limitation of our study is that we have not used any specific measure to correct for the differing signal-to-noise ratios that may be introduced in the data due to the differences in experimental setup among the sites. Also, some recent studies [36], [37] indicate that the correlations of different brain regions are sensitive to head motion, even when the data is preprocessed for motion correction; we have not performed any explicit step to counter this problem. Finally, we note that we used a single classifier, the PCA-LDA method, to investigate the utility of the different network features. It is possible that other classifiers, such as neural networks or support vector machines, may give better performance. Such investigations need to be carried out in the future.
## Acknowledgements
The project described was supported by Award Number R21CA129263 from the National Cancer Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.
Special thanks to The Neuro Bureau and all the data contributing sites for their efforts in compiling the large data set and making it publicly available. The goal of this project is that different disciplines of science may help to better understand the neural basis of ADHD.
## References
* [1] M. E. Raichle, A. M. MacLeod, A. Z. Snyder, W. J. Powers, D. A. Gusnard, and G. L. Shulman, "A default mode of brain function," _Proceedings of the National Academy of Sciences of the United States of America_, vol. 98, no. 2, pp. 676-682, 2001. [Online]. Available: [http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=14647&tool=pmcentrez&rendertype=abstract](http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=14647&tool=pmcentrez&rendertype=abstract).
* [2] J. S. Damoiseaux, S. A. R. B. Rombouts, F. Barkhof, P. Scheltens, C. J. Stam, S. M. Smith, and C. F. Beckmann, "Consistent resting-state networks across healthy subjects," _Proceedings of the National Academy of Sciences_, vol. 103, no. 37, pp. 13 848-13 853, 2006. [Online]. Available: [http://www.pnas.org/content/103/37/13848.abstract](http://www.pnas.org/content/103/37/13848.abstract)
* [3] M. D. Greicius, G. Srivastava, A. L. Reiss, and V. Menon, "Default-mode network activity distinguishes alzheimer's disease from healthy aging: Evidence from functional mri," _Proceedings of the National Academy of Sciences of the United States of America_, vol. 101, no. 13, pp. 4637-4642, 2004. [Online]. Available: [http://www.pnas.org/content/101/13/4637.abstract](http://www.pnas.org/content/101/13/4637.abstract)
* [4] V. L. Cherkassky, R. K. Kana, T. A. Keller, and M. A. Just, "Functional connectivity in a baseline resting-state network in autism." _Neuroreport_, vol. 17, no. 16, pp. 1687-1690, Nov. 2006. [Online]. Available: [http://dx.doi.org/10.1097/01.wnr.0000239956.45448.4c](http://dx.doi.org/10.1097/01.wnr.0000239956.45448.4c)
* [5] M. E. Raichle, "The brain's dark energy." _Scientific American_, vol. 302, no. 3, pp. 44-49, 2010. [Online]. Available: [http://www.nature.com/doifinder/10.1038/scientificamerican0310-44](http://www.nature.com/doifinder/10.1038/scientificamerican0310-44)
* [6] D. H. Weissman, K. C. Roberts, K. M. Visscher, and M. G. Woldorff, "The neural bases of momentary lapses in attention," _Nature Neuroscience_, vol. 9, no. 7, pp. 971-978, 2006. [Online]. Available: [http://dx.doi.org/10.1038/nn1727](http://dx.doi.org/10.1038/nn1727)
* [7] F. X. Castellanos, J. N. Giedd, W. L. Marsh, S. D. Hamburger, A. C. Vaituzis, D. P. Dickstein, S. E. Sarfatti, Y. C. Vauss, J. W. Snell, N. Lange, and et al., "Quantitative brain magnetic resonance imaging in attention-deficit hyperactivity disorder." _Archives of General Psychiatry_, vol. 53, no. 7, pp. 607-616, 1996. [Online]. Available: [http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=8660127](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=8660127)
* [8] S. Overmeyer, E. T. Bullmore, J. Suckling, A. Simmons, S. C. Williams, P. J. Santosh, and E. Taylor, "Distributed grey and white matter deficits in hyperkinetic disorder: Mri evidence for anatomical abnormality in an attentional network." _Psychological Medicine_, vol. 31, no. 8, pp. 1425-1435, 2001. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/11722157](http://www.ncbi.nlm.nih.gov/pubmed/11722157)
* [9] E. R. Sowell, P. M. Thompson, S. E. Welcome, A. L. Henkenius, A. W. Toga, and B. S. Peterson, "Cortical abnormalities in children and adolescents with attention-deficit hyperactivity disorder." _Lance_, vol. 362, no. 9397, pp. 1699-1707, 2003. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/14643117](http://www.ncbi.nlm.nih.gov/pubmed/14643117)
* [10] L. J. Seidman, E. M. Valera, N. Makris, M. C. Monuteaux, D. L. Boriel, K. Kelkar, D. N. Kennedy, V. S. Caviness, G. Bush, M. Aleardi, and et al., "Dorsolateral prefrontal and anterior cingulate cortex volumetric abnormalities in adults with attention-deficit/hyperactivity disorder identified by magnetic resonance imaging." _Biological Psychiatry_, vol. 60, no. 10, pp. 1071-1080, 2006. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/16876137](http://www.ncbi.nlm.nih.gov/pubmed/16876137)
* [11] G. Bush, J. A. Frazier, S. L. Rauch, L. J. Seidman, P. J. Whalen, M. A. Jenike, B. R. Rosen, and J. Biederman, "Anterior cingulate cortex dysfunction in attention-deficit/hyperactivity disorder revealed by fmri and the counting stroop." _Biological Psychiatry_, vol. 45, no. 12, pp. 1542-1552, 1999. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/10376114](http://www.ncbi.nlm.nih.gov/pubmed/10376114)
* [12] S. Durston, "Differential patterns of striatal activation in young children with and without adhd," _Biological Psychiatry_, vol. 53, no. 10, pp. 871-878, 2003. [Online]. Available: [http://linkinghub.elsevier.com/retrieve/pii/S0006322302019042](http://linkinghub.elsevier.com/retrieve/pii/S0006322302019042)
* [13] M. H. Teicher, C. M. Anderson, A. Polcari, C. A. Glod, L. C. Maas, and P. F. Renshaw, "Functional deficits in basal ganglia of children with attention-deficit/hyperactivity disorder shown with functional magnetic resonance imaging relaxometry." _Nature Medicine_, vol. 6, no. 4, pp. 470-473, 2000. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/10742158](http://www.ncbi.nlm.nih.gov/pubmed/10742158)
* [14] F. X. Castellanos, D. S. Margulies, C. Kelly, L. Q. Uddin, M. Ghaffari, A. Kirsch, D. Shaw, Z. Shehzad, A. Di Martino, B. Biswal, and et al., "Cingulate-precuneus interactions: a new locus of dysfunction in adult attention-deficit/hyperactivity disorder," _Biological Psychiatry_, vol. 63, no. 3, pp. 332-337, 2008. [Online]. Available: [http://eprints.soton.ac.uk/50138/](http://eprints.soton.ac.uk/50138/)
* [15] L. Tian, T. Jiang, Y. Wang, Y. Zang, Y. He, M. Liang, M. Sui, Q. Cao, S. Hu, M. Peng, and et al., "Altered resting-state functional connectivity patterns of anterior cingulate cortex in adolescents with attention deficit hyperactivity disorder." _Neuroscience Letters_, vol. 400, no. 1-2, pp. 39-43, 2006. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/16510242](http://www.ncbi.nlm.nih.gov/pubmed/16510242)
* [16] Q. Cao, Y. Zang, L. Sun, M. Sui, X. Long, Q. Zou, and Y. Wang, "Abnormal neural activity in children with attention deficit hyperactivity disorder: a resting-state functional magnetic resonance imaging study." _NeuroReport_, vol. 17, no. 10, pp. 1033-1036, 2006. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/16791098](http://www.ncbi.nlm.nih.gov/pubmed/16791098)
* [17] Y.-F. Zang, Y. He, C.-Z. Zhu, Q.-J. Cao, M.-Q. Sui, M. Liang, L.-X. Tian, T.-Z. Jiang, and Y.-F. Wang, "Altered baseline brain activity in children with adhd revealed by resting-state functional mri." _Brain & Development_, vol. 29, no. 2, pp. 83-91, 2007. [Online]. Available: [http://www.ncbi.nlm.nih.gov/pubmed/16919409](http://www.ncbi.nlm.nih.gov/pubmed/16919409)
* [18] S. Dey, A. R. Rao, and M. Shah, "Exploiting the brain's network structure in identifying adhd subjects," _Frontiers in Systems Neuroscience_, vol. 6, 2012. [Online]. Available: [https://www.frontiersin.org/articles/10.3389/fnsys.2012.00075](https://www.frontiersin.org/articles/10.3389/fnsys.2012.00075)
* [19] ----, "Attributed graph distance measure for automatic detection of attention deficit hyper-active disordered subjects," _Frontiers in Neural Circuits_, vol. 8, 2014. [Online]. Available: [https://www.frontiersin.org/articles/10.3389/fncir.2014.00064](https://www.frontiersin.org/articles/10.3389/fncir.2014.00064)
* [20] S. Dey, "Automatic detection of brain functional disorder using imaging data," _Electronic Theses and Dissertations_, vol. 662, 2014. [Online]. Available: [http://purl.fcla.edu/fcla/etd/CFE0005786](http://purl.fcla.edu/fcla/etd/CFE0005786)
* [21] R. Rao, S. Dey, M. Shah, and B. Solmaz, "Method and system for modeling and processing fMRI image data using a bag-of-words approach," Utility US9 072 496B2, 02 01, 2013.
* [22] _NeuroImage_, pp. -120, 2008. [Online]. Available: [http://www.sciencedirect.com/science/article/pii/S1053811907010610](http://www.sciencedirect.com/science/article/pii/S1053811907010610)
* [23] NITRC, "Adhd-200 data processing," [http://nitrc.org/plugins/mwiki/index.php/neurobureau:AthenaPipeline](http://nitrc.org/plugins/mwiki/index.php/neurobureau:AthenaPipeline).
* [24] O. Sporns, "Graph theory methods for the analysis of neural connectivity patterns."
* [25] R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, "Network motifs: Simple building blocks of complex networks," _Science_, vol. 298, no. 5594, pp. 824-827, 2002. [Online]. Available: [http://www.sciencemag.org/content/298/5594/824.abstract](http://www.sciencemag.org/content/298/5594/824.abstract)
* [26] A. Ma'ayan, G. A. Cecchi, J. Wagner, A. R. Rao, R. Iyengar, and G. Stolovitzky, "Ordered cyclic motifs contribute to dynamic stability in biological and engineered networks," _Proceedings of the National Academy of Sciences_, vol. 105, no. 49, pp. 19 235-19 240, 2008. [Online]. Available: [http://www.pnas.org/content/105/49/19235.abstract](http://www.pnas.org/content/105/49/19235.abstract)
* [27] H. Abdi and L. J. Williams, "Principal component analysis," _Wiley Interdisciplinary Reviews: Computational Statistics_, vol. 2, no. 4, pp. 433-459, 2010. [Online]. Available: [http://dx.doi.org/10.1002/wics.101](http://dx.doi.org/10.1002/wics.101)
* [28] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot, "Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain," _NeuroImage_, vol. 15, no. 1, pp. 273-289, January 2002.
* [29] R. C. Craddock, G. James, P. E. Holtzheimer, X. P. Hu, and H. S. Mayberg, "A whole brain fmri atlas generated via spatially constrained spectral clustering," _Human Brain Mapping_, pp. n/a-n/a, 2011. [Online]. Available: [http://dx.doi.org/10.1002/hbm.21333](http://dx.doi.org/10.1002/hbm.21333)
* [30] S. M. Smith, P. T. Fox, K. L. Miller, D. C. Glahn, P. M. Fox, C. E. Mackay, N. Filippini, K. E. Watkins, R. Toro, A. R. Laird, and C. F. Beckmann, "Correspondence of the brain's functional architecture during activation and rest," _Proceedings of the National Academy of Sciences_, 2009. [Online]. Available: [http://www.pnas.org/content/early/2009/07/17/0905267106.abstract](http://www.pnas.org/content/early/2009/07/17/0905267106.abstract)
* [31] B. Solmaz, S. Dey, A. R. Rao, and M. Shah, "Adhd classification using bag of words approach on network features," D. R. Haynor and S. Ourselin, Eds., vol. 8314, no. 1. SPIE, 2012, p. 83144T. [Online]. Available: [http://link.aip.org/link/?PSI/8314/83144T/1](http://link.aip.org/link/?PSI/8314/83144T/1)
* [32] M. Assaf et al., "Abnormal functional connectivity of default mode sub-networks in autism spectrum disorder patients," _NeuroImage_, vol. 53, no. 1, pp. 247-256, 2010. [Online]. Available: [http://www.sciencedirect.com/science/article/pii/S1053811910008013](http://www.sciencedirect.com/science/article/pii/S1053811910008013)
* [33] L. Q. Uddin, A. Clare Kelly, B. B. Biswal, F. Xavier Castellanos, and M. P. Milham, "Functional connectivity of default mode network components: Correlation, anticorrelation, and causality," _Human Brain Mapping_, vol. 30, no. 2, pp. 625-637, 2009. [Online]. Available: [http://dx.doi.org/10.1002/hbm.20531](http://dx.doi.org/10.1002/hbm.20531)
* [34] K. J. Plessen, R. Bansal, H. Zhu, R. Whiteman, J. Amat, G. A. Quackenbush, L. Martin, K. Durkin, C. Blair, J. Royal, K. Hugdahl, and B. S. Peterson, "Hippocampus and amygdala morphology in attention-deficit/hyperactivity disorder," _Arch Gen Psychiatry_, vol. 63, no. 7, pp. 795-807, 2006. [Online]. Available: [http://archpsyc.ama-assn.org/cgi/content/abstract/63/7/795](http://archpsyc.ama-assn.org/cgi/content/abstract/63/7/795)
* [35] A. R. Rao, R. Bordawekar, and G. Cecchi, "Fast computation of functional networks from fmri activity: a multi-platform comparison," B. M. Dawant and D. R. Haynor, Eds., vol. 7962, no. 1. SPIE, 2011, p. 79624L. [Online]. Available: [http://link.aip.org/link/?PSI/7962/79624L/1](http://link.aip.org/link/?PSI/7962/79624L/1)
* [36] K. R. A. Van Dijk, M. R. Sabuncu, and R. L. Buckner, "The influence of head motion on intrinsic functional connectivity MRI," _NeuroImage_, vol. 59, no. 1, pp. 431-438, 2012. [Online]. Available: [http://www.sciencedirect.com/science/article/pii/S1053811911008214](http://www.sciencedirect.com/science/article/pii/S1053811911008214)
* [37] J. D. Power, K. A. Barnes, A. Z. Snyder, B. L. Schlaggar, and S. E. Petersen, "Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion," _NeuroImage_, vol. 59, no. 3, pp. 2142-2154, 2012. [Online]. Available: [http://www.sciencedirect.com/science/article/pii/S1053811911011815](http://www.sciencedirect.com/science/article/pii/S1053811911011815) |
2307.06223 | Residues of quadratic Weyl group multiple Dirichlet series | We give explicit formulas for the residue of the Chinta-Gunnells average
attached to a finite irreducible root system, at the polar divisor
corresponding to a simple short root. The formula describes the residue in
terms of the average attached to the root subsystem orthogonal to the relevant
simple root. As a consequence, we obtain similar formulas for the residues of
quadratic Weyl group multiple Dirichlet series over the rational function field
and over the Gaussian field. The residue formula also allows us to obtain a new
expression for the Chinta-Gunnells average of a finite irreducible root system,
as an average over a maximal parabolic subgroup of a rational function that has
an explicit description reflecting the combinatorics of the root system. | Adrian Diaconu, Bogdan Ion, Vicenţiu Paşol, Alexandru A. Popa | 2023-07-12T15:16:03Z | http://arxiv.org/abs/2307.06223v1 | # Residues of quadratic Weyl group multiple Dirichlet series
###### Abstract.
We give explicit formulas for the residue of the Chinta-Gunnells average attached to a finite irreducible root system, at the polar divisor corresponding to a simple short root. The formula describes the residue in terms of the average attached to the root subsystem orthogonal to the relevant simple root. As a consequence, we obtain similar formulas for the residues of quadratic Weyl group multiple Dirichlet series over the rational function field and over the Gaussian field. The residue formula also allows us to obtain a new expression for the Chinta-Gunnells average of a finite irreducible root system, as an average over a maximal parabolic subgroup of a rational function that has an explicit description reflecting the combinatorics of the root system.
## 1. Introduction
The genesis of the concept of Weyl group multiple Dirichlet series (WMDS) can be traced back to the work of Goldfeld and Hoffstein [25], where (using present terminology) a quadratic double Dirichlet series over \(\mathbb{Q}\), of Cartan type \(A_{2}\), was constructed as the Mellin transform of an Eisenstein series of half-integral weight for the congruence subgroup \(\Gamma_{0}(4)\); the study of the same object, in an equivalent form, was previously proposed by Siegel [36]. Other examples, obtained as integral transforms of Eisenstein series (and other automorphic objects) on the metaplectic double covers of \(\mathrm{GL}_{3}\) and \(\mathrm{GSp}(4)\), were investigated; for the relevant results, see the survey [9] and the references therein. The main application at the time was to obtain non-vanishing results for quadratic twists of central values of automorphic \(L\)-functions and their derivatives. In higher rank, such constructions, based on integral transforms of Eisenstein series on covers of reductive groups are difficult to obtain and analyze. These initial investigations revealed the structural properties of such multiple Dirichlet series, and it has gradually emerged [9, 10, 18, 21] that these properties can be used to define a class of multiple Dirichlet series without making use of integral transforms of automorphic forms. The general principles used to construct and analyze multiple Dirichlet series associated to finite reduced root systems were laid out in [3, 6, 13, 14].
Following [13, 14], a coarse description of the class of _finite_ Weyl group _quadratic_ multiple Dirichlet series proceeds as follows. Let \(\mathbb{K}\) denote a global field, let \(\Phi\) be a finite (reduced) root system of rank \(r\), and let \(W\) denote its Weyl group. Let \(S\) be a finite set of places, which includes the set of _infinite_ places, and in characteristic \(0\), the set of places dividing \(2\), and large enough so that the ring \(\mathcal{O}_{S}\) of \(S\)-integers has class number \(1\). The quadratic Weyl group multiple Dirichlet series attached to the root system \(\Phi\) is a series of \(r\) complex variables of the form
\[\mathcal{Z}_{\Phi}(s_{1},\ldots,s_{r})=\sum\frac{H(m_{1},\ldots,m_{r})}{|m_{1} |^{s_{1}}\cdot\ldots\cdot|m_{r}|^{s_{r}}},\]
the sum ranging over the set of \(r\)-tuples of non-zero integers in \(\mathcal{O}_{S}\) modulo units. The coefficients \(H(m_{1},\ldots,m_{r})\) are required to satisfy a _twisted multiplicativity_ property involving the quadratic symbol, which reflects the combinatorics of the root system \(\Phi\). The twisted multiplicativity reduces the description of \(H(m_{1},\ldots,m_{r})\) to the case where all components are powers of the same prime \(p\). The generating series
\[\sum H(p^{n_{1}},\ldots,p^{n_{r}})|p|^{-n_{1}s_{1}}\cdot\ldots\cdot|p|^{-n_{r}s _{r}}\]
are called the \(p\)-parts of \(\mathcal{Z}_{\Phi}\). Chinta and Gunnells [13] have constructed the \(p\)-parts through an averaging technique that uses an action of \(W\) on the space of rational functions in \(r\) variables. Their construction leads to the series \(\mathcal{Z}_{\Phi}\) that has meromorphic continuation to \(\mathbb{C}^{r}\) and satisfies a group of functional equations isomorphic to \(W\).
More precisely, let \(Q\) denote the root lattice of \(\Phi\), and denote by \(V\) the \(\mathbb{R}\)-span of \(\Phi\). Let \(\mathbb{F}=\mathbb{Q}(u)\), with \(u\) a formal parameter. The standard basis of \(\mathbb{F}[Q]\), the \(\mathbb{F}\)-group ring of \(Q\), is denoted by \(\{\mathbf{x}^{\lambda}\}_{\lambda\in Q}\); \(\mathbb{F}(Q)\) denotes the field of fractions of \(\mathbb{F}[Q]\), viewing the latter as the space of Laurent polynomials in the monomials \(\mathbf{x}^{\lambda}\). We fix a basis \(\Pi(\Phi)=\{\alpha_{i}\}_{1\leq i\leq r}\) of \(\Phi\); denote \(x_{i}=\mathbf{x}^{\alpha_{i}}\) and treat \(\mathbf{x}=(x_{1},\ldots,x_{r})\) as a multivariable. Chinta and Gunnells defined a rational function in \(\mathbf{x}=(x_{1},\ldots,x_{r})\) which, as power series, has the form
\[Z_{\Phi}(\mathbf{x};u)=\sum a_{\lambda}(u)\mathbf{x}^{\lambda},\]
with polynomial coefficients in the extra parameter \(u\). The parameter \(u\) is present in the Chinta-Gunnells action and formalizes the role played by the quadratic Gauss sum.
The coefficients of the \(p\)-parts of \(\mathcal{Z}_{\Phi}\) are (in our normalization, which is slightly different from the one in [13])
\[H(p^{n_{1}},\ldots,p^{n_{r}})=a_{n_{1}\alpha_{1}+\cdots+n_{r}\alpha_{r}}(|p|^{ -1/2}).\]
The function \(Z_{\Phi}(\mathbf{x};u)\) is uniquely determined by its invariance under the Chinta-Gunnells action and the normalization \(Z_{\Phi}(0;u)=1\). For \(\mathbb{K}=\mathbb{F}_{q}(t)\), where \(q\equiv 1\,\mathrm{mod}\ 4\) and \(\mathbb{F}_{q}\) is the finite field with \(q\) elements, we have ([24, Proposition 4.2]1, generalizing prior observations in particular cases [16, 11]),
Footnote 1: In [24], the WMDS \(\mathcal{Z}_{\Phi}\) (denoted there by \(\mathcal{Z}^{*}\)) is constructed slightly differently, starting with a series \(\mathcal{Z}\) with \(p\)-parts that correspond to the _numerator_ of \(Z_{\Phi}(\mathbf{x};u)\), defined in Convention 2.5. One can show that this construction of \(\mathcal{Z}_{\Phi}\) is equivalent to the one described in §1.2-1.3.
\[\mathcal{Z}_{\Phi}(s_{1},\ldots,s_{r})=Z_{\Phi}(q^{-s_{1}},\ldots,q^{-s_{r}};q ^{1/2}). \tag{1.1}\]
This is a manifestation of the same local-to-global phenomenon that classically connects the zeta function of the projective line and its Euler factors. In the case of the affine root system \(D^{{(1)}}_{4}\), the local-to-global theorem holds [19] with a correction factor that reflects the contribution of the imaginary roots.
Aside from its role in the definition of WMDS, the Chinta-Gunnells average \(Z_{\Phi}(\mathbf{x};u)\) has direct connections with spherical Whittaker functions on metaplectic covers of \(p\)-adic groups [17, 34], metaplectic Demazure-Lusztig operators [15], the combinatorial theory of crystal graphs [5, 33], quantum groups and solvable lattice models in statistical mechanics [4]. We also note that the Chinta-Gunnells action itself emerges canonically from the metaplectic representations of affine Hecke algebras [35].
Our main results give a precise description of the residues of the series \(Z_{\Phi}(\mathbf{x};u)\), and of \(\mathcal{Z}_{\Phi}(s_{1},\ldots,s_{r})\) for \(\mathbb{K}=\mathbb{F}_{q}(t)\), \(q\equiv 1\bmod 4\), and for \(\mathbb{K}=\mathbb{Q}(\sqrt{-1})\). Before stating them, let us introduce some notation; for the full details we refer to Section 2. The Chinta-Gunnells action of \(w\in W\) on \(f\in\mathbb{F}(Q)\) will be denoted by \(f|w\). For a fixed short simple root \(\alpha_{i}\), we denote by \(\Phi_{0}\subset\Phi\) the root subsystem consisting of the roots orthogonal to \(\alpha_{i}\), and by \(Z_{\Phi_{0}}(\mathbf{x};u)\) the Chinta-Gunnells average attached to \(\Phi_{0}\); here the variable attached to a simple root \(\beta\) of \(\Phi_{0}\) is the monomial \(\mathbf{x}^{\beta}\). Our first main result is the following.

**Theorem A**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(\alpha_{i}\) be a short simple root. Then,_

\[\lim_{x_{i}\to u^{-1}}(1-ux_{i})Z_{\Phi}(\mathbf{x};u)=Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=u^{-1}}\cdot\prod_{\begin{subarray}{c}\alpha\in\Phi^{+}\\ \langle\alpha,\alpha_{i}\rangle=1\end{subarray}}\frac{1}{(1-u^{2}\mathbf{x}^{2\alpha})|_{x_{i}=u^{-1}}}. \tag{1.2}\]

When \(\mathbb{K}=\mathbb{F}_{q}(t)\), we express \(\mathcal{Z}_{\Phi}\) using the multivariable \(\mathbf{s}=(s_{1},\ldots,s_{r})\), with the convention that \(x_{i}=q^{-s_{i}}\) and \(u=q^{1/2}\); for a positive root \(\alpha=n_{1}\alpha_{1}+\cdots+n_{r}\alpha_{r}\) we set \(\mathbf{s}_{\alpha}=n_{1}s_{1}+\cdots+n_{r}s_{r}\), so that \(\mathbf{x}^{\alpha}=q^{-\mathbf{s}_{\alpha}}\).
The same convention can be adopted for \(\mathcal{Z}_{\Phi_{0}}\), using the basis \(\Pi_{\Phi_{0}}\) of \(\Phi_{0}\) induced by the fixed basis \(\Pi(\Phi)\). We will use the inclusion \(\Pi_{\Phi_{0}}\subset\Phi^{+}\) to express \(\mathcal{Z}_{\Phi_{0}}\) using the multivariable \(\mathbf{s}\). An immediate consequence of Theorem A and the local-to-global principle (1.1) is the following theorem, evaluating the corresponding residue of \(\mathcal{Z}_{\Phi}(\mathbf{s})\).
**Theorem B**.: _Let \(\mathbb{K}=\mathbb{F}_{q}(t)\), \(q\equiv 1\bmod 4\). Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(\alpha_{i}\) be a short simple root. Then_
\[\lim_{s_{i}\to 1/2}(1-q^{1/2-s_{i}})\mathcal{Z}_{\Phi}(\mathbf{s})=\mathcal{Z}_{ \Phi_{0}}(\mathbf{s})|_{s_{i}=1/2}\cdot\prod_{\begin{subarray}{c}\alpha\in \Phi^{+}\\ \langle\alpha,\alpha_{i}\rangle=1\end{subarray}}\frac{1}{(1-q^{1-2\mathbf{s}_ {\alpha}})|_{s_{i}=1/2}}. \tag{1.3}\]
Again, we remark that each of the factors on the right-hand side of (1.3) can be interpreted in terms of an evaluation of the rank \(1\) WMDS \(\mathcal{Z}_{A_{1}}(s)=\zeta_{\mathbb{A}^{1}_{\mathbb{F}_{q}}}(s+1/2)=\frac{1}{1-q^{1/2-s}}\).
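As a quick check of this rank \(1\) evaluation, note that the zeta function of the affine line is a geometric series over monic polynomials:

\[\zeta_{\mathbb{A}^{1}_{\mathbb{F}_{q}}}(w)=\sum_{f\text{ monic}}|f|^{-w}=\sum_{d\geq 0}q^{d}\cdot q^{-dw}=\frac{1}{1-q^{1-w}},\qquad\text{so}\qquad\zeta_{\mathbb{A}^{1}_{\mathbb{F}_{q}}}(s+1/2)=\frac{1}{1-q^{1/2-s}}.\]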
An analogue of Theorem B holds over number fields as well. To avoid technicalities, we illustrate it over the Gaussian field.
**Theorem C**.: _Let \(\mathbb{K}=\mathbb{Q}(\sqrt{-1})\). Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(\alpha_{i}\) be a short simple root. Then,_
\[\lim_{s_{i}\to 1/2}(s_{i}-1/2)\mathcal{Z}_{\Phi}(\mathbf{s})=\frac{\pi}{8} \mathcal{Z}_{\Phi_{0}}(\mathbf{s})|_{s_{i}=1/2}\cdot\prod_{\begin{subarray}{ c}\alpha\in\Phi^{+}\\ \langle\alpha,\alpha_{i}\rangle=1\end{subarray}}\zeta^{(2)}_{\mathbb{K}}(2 \mathbf{s}_{\alpha})|_{s_{i}=1/2}, \tag{1.4}\]
_where \(\zeta^{(2)}_{\mathbb{K}}(s)\) is the Dedekind zeta function of \(\mathbb{K}\) with the Euler factor at the prime dividing 2 removed._
To put this result in context, note that when \(\Phi\) is of type \(A_{1}\), then \(\mathcal{Z}_{\Phi}(s)=\zeta^{(2)}_{\mathbb{K}}(s+1/2)\), while \(\Phi_{0}\) and the product on the right-hand side are trivial. The theorem in this case reduces to the classical formula for the residue of the Dedekind zeta function at \(s=1\), which explains the constant \(\pi/8\).
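For the reader's convenience, the constant can be traced through the class number formula: for \(\mathbb{K}=\mathbb{Q}(\sqrt{-1})\) one has \(r_{1}=0\), \(r_{2}=1\), \(h=R=1\), \(w=4\) and \(|d_{\mathbb{K}}|=4\), so that

\[\operatorname*{Res}_{s=1}\zeta_{\mathbb{K}}(s)=\frac{2^{r_{1}}(2\pi)^{r_{2}}hR}{w\sqrt{|d_{\mathbb{K}}|}}=\frac{2\pi}{4\cdot 2}=\frac{\pi}{4},\qquad\operatorname*{Res}_{s=1}\zeta^{(2)}_{\mathbb{K}}(s)=\Big{(}1-\frac{1}{2}\Big{)}\frac{\pi}{4}=\frac{\pi}{8},\]

since the unique prime of \(\mathbb{K}\) above \(2\) has norm \(2\).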
The proof of this theorem is given in Appendix C, and it makes essential use of Theorem A. We do not strive for a more general result over number fields, as it is not the main focus of this paper. Rather, our goal is to illustrate in a concrete case the general phenomenon that results over function fields have number field counterparts.
The extent and precise formulation of this phenomenon, relating the residues of higher order WMDS to similar objects associated to smaller rank root systems, is not yet clear. The only other examples known at this time relate the residues of the _cubic_ WMDS of type \(A_{3}\) over \(\mathbb{F}_{q}(t)\), \(q\equiv 1\bmod 4\), and over number fields [2, 11], to the Friedberg-Hoffstein-Lieman cubic double Dirichlet series [22]. There are indications that, for quadratic WMDS associated to affine root systems, this phenomenon is still present. For \(\mathbb{K}=\mathbb{F}_{q}(t)\), \(q\equiv 1\bmod 4\), we verified this for affine root systems of type \(D^{(1)}_{4}\), and \(A^{(1)}_{r}\) of small rank, and we are in the process of extending this result to all simply-laced affine root systems [20].
For \(i\) a node in the Dynkin diagram of \(\Phi\), we denote by \(\Phi^{i}\subset\Phi\) the maximal parabolic root subsystem obtained by excluding the node \(i\), and we denote by \(W^{i}\subset W\) the corresponding maximal parabolic subgroup. For \(\alpha\in\Phi\), let \(n_{i}(\alpha)\in\mathbb{Z}\) denote the coefficient of \(\alpha_{i}\) in the expansion of \(\alpha\) in the basis \(\Pi(\Phi)\).
For each of the nodes \(i\) specified in Table 1 (we use the standard labelling of nodes in the Dynkin diagram [1]; see also §5.4 and Appendix A), we give a new formula for \(Z_{\Phi}(\mathbf{x};u)\) as an average over \(W^{i}\) of
a rational function that is described in terms of the root system \(\Phi\). More specifically, we construct a rooted tree \(\mathcal{K}_{\Phi}(\alpha_{i})\) whose vertices are positive roots, and such that the tree root is \(\alpha_{i}\). This rooted tree resembles the Kostant cascade construction [28, 29]. The roots that appear as vertices are used to define an explicit rational function \(K_{\Phi,\alpha_{i}}(\mathbf{x})\), which depends neither on \(x_{i}\) nor on \(u\). We refer to §7.1 for the details, and we include a few examples below.
**Theorem D**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(i\) be one of the admissible nodes specified in Table 1. We have,_
\[Z_{\Phi}(\mathbf{x};u)\cdot\prod_{\begin{subarray}{c}\alpha\in\Phi^{+}\\ m_{\alpha}=2\\ n_{i}(\alpha)\geqslant 2\end{subarray}}(1-u^{2}\mathbf{x}^{2\alpha})=\frac{ \sum_{w\in W^{i}}\,\frac{1}{1-ux_{i}}K_{\Phi,\alpha_{i}}(\mathbf{x})\bigg{|} \,w}{\Delta_{\Phi^{i}}(\mathbf{x})}. \tag{1.5}\]
We include here some examples of the rational function \(K_{\Phi,\alpha_{i}}\).
* If \(\Phi\) is of type \(A_{r}\), using the symmetry of the Dynkin diagram, we can assume that \(2i\leqslant r+1\). Then, \[K_{\Phi,\alpha_{i}}(\mathbf{x})=\prod_{j=1}^{i-1}\frac{1}{1-x_{i-j}x_{i+j}}.\]
* If \(\Phi\) is of type \(C_{r}\) and \(2i\leqslant r\), the function \(K_{\Phi,\alpha_{i}}\) is given by the same formula as for \(A_{r}\).
* If \(\Phi\) is of type \(B_{r}\), or \(F_{4}\), and \(i=r\) or, respectively, \(i=4\), then \(K_{\Phi,\alpha_{i}}=1\).
* If \(\Phi\) is of type \(D_{r}\) and \(r=2i\), then \[K_{\Phi,\alpha_{i}}(\mathbf{x})=\frac{1}{1-x_{1}x_{r-1}}\frac{1}{1-x_{r}} \frac{1}{1-x_{r-1}x_{r}}\prod_{j=1}^{i-2}\frac{1}{1-x_{i-j}x_{i+j}}\frac{1}{1 -x_{i+j}^{2}\cdot\ldots\cdot x_{r-2}^{2}x_{r-1}x_{r}}.\]
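To illustrate Theorem D in a small case, take \(\Phi\) of type \(A_{3}\) and \(i=2\), so that \(2i\leqslant r+1\). Here \(K_{\Phi,\alpha_{2}}(\mathbf{x})=(1-x_{1}x_{3})^{-1}\), every positive root has \(n_{2}(\alpha)\leqslant 1\) (so the product on the left-hand side of (1.5) is empty), and \(W^{2}=\langle\sigma_{1},\sigma_{3}\rangle\) has order \(4\); thus (1.5) specializes to

\[Z_{A_{3}}(\mathbf{x};u)=\frac{\sum_{w\in W^{2}}\,\frac{1}{(1-ux_{2})(1-x_{1}x_{3})}\bigg{|}\,w}{\Delta_{\Phi^{2}}(\mathbf{x})},\]

an average over just \(4\) of the \(24\) elements of \(W\).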
In principle, one can obtain formulas for \(Z_{\Phi}(\mathbf{x};u)\) as averages over a smaller subgroup, by rewriting the definition using a system of coset representatives for the smaller subgroup. However, the kernel functions obtained in this way are tremendously more complicated than our kernel function.
It might be of some interest to comment on the origin of the results described above. We discovered formulas of type (1.5) as part of our investigations of \(Z_{\Phi}(\mathbf{x};u)\) for affine root systems. For an affine root system, we adopt the notation set up in §1.7 for its parabolic sub-systems and parabolic subgroups. When \(\Phi\) is an affine root system of type \(D_{4}^{(1)}\), three of the authors discovered in [19] that \(Z_{\Phi}(\mathbf{x};u)\) and \(Z_{\Phi}(\mathbf{x};u\mathbf{x}^{\delta})\) are related by a new type of functional equation that involves a \(3\times 3\) matrix \(B(\mathbf{x};u)\); here \(\delta=\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}+2\alpha_{5}\) is the minimal positive imaginary root and the Dynkin diagram of \(D_{4}^{(1)}\) is labelled as in Figure 1. The matrix \(B\) has an inverse with polynomial entries, which was given explicitly in [19]. It turns out that
\begin{table}
\begin{tabular}{c|cccccccc} \(\Phi\) & \(A_{r}\) & \(B_{r}\) & \(C_{r}\) & \(D_{r}\) & \(E_{6}\) & \(E_{7}\) & \(E_{8}\) & \(F_{4}\) \\ \hline \(i\) & Any & \(i=r\) & \(2i\leqslant r\) & \(2i\leqslant r+1\) & \(i\neq 4\) & \(i\in\{1,2,7\}\) & \(i\in\{1,8\}\) & \(i=4\) \\ \end{tabular}
\end{table}
Table 1. Admissible nodes.
the sum \(B_{o,e}(\mathbf{x};u)\) of the entries in the third column of \(B(\mathbf{x};u)\) determines the entire matrix \(B\) and it can be expressed as follows
\[B_{o,e}(\mathbf{x};u)=\frac{u\mathbf{x}^{\delta}}{x_{5}}\sum_{w\in W^{5}}\left[ \left(1-ux_{5}\right)^{-1}\prod_{1\leqslant i<j\leqslant 4}(1-x_{i}x_{j})^{-1} \right]\Bigg{|}w\]
Under the evaluation \(x_{1}=0\), and ignoring the term \(u\mathbf{x}^{\delta}/x_{5}\), the above formula leads to the formula (1.5) corresponding to the parabolic sub-system \(\Phi^{1}\) of type \(D_{4}\) and the node \(i=5\)
\[(1-u^{2}\mathbf{x}^{2\theta_{1}})Z_{\Phi^{1}}(x_{2},\ldots,x_{5})=\frac{\sum_ {w\in W^{1,5}}\left[(1-ux_{5})^{-1}\prod_{2\leqslant i<j\leqslant 4}(1-x_{i}x_{j}) ^{-1}\right]\Bigg{|}w}{\Delta_{\Phi^{1,5}}(\mathbf{x})}.\]
Above, \(\theta_{1}=\alpha_{2}+\alpha_{3}+\alpha_{4}+2\alpha_{5}\) is the highest root of \(\Phi^{1}\), \(\Phi^{1,5}\) is the parabolic sub-system of \(\Phi^{1}\) obtained by excluding the node \(5\), and \(W^{1,5}\) is its Weyl group. The comparison of these two formulas suggests that \(B_{o,e}(\mathbf{x};u)\) is an "affinization" of the finite zeta average of type \(D_{4}\). We discovered a similar phenomenon for affine groups of type \(A_{r}^{(1)}\) with \(r\) odd. Based on the treatment of the case \(D_{4}^{(1)}\), Theorem D will play a role in deriving the extra functional equation for affine root systems in the ongoing work [20]. These facts prompted us to investigate the existence of such formulas for zeta averages associated to finite root systems, resulting in the discovery of Theorem D. The more fundamental Theorem A was obtained in the process of proving Theorem D.
To highlight the main difficulty encountered in the proofs of Theorem A and Theorem D, let us point out that, implicitly, both statements claim the existence of unexpected symmetries (in the form of extra functional equations) for certain objects. The residue in Theorem A must satisfy functional equations that correspond to the simple roots in \(\Phi_{0}\) that are not simple roots in \(\Phi\), and the average over the parabolic subgroup \(W^{i}\) in Theorem D must satisfy the functional equation that corresponds to the excluded simple root. The existence of the extra symmetries, together with uniqueness results concerning rational functions with prescribed symmetries, are the main elements of both proofs.
For Theorem A, the extra functional equation is proved by a detailed analysis of the Chinta-Gunnells action (Proposition 6.8). We use the uniqueness result of [12, Corollary 5.8], [23, Corollary 5.2], describing \(Z_{\Phi}(\mathbf{x};u)\) as the unique rational function invariant under \(W\) with the property that \(D_{\Phi}(\mathbf{x};u)Z_{\Phi}(\mathbf{x};u)\) is a polynomial with constant term \(1\), where \(D_{\Phi}(\mathbf{x};u)=\prod_{\alpha\in\Phi^{+}}(1-u^{2}\mathbf{x}^{m_{\alpha} \alpha})\). However, in order to apply it, we must first show that the residue lies in the correct ambient space. This is accomplished by revisiting, in Section 3, the analysis from [12, 23] on the support of the numerator of a rational function invariant under the Chinta-Gunnells action.
A key ingredient in the proof of Theorem D is the following uniqueness result. A rational function is uniquely determined by the residue at \(x_{i}=1/u\), the invariance under the Chinta-Gunnells action of \(W^{i}\), and some properties of its polar divisor and the degree in \(x_{i}\); we refer to Lemma 7.6 for the precise conditions. The unique rational function whose residue is the one specified by Theorem A is precisely \(Z_{\Phi}({\bf x};u)\); it is remarkable that it is this precise specification of the residue that corresponds to a rational function with a larger group of functional equations. This characterization of \(Z_{\Phi}({\bf x};u)\) is different from the characterization of [12, 23] mentioned above.

Figure 1. \(D_{4}^{(1)}\) diagram labelling
We treat simply-laced and double-laced root systems on equal footing. However, we could have taken advantage of the following relationship between double-laced and simply-laced root systems. We denote by \(\Phi^{s}\subseteq\Phi\) the root sub-system consisting of all short roots, and by \(Q^{s}\) its root lattice. The inclusion of root lattices \(Q^{s}\subset Q\) induces canonical morphisms \(\mathbb{F}(Q^{s})\subset\mathbb{F}(Q)\). We regard \(Z_{\Phi^{s}}({\bf x};u)\) as an element of \(\mathbb{F}(Q)\) in this fashion. As it turns out (see Proposition 4.1), for \(\Phi\) a double-laced root system, we have
\[Z_{\Phi}({\bf x};u)=Z_{\Phi^{s}}({\bf x};u).\]
This is perhaps of independent interest.
Our results open a number of immediate questions. One set of questions is related to describing the residues, as well as formulas of type (1.5), for the _twisted_ quadratic Weyl group multiple Dirichlet series, constructed using the twisted Chinta-Gunnells action introduced in [14]. Such results would have implications for the description of the residues of Eisenstein series on metaplectic 2-covers of \(p\)-adic groups. A further set of questions is related to the extension of our results to the case of higher order WMDS. Preliminary computations show that simple-minded generalizations of our Theorem A and Theorem D are not true.
In the case of affine root systems, the corresponding version of Theorem A plays an important technical role in the determination of the correction factor that must appear in the affine version of the local-to-global principle. It would be interesting to see if formulas of type (1.5) hold in the affine case for an appropriate kernel function. If so, they would express the zeta average as a sum over a finite Weyl group, making the study of \(Z_{\Phi}({\bf x};u)\) more amenable.
**Acknowledgements.** Diaconu, Pasol and Popa were partially supported by the CNCS-UEFISCDI grant PN-III-P4-ID-PCE-2020-2498. Ion was partially supported by the Simons Foundation grant 420882.
## 2. The Chinta-Gunnells action
Let \(\Phi\) be a finite, irreducible, reduced root system of rank \(r\). We fix a basis \(\Pi(\Phi)=\{\alpha_{i}\}_{1\leqslant i\leqslant r}\) and use \(\Phi^{\pm}\) to refer to the corresponding sets of positive, and respectively negative, roots. The root sub-systems of short, respectively long, roots are denoted by \(\Phi^{s}\) and, respectively, \(\Phi^{\ell}\). If \(\Phi\) is simply-laced, we consider all roots to be _short_. We extend this notation and convention to any subset of \(\Phi\).
Let \(Q=\bigoplus_{i=1}^{r}\mathbb{Z}\alpha_{i}\) be the root lattice of \(\Phi\). We denote by \(W\) the Weyl group of \(\Phi\). For \(\alpha\in\Phi\), let \(\sigma_{\alpha}\) denote the corresponding reflection. For simplicity, we use \(\sigma_{i}\), \(1\leqslant i\leqslant r\), to refer to the reflections corresponding to simple roots.
There is a unique \(W\)-invariant inner product on \(V=Q\otimes_{\mathbb{Z}}\mathbb{R}\) normalized such that the short roots have square length 2. We use \(\langle\cdot,\cdot\rangle\) to denote this scalar product and \(q(\lambda)=\frac{1}{2}\langle\lambda,\lambda\rangle\) to refer to the associated quadratic form, which takes integral values with this normalization. To each root \(\alpha\) we associate the positive integer

\[m_{\alpha}=\frac{2}{\gcd(2,q(\alpha))}=\begin{cases}2&\text{ if }q(\alpha)\text{ is odd},\\ 1&\text{ if }q(\alpha)\text{ is even}.\end{cases}\]
If \(\Phi\) is simply-laced or of type \(G_{2}\) we have \(m_{\alpha}=2\) for all \(\alpha\in\Phi\). If \(\Phi\) is double-laced (\(B_{r}\), \(C_{r}\) and \(F_{4}\)), then \(m_{\alpha}\) is \(2\) if \(\alpha\) is short, and \(m_{\alpha}\) is \(1\) if \(\alpha\) is long. For simplicity, we denote \(m_{i}=m_{\alpha_{i}}\). We also consider the even sub-lattice of \(Q\) defined as
\[Q_{\text{ev}}=\{\lambda\in Q:\langle\lambda,\alpha_{i}\rangle\equiv 0\text{ mod }2,\text{ for }1\leqslant i\leqslant r\}.\]
Note that \(m_{\alpha}\alpha\in Q_{\text{ev}}\) for \(\alpha\in\Phi\), since the Cartan ratios \(2\langle\alpha,\beta\rangle/\langle\beta,\beta\rangle\) are integral for \(\alpha,\beta\in\Phi\).
For each \(w\) in \(W\) let \(\ell(w)\) be the length of a reduced (i.e. shortest) decomposition of \(w\) in terms of simple reflections. For \(w\) in \(W\) we have \(\ell(w)=|\Phi(w)|\), where \(\Phi(w):=\{\alpha\in\Phi^{+}:w\alpha\in\Phi^{-}\}\). There is a unique element of \(W\) of maximal length, denoted by \(w_{\circ}\). In this case, \(\Phi(w_{\circ})=\Phi^{+}\).
If \(w=\sigma_{i_{\ell}}\cdots\sigma_{i_{1}}\) is a reduced decomposition, then
\[\Phi(w)=\{\alpha_{i_{1}}\prec\sigma_{i_{1}}(\alpha_{i_{2}})\prec\ldots\prec \sigma_{i_{1}}\sigma_{i_{2}}\cdots\sigma_{i_{\ell-1}}(\alpha_{i_{\ell}})\}, \tag{2.1}\]
with the order \(\prec\) dependent on the chosen reduced expression for \(w\).
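For example, in type \(A_{2}\), the reduced decomposition \(w=\sigma_{1}\sigma_{2}\) (so \(i_{1}=2\), \(i_{2}=1\)) gives

\[\Phi(w)=\{\alpha_{2}\prec\sigma_{2}(\alpha_{1})\}=\{\alpha_{2}\prec\alpha_{1}+\alpha_{2}\},\]

and indeed \(w\alpha_{2}=-\alpha_{1}-\alpha_{2}\) and \(w(\alpha_{1}+\alpha_{2})=-\alpha_{1}\) are negative, while \(w\alpha_{1}=\alpha_{2}\) is positive.

We will also need the following well-known property of the set \(\Phi(w)\) [32, (2.2.4)].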
**Lemma 2.1**.: _Assume that \(w,w^{\prime}\in W\) and \(w^{-1}\alpha\in\Phi^{+}\) for all \(\alpha\in\Phi(w^{\prime})\). Then, \(\ell(w^{\prime}w)=\ell(w^{\prime})+\ell(w)\) and_
\[\Phi(w^{\prime}w)=\Phi(w)\cup w^{-1}\Phi(w^{\prime}).\]
_Reduced expressions of \(w\), \(w^{\prime}\), concatenate to a reduced expression of \(w^{\prime}w\). Moreover, the order \(\prec\) on \(\Phi(w^{\prime}w)\) is the concatenation of the order relations on \(\Phi(w)\) and \(w^{-1}\Phi(w^{\prime})\)._
We also use the following notation
\[\Phi^{s}(w)=\{\beta\in\Phi(w)\mid m_{\beta}=2\}\quad\text{and}\quad\ell_{s}(w) =|\Phi^{s}(w)|.\]
The order relation on \(\Phi(w)\) induced by a reduced expression of \(w\) restricts to an order relation on \(\Phi^{s}(w)\). If \(\Phi\) is simply-laced or of type \(G_{2}\), then \(\Phi^{s}(w)=\Phi(w)\); when \(\Phi\) is double-laced, then \(\Phi^{s}(w)=\Phi(w)\cap\Phi^{s}\).
Let \(\mathbb{F}=\mathbb{Q}(u)\), with \(u\) a formal parameter. The standard basis of \(\mathbb{F}[Q]\), the \(\mathbb{F}\)-group ring of \(Q\), is denoted by \(\{\mathbf{x}^{\lambda}\}_{\lambda\in Q}\). We regard \(\mathbb{F}[Q]\) as the ring of Laurent polynomials in the monomials \(\mathbf{x}^{\lambda}\), and we denote by \(\mathbb{F}(Q)\) its field of fractions. Denote \(x_{i}=\mathbf{x}^{\alpha_{i}}\) and treat \(\mathbf{x}=(x_{1},\ldots,x_{r})\) as a multivariable. For \(\lambda\in Q\), let \(n_{i}(\lambda)\in\mathbb{Z}\) denote the coefficient of \(\alpha_{i}\) in the expansion of \(\lambda\) in the basis \(\Pi(\Phi)\). With this notation, if \(\lambda\in Q\), then
\[\mathbf{x}^{\lambda}=\prod x_{i}^{n_{i}(\lambda)}.\]
The canonical Weyl group left action on \(Q\) induces the following action of \(W\) on \(\mathbb{F}[Q]\) and \(\mathbb{F}(Q)\)
\[w\mathbf{x}^{\lambda}=\mathbf{x}^{w^{-1}\lambda},\quad w\in W,\ \lambda\in Q.\]
The corresponding action on the multivariable \(\mathbf{x}\) is \((w\mathbf{x})_{i}=w\mathbf{x}_{i}=w\mathbf{x}^{\alpha_{i}}=\mathbf{x}^{w^{-1} \alpha_{i}}\).
Let \(1\leqslant i\leqslant r\). The involution \(\varepsilon_{i}:\mathbb{F}(Q)\to\mathbb{F}(Q)\) is defined by
\[\varepsilon_{i}\mathbf{x}^{\lambda}=(-1)^{\langle\lambda,\alpha_{i}\rangle} \mathbf{x}^{\lambda},\quad\lambda\in Q.\]
On the multivariable \(\mathbf{x}\) it acts by \((\varepsilon_{i}\mathbf{x})_{j}:=\varepsilon_{i}x_{j}=\varepsilon_{i}\mathbf{x}^{\alpha_{j}}=(-1)^{\langle\alpha_{j},\alpha_{i}\rangle}x_{j}\). If \(m_{i}=1\) then \(\alpha_{i}\in Q_{\mathrm{ev}}\) and \(\varepsilon_{i}\mathbf{x}=\mathbf{x}\), so only the sign operators \(\varepsilon_{i}\) with \(m_{i}=2\) are non-trivial.
For \(\mu\in Q\), we denote \(\varepsilon^{\mu}=\prod\varepsilon_{i}^{n_{i}(\mu)}\). We have,
\[\varepsilon^{\mu}\mathbf{x}^{\lambda}=(-1)^{\langle\lambda,\mu\rangle} \mathbf{x}^{\lambda},\quad\text{and}\quad\varepsilon^{\mu}w\mathbf{x}^{ \lambda}=w\varepsilon^{w^{-1}\mu}\mathbf{x}^{\lambda},\quad\text{for all}\ \ w\in W,\mu,\lambda\in Q. \tag{2.2}\]
For \(f(\mathbf{x})=f(\mathbf{x};u)\in\mathbb{F}(Q)\), denote by \(f_{i}^{+}\), \(f_{i}^{-}\) its even and odd parts with respect to \(\varepsilon_{i}\), namely
\[f_{i}^{\pm}(\mathbf{x})=\frac{1}{2}(f(\mathbf{x})\pm f(\varepsilon_{i} \mathbf{x})).\]
We routinely omit the variable \(u\) from the notation \(f(\mathbf{x};u)\), as it is fixed throughout.
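For instance, in type \(A_{2}\) we have \(\varepsilon_{1}x_{1}=x_{1}\) and \(\varepsilon_{1}x_{2}=-x_{2}\) (since \(\langle\alpha_{1},\alpha_{1}\rangle=2\) and \(\langle\alpha_{2},\alpha_{1}\rangle=-1\)), so for \(f(\mathbf{x})=x_{1}+x_{1}x_{2}\) we obtain \(f_{1}^{+}(\mathbf{x})=x_{1}\) and \(f_{1}^{-}(\mathbf{x})=x_{1}x_{2}\).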
Chinta and Gunnells [13, 14] define a _right_ action of \(W\) on \(\mathbb{F}(Q)\), which for simple reflections is described by2
Footnote 2: This is the same action as the one defined in [15, eq. (7)], with \(n=2\) and \(v=u^{2}\).
\[f\Big{|}^{\text{CG}}_{\sigma_{i}}(\mathbf{x})=\begin{cases}\dfrac{1-u/x_{i}}{1-ux_{i}}\,f_{i}^{+}(\sigma_{i}\mathbf{x})+\dfrac{1}{x_{i}}\,f_{i}^{-}(\sigma_{i}\mathbf{x})&\text{ if }m_{i}=2,\\ f(\sigma_{i}\mathbf{x})&\text{ if }m_{i}=1.\end{cases} \tag{2.3}\]

We also consider the related action, denoted by \(f|w\), which for simple reflections is defined by \(f|\sigma_{i}(\mathbf{x}):=-x_{i}^{m_{i}}f\big{|}^{\text{CG}}_{\sigma_{i}}(\mathbf{x})\). The zeta average of \(\Phi\) is then defined by

\[Z_{\Phi}(\mathbf{x};u)=\frac{1}{\Delta_{\Phi}(\mathbf{x})}\sum_{w\in W}1|w(\mathbf{x}),\qquad\Delta_{\Phi}(\mathbf{x})=\prod_{\alpha\in\Phi^{+}}(1-\mathbf{x}^{m_{\alpha}\alpha}), \tag{2.4}\]

and its numerator \(N_{\Phi}(\mathbf{x};u)\) by

\[N_{\Phi}(\mathbf{x};u)=D_{\Phi}(\mathbf{x};u)Z_{\Phi}(\mathbf{x};u),\qquad D_{\Phi}(\mathbf{x};u)=\prod_{\alpha\in\Phi^{+}}(1-u^{2}\mathbf{x}^{m_{\alpha}\alpha}). \tag{2.5}\]
The Chinta-Gunnells action can also be described as
\[f\Big{|}^{\text{\tiny CG}}_{\sigma_{i}}(\mathbf{x})=\begin{cases}f(\sigma_{i}\mathbf{x})J(x_{i},0)+f(\sigma_{i}\varepsilon_{i}\mathbf{x})J(x_{i},1)&\text{ if }m_{i}=2\\ f(\sigma_{i}\mathbf{x})&\text{ if }m_{i}=1\end{cases}\]
where
\[J(x,\delta)=\frac{1}{2}\left(\frac{1-u/x}{1-ux}+\frac{(-1)^{\delta}}{x}\right),\quad\delta\in\{0,1\}.\]
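For later use (cf. the proof of Lemma 6.3 in Section 6), we record that the definition of \(J\) gives directly

\[J(1/u^{2},0)=-\tfrac{1}{2}(u+u^{3})=-J(-1/u^{2},1),\]

so that \(J(1/u^{2},0)+J(-1/u^{2},1)=0\).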
The action of a general element of \(W\) can be expressed as follows. Recall that \(\ell_{s}(w)=|\Phi^{s}(w)|\), with \(\Phi^{s}(w)\) defined in §2.2.
**Lemma 2.3**.: _Let \(w\in W\) and let \(\ell_{s}=\ell_{s}(w)\). For \(\underline{\delta}=(\delta_{\gamma})_{\gamma\in\Phi^{s}(w)}\in\{0,1\}^{\ell_{s}}\), define \(\varepsilon_{\underline{\delta}}=\varepsilon^{\sum_{\gamma\in\Phi^{s}(w)} \delta_{\gamma}\gamma}\). We fix a reduced decomposition of \(w\) and the corresponding order relation \(\prec\) on \(\Phi^{s}(w)\). Then, with the usual conventions on empty sums and products, we have_
\[f\Big{|}^{\text{\tiny CG}}_{w}(\mathbf{x})=\sum_{\underline{\delta}\in\{0,1\} ^{\ell_{s}}}f\left(w\varepsilon_{\underline{\delta}}\mathbf{x}\right)\prod_{ \beta\in\Phi^{s}(w)}J\left((-1)^{\langle\beta,\sum_{\gamma\prec\beta}\delta_{ \gamma}\gamma\rangle}\mathbf{x}^{\beta},\delta_{\beta}\right). \tag{2.6}\]
Proof.: For the purposes of this proof, let \(J_{2}(x,\delta)=J(x,\delta)\) and \(J_{1}(x,\delta)=\frac{1}{2}(1+(-1)^{\delta})\). Then, the Chinta-Gunnells action can be written uniformly as
\[f\Big{|}^{\text{\tiny CG}}_{\sigma_{i}}(\mathbf{x})=f(\sigma_{i}\mathbf{x})J_{m_{i}}(x_{i},0)+f(\sigma_{i}\varepsilon_{i}\mathbf{x})J_{m_{i}}(x_{i},1).\]
Using this formula, we prove by induction on \(\ell(w)\) that
\[f\Big{|}^{\text{\tiny CG}}_{w}(\mathbf{x})=\sum_{\underline{\delta}\in\{0,1\} ^{\ell}}f\left(w\varepsilon_{\underline{\delta}}\mathbf{x}\right)\prod_{\beta \in\Phi(w)}J_{m_{\beta}}\left((-1)^{\langle\beta,\sum_{\gamma\prec\beta} \delta_{\gamma}\gamma\rangle}\mathbf{x}^{\beta},\delta_{\beta}\right), \tag{2.7}\]
where \(\underline{\delta}\) runs over \(\ell\)-tuples \((\delta_{\gamma})_{\gamma\in\Phi(w)}\in\{0,1\}^{\ell}\) and \(\varepsilon_{\underline{\delta}}=\varepsilon^{\sum_{\gamma\in\Phi(w)}\delta_{\gamma}\gamma}\). If \(\ell(w)=1\) then (2.7) is clear, and if it holds for \(w\), then, using Lemma 2.1, it is easy to check that it holds for \(w\sigma_{i}\) if \(\ell(w\sigma_{i})=\ell(w)+1\).
In (2.7), it is clear that if \(m_{\beta}=1\), only the tuples \(\underline{\delta}\) with \(\delta_{\beta}=0\) contribute, with a factor \(J_{1}(x,0)=1\). Formula (2.6) follows immediately.
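As a consistency check, for \(w=\sigma_{i}\) with \(m_{i}=2\), we have \(\Phi^{s}(w)=\{\alpha_{i}\}\) and \(\ell_{s}=1\), and (2.6) reduces to \(f(\sigma_{i}\mathbf{x})J(x_{i},0)+f(\sigma_{i}\varepsilon_{i}\mathbf{x})J(x_{i},1)\), recovering the description of the action given above.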
As an immediate application, we have the following divisibility property of the numerator \(N_{\Phi}(\mathbf{x};u)\) defined by (2.5).
**Lemma 2.4**.: _(i) If \(m_{i}=2\), then \(N_{\Phi}(\mathbf{x};u)\) is divisible by \(1+ux_{i}\)._
_(ii) If \(m_{\alpha}=1\), \(\alpha\in\Phi^{+}\), then \(N_{\Phi}(\mathbf{x};u)\) is divisible by \(1-u^{2}\mathbf{x}^{\alpha}\)._
Proof.: Assume \(m_{i}=2\). The function \(1|w\) has a pole along \(x_{i}=\pm 1/u\) if and only if \(1\big{|}^{\text{CG}}_{w}\) does and, by Lemma 2.3, such a pole can only come from the factor corresponding to \(\nu=\alpha_{i}\) in (2.6), since \(J(x,\delta)\) has a pole only at \(x=1/u\). The residue of \(J(x,\delta)\) at this pole does not depend on \(\delta\); grouping the terms in (2.6) in pairs \(\underline{\delta}\), \(\underline{\delta}^{\prime}\) that coincide except for \(\delta^{\prime}_{\alpha_{i}}=1-\delta_{\alpha_{i}}\), one checks that the poles along \(x_{i}=-1/u\) cancel in the sum defining \(Z_{\Phi}(\mathbf{x};u)\). Therefore \(Z_{\Phi}(\mathbf{x};u)\) is regular along \(x_{i}=-1/u\) and, since \(1+ux_{i}\) divides \(D_{\Phi}(\mathbf{x};u)\), the numerator \(N_{\Phi}(\mathbf{x};u)\) is divisible by \(1+ux_{i}\), proving (i).

For (ii), let \(\alpha\in\Phi^{+}\) with \(m_{\alpha}=1\). By Lemma 2.3, the polar divisor of each function \(1\big{|}^{\text{CG}}_{w}\) is supported on the hypersurfaces \(\mathbf{x}^{\beta}=\pm 1/u\), \(\beta\in\Phi^{s}(w)\); neither these hypersurfaces nor the zeros of \(\Delta_{\Phi}(\mathbf{x})\) coincide with the zero locus of \(1-u^{2}\mathbf{x}^{\alpha}\). Hence \(Z_{\Phi}(\mathbf{x};u)\) is regular along this locus and, since \(1-u^{2}\mathbf{x}^{\alpha}\) divides \(D_{\Phi}(\mathbf{x};u)\), it must divide \(N_{\Phi}(\mathbf{x};u)\).
The reason for considering \(N_{\Phi}({\bf x};u)\), rather than its quotient by its divisors in Lemma 2.4, will become apparent in the next section.
## 3. The numerator of the twisted Chinta-Gunnells average
While our main interest is the zeta average \(Z_{\Phi}(\mathbf{x};u)\), in §6.11 we shall encounter functions invariant under a _twisted_ Chinta-Gunnells action. Such functions were studied in detail in [12, 23], and since we need to consider a slightly more general setting, we review here the results from loc. cit. that are needed.
Let \(\omega_{i}\), \(1\leqslant i\leqslant r\), denote the fundamental weights, and \(\alpha_{i}^{\vee}=2\alpha_{i}/\langle\alpha_{i},\alpha_{i}\rangle\), \(1\leqslant i\leqslant r\), the simple co-roots, so that \(\langle\omega_{i},\alpha_{j}^{\vee}\rangle=\delta_{i,j}\). The weight lattice of \(\Phi\) is \(P=\bigoplus_{i=1}^{r}\mathbb{Z}\omega_{i}\). Let \(P^{+}=\sum_{i=1}^{r}\mathbb{Z}_{\geqslant 0}\omega_{i}\), and \(P^{++}=\sum_{i=1}^{r}\mathbb{Z}_{>0}\omega_{i}\), be the set of dominant weights, and respectively regular dominant weights. With the notation
\[\rho=\sum_{i=1}^{r}\omega_{i}=\frac{1}{2}\sum_{\alpha\in\Phi^{+}}\alpha, \tag{3.1}\]
we have \(P^{++}=\rho+P^{+}\). On \(P\) we have the natural partial order relation defined by
\[\omega^{\prime}\leqslant\omega\quad\text{if and only if}\quad\omega-\omega^{ \prime}\in Q^{+}:=\sum_{i=1}^{r}\mathbb{Z}_{\geqslant 0}\alpha_{i}. \tag{3.2}\]
We fix \(\ell_{1},\ell_{2},\ldots,\ell_{r}\in\mathbb{Z}\) (which will be called _twist parameters_), and denote
\[\omega=\sum_{i=1}^{r}\ell_{i}\omega_{i}\in P,\quad\text{and}\quad\theta=\rho+\omega.\]
Following [12], we define a _twisted action3_ by
Footnote 3: After the change of variables \(u\mapsto\sqrt{q},x_{i}\mapsto x_{i}/\sqrt{q}\), the action \(\big{|}_{\omega}^{\text{CG}}\) coincides with \(|_{\ell}\) defined in [12] with \(\ell=(\ell_{1},\ldots,\ell_{r})\).
\[f\!\big{|}_{\omega}^{\text{CG}}\sigma_{i}({\bf x})=x_{i}^{\ell_{i}}\cdot\begin{cases} \frac{1-u/x_{i}}{1-ux_{i}}f_{i}^{+}(\sigma_{i}{\bf x})+\frac{1}{x_{i}}f_{i}^{- }(\sigma_{i}{\bf x})&\text{ if }m_{i}=2,\,\ell_{i}\text{ even},\\ \frac{1-u/x_{i}}{1-ux_{i}}f_{i}^{-}(\sigma_{i}{\bf x})+\frac{1}{x_{i}}f_{i}^{+ }(\sigma_{i}{\bf x})&\text{ if }m_{i}=2,\,\ell_{i}\text{ odd},\\ f(\sigma_{i}{\bf x})&\text{ if }m_{i}=1.\end{cases}\]
The usual Chinta-Gunnells action corresponds to \(\omega=0\).
We also consider the related action, denoted by \(f|_{\omega}w\), which for simple reflections is defined by
\[f|_{\omega}\sigma_{i}({\bf x}):=-x_{i}^{m_{i}}f\!\big{|}_{\omega}^{\text{CG}} \sigma_{i}({\bf x}).\]
As in (2.4), we define the corresponding twisted zeta average. For \(\omega\in P^{+}\), it was shown in [12] that the denominator of the twisted zeta average divides \(D_{\Phi}({\bf x};u)\). For \(\omega\in P\), the twisted zeta average may have additional poles when some \(x_{j}=0\). Therefore, for any \(\omega\in P\), we define the \(\mathbb{F}\)-vector space
\[\mathcal{N}_{\omega}=\{N\in\mathbb{F}[Q]\ :\ f=N/D_{\Phi}(\mathbf{x};u)\text{ satisfies }f=f\big{|}_{\omega}^{\text{CG}}w\text{ for all }w\in W\}. \tag{3.3}\]
We emphasize that, unlike in [12], we allow numerators that are _Laurent polynomials_.
If \(f(\mathbf{x})\in\mathbb{F}(Q)\) is even with respect to all the sign operators \(\varepsilon_{i}\) then, for any \(g(\mathbf{x})\in\mathbb{F}(Q)\) and \(w\in W\), we have
\[[f(\mathbf{x})g(\mathbf{x})]|_{\omega}w=f(w\mathbf{x})\cdot g(\mathbf{x})|_{ \omega}w.\]
Since \(\sigma_{i}\) permutes \(\Phi^{+}\setminus\{\alpha_{i}\}\) and maps \(\alpha_{i}\) to \(-\alpha_{i}\), we have \(D_{\Phi}(\sigma_{i}\mathbf{x};u)=D_{\Phi}(\mathbf{x};u)\frac{1-u^{2}x_{i}^{-m_{i}}}{1-u^{2}x_{i}^{m_{i}}}\); it follows that the characterizing property of \(N\in\mathcal{N}_{\omega}\) is
\[N=\frac{1-u^{2}x_{i}^{m_{i}}}{1-u^{2}/x_{i}^{m_{i}}}\cdot N\big{|}_{\omega}^{ \text{\tiny CG}}\!\sigma_{i},\quad i=1,\dots,r. \tag{3.4}\]
The support of a Laurent polynomial \(N=\sum_{\lambda\in Q}c_{\lambda}\mathbf{x}^{\lambda}\) is \(\operatorname{Supp}(N)=\{\lambda\in Q\mid c_{\lambda}\neq 0\}.\) The equality (3.4) is equivalent to the following linear system satisfied by the coefficients
\[\begin{split} c_{\mu}-uc_{\lambda}&=c_{\lambda-\alpha_{i}}-uc_{\mu+\alpha_{i}},\qquad\text{if }m_{i}=2\text{ and }\ell_{i}+\langle\lambda,\alpha_{i}^{\vee}\rangle\text{ even},\\ c_{\mu}+u^{2}c_{\lambda}&=c_{\lambda-2\alpha_{i}}+u^{2}c_{\mu+2\alpha_{i}},\quad\text{if }m_{i}=2\text{ and }\ell_{i}+\langle\lambda,\alpha_{i}^{\vee}\rangle\text{ odd},\\ c_{\mu}+u^{2}c_{\lambda}&=c_{\lambda-\alpha_{i}}+u^{2}c_{\mu+\alpha_{i}},\qquad\text{if }m_{i}=1,\end{split} \tag{3.5}\]
where \(\mu=\sigma_{i}(\lambda)+\langle\theta,\alpha_{i}^{\vee}\rangle\alpha_{i}\). Remark that if \(m_{i}=2\), then \(q(\alpha_{i})\) is odd, therefore \(\langle\lambda,\alpha_{i}\rangle\) and \(\langle\lambda,\alpha_{i}^{\vee}\rangle\) have the same parity.
The order relation \(\leqslant\) on \(P\), when restricted to a single \(W\)-orbit, has an equivalent description in terms of the Bruhat order on \(W\). For an element \(w\in W\) and \(1\leqslant i\leqslant r\), we write \(w<\sigma_{i}w\) if and only if \(\ell(w)<\ell(\sigma_{i}w)\). The transitive closure of this relation is called the (weak left) Bruhat order. For the basic properties of the Bruhat order we refer to [26, §5.9]. From Lemma 2.1 it follows that \(w<\sigma_{i}w\) if and only if \(w^{-1}\alpha_{i}\in\Phi^{+}\). For each weight \(\eta\in P\) define \(\eta_{+}\) to be the unique dominant element in \(W\eta\), the orbit of \(\eta\). Let \(w_{\eta}\in W\) be the unique minimal length element such that \(w_{\eta}(\eta_{+})=\eta\).
**Proposition 3.1**.: _If \(\sigma_{i}\eta\neq\eta\), then \(w_{\sigma_{i}\eta}=\sigma_{i}w_{\eta}\). Furthermore, the following are equivalent_
1. \(w_{\sigma_{i}\eta}>w_{\eta}\)_;_
2. \(w_{\eta}^{-1}\alpha_{i}\in\Phi^{+}\)_;_
3. \(\langle\eta,\alpha_{i}\rangle>0\)_;_
4. \(\sigma_{i}\eta<\eta\)_._
_In consequence, for any \(\nu\in W\eta\), we have \(\nu\leqslant\eta\) if and only if \(w_{\nu}\geqslant w_{\eta}\)._
Proof.: The first claim is [27, Lemma 4.3]. The equivalence between (b) and (c) is proved in [27, Lemma 4.1]. The remaining implications are straightforward.
We note that, in particular, \(\eta_{+}\), and \(w_{\circ}\eta_{+}\), are the largest, and respectively smallest, elements of \(W\eta\).
Following [12], for each dominant weight \(\xi\in P^{+}\), we introduce the set
\[O_{\xi}:=\{\theta-w\xi\ :\ w\in W\}.\]
This is the usual \(W\)-orbit of \(\xi\), reflected in the origin and translated by \(\theta\).
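To illustrate the definition, let \(\Phi\) be of type \(A_{1}\) and \(\omega=0\), so that \(\theta=\rho\) and \(O_{\rho}=\{\rho-\rho,\,\rho+\rho\}=\{0,\alpha_{1}\}\); this is exactly the support of the rank-one numerator \(N_{A_{1}}(\mathbf{x};u)=1+ux_{1}\), in accordance with Proposition 3.3 below.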
We consider the relations (3.5) for elements \(\lambda,\mu\in O_{\xi}\) such that
\[\lambda=\theta-\eta,\quad\mu=\theta-\sigma_{i}\eta=\sigma_{i}\lambda+\alpha_{i }\langle\theta,\alpha_{i}^{\vee}\rangle,\text{ with }\sigma_{i}\eta\leqslant\eta.\]
The elements in the root lattice \(Q\) appearing on the right-hand side of (3.5) belong to orbits \(O_{\tau}\) with \(\tau>\xi\), by the following geometric lemma from [12], whose proof we include for completeness.
**Lemma 3.2**.: _Let \(\lambda=\theta-\eta\), \(\mu=\theta-\sigma_{i}\eta\) for some \(\eta\in P\) such that \(\sigma_{i}\eta\leqslant\eta\). Denoting \(\xi:=\eta_{+}\), for any \(m\geqslant 1\) we have_
1. _If_ \(\mu+m\alpha_{i}\in O_{\tau}\)_, then_ \(\tau>\xi\)_;_
2. _If_ \(\lambda-m\alpha_{i}\in O_{\tau}\)_, then_ \(\tau>\xi\)_._
Proof.: We prove the statement for \(\lambda\), the argument for \(\mu\) being similar. Let \(\tau\in P^{+}\) such that
\[\lambda-m\alpha_{i}=\theta-w\tau.\]
Since \(\eta_{+}=\xi\), we have \(\lambda=\theta-w_{\eta}\xi\), which gives \(\xi=w_{\eta}^{-1}w\tau-mw_{\eta}^{-1}\alpha_{i}\). We now have two cases: (i) if \(\sigma_{i}\eta<\eta\), then \(w_{\eta}^{-1}\alpha_{i}>0\) by Proposition 3.1; (ii) if \(\sigma_{i}\eta=\eta\), then \(\eta=\sigma_{i}w_{\eta}\xi\) and the definition of \(w_{\eta}\) implies \(\ell(\sigma_{i}w_{\eta})>\ell(w_{\eta})\). Again we obtain \(w_{\eta}^{-1}\alpha_{i}>0\), by Lemma 2.1.
In both cases it follows that \(\xi<w_{\eta}^{-1}w\tau\). Since \(\tau\) is dominant, we have \(w_{\eta}^{-1}w\tau\leqslant\tau\), showing that \(\tau>\xi\).
Using the above lemma, one can prove, along the same lines as in [12], the following result. We emphasize that in our statement we do not assume that the twisting parameters are nonnegative.
**Proposition 3.3**.: _Let \(\omega\in P\), and \(N\in\mathcal{N}_{\omega}\), \(N\neq 0\). Let \(\xi\in P^{+}\) be maximal with the property that_
\[O_{\xi}\cap\operatorname{Supp}(N)\neq\emptyset.\]
_Then \(\xi\) is strongly dominant and \(O_{\xi}\subseteq\operatorname{Supp}(N)\)._
Proof.: Let \(N=\sum c_{\lambda}\mathbf{x}^{\lambda}\). For a contradiction, assume that \(\xi\) is not in \(P^{++}\), so the parabolic subgroup \(P_{\xi}\) generated by the simple reflections fixing it is nontrivial. Let \(\sigma_{i}\in P_{\xi}\) and \(\lambda=\theta-\xi\), \(\mu=\theta-\sigma_{i}\xi=\lambda\). By Lemma 3.2, the equations (3.5) show that \(c_{\lambda}\) is determined by some coefficients labeled by elements in sets \(O_{\tau}\), with \(\tau>\xi\). Such coefficients must vanish by the maximality of \(\xi\), so \(c_{\lambda}=0\).
Let now \(\gamma\in O_{\xi}\), so that \(\gamma=\theta-\eta=\theta-w_{\eta}\xi\). Let \(w_{\eta}=\sigma_{i_{\ell}}\cdot\ldots\cdot\sigma_{i_{1}}\) be a reduced decomposition with \(\ell=\ell(w_{\eta})\). By the minimality of \(w_{\eta}\) and Proposition 3.1, it follows that \(\xi>\sigma_{i_{1}}\xi>\sigma_{i_{2}}\sigma_{i_{1}}\xi>\cdots>\eta\). Applying Lemma 3.2 \(\ell\) times shows that \(c_{\gamma}=0\) for all \(\gamma\in O_{\xi}\), contradicting the hypothesis \(O_{\xi}\cap\operatorname{Supp}(N)\neq\emptyset\).
We conclude that \(\xi\in P^{++}\), so the set \(O_{\xi}\) contains \(|W|\) elements. If \(c_{\lambda}=0\), the argument in the previous paragraph shows that \(O_{\xi}\cap\operatorname{Supp}(N)=\emptyset\). Therefore, \(c_{\lambda}\neq 0\), and the argument of the previous paragraph shows that \(c_{\gamma}\neq 0\) for all \(\gamma\in O_{\xi}\). In conclusion, \(O_{\xi}\subseteq\operatorname{Supp}(N)\).
_Remark 3.4_.: The proof of Proposition 3.3 gives a slightly stronger result: every \(N\in\mathcal{N}_{\omega}\) is uniquely determined by the coefficients \(c_{\theta-\xi}\), \(\xi\in P^{++}\).
**Corollary 3.5**.: _Let \(0\neq N\in\mathcal{N}_{0}\), such that \(\operatorname{Supp}(N)\not\subset Q^{+}\). Then there is \(\lambda<0\) such that \(\lambda\in\operatorname{Supp}(N)\)._
Proof.: Take \(\xi\in P^{++}\) maximal with the property that \(O_{\xi}\cap\operatorname{Supp}(N)\neq\emptyset\). Since \(\xi\) is strongly dominant, we have \(\xi\geqslant\theta=\rho\).
We prove by contradiction that \(\xi>\rho\) for at least one such maximal \(\xi\). Indeed, assume that \(\xi=\rho\) for all such maximal \(\xi\). It follows that \(\rho\geqslant\tau\) for all \(\tau\in P^{+}\) such that \(O_{\tau}\cap\operatorname{Supp}(N)\neq\emptyset\). But then \(\rho-w\tau\geqslant\rho-\tau\geqslant 0\), so \(\operatorname{Supp}(N)\subset Q^{+}\), contradicting the hypothesis.
Therefore, there exists such a maximal element \(\xi\in P^{++}\) with \(\xi>\rho\). Proposition 3.3 implies that \(O_{\xi}\subset\operatorname{Supp}(N)\), and in particular, \(\lambda=\rho-\xi<0\) is in \(\operatorname{Supp}(N)\).
The following corollary is proved in [12] for \(\omega\) dominant.
**Corollary 3.6**.: _Let \(\omega\in P\), and assume \(N\in\mathcal{N}_{\omega}\) has \(\operatorname{Supp}(N)\subset Q^{+}\). Then,_
\[\operatorname{Supp}(N)\subseteq\bigcup_{\theta\geqslant\xi\in P^{+}}O_{\xi}.\]
_In particular, \(\operatorname{Supp}(N)\) is contained in the convex hull of the set \(\{\theta-w\theta\mid w\in W\}\)._
Proof.: By Proposition 3.3, for every maximal \(\xi\in P^{++}\) with the property that \(O_{\xi}\cap\operatorname{Supp}(N)\neq\emptyset\), we have \(O_{\xi}\subset\operatorname{Supp}(N)\). In consequence, \(\theta-\xi\geqslant 0\). Since every \(\tau\in P^{+}\) with \(O_{\tau}\cap\operatorname{Supp}(N)\neq\emptyset\) is smaller than such a maximal \(\xi\), it follows that \(\theta\geqslant\tau\) for all such \(\tau\). This is our first claim.
The second claim follows from the first and the following fact: if \(\theta\in P\), \(\xi\in P^{+}\) and \(\theta\geqslant\xi\), then \(W\xi\) is contained in the convex hull of \(W\theta\) [32, §2.6].
Using that \(\xi\) and \(w_{\circ}\xi\) are the largest and, respectively, the smallest, elements of \(W\xi\) for \(\xi\in P^{+}\), we obtain the following information about the numerator \(N_{\Phi}\) of the untwisted zeta average, defined in (2.5).
**Corollary 3.7**.: _For each \(\lambda\in\operatorname{Supp}N_{\Phi}\) we have \(\lambda\leqslant\rho-w_{\circ}\rho=2\rho\)._
Proof.: Let \(\lambda\in\operatorname{Supp}N_{\Phi}\). By Corollary 3.6 there is \(\xi\leqslant\rho\), \(\xi\in P^{+}\), such that
\[\lambda=\rho-w\xi\leqslant\rho-w_{\circ}\xi\leqslant\rho-w_{\circ}\rho.\]
The second inequality holds because \(\rho-\xi\in Q^{+}\), and \(w_{\circ}\) maps \(\Phi^{+}\) to \(\Phi^{-}\).
Finally, we will need the following result, proved in [12] for \(\Phi\) simply-laced, and generalized in [23] for arbitrary \(\Phi\) (and covers of arbitrary degree).
**Proposition 3.8** ([12]).: _Let \(Z=N/D_{\Phi}(\mathbf{x};u)\) be a rational function invariant under the Chinta-Gunnells action of \(W\), and such that \(\operatorname{Supp}(N)\subseteq Q^{+}\), and \(N(0)=1\). Then, \(Z=Z_{\Phi}(\mathbf{x};u)\)._
Proof.: Since the proof is short, we include it here following [12]. By Remark 3.4, the coefficients \(c_{\rho-\xi}\) for \(\xi\in P^{++}\) uniquely determine \(N\). By Corollary 3.6, the coefficients \(c_{\rho-\xi}\) are non-zero only for \(\xi\leqslant\rho\). Since \(\rho\) is the smallest strongly dominant weight, we must have \(\xi=\rho\). We conclude that \(c_{0}\) uniquely determines \(N\), and therefore also uniquely determines \(Z\). The zeta average \(Z_{\Phi}(\mathbf{x};u)\) satisfies the conditions in the statement, so \(Z=Z_{\Phi}\).
Proposition 3.8 was generalized in [23]. It was shown there that, for \(\omega\) dominant, the dimension of the space \(\mathcal{N}_{\omega}\) equals the number of strongly dominant weights \(\xi\) such that \(\xi\leqslant\theta=\omega+\rho\), the maximal dimension allowed by Remark 3.4.
## 4. Zeta averages of double-laced root systems
In this section we assume that the root system \(\Phi\) is double-laced (i.e. of type \(B_{r}\), \(C_{r}\), \(F_{4}\)). By Lemma 2.4, the zeta average \(Z_{\Phi}({\bf x};u)\) may have poles only at \({\bf x}^{\alpha}=\pm 1/u\), with \(\alpha\in\Phi^{+}\) a short root, as long roots \(\alpha\) have \(m_{\alpha}=1\). This suggests a possible role for the (simply-laced) root sub-system \(\Phi^{s}\subset\Phi\) consisting of short roots. The positive short roots \((\Phi^{s})^{+}=\Phi^{s}\cap\Phi^{+}\) induce a unique canonical basis \(\Pi(\Phi^{s})\) of \(\Phi^{s}\), compatible with \((\Phi^{s})^{+}\). The basis \(\Pi(\Phi^{s})\) contains \(\Pi(\Phi)^{s}\), the short simple roots of \(\Phi\). We will continue to call the elements of \(\Pi(\Phi)\) simple roots, and we will refer to the elements of \(\Pi(\Phi^{s})\setminus\Pi(\Phi)^{s}\) as \(\Phi^{s}\)-simple roots.
Let \(Q^{s}\) denote the root lattice of \(\Phi^{s}\). The inclusion of root lattices \(Q^{s}\subset Q\) induces canonical morphisms \(\mathbb{F}(Q^{s})\subset\mathbb{F}(Q)\). The zeta average \(Z_{\Phi^{s}}({\bf x};u)\) defined using the basis \(\Pi(\Phi^{s})\) will be regarded as an element of \(\mathbb{F}(Q)\) via \(\mathbb{F}(Q^{s})\subset\mathbb{F}(Q)\). More explicitly, \(Z_{\Phi^{s}}\) is by definition a function of variables \(x_{j}\) for \(\alpha_{j}\) a short simple root, and of new variables \(x_{\gamma}\) for \(\gamma\) a \(\Phi^{s}\)-simple root. It is regarded as an element of \(\mathbb{F}(Q)\), and denoted by \(Z_{\Phi^{s}}({\bf x};u)\), via the substitution \(x_{\gamma}={\bf x}^{\gamma}\) for all \(\Phi^{s}\)-simple roots \(\gamma\).
In this section, we show that, in fact, \(Z_{\Phi}({\bf x};u)\) and \(Z_{\Phi^{s}}({\bf x};u)\) coincide. Although this result is not needed in the sequel, it is a straightforward application of the uniqueness property recalled in Proposition 3.8, and we include it because it might be of independent interest. This identity can be used to derive Theorem A for double-laced systems from the same result for simply-laced systems, but the proof of Theorem A that we present is uniform.
The group generated by the simple reflections corresponding to the elements of \(\Pi(\Phi)^{\ell}\), the long simple roots of \(\Phi\), keeps \((\Phi^{s})^{+}\) stable. Therefore, this group permutes the elements of \(\Pi(\Phi^{s})\), and hence it can be identified with a subgroup of the group of automorphisms of the Dynkin diagram of \(\Phi^{s}\). This subgroup coincides with the full group of diagram automorphisms of \(\Phi^{s}\), except for the case of \(\Phi=C_{4}\). The orbit of the unique short simple root that has a long neighbor in the diagram of \(\Phi\) contains all the \(\Phi^{s}\)-simple roots; all the elements in this orbit are pairwise orthogonal. The type of \(\Phi^{s}\) and the labelling of the vertices of its Dynkin diagram by the elements of \(\Pi(\Phi^{s})\) are indicated in Table 2 below. We use the standard labelling of the Dynkin diagram of \(\Phi\) (see [1] and §5.4). By convention, \(D_{2}=A_{1}\times A_{1}\) and \(D_{3}=A_{3}\).
\begin{table}
\begin{tabular}{l|l|l} \(\Phi\) & \(\Phi^{s}\) & \(\Pi(\Phi^{s})\) \\ \hline \(B_{r}\) \((r\geqslant 3)\) & \(A_{1}^{r}\) & \(\alpha_{r},\ \alpha_{r}+\alpha_{r-1},\ \ldots,\ \alpha_{r}+\cdots+\alpha_{1}\) \\ \(C_{r}\) \((r\geqslant 2)\) & \(D_{r}\) & \(\alpha_{1},\ \ldots,\ \alpha_{r-2},\ \alpha_{r-1},\ \alpha_{r-1}+\alpha_{r}\) \\ \(F_{4}\) & \(D_{4}\) & \(\alpha_{3},\ \alpha_{4},\ \alpha_{2}+\alpha_{3},\ \alpha_{1}+\alpha_{2}+\alpha_{3}\) \\ \end{tabular}
\end{table}
Table 2. Diagram automorphisms: \(\Phi^{s}\) and its basis \(\Pi(\Phi^{s})\).
**Proposition 4.1**.: _Let \(\Phi\) be a double-laced irreducible root system, and regard \(Z_{\Phi^{s}}(\mathbf{x};u)\in\mathbb{F}(Q^{s})\subset\mathbb{F}(Q)\). Then,_
\[Z_{\Phi}(\mathbf{x};u)=Z_{\Phi^{s}}(\mathbf{x};u).\]
_More explicitly,_
\[Z_{B_{r}}(\mathbf{x};u)=\prod_{i=1}^{r}Z_{A_{1}}(x_{i}\cdot\ldots \cdot x_{r};u),\quad Z_{C_{r}}(\mathbf{x};u)=Z_{D_{r}}(x_{1},\ldots,x_{r-1},x_{ r-1}x_{r};u),\] \[Z_{F_{4}}(\mathbf{x};u)=Z_{D_{4}}(x_{3},x_{4},x_{2}x_{3},x_{1}x_ {2}x_{3};u).\]
Proof.: By Lemma 2.4, we can write \(Z_{\Phi^{s}}(\mathbf{x};u)=N(\mathbf{x};u)/D_{\Phi}(\mathbf{x};u)\) with a polynomial \(N(\mathbf{x};u)\) such that \(N(0,\ldots,0)=1\). By Proposition 3.8, our claim follows once we show that
\[Z_{\Phi^{s}}(\mathbf{x};u)=Z_{\Phi^{s}}(\mathbf{x};u)\big{|}_{\sigma_{i}}^{ \text{\rm CG}},\quad\text{for all }1\leqslant i\leqslant r. \tag{4.1}\]
If \(\alpha_{i}\) is a short root, then (4.1) follows from the invariance of \(Z_{\Phi^{s}}\) under \(\sigma_{i}\). If \(\alpha_{i}\) is a long root, then \(\sigma_{i}\) acts on \(\Phi^{s}\) by a diagram automorphism. Since a change of variables that corresponds to a diagram automorphism leaves the corresponding zeta average invariant, we obtain that
\[Z_{\Phi^{s}}(\mathbf{x};u)=Z_{\Phi^{s}}(\sigma_{i}\mathbf{x};u)=Z_{\Phi^{s}}( \mathbf{x};u)\big{|}_{\sigma_{i}}^{\text{\rm CG}}.\]
This finishes the proof.
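For example, when \(r=2\), the identity for \(C_{r}\) reads

\[Z_{C_{2}}(x_{1},x_{2};u)=Z_{D_{2}}(x_{1},x_{1}x_{2};u)=Z_{A_{1}}(x_{1};u)\,Z_{A_{1}}(x_{1}x_{2};u),\]

with the convention \(D_{2}=A_{1}\times A_{1}\) of §4.2; the two \(A_{1}\)-variables correspond to the pairwise orthogonal positive short roots \(\alpha_{1}\) and \(\alpha_{1}+\alpha_{2}\) of \(C_{2}\).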
## 5. The orthogonal root system
Throughout this section we assume that the root system \(\Phi\) is not of type \(G_{2}\). The case \(\Phi=G_{2}\) is considered in Appendix B. With this assumption, the roots \(\alpha\in\Phi\) with \(m_{\alpha}=2\) are precisely the short roots in \(\Phi\). Recall our convention that for simply-laced root systems we consider \(\Phi^{s}=\Phi\).
We fix \(1\leqslant i\leqslant r\) such that \(\alpha_{i}\) is _short_. As stated in Theorem A, the residue of \(Z_{\Phi}(\mathbf{x};u)\) at \(x_{i}=1/u\) can be expressed in terms of the zeta average of the root sub-system orthogonal to \(\alpha_{i}\). We first describe some of the properties of such orthogonal root systems.
For \(i\) a node in the Dynkin diagram of \(\Phi\), we denote by \(\Phi^{i}\subset\Phi\) the maximal standard parabolic root sub-system obtained by excluding the node \(i\), and we denote by \(\Phi^{(i)}\subset\Phi\) the standard parabolic root sub-system obtained by excluding the node \(i\) and its neighbors. We use \(\Pi(\Phi^{i})\) and \(\Pi(\Phi^{(i)})\) to refer to the corresponding bases. The corresponding parabolic subgroups of \(W\) are denoted by \(W^{i}\) and \(W^{(i)}\), respectively. We remark that \(W^{(i)}=\operatorname{Stab}_{W^{i}}(\alpha_{i})\), the parabolic subgroup of \(W^{i}\), generated by the simple reflections that fix \(\alpha_{i}\). For \(\alpha\in\Phi\), let \(n_{i}(\alpha)\in\mathbb{Z}\) denote the coefficient of \(\alpha_{i}\) in the expansion of \(\alpha\) in the basis \(\Pi(\Phi)\).
Consider now the orthogonal complement
\[\Phi_{0}=\alpha_{i}^{\perp}:=\{\alpha\in\Phi:\langle\alpha_{i},\alpha\rangle =0\}.\]
This is a root system containing \(\Phi^{(i)}\). We denote by \(W_{0}\) the Weyl group of \(\Phi_{0}\), and by \(\Pi(\Phi_{0})\) the basis compatible with the subset of positive roots \(\Phi_{0}^{+}:=\Phi^{+}\cap\Phi_{0}\). Of course, \(\Pi(\Phi_{0})\cap\Pi(\Phi)=\Pi(\Phi^{(i)})\). Let
\[\Pi_{\text{new}}(\Phi_{0})=\Pi(\Phi_{0})\setminus\Pi(\Phi^{(i)}).\]
The Dynkin diagram of \(\Phi_{0}\) is obtained by attaching the elements \(\beta\in\Pi_{\text{new}}(\Phi_{0})\) to the Dynkin diagram of \(\Phi^{(i)}\) according to the information provided by the inner products between \(\beta\) and the elements of \(\Pi(\Phi^{(i)})\). Since
the Weyl group acts transitively on the short roots of \(\Phi\), the isomorphism type of \(\Phi_{0}\) is independent of the node \(i\) and is recorded in Table 3. The conventions that we use for the information displayed in Table 3 are the following: we assume that \(r\geqslant 4\) for \(D_{r}\), \(r\geqslant 2\) for \(C_{r}\), and \(r\geqslant 3\) for \(B_{r}\); for \(C_{2}=B_{2}\), node \(1\) corresponds to a short root. For the second line, the following conventions apply: \(D_{2}=A_{1}\times A_{1}\); \(D_{3}=A_{3}\); \(C_{0}=\emptyset\); \(C_{1}=A_{1}^{\ell}\). The notation \(A_{1}^{s}\), and respectively \(A_{1}^{\ell}\), refers to a root system of type \(A_{1}\) generated by a short, and respectively long, root of \(\Phi\).
We provide an explicit description of the elements of \(\Pi_{\rm new}(\Phi_{0})\), and of the Dynkin diagram of \(\Phi_{0}\), for \(\Phi\) a classical root system, and \(\Phi=F_{4}\). The same information in the case of exceptional simply-laced root systems is found in Appendix A (and for \(\Phi=G_{2}\) in Appendix B). For a connected subset \(\mathcal{S}\) of nodes in the Dynkin diagram of \(\Phi\), we denote by \(\theta_{\mathcal{S}}\) the longest root in the corresponding root system (if \(\Phi\) is simply-laced), and by \(\theta_{\mathcal{S}}^{s}\), respectively \(\theta_{\mathcal{S}}^{\ell}\) the dominant short, respectively long root (if \(\Phi\) is double-laced).
_Remark 5.1_.: For the reader who prefers a more conceptual description of the Dynkin diagram of \(\Phi_{0}\), we summarize here the possible types of new simple roots that occur. A short/long element \(\beta\in\Pi_{\rm new}(\Phi_{0})\) is the dominant short/long root in a connected subdiagram \(D\) of the Dynkin diagram of \(\Phi\), which is minimal (with respect to inclusion) with the following properties
1. node \(i\) belongs to \(D\);
2. \(\alpha_{i}\) is orthogonal to the dominant short/long root of \(D\).
The Cartan type of \(D\), the index of the node \(i\) in the standard labeling of \(D\), and the role played by the root \(\beta\) in \(D\) are among the following:
1. \(A_{3}\) with the node \(i\) in position \(2\) and \(\beta\) the highest root;
2. \(D_{n}\), \(n\geqslant 4\), with the node \(i\) in position \(1\) and \(\beta\) the highest root;
3. \(C_{n}\), \(n\geqslant 2\), with \(i\) in position \(1\) and \(\beta\) the dominant short root;
4. \(C_{3}\) with the node \(i\) in position \(2\) and \(\beta\) the dominant long root.
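For instance, if \(\Phi=A_{4}\) and \(i=2\), the minimal diagram \(D\) is the \(A_{3}\) on the nodes \(\{1,2,3\}\), with \(i\) in position \(2\), and \(\beta=\theta_{\{1,2,3\}}=\alpha_{1}+\alpha_{2}+\alpha_{3}\) is its highest root; this matches the description of \(\Pi_{\rm new}(\Phi_{0})\) in §5.4.1 below.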
For double-laced root systems, for the definition of the kernel functions \(K_{\Phi,\alpha_{i}}(\mathbf{x})\) in §7.3, we need the set \(\Pi_{\rm new}^{*}(\Phi_{0})\) that consists of the elements of \(\Pi_{\rm new}(\Phi_{0})\) that arise, as explained above, from a diagram \(D\) of type \(A_{3}\). This set is empty, unless \(\Phi=C_{r}\), \(r\geqslant 4\), and \(1<i<r-1\).
#### 5.4.1. \(A_{r},\ r\geqslant 1\):
The set \(\Phi^{+}\) consists of \(\alpha_{j}+\ldots+\alpha_{k}\) for \(j\leqslant k\). If \(i=1\), or \(i=r\), we have \(\Phi_{0}=\Phi^{(i)}\). If \(1<i<r\), then \(\Pi_{\rm new}(\Phi_{0})=\{\beta=\alpha_{i-1}+\alpha_{i}+\alpha_{i+1}\}\), and the Dynkin diagram of \(\Phi_{0}\) is the chain on the nodes \(\alpha_{1},\ldots,\alpha_{i-2},\beta,\alpha_{i+2},\ldots,\alpha_{r}\), of type \(A_{r-2}\).
\begin{table}
\begin{tabular}{c|cccccccc} \(\Phi\) & \(A_{r}\) & \(D_{r}\) & \(E_{6}\) & \(E_{7}\) & \(E_{8}\) & \(B_{r}\) & \(C_{r}\) & \(F_{4}\) \\ \hline \(\Phi_{0}\) & \(A_{r-2}\) & \(D_{r-2}\times A_{1}\) & \(A_{5}\) & \(D_{6}\) & \(E_{7}\) & \(B_{r-1}\) & \(C_{r-2}\times A_{1}^{s}\) & \(B_{3}\) \\ \end{tabular}
\end{table}
Table 3. The Cartan type of the orthogonal root system
#### 5.4.2. \(B_{r},\ r\geqslant 3\):
The set \(\Phi^{+}\) consists of \(\alpha_{j}+\ldots+\alpha_{k}\), \(j\leqslant k\leqslant r\), and \(\alpha_{j}+\ldots+\alpha_{k}+2(\alpha_{k+1}+\ldots+\alpha_{r})\), \(j\leqslant k<r\). In this case \(\alpha_{r}\) is the only short simple root, and for \(i=r\) the set \(\Pi_{\rm new}(\Phi_{0})\) consists of \(\beta=\alpha_{r-1}+\alpha_{r}\). The Dynkin diagram of \(\Phi_{0}\) is of type \(B_{r-1}\), on the nodes \(\alpha_{1},\ldots,\alpha_{r-2},\beta\), with \(\beta\) the short node.
#### 5.4.3. \(C_{r},\ r\geqslant 2\):
The set \(\Phi^{+}\) consists of \(\alpha_{j}+\ldots+\alpha_{k}\), \(j\leqslant k<r\), \(2(\alpha_{j}+\ldots+\alpha_{r-1})+\alpha_{r}\), \(j<r\), and \(\alpha_{j}+\ldots+\alpha_{k}+2(\alpha_{k+1}+\ldots+\alpha_{r-1})+\alpha_{r}\), \(j\leqslant k<r\). The dominant long and short roots are
\[\theta^{\ell}=2\alpha_{1}+\cdots+2\alpha_{r-1}+\alpha_{r},\quad\theta^{s}=\alpha_{1}+2\alpha_{2}+\cdots+2\alpha_{r-1}+\alpha_{r}.\]
The description of \(\Pi_{\rm new}(\Phi_{0})\) is included in Table 4. In small rank cases, the situation depicted in the table reduces to the following: if \(r=2\), \(i=1\), then \(\Phi_{0}\) is of type \(A_{1}^{s}\) generated by \(\beta\); if \(r=3\), \(i=1\), then \(\Phi_{0}\) is of type \(A_{1}^{s}\times A_{1}^{\ell}\) generated by \(\beta\) and \(\alpha_{3}\); and when \(r=3\), \(i=2\), then \(\Phi_{0}\) is of type \(A_{1}^{s}\times A_{1}^{\ell}\) generated by \(\beta\) and \(\beta^{\prime}\).
#### 5.4.4. \(D_{r},\ r\geqslant 4\):
The set \(\Phi^{+}\) consists of any sum of distinct simple roots corresponding to a connected part of the Dynkin diagram, together with the roots \(\alpha_{j}+\ldots+\alpha_{k-1}+2\alpha_{k}+\ldots+2\alpha_{r-2}+\alpha_{r-1}+\alpha_{r}\), \(1\leqslant j<k\leqslant r-2\). The longest root is \(\theta=\alpha_{1}+2\alpha_{2}+\cdots+2\alpha_{r-2}+\alpha_{r-1}+\alpha_{r}\). The description of \(\Pi_{\rm new}(\Phi_{0})\) is included in Table 5. We emphasize that when \(i=r-3\), the node labeled by \(\beta\) is connected with the nodes \(\alpha_{r-5}\), \(\alpha_{r-1}\), and \(\alpha_{r}\).
#### 5.4.5. \(F_{4}\):
The dominant roots are \(\theta^{\ell}=2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+2\alpha_{4}\) and \(\theta^{s}=\alpha_{1}+2\alpha_{2}+3\alpha_{3}+2\alpha_{4}\). The description of \(\Pi_{\rm new}(\Phi_{0})\) and the Dynkin diagram of \(\Phi_{0}\) is included in Table 6.
\begin{table}
\begin{tabular}{l|l} Node \(i\) & \(\Pi_{\rm new}(\Phi_{0})\) \\ \hline \(i=1\) & \(\beta=\theta^{s}\) \\ \hline \(1<i<r-1\) & \(\beta=\theta_{\{i-1,i,i+1\}}\), \(\beta^{\prime}=\theta^{s}_{\{i,\ldots,r\}}\) \\ \hline \(i=r-1\) & \(\beta=\theta^{s}_{\{r-1,r\}}\), \(\beta^{\prime}=\theta^{\ell}_{\{r-2,r-1,r\}}\) \\ \end{tabular}
\end{table}
Table 4. The orthogonal root system in type \(C_{r}\), \(r\geqslant 2\)
## 6. The residue as a zeta average
We continue to work under the hypothesis that \(\Phi\) is an irreducible root system not of type \(G_{2}\), and \(\alpha_{i}\) is a fixed short root. For \(e\in\{-1,0,1\}\), we denote
\[\Phi_{e}=\{\alpha\in\Phi:\langle\alpha_{i},\alpha\rangle=e\},\]
so that \(\Phi_{0}\) is the orthogonal root subsystem studied in the previous section. Since \(\alpha_{i}\) is short, for \(\alpha\in\Phi\) we have \(\langle\alpha_{i},\alpha\rangle=\pm 1\) if and only if \(\alpha\) is a short root and \(\alpha\notin\{\pm\alpha_{i}\}\cup\Phi_{0}\). Therefore, \(\Phi_{1}\), \(\Phi_{-1}\) consist of short roots and we have a disjoint union
\[\Phi=\{\pm\alpha_{i}\}\cup\Phi_{0}\cup\Phi_{-1}\cup\Phi_{1}\cup(\Phi^{\ell} \setminus\Phi_{0}^{\ell}).\]
\begin{table}
\begin{tabular}{l|l} Node \(i\) & \(\Pi_{\text{new}}(\Phi_{0})\) \\ \hline \(i=3\) & \(\beta=\theta_{\{2,3\}}^{s}\), \(\beta^{\prime}=\theta_{\{2,3,4\}}^{\ell}\) \\ \hline \(i=4\) & \(\beta=\theta_{\{2,3,4\}}^{s}\) \\ \end{tabular}
\end{table}
Table 6. The orthogonal root system in type \(F_{4}\)
\begin{table}
\begin{tabular}{l|l} Node \(i\) & \(\Pi_{\text{new}}(\Phi_{0})\) \\ \hline \(i=1\) & \(\beta=\theta\) \\ \hline \(1<i<r-2\) & \(\beta=\theta_{\{i-1,i,i+1\}}\), \(\beta^{\prime}=\theta_{\{i,\ldots,r\}}\) \\ \hline \(i=r-2\) & \(\beta=\theta_{\{r-3,r-2,r-1\}}\), \(\beta^{\prime}=\theta_{\{r-3,r-2,r\}}\), \(\beta^{\prime\prime}=\theta_{\{r-1,r-2,r\}}\) \\ \hline \(i=r-1,\ i=r\) & \(\beta=\theta_{\{r-3,\ldots,r\}}\) \\ \end{tabular}
\end{table}
Table 5. The orthogonal root system in type \(D_{r}\), \(r\geqslant 4\)
We consider the following modified version of \(Z_{\Phi}(\mathbf{x};u)\), which is obtained by removing from the denominator of \(Z_{\Phi}(\mathbf{x};u)\) the factors corresponding to roots in \(\Phi_{1}^{+}\)
\[Z_{\Phi}^{[i]}(\mathbf{x};u)=Z_{\Phi}(\mathbf{x};u)\cdot\prod_{\alpha\in\Phi_{ 1}^{+}}(1-u^{2}\mathbf{x}^{2\alpha}). \tag{6.1}\]
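For instance, if \(\Phi\) is of type \(A_{2}\) and \(i=1\), then \(\Phi_{0}=\emptyset\) (cf. Table 3) and \(\Phi_{1}^{+}=\{\alpha_{1}+\alpha_{2}\}\), so that

\[Z_{A_{2}}^{[1]}(\mathbf{x};u)=(1-u^{2}x_{1}^{2}x_{2}^{2})\,Z_{A_{2}}(\mathbf{x};u),\]

and the theorem below predicts that its residue at \(x_{1}=1/u\) equals \(1\), the zeta average of the empty root system being interpreted as \(1\).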
Let \(Q_{0}\) denote the root lattice of \(\Phi_{0}\). The inclusion of root lattices \(Q_{0}\subset Q\) induces canonical morphisms \(\mathbb{F}(Q_{0})\subset\mathbb{F}(Q)\). The zeta average \(Z_{\Phi_{0}}\) defined using the basis \(\Pi(\Phi_{0})\) will henceforth be regarded as an element of \(\mathbb{F}(Q)\) via \(\mathbb{F}(Q_{0})\subset\mathbb{F}(Q)\). More explicitly, \(Z_{\Phi_{0}}\) is by definition a function of variables \(x_{j}\) for \(\alpha_{j}\in\Pi(\Phi^{(i)})\) (with the notation in §5.2), and of new variables \(x_{\beta}\) for \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\); it is regarded as an element of \(\mathbb{F}(Q)\), and denoted by \(Z_{\Phi_{0}}(\mathbf{x};u)\), via the substitution \(x_{\beta}=\mathbf{x}^{\beta}\) for all \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\). With this notation, Theorem A can be restated as follows.
**Theorem 6.1**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(\alpha_{i}\) be a short simple root. Then,_
\[\operatorname*{Res}_{x_{i}=1/u}Z_{\Phi}^{[i]}(\mathbf{x};u)=\left.Z_{\Phi_{0}} (\mathbf{x};u)\right|_{x_{i}=1/u}. \tag{6.2}\]
The proof, which is technical, will occupy the remainder of this section.
Before we start developing the technical elements that are needed for the proof, we give a succinct overview of the main argument. Let
\[F(\underline{x}):=\operatorname*{Res}_{x_{i}=1/u}Z_{\Phi}^{[i]}(\mathbf{x};u), \tag{6.3}\]
where \(\underline{x}=(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{r})\). For \(\lambda\in Q\), we denote \(\underline{x}^{\lambda}=\mathbf{x}^{\lambda}|_{x_{i}=1/u}.\) With this notation, Theorem 6.1 states that \(F(\underline{x})\) equals the evaluation of \(Z_{\Phi_{0}}\) at \(x_{\beta}=\underline{x}^{\beta}\), for all \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\).
The first step in the argument is to show that the evaluation at \(x_{i}=1/u\) of the numerator \(N_{\Phi}\) of \(Z_{\Phi}\) defined in (2.5) factors as follows
\[N_{\Phi}(\mathbf{x};u)|_{x_{i}=1/u}=2\prod_{\alpha\in\Phi_{-1}^{+}\cup(\Phi^{ \ell,+}\setminus\Phi_{0}^{+})}(1-u^{2}\underline{x}^{m_{\alpha}\alpha})\cdot N _{0}(\underline{x};u),\]
for \(N_{0}(\underline{x};u)\) a polynomial in \(\underline{x}\) such that \(N_{0}(\underline{0};u)=1\), and whose degree is explicitly bounded with respect to all variables. We use a general property of roots \(\alpha\in\Phi^{+}\) such that \(\langle\alpha_{i},\alpha\rangle=-1\) that may be of independent interest (Lemma 6.2), and as a consequence, in Proposition 6.4 we show that
\[F(\underline{x})=\frac{N_{0}(\underline{x};u)}{D_{\Phi_{0}}(\mathbf{x};u)|_{x _{i}=1/u}}\.\]
It is a direct consequence of its definition that \(F(\underline{x})\) is fixed by the Chinta-Gunnells action of the simple reflections corresponding to \(\Pi(\Phi^{(i)})\). The second step in the argument is to show the remarkable fact that, under the Chinta-Gunnells action of the reflection \(\sigma_{\beta}\in W\), \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\), \(F(\underline{x})\) transforms exactly as \(Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=1/u}\) does under the action of the _simple_ reflections associated to \(\beta\) in the Weyl group of \(\Phi_{0}\). This is accomplished in Proposition 6.8, using an explicit description of the set \(\Phi(\sigma_{\beta})\) for \(\beta\) a dominant root for a root system \(D\) as in Remark 5.1.
The third part of the argument is to show that \(N_{0}(\underline{x};u)\) is a polynomial in \(\underline{x}^{\gamma}\), \(\gamma\in\Pi(\Phi_{0})\). This is accomplished in Proposition 6.11, crucially relying on the results of Section 3, more specifically, on Proposition 3.3 and Corollary 3.5.
Finally, in §6.11 we collect all these facts and use the uniqueness result of Proposition 3.8 to conclude the argument.
We begin with some preliminary results that are needed for the proof of Proposition 6.4.
**Lemma 6.2**.: _Let \(\alpha\in\Phi^{+}\) such that \(\langle\alpha_{i},\alpha\rangle=-1\). Then, there exists a simple root \(\alpha_{j}\) and \(w\in W\) such that \(\langle\alpha_{i},\alpha_{j}\rangle=-1\) and_
\[w\alpha_{i}=\alpha_{i},\ w\alpha_{j}=\alpha. \tag{6.4}\]
Proof.: Assume first that \(\Phi\) is simply-laced. Let \(\beta=\tau\alpha\) with \(\tau\in W^{(i)}\) be a smallest positive root (with respect to the order \(\leqslant\)) in the orbit \(W^{(i)}\alpha\). Clearly \(\langle\beta,\alpha_{i}\rangle=\langle\tau\alpha,\tau\alpha_{i}\rangle=-1\). Let \(\alpha_{j}\) be such that \(\sigma_{\beta}\alpha_{j}\in\Phi^{-}\). If \(\beta=\alpha_{j}\), then \(w=\tau^{-1}\) has the required properties since \(\langle\beta,\alpha_{i}\rangle=-1\) implies \(\langle\alpha_{j},\alpha_{i}\rangle=-1\). If \(\langle\beta,\alpha_{j}\rangle=1\), then \(\langle\alpha_{j},\alpha_{i}\rangle=-1\), otherwise we would have \(\sigma_{j}\in W^{(i)}\) and \(\sigma_{j}\beta=\beta-\alpha_{j}<\beta\), contradicting the minimality of \(\beta\). Let \(\gamma=\sigma_{j}\beta=\beta-\alpha_{j}\), and define \(w=\tau^{-1}\sigma_{\gamma}\). Note that \(\langle\gamma,\alpha_{i}\rangle=0\) and \(\langle\gamma,\alpha_{j}\rangle=-1\), so \(w\alpha_{i}=\alpha_{i}\) and
\[w\alpha_{j}=\tau^{-1}(\alpha_{j}+\gamma)=\tau^{-1}\beta=\alpha,\]
proving (6.4).
Assume now \(\Phi\) is double-laced. Observe that the hypothesis implies that \(\alpha\in\Phi^{s}\), so \(\Phi\) is not of type \(B_{r}\) (for which all short roots are mutually orthogonal). In particular \(\Phi^{s}\) is irreducible (see Table 2 in §4.2), and we can use the simply-laced case just proved. We conclude that there exists a \(\Phi^{s}\)-simple root \(\alpha^{\prime}\) which is a neighbor of \(\alpha_{i}\) in the diagram of \(\Phi^{s}\), and \(w^{\prime}\) in the Weyl group generated by reflections in \(\Phi^{s}\), such that
\[w^{\prime}\alpha_{i}=\alpha_{i},\ w^{\prime}\alpha^{\prime}=\alpha.\]
If \(\alpha^{\prime}\) is a simple root in \(\Phi\), we take \(w=w^{\prime}\). If not, then \(\alpha_{i}\) does not have a long neighbor in the diagram of \(\Phi\) (see Table 2). In particular \(\alpha_{i}\) is fixed by the action of reflections associated to simple long roots, and there exists an element \(v\) in the subgroup generated by these reflections such that \(v\alpha^{\prime}=\alpha_{j}\), with \(\alpha_{j}\) a short neighbor of \(\alpha_{i}\). The element \(w=w^{\prime}v^{-1}\) then satisfies the required properties.
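For example, in type \(A_{3}\) with \(i=1\) and \(\alpha=\alpha_{2}+\alpha_{3}\), one may take \(\alpha_{j}=\alpha_{2}\) and \(w=\sigma_{3}\): indeed, \(\sigma_{3}\alpha_{1}=\alpha_{1}\) and \(\sigma_{3}\alpha_{2}=\alpha_{2}+\alpha_{3}\).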
We use Lemma 6.2 to prove the following result.
**Lemma 6.3**.: _Let \(\alpha\in\Phi^{+}\) such that \(\langle\alpha_{i},\alpha\rangle=-1\). Then, for either choice of signs,_
\[\operatorname*{Res}_{\begin{subarray}{c}x_{i}=1/u\\ \mathbf{x}^{\alpha}=\pm 1/u\end{subarray}}Z_{\Phi}(\mathbf{x};u)=0.\]
Proof.: With \(w\) and \(\alpha_{j}\) as in (6.4), we define:
\[v:=\sigma_{\alpha_{i}+\alpha_{j}}w^{-1}.\]
We have \(\ell(v)=\ell(w)+3\). With the reduced expression \(\sigma_{\alpha_{i}+\alpha_{j}}=\sigma_{i}\sigma_{j}\sigma_{i}\), the last three elements in \(\Phi(v)\), as described in Lemma 2.1, are
\[w\alpha_{i}=\alpha_{i}\prec w(\alpha_{i}+\alpha_{j})=\alpha_{i}+\alpha \prec w\alpha_{j}=\alpha.\]
We take the double residue at \(x_{i}=1/u\), \({\bf x}^{\alpha}=\pm 1/u\) in the functional equation \(Z_{\Phi}({\bf x};u)=Z_{\Phi}({\bf x};u)\big{|}^{\text{CG}}_{v}\), using (2.6) to express the right-hand side. We have
\[\mathop{\rm Res}_{\begin{subarray}{c}x_{i}=1/u\\ {\bf x}^{\alpha}=\pm 1/u\end{subarray}}Z_{\Phi}({\bf x};u)=\sum_{\underline{ \delta}\in\{0,1\}^{\varepsilon_{s}}}Z_{\Phi}(v\varepsilon_{\underline{\delta}}{ \bf x})\Big{|}_{\begin{subarray}{c}x_{i}=1/u\\ {\bf x}^{\alpha}=\pm 1/u\end{subarray}}\cdot\mathop{\rm Res}_{\begin{subarray}{c}x_{i}=1 /u\\ {\bf x}^{\alpha}=\pm 1/u\end{subarray}}\Pi_{\underline{\delta}}\;, \tag{6.5}\]
where \(\ell_{s}=|\Phi^{s}(v)|\) and
\[\Pi_{\underline{\delta}}=\Pi_{v,\underline{\delta}}({\bf x}):=\prod_{\nu\in\Phi^{s}(v)}J\left((-1)^{\langle\nu,\sum_{\mu\prec\nu}\delta_{\mu}\mu\rangle}{\bf x}^{\nu},\delta_{\nu}\right).\]
The function \(Z_{\Phi}(v\varepsilon{\bf x})\) has no pole at \(x_{i}=1/u\), \({\bf x}^{\alpha}=\pm 1/u\) for any sign function \(\varepsilon\), due to the following fact.
If \(\gamma\in\Phi^{s}\), \(\gamma=m\alpha_{i}+n\alpha\), for \(m,n\in{\mathbb{Z}}\) with \(|m+n|=1\), then \(\gamma\in\{\pm\alpha_{i},\pm\alpha\}\). Indeed, if \(\gamma\neq\pm\alpha_{i}\), then \(|\langle\gamma,\alpha_{i}\rangle|=|2m-n|\leqslant 1\), since \(\gamma\) and \(\alpha_{i}\) have the same length. It follows that \(3|m|\leqslant 2\), so \(m=0\) and \(|n|=1\). In consequence, \(\gamma\in\{\pm\alpha\}\), as claimed.
The poles of \(Z_{\Phi}(v\varepsilon{\bf x})\) occur at \({\bf x}^{v^{-1}\gamma}=\pm 1/u\), for \(\gamma\in\Phi^{s,+}\). As explained above, a double pole for \(x_{i}=1/u\), \({\bf x}^{\alpha}=\pm 1/u\), can only occur for \(v^{-1}\gamma\in\{\alpha_{i},\alpha\}\), which is impossible since \(\alpha_{i},\alpha\in\Phi(v)\).
For a similar reason, the product \(\Pi_{\underline{\delta}}\) can have a double pole at \(x_{i}=1/u\), \({\bf x}^{\alpha}=\pm 1/u\), only in the terms corresponding to \(\nu=\alpha_{i}\) and \(\nu=\alpha\) since, according to the fact above, for \(\nu\in\Phi^{s}(v)\setminus\{\alpha_{i},\alpha\}\) we have \(u^{2}{\bf x}^{2\nu}\neq 1\) under the double evaluation.
Therefore we concentrate on the product of the last three terms in \(\Pi_{\underline{\delta}}\), specifically
\[J\left({\bf x}^{\alpha_{i}}(-1)^{\langle\alpha_{i},\nu^{\prime}\rangle},\delta _{\alpha_{i}}\right)\cdot J\left({\bf x}^{\alpha_{i}+\alpha}(-1)^{\langle \alpha_{i}+\alpha,\nu^{\prime}\rangle+\delta_{\alpha_{i}}},\delta_{\alpha_{i}+ \alpha}\right)\cdot J\left({\bf x}^{\alpha}(-1)^{\langle\alpha,\nu^{\prime} \rangle+\delta_{\alpha_{i}}+\delta_{\alpha_{i}+\alpha}},\delta_{\alpha}\right), \tag{6.6}\]
where \(\nu^{\prime}=\sum_{\nu\prec\alpha_{i}}\delta_{\nu}\nu\). The double residue of the product (6.6) vanishes unless
\[(-1)^{\langle\alpha_{i},\nu^{\prime}\rangle}=1,\qquad\text{ and }\qquad(-1)^{\langle\alpha,\nu^{\prime}\rangle+\delta_{\alpha_{i}}+\delta_{\alpha_{i}+\alpha}}=\pm 1. \tag{6.7}\]
We group the terms in (6.5) in pairs corresponding to tuples \(\underline{\delta}\) and \(\underline{\delta}^{\prime}\) that satisfy (6.7), coincide everywhere except for the last three components, and
\[\delta^{\prime}_{\alpha_{i}}=1-\delta_{\alpha_{i}},\quad\delta^{\prime}_{ \alpha_{i}+\alpha}=1-\delta_{\alpha_{i}+\alpha},\quad\delta^{\prime}_{\alpha} =1-\delta_{\alpha}.\]
Note that if \(\underline{\delta}\) satisfies (6.7), then \(\underline{\delta}^{\prime}\) also satisfies (6.7) and \(\varepsilon_{\underline{\delta}^{\prime}}=\varepsilon_{\underline{\delta}}\). Therefore, it is enough to show that the double residue of \(\Pi_{\underline{\delta}}+\Pi_{\underline{\delta}^{\prime}}\) vanishes. More specifically, it is enough to show that the sum of double residues of the products in (6.6) corresponding to \(\underline{\delta}\) and \(\underline{\delta}^{\prime}\) vanishes. Now, \(\mathop{\rm Res}_{x=1/u}J(x,\delta)=(1-u^{2})/2\) is independent of \(\delta\), and the sum of double residues vanishes thanks to the identity
\[J(1/u^{2},0)+J(-1/u^{2},1)=0.\qed\]
We are now ready to complete the first step in the proof of Theorem 6.1.
**Proposition 6.4**.: _We have_
\[F(\underline{x})=\frac{N_{0}(\underline{x};u)}{D_{\Phi_{0}}({\bf x};u)|_{x_{i} =1/u}},\]
_with \(N_{0}(\underline{x};u)\) a polynomial in \(\underline{x}\) such that \(N_{0}(\underline{0};u)=1\) and any \(\lambda\in\operatorname{Supp}(N_{0}(\underline{x};u))\) satisfies \(\lambda\leqslant\sum_{\alpha\in\Phi_{0}^{+}}\alpha\)._
Proof.: By definition, we have
\[\mathop{\rm Res}_{x_{i}=1/u}Z_{\Phi}^{[i]}({\bf x};u)=\frac{N_{\Phi}({\bf x};u)|_{x_{i}=1/u}}{2\prod_{\alpha\in\Phi_{-1}^{+}\cup\Phi_{0}^{+}\cup(\Phi^{\ell,+}\setminus\Phi_{0}^{+})}(1-u^{2}\underline{x}^{m_{\alpha}\alpha})},\]
where \(\Phi^{\ell,+}\) denotes the positive long roots. By Lemma 6.3 and Lemma 2.4, we also have
\[N_{\Phi}({\bf x};u)|_{x_{i}=1/u}=2\prod_{\alpha\in\Phi_{-1}^{+}\cup(\Phi^{\ell,+} \setminus\Phi_{0}^{+})}(1-u^{2}\underline{x}^{m_{\alpha}\alpha})\cdot N_{0}( \underline{x};u), \tag{6.8}\]
for a polynomial \(N_{0}(\underline{x};u)\) (the factor \(2\) comes from the fact that \(1+ux_{i}\) divides \(N_{\Phi}({\bf x};u)\) by Lemma 2.4). This gives the formula in the statement.
To analyze the support of \(N_{0}(\underline{x};u)\), we introduce some notation. For \(\lambda=\sum n_{j}\alpha_{j}\in Q\), denote
\[\underline{s}(\lambda)=\sum_{j\neq i}n_{j}\alpha_{j}\in Q,\]
and for a set of roots \(A\), let \(\underline{s}(A)=\sum_{\lambda\in A}\underline{s}(\lambda)\). Since \(\sigma_{i}:\Phi_{-1}^{+}\to\Phi_{1}^{+}\) is a bijection given by \(\alpha\mapsto\alpha+\alpha_{i}\), we have \(\underline{s}(\Phi_{1})=\underline{s}(\Phi_{-1})\). By Corollary 3.7, all the monomials \(\lambda\in\operatorname{Supp}(N_{\Phi}({\bf x};u)|_{x_{i}=1/u})\) have
\[\lambda\leq\underline{s}(\Phi^{+})=\underline{s}(\Phi_{0}^{+})+2\underline{s }(\Phi_{-1}^{+})+\underline{s}(\Phi^{\ell,+}\setminus\Phi_{0}^{+}).\]
Combining this with (6.8) finishes the proof.
From Proposition 6.4 and (6.8), we deduce the following equivalent formulation of Theorem 6.1.
**Corollary 6.5**.: _Theorem 6.1 is equivalent to_
\[N_{\Phi}({\bf x};u)|_{x_{i}=1/u}=2\prod_{\alpha\in\Phi_{-1}^{+}\cup(\Phi^{\ell, +}\setminus\Phi_{0}^{+})}(1-u^{2}\underline{x}^{m_{\alpha}\alpha})\cdot N_{ \Phi_{0}}({\bf x};u)|_{x_{i}=1/u}.\]
We continue by developing the technical elements that are necessary for the proof of Proposition 6.8, which states that for every \(\beta\in\Pi_{\rm new}(\Phi_{0})\), the modified residue \(F(\underline{x})\) defined in (6.3) transforms under the action of \(\sigma_{\beta}\in W\) precisely as \(Z_{\Phi_{0}}({\bf x};u)|_{x_{i}=1/u}\) transforms under the action of the _simple_ reflection associated to \(\beta\) in \(W_{0}\). This is surprising, as the reflection \(\sigma_{\beta}\) is far from being a simple reflection in \(W\).
By Remark 5.1, the elements \(\beta\in\Pi_{\rm new}(\Phi_{0})\) are dominant roots in subdiagrams of \(\Phi\) of type \(A_{3}\), \(D_{n}\), or \(C_{n}\). For the proof of Proposition 6.8, we will need an explicit description of the set \(\Phi(\sigma_{\beta})\) in each case, which is provided in the next lemma. We use the standard labelling of the root systems from §5.4.
**Lemma 6.6**.: (i) _Let \(\beta\) be the highest root in a root system \(\Phi\) of type \(D_{n}\), \(n\geqslant 4\), or of type \(A_{3}\). Let \(\alpha_{i}=\alpha_{1}\) in the case of \(D_{n}\), and \(\alpha_{i}=\alpha_{2}\) in the case of \(A_{3}\). There is a reduced expression for \(\sigma_{\beta}\) such that the set \(\Phi(\sigma_{\beta})\), ordered as in_ (2.1)_, is given by_
\[\{\gamma_{1}\prec\gamma_{1}+\alpha_{i}\prec\ldots\prec\gamma_{t}\prec\gamma_{ t}+\alpha_{i}\prec\beta\prec\gamma_{t+1}\prec\gamma_{t+1}+\alpha_{i}\prec \ldots\prec\gamma_{2t}\prec\gamma_{2t}+\alpha_{i}\}.\]
(ii) _Let \(\beta\) be the highest short root in a root system \(\Phi\) of type \(C_{n}\), \(n\geqslant 2\), and let \(\alpha_{i}=\alpha_{1}\). There is a reduced expression for \(\sigma_{\beta}\) such that the set \(\Phi(\sigma_{\beta})\), ordered as in_ (2.1)_, is given by_
\[\{\gamma_{1}\prec\gamma_{1}+\alpha_{i}\prec\ldots\prec\gamma_{t}\prec\gamma_{ t}+\alpha_{i}\prec\beta-\alpha_{i}\prec\beta\prec\beta+\alpha_{i}\prec \gamma_{t+1}\prec\gamma_{t+1}+\alpha_{i}\prec\ldots\prec\gamma_{2t}\prec \gamma_{2t}+\alpha_{i}\}.\]
_In \(\Phi(\sigma_{\beta})\), the only long roots are \(\beta\pm\alpha_{i}\)._
(iii) _Let \(\beta\) be the highest long root in a root system \(\Phi\) of type \(C_{3}\), and let \(\alpha_{i}=\alpha_{2}\). Then, \(\sigma_{\beta}=\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{2}\sigma_{1}\) and_
\[\Phi(\sigma_{\beta})=\{\gamma_{1}\prec\gamma_{1}+\alpha_{i}\prec\beta\prec \gamma_{2}\prec\gamma_{2}+\alpha_{i}\},\]
_where \(\gamma_{1}=\alpha_{1}\) and \(\gamma_{2}=\alpha_{1}+\alpha_{2}+\alpha_{3}\)._
(iv) _In all cases above, we have \(t=\mathrm{rank}(\Phi)-2\) and_
\[\{\gamma_{1},\ldots,\gamma_{2t}\}=\Phi(\sigma_{\beta})\cap\Phi_{-1}.\]
_In particular, \(\gamma_{j}\) and \(\sigma_{i}\gamma_{j}=\gamma_{j}+\alpha_{i}\) are short roots. Furthermore, \(\langle\gamma_{j},\beta^{\vee}\rangle=1\), \(1\leqslant j\leqslant 2t\)._
Proof.: (i) We prove the statement for \(D_{n}\) by induction on \(n\). Assume \(\Phi\) is of type \(D_{n}\), \(n\geqslant 4\), and let \(\Phi^{\prime}\subset\Phi\) be the root subsystem of type \(D_{n-1}\) obtained by removing node \(1\) from the Dynkin diagram of \(\Phi\) (with the convention that \(D_{3}=A_{3}\)). Suppose the statement is true for \(\Phi^{\prime}\); we show that it is then true for \(\Phi\) as well.
Let \(\beta^{\prime}\) be the highest root in \(\Phi^{\prime}\), so \(\beta=\alpha_{1}+\alpha_{2}+\beta^{\prime}\). Note that \(\beta\) is orthogonal to all simple roots except for \(\alpha_{2}\), and \(\langle\beta,\alpha_{2}\rangle=1\). Since \(\beta=\sigma_{2}\sigma_{1}\beta^{\prime}\), we have \(\sigma_{\beta}=\sigma_{2}\sigma_{1}\sigma_{\beta^{\prime}}\sigma_{1}\sigma_{2}.\) From Lemma 2.1, it follows that
\[\Phi(\sigma_{\beta})=\Phi(\sigma_{1}\sigma_{2})\cup\sigma_{2}\sigma_{1}\Phi( \sigma_{\beta^{\prime}})\cup\sigma_{2}\sigma_{1}\sigma_{\beta^{\prime}}\Phi( \sigma_{2}\sigma_{1}), \tag{6.9}\]
with the ordering \(\prec\) on \(\Phi(\sigma_{\beta})\) being the concatenation of the ordering on the three sets on the right-hand side. Using that \(\langle\alpha_{1},\beta^{\prime}\rangle=-1\), \(\langle\alpha_{2},\beta^{\prime}\rangle=0\) we compute
\[\Phi(\sigma_{1}\sigma_{2})=\{\alpha_{2}\prec\alpha_{2}+\alpha_{1}\}\quad \text{and}\quad\sigma_{2}\sigma_{1}\sigma_{\beta^{\prime}}\Phi(\sigma_{2} \sigma_{1})=\{\beta^{\prime}\prec\beta^{\prime}+\alpha_{1}\}.\]
By the induction hypothesis, the roots in \(\Phi(\sigma_{\beta^{\prime}})\) come in consecutive pairs \(\gamma\prec\gamma+\alpha_{2}\), with \(\beta^{\prime}\) the central element in the set (which is of odd cardinality). We have \(\langle\gamma,\alpha_{2}\rangle=-1\), \(\langle\gamma,\alpha_{1}\rangle=0\), so the corresponding elements in the set \(\sigma_{2}\sigma_{1}\Phi(\sigma_{\beta^{\prime}})\) are
\[\sigma_{2}\sigma_{1}(\gamma)=\gamma+\alpha_{2}\prec\sigma_{2}\sigma_{1}( \gamma+\alpha_{2})=\gamma+\alpha_{2}+\alpha_{1}\quad\text{and}\quad\sigma_{2} \sigma_{1}(\beta^{\prime})=\beta,\]
finishing the induction step. Remark that \(\langle\alpha_{1},\alpha_{2}\rangle=\langle\alpha_{1},\beta^{\prime}\rangle= \langle\alpha_{1},\gamma+\alpha_{2}\rangle=-1\).
The base case for induction is \(D_{3}=A_{3}\), with \(i=2\) the middle node. We have \(\beta=\alpha_{1}+\alpha_{2}+\alpha_{3}\), \(\sigma_{\beta}=\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{2}\sigma_{1}\), and formula (2.1) gives
\[\Phi(\sigma_{\beta})=\{\alpha_{1}\prec\alpha_{1}+\alpha_{2}\prec\beta\prec \alpha_{3}\prec\alpha_{3}+\alpha_{2}\}.\]
(ii) We again use induction on \(n\), the base case being \(n=2\), when \(\beta=\alpha_{1}+\alpha_{2}\), \(\sigma_{\beta}=\sigma_{2}\sigma_{1}\sigma_{2}\), and
\[\Phi(\sigma_{\beta})=\{\beta-\alpha_{1}\prec\beta\prec\beta+\alpha_{1}\}.\]
Assume now \(\Phi\) is of type \(C_{n}\), \(n\geqslant 3\), and let \(\Phi^{\prime}\subset\Phi\) be the root subsystem of type \(C_{n-1}\) obtained by removing node \(1\) from the Dynkin diagram of \(\Phi\). Assume that our claim is true for \(\Phi^{\prime}\).
Let \(\beta^{\prime}\) be the dominant short root in \(\Phi^{\prime}\) so \(\beta=\alpha_{1}+\alpha_{2}+\beta^{\prime}=\sigma_{2}\sigma_{1}\beta^{\prime}\). Note that \(\beta\) is orthogonal to all simple roots except for \(\alpha_{2}\), and \(\langle\beta,\alpha_{2}\rangle=1\). As before we have
\[\sigma_{\beta}=\sigma_{2}\sigma_{1}\sigma_{\beta^{\prime}}\sigma_{1}\sigma_{2},\]
and the remaining part of the argument proceeds as in part (i).
(iii) The claim follows by direct verification.
(iv) In cases (i) and (ii), the claim follows by induction, and in (iii) by direct verification.
One consequence of Lemma 6.6 is the following identity.
**Lemma 6.7**.: _For \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\), let \(\Phi(\sigma_{\beta})\cap\Phi_{-1}=\{\gamma_{1},\ldots,\gamma_{2t}\}\) as in Lemma 6.6 (iv). Then,_
\[\prod_{\alpha\in\Phi_{1}^{+}}\frac{1-u^{2}\underline{x}^{2\alpha}}{1-u^{2} \underline{x}^{2\sigma_{\beta}\alpha}}=\prod_{j=1}^{2t}k(\underline{x}^{ \gamma_{j}})^{-1},\]
_where \(k(x)=\frac{1-u^{2}x^{-2}}{1-x^{2}}\)._
Proof.: Since \(\sigma_{\beta}\alpha_{i}=\alpha_{i}\), the reflection \(\sigma_{\beta}\) keeps \(\Phi_{e}\) stable, for any \(e\in\{-1,0,1\}\). Therefore \(\sigma_{\beta}\) restricts to a bijection of \(\Phi_{1}^{+}\setminus\Phi(\sigma_{\beta})\), giving
\[\prod_{\alpha\in\Phi_{1}^{+}}\frac{1-u^{2}\mathbf{x}^{2\alpha}}{1-u^{2}\mathbf{x}^{2\sigma_{\beta}\alpha}}=\prod_{\gamma\in\Phi_{1}^{+}\cap\Phi(\sigma_{\beta})}\frac{1-u^{2}\mathbf{x}^{2\gamma}}{1-u^{2}\mathbf{x}^{2\sigma_{\beta}\gamma}}=\prod_{j=1}^{2t}\frac{1-u^{2}\mathbf{x}^{2(\gamma_{j}+\alpha_{i})}}{1-u^{2}\mathbf{x}^{2(\gamma_{j}+\alpha_{i}-\beta)}}. \tag{6.10}\]
Above, we used that \(\Phi_{1}\cap\Phi(\sigma_{\beta})=\{\gamma_{j}+\alpha_{i}\mid j=1,\ldots,2t\}\), according to Lemma 6.6 (iv). If \(t=0\), we have \(\Phi_{1}^{+}\cap\Phi(\sigma_{\beta})=\emptyset\) and all the products above are equal to \(1\).
We also have \(\Phi_{-1}\cap\Phi(\sigma_{\beta})=\{\gamma_{j}\mid 1\leqslant j\leqslant 2t\}\), and the map \(\gamma\mapsto-\sigma_{\beta}\sigma_{i}\gamma=\beta-\gamma-\alpha_{i}\) is a bijection of this set. It follows that, for \(\gamma\in\Phi_{-1}\cap\Phi(\sigma_{\beta})\), we have
\[\frac{1-u^{2}\mathbf{x}^{2(\gamma+\alpha_{i})}}{1-u^{2}\mathbf{x}^{2(\gamma+ \alpha_{i}-\beta)}}\bigg{|}_{x_{i}=1/u}=\frac{1-\underline{x}^{\gamma}}{1-u^{2 }\underline{x}^{-\gamma^{\prime}}}\,\]
with \(\gamma^{\prime}=\beta-\gamma-\alpha_{i}\in\Phi_{-1}\cap\Phi(\sigma_{\beta})\). Taking the product of these fractions over all \(\gamma\in\Phi_{-1}\cap\Phi(\sigma_{\beta})\), and comparing with (6.10) concludes the proof.
We are now ready to complete the second step in the proof of Theorem 6.1.
Remark that \((\sigma_{\beta}\mathbf{x})_{i}=x_{i}\) and \((\varepsilon^{\beta}\mathbf{x})_{i}=x_{i}\) for \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\), by (2.2). We let sign functions \(\varepsilon^{\lambda}\) with \((\varepsilon^{\lambda}\mathbf{x})_{i}=x_{i}\) act on the multivariable \(\underline{x}\) by restriction of their action on \(\mathbf{x}\), and the reflection \(\sigma_{\beta}\) by \((\sigma_{\beta}\underline{x})_{j}:=\underline{x}^{\sigma_{\beta}\alpha_{j}}= \mathbf{x}^{\sigma_{\beta}\alpha_{j}}|_{x_{i}=1/u}\) for \(j\neq i\).
**Proposition 6.8**.: _For \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\) we have_
\[F(\underline{x})=\begin{cases}F(\sigma_{\beta}\underline{x})J(\underline{x}^ {\beta},0)+F(\sigma_{\beta}\varepsilon^{\beta}\underline{x})J(\underline{x}^ {\beta},1)&\text{ if $\beta$ is short},\\ F(\sigma_{\beta}\underline{x})&\text{ if $\beta$ is long}.\end{cases}\]
_Moreover, \(F(\underline{x})=F(\varepsilon_{i}\underline{x})\)._
Proof.: Throughout the proof, we denote
\[R(\underline{x}):=\underset{x_{i}=1/u}{\mathrm{Res}}Z_{\Phi}(\mathbf{x};u).\]
Taking residues in the functional equation \(Z_{\Phi}(\mathbf{x};u)=Z_{\Phi}(\mathbf{x};u)\big{|}_{\sigma_{\beta}}^{\text{CG}}\), we use (2.6) to express the right-hand side. If the sign function \(\varepsilon_{\underline{\delta}}=\varepsilon^{\sum_{\gamma\in\Phi(\sigma_{\beta})}\delta_{\gamma}\gamma}\) in (2.6) changes the sign of \(x_{i}\), then the residue of the corresponding term in (2.6) vanishes, since \(\underset{x_{i}=-1/u}{\text{Res}}Z_{\Phi}=0\). Denote by \(E\) the set of all \(\varepsilon_{\underline{\delta}}\) with \((\varepsilon_{\underline{\delta}}{\bf x})_{i}=x_{i}\). We obtain
\[R(\underline{x})=\sum_{\underline{\delta}\in\{0,1\}^{\ell_{s}}\atop\varepsilon_{\underline{\delta}}\in E}R(\sigma_{\beta}\varepsilon_{\underline{\delta}}\underline{x})\cdot\Pi_{\underline{\delta}}\, \tag{6.12}\]
where
\[\Pi_{\underline{\delta}}=\Pi_{\sigma_{\beta},\underline{\delta}}(\underline{x}):=\prod_{\gamma\in\Phi^{s}(\sigma_{\beta})}J\left((-1)^{\langle\gamma,\sum_{\alpha\prec\gamma}\delta_{\alpha}\alpha\rangle}\underline{x}^{\gamma},\delta_{\gamma}\right), \tag{6.13}\]
and \(\Phi^{s}(\sigma_{\beta})\), \(\ell_{s}=|\Phi^{s}(\sigma_{\beta})|\) are as in Lemma 2.3. Recall that the order \(\prec\) on \(\Phi^{s}(\sigma_{\beta})\) is induced by the order on \(\Phi(\sigma_{\beta})\) in (2.1). In all cases we have, by Lemma 6.6,
\[\Phi^{s}(\sigma_{\beta})=\{\gamma_{1}\prec\gamma_{1}+\alpha_{i}\prec\ldots \prec\gamma_{t}\prec\gamma_{t}+\alpha_{i}\prec\beta\prec\gamma_{t+1}\prec \gamma_{t+1}+\alpha_{i}\prec\ldots\prec\gamma_{2t}\prec\gamma_{2t}+\alpha_{i}\},\]
for some \(t\geqslant 0\), with the central element \(\beta\) missing if \(\beta\) is a long root as in Lemma 6.6 (iii). We have \(t=0\) only when \(\beta\) is the dominant short root in a subsystem of type \(C_{2}=B_{2}\), in which case \(\Phi^{s}(\sigma_{\beta})=\{\beta\}\).
For a sign \(\varepsilon_{\underline{\delta}}\in E\), the condition \((\varepsilon_{\underline{\delta}}{\bf x})_{i}=x_{i}\) translates, by (2.2), to
\[\sum_{j=1}^{2t}(\delta_{\gamma_{j}}+\delta_{\gamma_{j}+\alpha_{i}})\equiv 0 \mod 2\, \tag{6.14}\]
the condition being automatically satisfied if \(t=0\). Assume \(t>0\), and let \(\gamma:=\gamma_{2t}\), so the last two elements in \(\Phi^{s}(\sigma_{\beta})\) are \(\gamma\) and \(\gamma+\alpha_{i}\). For \(\underline{\delta}\) with \(\varepsilon_{\underline{\delta}}\in E\), define a tuple \(\underline{\delta}^{\prime}\) such that \(\underline{\delta}^{\prime}\) and \(\underline{\delta}\) are identical, except on the last two positions, for which
\[\delta^{\prime}_{\gamma}=1-\delta_{\gamma},\qquad\delta^{\prime}_{\gamma+ \alpha_{i}}=1-\delta_{\gamma+\alpha_{i}}.\]
Note that \(\varepsilon_{\underline{\delta}^{\prime}}=\varepsilon_{\underline{\delta}}\varepsilon_{i}\), so \(R(\sigma_{\beta}\varepsilon_{\underline{\delta}}\underline{x})=R(\sigma_{\beta}\varepsilon_{\underline{\delta}^{\prime}}\underline{x})\) in (6.12) can be taken as a common factor in front of the sum \(\Pi_{\underline{\delta}}+\Pi_{\underline{\delta}^{\prime}}\). To compute this sum, remark that the first \(\ell_{s}-2\) factors in \(\Pi_{\underline{\delta}}\) and \(\Pi_{\underline{\delta}^{\prime}}\) are the same, and the last two factors of \(\Pi_{\underline{\delta}}\) are
\[J((-1)^{\langle\gamma,\gamma^{\prime}\rangle}\underline{x}^{\gamma},\delta_{ \gamma})\cdot J((-1)^{\langle\gamma,\gamma^{\prime}\rangle+\delta_{\gamma+ \alpha_{i}}}\underline{x}^{\gamma}/u,\delta_{\gamma+\alpha_{i}}), \tag{6.15}\]
with \(\gamma^{\prime}=\sum_{\alpha\prec\gamma}\delta_{\alpha}\alpha\). Here, we have used condition (6.14) and Lemma 6.6 (iv).
We now use the following identities
\[\begin{gathered} J(x,0)J(x/u,0)+J(x,1)J(-x/u,1)=k(x),\\ J(x,1)J(x/u,0)+J(x,0)J(-x/u,1)=0,\end{gathered} \tag{6.16}\]
where
\[k(x):=\frac{1-u^{2}x^{-2}}{1-x^{2}}.\]
By (6.16), the sum of the product in (6.15) with the corresponding product for \(\Pi_{\underline{\delta}^{\prime}}\) equals \(k(\underline{x}^{\gamma})\) or \(0\), depending on whether \(\delta_{\gamma}\) and \(\delta_{\gamma+\alpha_{i}}\) are the same or not, respectively. We conclude that the sum \(\Pi_{\underline{\delta}}+\Pi_{\underline{\delta}^{\prime}}\) vanishes, unless \(\delta_{\gamma}=\delta_{\gamma+\alpha_{i}}\), when it equals \(k(\underline{x}^{\gamma_{2t}})\) times a product involving only the first \(\ell_{s}-2\) elements of \(\underline{\delta}\). When \(\delta_{\gamma}=\delta_{\gamma+\alpha_{i}}\), we also have
\[\sum_{j=1}^{2t-1}(\delta_{\gamma_{j}}+\delta_{\gamma_{j}+\alpha_{i}})\equiv 0 \mod 2.\]
Repeating the same reasoning with \(\gamma=\gamma_{2t-1}\) and smaller indices, we conclude that only the tuples \(\underline{\delta}\) having \(\delta_{\gamma_{j}}=\delta_{\gamma_{j}+\alpha_{i}}\), for all \(1\leqslant j\leqslant 2t\), contribute non-trivially to the sum in (6.12). We obtain
\[R(\underline{x})=\prod_{j=1}^{2t}k(\underline{x}^{\gamma_{j}})\cdot\begin{cases} R(\sigma_{\beta}\underline{x})J(\underline{x}^{\beta},0)+R(\sigma_{\beta} \varepsilon^{\beta}\underline{x})J(\underline{x}^{\beta},1)&\text{ if $\beta$ is short,}\\ R(\sigma_{\beta}\underline{x})&\text{ if $\beta$ is long.}\end{cases}\]
We also have
\[F(\underline{x})=R(\underline{x})\cdot\prod_{\alpha\in\Phi_{1}^{+}}(1-u^{2} \underline{x}^{2\alpha}),\]
and the identity in Lemma 6.7 concludes the argument.
As a consequence of Proposition 6.8, we have the following complement to Lemma 6.3.
**Lemma 6.9**.: _For \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\), we have_
\[\operatorname*{Res}_{\begin{subarray}{c}x_{i}=1/u\\ \underline{x}^{\beta}=-1/u\end{subarray}}Z_{\Phi}(\mathbf{x};u)=0.\]
**Corollary 6.10**.: _The numerator \(N_{0}(\underline{x};u)\) from Proposition 6.4 is divisible by \(1+u\underline{x}^{\gamma}\), for any short root \(\gamma\in\Pi(\Phi_{0})\)._
The third step in the proof of Theorem 6.1 is the following.
**Proposition 6.11**.: \(N_{0}(\underline{x};u)\) _is a polynomial in \(\underline{x}^{\gamma}\), \(\gamma\in\Pi(\Phi_{0})\)._
Proof.: We distinguish two cases.
_Case I_: the root system \(\Phi\) is not of type \(A_{r}\). In this case, the number of roots \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\) equals the number of neighbors of the node \(i\) in the Dynkin diagram of \(\Phi\), as can be seen from the tables in §5.4 and Appendix A. By Proposition 6.8 the function \(N_{0}(\underline{x};u)\) is even under the sign function \(\varepsilon_{i}\), which changes the sign of \(x_{j}\) precisely for \(\alpha_{j}\) such that \(\langle\alpha_{i},\alpha_{j}\rangle=-1\) (necessarily, \(\alpha_{j}\) is a short root). Using that the cardinality of \(\Pi_{\mathrm{new}}(\Phi_{0})\) equals the number of neighbors of the node \(i\) in the Dynkin diagram of \(\Phi\), it follows that in each monomial appearing in \(N_{0}(\underline{x};u)\) we can make a substitution
\[\prod_{\langle\alpha_{j},\alpha_{i}\rangle<0}x_{j}^{a_{j}}=m\cdot\prod_{\beta \in\Pi_{\mathrm{new}}(\Phi_{0})}(\underline{x}^{\beta})^{c_{\beta}},\]
where \(a_{j}\geqslant 0\) and \(m\) is a Laurent monomial in \(u\) and variables \(x_{k}\) with \(\langle\alpha_{k},\alpha_{i}\rangle=0\). Taking into account that \(\sum_{\langle\alpha_{j},\alpha_{i}\rangle=-1}a_{j}\) is even, a verification of the cases in §5.4 and in Appendix A shows that all exponents \(c_{\beta}\) on the right-hand side are integral, and at least one is positive if one of the \(a_{j}\) is positive on the left-hand side. Therefore, after the substitution above, \(N_{0}(\underline{x};u)\) becomes a Laurent polynomial in the variables \(\underline{x}^{\gamma}\) for \(\gamma\in\Pi(\Phi_{0})\), such that each monomial that contains some negative exponents also contains a factor \((\underline{x}^{\beta})^{c}\) with \(c>0\), for some \(\beta\in\Pi_{\mathrm{new}}(\Phi_{0})\). By Corollary 3.5, this is possible only if \(N_{0}(\underline{x};u)\) is a polynomial in the variables \(\underline{x}^{\gamma}\) after the substitution above.
_Case II_: the root system \(\Phi\) is of type \(A_{r}\). If \(i=1\), then \(\Phi_{0}\) is the root system of type \(A_{r-2}\) with simple roots \(\alpha_{k}\), \(k\geqslant 3\). The bound on degree in Proposition 6.4 shows that \(N_{0}(\underline{x};u)\) does not depend on \(x_{2}\), which is our claim.
If \(i\neq 1,r\), then \(\beta=\alpha_{i-1}+\alpha_{i}+\alpha_{i+1}\) is the unique root in \(\Pi_{\text{new}}(\Phi_{0})\). We want to show that \(N_{0}(\underline{x};u)\) is a polynomial in \(u\underline{x}^{\beta}=x_{i-1}x_{i+1}\) and \(x_{k}\), \(k\not\in\{i-1,i,i+1\}\), so we decompose
\[F(\underline{x})=\sum_{a>0}x_{i-1}^{2a}f_{a}(\underline{x})+f_{0}(\underline{ x})+\sum_{a>0}x_{i+1}^{2a}g_{a}(\underline{x})\]
where \(f_{a}(\underline{x}),g_{a}(\underline{x})\) are of the form \(P_{a}(\underline{x})/D_{\Phi_{0}}(\mathbf{x})|_{x_{i}=1/u}\) with \(P_{a}(\underline{x})\) a polynomial in \(\underline{x}^{\gamma}\), \(\gamma\in\Pi(\Phi_{0})\). The exponents of \(x_{i\pm 1}\) are even in this expression because \(F(\underline{x})\) is even with respect to \(\varepsilon_{i}\).
We claim that \(f_{a}(\underline{x})=g_{a}(\underline{x})=0\), for all \(a\neq 0\). By symmetry we concentrate on \(f_{a}(\underline{x})\). The decomposition above is preserved by the actions \(\big{|}_{\sigma_{k}}^{\text{CG}}\) for \(\langle\alpha_{k},\alpha_{i}\rangle=0\), and \(\big{|}_{\sigma_{\beta}}^{\text{CG}}\). It follows that, for \(a>0\), the function \(f_{a}(\underline{x})\) is the specialization at \(x_{i}=1/u\) of a function invariant under the twisted action of the Weyl group of \(\Phi_{0}\), for some twisting parameter \(\omega\) in the weight lattice of \(\Phi_{0}\), as in Section 3. We now identify \(f_{a}(\underline{x})\) with this invariant function.
For a contradiction, assume that \(f_{a}(\underline{x})\neq 0\), and write \(f_{a}(\underline{x})=P_{a}(\underline{x})/D_{\Phi_{0}}(\mathbf{x})|_{x_{i}=1/u}\) as above. Proposition 3.3, applied to \(\Phi_{0}\), implies that there is a strongly dominant weight \(\xi\) such that \(O_{\xi}\subset\text{Supp}(P_{a}(\underline{x}))\). The bound on degree in Proposition 6.4 implies that \(0\leqslant\lambda<2\rho_{0}\) for \(\lambda\in\text{Supp}(P_{a}(\underline{x}))\), with \(\rho_{0}\) and \(w_{\circ}\) being the half-sum of positive roots in \(\Phi_{0}\) and the longest element in the Weyl group of \(\Phi_{0}\). Setting \(\theta=\omega+\rho_{0}\) as in Section 3, we have

\[0\leqslant\theta-\xi\leqslant\theta-w_{\circ}\xi<2\rho_{0}=\rho_{0}-w_{\circ}\rho_{0}.\]
It follows that \(\xi-\rho_{0}<w_{\circ}(\xi-\rho_{0})\), which is impossible since \(\xi-\rho_{0}\in Q_{0}^{+}\) (non-negative integral linear combinations of elements in \(\Phi_{0}^{+}\)), and \(w_{\circ}\) maps \(\Phi_{0}^{+}\) onto \(\Phi_{0}^{-}\). The contradiction shows that \(f_{a}(\underline{x})=0\). Therefore, \(F(\underline{x})=f_{0}(\underline{x})\), which is precisely our claim.
### Proof of Theorem 6.1
We are now ready to assemble all the results in this section to prove Theorem 6.1. By Proposition 6.4 and Proposition 6.11, we have
\[F(\underline{x})=\frac{N_{0}(\underline{x};u)}{D_{\Phi_{0}}(\mathbf{x};u)|_{x_ {i}=1/u}}\;,\]
where \(N_{0}(\underline{x};u)\) is a polynomial in \(\underline{x}^{\gamma}\), \(\gamma\in\Pi(\Phi_{0})\). Proposition 6.8 shows that \(F(\underline{x})\) has the same transformation properties as \(Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=1/u}\). If \(\Phi_{0}\) is irreducible, Proposition 3.8 and the fact that \(N_{0}(\underline{0};u)=1\) finish the proof of Theorem 6.1 in this case.
If \(\Phi_{0}\) is reducible (which is the case when \(\Phi\) is of type \(C_{n}\), \(n\geqslant 3\), or \(D_{n}\), \(n\geqslant 4\)), the argument in the previous paragraph has to be slightly adjusted. If \(\Phi\) is not of type \(D_{4}\), we have \(\Phi_{0}=\Phi_{0}^{\prime}\cup\{\pm\gamma\}\) with \(\Phi_{0}^{\prime}\) irreducible and \(\gamma\in\Pi(\Phi_{0})\) orthogonal to \(\Phi_{0}^{\prime}\). From Corollary 6.10, we have \(N_{0}(\underline{x})=(1+u\underline{x}^{\gamma})N_{0}^{\prime}(\underline{x})\) for some polynomial \(N_{0}^{\prime}(\underline{x})\). We obtain that
\[F(\underline{x})=\frac{1}{1-u\underline{x}^{\gamma}}F^{\prime}(\underline{x}),\quad F^{\prime}(\underline{x}):=\frac{N_{0}^{\prime}(\underline{x})}{D_{ \Phi_{0}^{\prime}}(\mathbf{x})|_{x_{i}=1/u}},\]
and \(F^{\prime}(\underline{x})\) is invariant under the Weyl group of the irreducible component \(\Phi_{0}^{\prime}\). Moreover, \(N_{0}^{\prime}(\underline{x})\) satisfies the conclusion of Proposition 6.11 for \(\Phi_{0}\) replaced with \(\Phi_{0}^{\prime}\). Therefore, we can apply Proposition 3.8 as before to conclude that \(F^{\prime}(\underline{x})=Z_{\Phi_{0}^{\prime}}(\mathbf{x};u)|_{x_{i}=1/u}\). In consequence, we have
\[F(\underline{x})=Z_{\Phi_{0}^{\prime}}(\mathbf{x};u)|_{x_{i}=1/u}\cdot Z_{A_{1} }(\underline{x}^{\gamma};u)=Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=1/u}.\]
Finally, if \(\Phi\) is of type \(D_{4}\), then \(\Phi_{0}\) is isomorphic to the direct sum of three root systems of type \(A_{1}\), and a similar argument applies. Therefore, the proof of Theorem 6.1 is concluded.
## 7. Parabolic subgroup averages
This section is dedicated to the proof of Theorem D. We continue to work under the hypothesis that \(\Phi\) is an irreducible root system not of type \(G_{2}\), and \(\alpha_{i}\) is a fixed short root. We first describe the kernel function that appears in the statement. The description involves a finite directed graph \(\mathcal{K}_{\Phi}(\alpha_{i})\) with vertices labeled by positive roots, akin to Kostant's cascade construction [29]. The directed graph is the Hasse diagram (the graphical representation of the cover relations) of a finite partial order relation on the set labeling the vertices. If \(\beta,\gamma\in\mathcal{K}_{\Phi}(\alpha_{i})\) and \(\beta\) is immediately followed by \(\gamma\) in the partial order (i.e. there is a directed edge from \(\beta\) to \(\gamma\)), we write \(\beta\lessdot\gamma\). We will routinely interchange between these two equivalent descriptions of \(\mathcal{K}_{\Phi}(\alpha_{i})\) (directed graph and partial order relation). The partial order will match the order \(\leqslant\) restricted to the set of positive roots that label the vertices of \(\mathcal{K}_{\Phi}(\alpha_{i})\).
Associated to \(\mathcal{K}_{\Phi}(\alpha_{i})\), there is an auxiliary copy \(\mathcal{F}_{\Phi}(\alpha_{i})\) of the same graph, whose vertices are irreducible root sub-systems (with corresponding bases) of \(\Phi\). If \(\gamma\lessdot\gamma^{\prime}\) is a directed edge in \(\mathcal{K}_{\Phi}(\alpha_{i})\), then the corresponding vertices \(\Psi\), \(\Psi^{\prime}\) in \(\mathcal{F}_{\Phi}(\alpha_{i})\) are irreducible root systems with \(\Psi^{\prime}\subset\Psi\), such that \(\gamma\in\Pi(\Psi)\) and \(\gamma^{\prime}\in\Pi(\Psi^{\prime})\). The bases for the root systems in \(\mathcal{F}_{\Phi}(\alpha_{i})\) are inherited from the basis \(\Pi(\Phi)\), and they will not be included in the notation.
The two directed graphs are constructed recursively. The minimal element of \(\mathcal{K}_{\Phi}(\alpha_{i})\) is \(\alpha_{i}\), and the corresponding vertex in \(\mathcal{F}_{\Phi}(\alpha_{i})\) is \(\Phi\), with basis \(\Pi(\Phi)\). Given a vertex labelled \(\beta\) in \(\mathcal{K}_{\Phi}(\alpha_{i})\) and the corresponding irreducible root system \(\Psi\) in \(\mathcal{F}_{\Phi}(\alpha_{i})\), with basis \(\Pi(\Psi)\), the vertices \(\gamma\) such that \(\beta\lessdot\gamma\) in \(\mathcal{K}_{\Phi}(\alpha_{i})\) and their corresponding root systems in \(\mathcal{F}_{\Phi}(\alpha_{i})\) are constructed as follows. If \(\beta\) is a long root, then it is a terminal vertex in \(\mathcal{K}_{\Phi}(\alpha_{i})\). Otherwise, let \(\beta^{\perp}\subset\Psi\) be the orthogonal sub-system that consists of roots orthogonal to \(\beta\). The basis \(\Pi(\Psi)\) induces a basis \(\Pi(\beta^{\perp})\), and we denote by \(\Pi_{\rm new}(\beta^{\perp})\) the set of elements of \(\Pi(\beta^{\perp})\) that are not in \(\Pi(\Psi)\). Then the vertices \(\gamma\) such that \(\beta\lessdot\gamma\) in \(\mathcal{K}_{\Phi}(\alpha_{i})\) are precisely the elements in \(\Pi_{\rm new}(\beta^{\perp})\); the corresponding root system in \(\mathcal{F}_{\Phi}(\alpha_{i})\) is the irreducible component of \(\beta^{\perp}\) that contains \(\gamma\). Naturally, if \(\Pi_{\rm new}(\beta^{\perp})\) is empty, then \(\beta\) is a terminal vertex in \(\mathcal{K}_{\Phi}(\alpha_{i})\).
The description of \(\Pi_{\rm new}(\beta^{\perp})\) given in Section 5 applies, so it is easy to construct the two graphs in all cases. In particular, one checks that the graph \(\mathcal{F}_{\Phi}(\alpha_{i})\) is well-defined, namely the root system associated with a given vertex only depends on the corresponding root of \(\mathcal{K}_{\Phi}(\alpha_{i})\).
For certain nodes \(i\), the directed graph \(\mathcal{K}_{\Phi}(\alpha_{i})\) is a rooted tree isomorphic to the so-called Kostant cascade, a decreasing rooted tree of strongly orthogonal roots defined in [28, 29]. Each vertex in the Kostant cascade is the highest root of an associated irreducible root system. The root vertex in the Kostant cascade is the highest root in \(\Phi\). For a fixed vertex \(\beta\), the vertices immediately lower in the tree order are the highest roots of the irreducible components of the root sub-system \(\beta^{\perp}\).
_Example 7.1_.: We give two examples for which \(\mathcal{K}_{\Phi}(\alpha_{i})\) is a rooted tree isomorphic to the cascade of roots in [28, Table III]. For \(\Phi\) of type \(A_{r}\) with \(r=2i-1\), the graph \(\mathcal{K}_{\Phi}(\alpha_{i})\) is a chain (that is, a directed tree with one terminal vertex)
\[\beta_{1}\lessdot\beta_{2}\lessdot\ldots\lessdot\beta_{i},\]
with \(\beta_{j}=\alpha_{i-j+1}+\ldots+\alpha_{i+j-1}\). For \(\Phi\) of type \(D_{r}\) with \(r=2i\) even, the directed graph \(\mathcal{K}_{\Phi}(\alpha_{i})\) is pictured in Figure 2. For these examples, the kernel functions \(K_{\Phi,\alpha_{i}}({\bf x})\) defined in §7.3 below are explicitly indicated in §1.7.
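To make the first of these examples concrete, take \(\Phi\) of type \(A_{5}\) and \(i=3\); the chain is

\[\beta_{1}=\alpha_{3}\lessdot\beta_{2}=\alpha_{2}+\alpha_{3}+\alpha_{4}\lessdot\beta_{3}=\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}+\alpha_{5},\]

and a direct computation gives \(\langle\beta_{j},\beta_{k}\rangle=0\) for \(j\neq k\), in accordance with the orthogonality of the cascade.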
The kernel function \(K_{\Phi,\alpha_{i}}(\mathbf{x})\) in Theorem D is defined only when the directed graph \(\mathcal{K}_{\Phi}(\alpha_{i})\) has a special structure, as described in the next lemma.
**Lemma 7.2**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\) and \(\alpha_{i}\) a simple short root. The following are equivalent_
1. _All the roots_ \(\beta\in\mathcal{K}_{\Phi}(\alpha_{i})\) _have_ \(n_{i}(\beta)=1\)_._
2. _The node_ \(i\) _is one of the admissible nodes in Table_ 1_._
_Furthermore, if these conditions are satisfied, then removing from \(\mathcal{K}_{\Phi}(\alpha_{i})\) the terminal vertices yields a chain._
In particular, \(\mathcal{K}_{\Phi}(\alpha_{i})\) is a rooted tree under the assumptions of the lemma.
Proof.: The equivalence is verified using the recursive construction of \(\mathcal{K}_{\Phi}(\alpha_{i})\), and the information in the tables in SS5.4 and Appendix A. The last statement also follows from a case by case analysis.
_Remark 7.3_.: If the conditions in Lemma 7.2 are satisfied, then the highest root \(\theta\in\Phi\) has \(n_{i}(\theta)\leqslant 2\). This condition is satisfied automatically except in the four exceptional root systems.
Let \(i\) be one of the admissible nodes in Table 1. By the lemma, removing from \(\mathcal{K}_{\Phi}(\alpha_{i})\) the terminal vertices yields a (possibly empty) chain
\[\alpha_{i}=\beta_{1}\lessdot\beta_{2}\lessdot\ldots\lessdot\beta_{N};\]
let \(\Psi_{1}\supset\Psi_{2}\supset\ldots\supset\Psi_{N}\) be the corresponding root systems in \(\mathcal{F}_{\Phi}(\alpha_{i})\). This chain structure is used to define the kernel function \(K_{\Phi,\alpha_{i}}(\mathbf{x})\), and ultimately makes possible the induction argument in the proof of Theorem 7.4. We recursively define the kernel function \(K_{\Phi,\alpha_{i}}(\mathbf{x})\) as follows.
* When \(\Phi\) is simply-laced, we define (7.1) \[K_{\Phi,\alpha_{i}}(\mathbf{x})=\prod_{j=1}^{N}\prod_{\gamma\in\Pi_{\text{ new}}(\beta_{j}^{\perp})}\frac{1}{1-\mathbf{x}^{\gamma-\beta_{j}}}\,\] so that we have for \(1\leqslant j\leqslant N\): \[K_{\Psi_{j},\beta_{j}}(\mathbf{x})=K_{\Psi_{j+1},\beta_{j+1}}(\mathbf{x}) \cdot\prod_{\gamma\in\Pi_{\text{new}}(\beta_{j}^{\perp})}\frac{1}{1-\mathbf{ x}^{\gamma-\beta_{j}}},\] setting \(K_{\Psi_{N+1},\beta_{N+1}}(\mathbf{x})=1\).
* When \(\Phi\) is double-laced, we define (7.2) \[K_{\Phi,\alpha_{i}}(\mathbf{x})=\prod_{j=1}^{N}\prod_{\gamma\in\Pi_{\text{new}}^{*}(\beta_{j}^{\perp})}\frac{1}{1-\mathbf{x}^{\gamma-\beta_{j}}}\.\] Recall that \(\Pi_{\text{new}}^{*}(\beta^{\perp})\) consists of the elements in \(\Pi_{\text{new}}(\beta^{\perp})\) which are highest roots in a subdiagram of type \(A_{3}\). Therefore \(K_{\Phi,\alpha_{i}}(\mathbf{x})=1\), unless \(\Phi\) is of type \(C_{r}\) and \(1<i<r-1\), when the explicit formula for \(K_{\Phi,\alpha_{i}}(\mathbf{x})\) is given in §1.7.
Remark that, according to the conditions in Lemma 7.2, we have \(n_{i}(\beta)=1\) for all \(\beta\in\mathcal{K}_{\Phi}(\alpha_{i})\). In consequence, \(K_{\Phi,\alpha_{i}}(\mathbf{x})\) is independent of \(x_{i}\) and there are no extra poles involving \(x_{i}\) in the right-hand side of the formula in Theorem 7.4 below.
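To illustrate (7.1), consider \(\Phi\) of type \(A_{2i-1}\) with the chain of Example 7.1. Here \(N=i-1\), \(\Pi_{\rm new}(\beta_{j}^{\perp})=\{\beta_{j+1}\}\) for \(1\leqslant j\leqslant i-1\), and \(\beta_{j+1}-\beta_{j}=\alpha_{i-j}+\alpha_{i+j}\), so unwinding the definition gives

\[K_{\Phi,\alpha_{i}}(\mathbf{x})=\prod_{j=1}^{i-1}\frac{1}{1-x_{i-j}x_{i+j}},\]

which, for \(A_{3}\) and \(i=2\), recovers the factor \(1/(1-x_{1}x_{3})\) of Example 7.7 below.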
We consider the following modified version of \(Z_{\Phi}(\mathbf{x};u)\), obtained by removing the poles \(\mathbf{x}^{\alpha}=\pm 1/u\) with \(n_{i}(\alpha)\geqslant 2\), as follows
\[Z_{\Phi}^{(i)}(\mathbf{x};u)=Z_{\Phi}(\mathbf{x};u)\cdot\prod_{\begin{subarray}{c}\alpha\in\Phi^{s}\\ n_{i}(\alpha)\geqslant 2\end{subarray}}(1-u^{2}\mathbf{x}^{2\alpha}),\qquad Z_{\Phi_{0}}^{(i)}(\mathbf{x};u)=Z_{\Phi_{0}}(\mathbf{x};u)\cdot\prod_{\begin{subarray}{c}\alpha\in\Phi_{0}^{s}\\ n_{i}(\alpha)\geqslant 2\end{subarray}}(1-u^{2}\mathbf{x}^{2\alpha}). \tag{7.3}\]
This is motivated by the observation that the right-hand side of formula (7.4) below has poles involving \(x_{i}\) only at \(\mathbf{x}^{\alpha}=\pm 1/u\) for \(\alpha\in\Phi^{+}\) with \(n_{i}(\alpha)=1\). With this notation, Theorem D can be restated as follows.
**Theorem 7.4**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(i\) be one of the admissible nodes specified in Table 1. We have,_
\[Z_{\Phi}^{(i)}(\mathbf{x};u)=\frac{\sum_{w\in W^{i}}\frac{1}{1-ux_{i}}K_{\Phi, \alpha_{i}}(\mathbf{x})\bigg{|}\,w}{\Delta_{\Phi^{i}}(\mathbf{x})}. \tag{7.4}\]
_Remark 7.5_.: The theorem is sharp, in the sense that the identity in the theorem does not hold as stated for nodes \(i\) not listed in Table 1, with \(K_{\Phi,\alpha_{i}}\) defined as above, using a longest chain in the directed graph obtained from \(\mathcal{K}_{\Phi}(\alpha_{i})\) by removing its terminal vertices. We verified this numerically for root systems \(\Phi\) of small rank (\(r\leqslant 10\)), by specializing some of the variables to random numbers.
However there are similar formulas if one allows for more general kernel functions; we give here an example for \(\Phi\) of type \(D_{6}\) and \(i=4\). The graph \(\mathcal{K}_{\Phi}(\alpha_{i})\) is given in Figure 3, where \(\beta_{1}=\alpha_{4}\), \(\Pi_{\text{new}}(\beta_{1}^{\perp})=\{\beta_{2}\), \(\beta_{2}^{\prime}\), \(\beta_{2}^{\prime\prime}\}\), and \(\theta\) is the highest root in \(\Phi\). Using a computer, we verified that formula (7.4) holds with
\[K_{\Phi,\alpha_{i}}(\mathbf{x})=(1+u\mathbf{x}^{\theta}|_{x_{i}=1/u})\cdot \prod_{\gamma\in\Pi_{\text{new}}(\beta_{1}^{\perp})}\frac{1}{1-\mathbf{x}^{ \gamma-\beta_{1}}}.\]
One can prove this along the same lines as Theorem 7.4, but we leave a more thorough investigation of the cases not covered in Theorem 7.4 for future work.
The proof of Theorem 7.4 will occupy the remainder of this section. The structure of the argument is the following. What makes the argument possible is a basic uniqueness result for rational functions with prescribed poles and invariance properties. This is stated as Lemma 7.6. We show in Proposition 7.13 that the uniqueness result ultimately reduces the proof of Theorem 7.4 to the equality of the residues at \(x_{i}=1/u\) of both sides of (7.4). The residue of \(Z_{\Phi}^{(i)}(\mathbf{x};u)\) is computed in Proposition 7.14, which is essentially a reformulation of Theorem 6.1. Finally, the equality of the residues on both sides of (7.4) follows from Proposition 7.14, by induction on the rank of \(\Phi\).
The next result shows that a rational function is determined uniquely by its residues at \(x_{i}=\pm 1/u\), provided it is invariant under the Chinta-Gunnells action of the maximal parabolic subgroup \(W^{i}\) and satisfies some easily verified conditions. For later use, we formulate the lemma allowing for potentially more general parabolic subgroups in place of \(W^{i}\). For a rational function \(f(\mathbf{x})\), we denote by \(\deg_{x_{i}}f(\mathbf{x})\) the degree of its numerator minus the degree of its denominator with respect to the variable \(x_{i}\).
**Lemma 7.6**.: _Let \(\Phi\) be a simply-laced root system and fix \(\alpha_{i}\), a simple root. Let \(W^{\prime}\) be a parabolic subgroup corresponding to a subdiagram of the Dynkin diagram of \(\Phi\) such that \(\sigma_{i}\not\in W^{\prime}\). Let \(f(\mathbf{x})\) be a rational function that satisfies the following properties_
1. \(f(\mathbf{x})=f(\mathbf{x})\big{|}_{w}^{\text{CG}}\) for all \(w\in W^{\prime}\);
2. the poles of \(f(\mathbf{x})\) involving \(x_{i}\) are simple and occur only at \(\mathbf{x}^{\alpha}=\pm 1/u\), for \(\alpha\) in the orbit \(W^{\prime}\alpha_{i}\);
3. \(\deg_{x_{i}}f(\mathbf{x})<0\).

Then \(f(\mathbf{x})\) is uniquely determined by its residues at \(x_{i}=\pm 1/u\).
We illustrate the use of Lemma 7.6 on three examples. The first illustrates in a simple case the proof of Theorem 7.4, while the other two will be needed later in the proof. For a list \(I\) of indices, we denote by \(\Phi^{I}\) the parabolic root sub-system of \(\Phi\) obtained by removing the nodes in \(I\), and by \(W^{I}\) the corresponding parabolic sub-group of \(W\).
_Example 7.7_.: Let \(\Phi\) be the root system of type \(A_{3}\). Then,
\[Z_{\Phi}(\mathbf{x};u)=\frac{\sum_{w\in W^{2}}\frac{1}{(1-ux_{2})}\frac{1}{(1- x_{1}x_{3})}\bigg{|}w}{\Delta_{\Phi^{2}}(\mathbf{x})}\.\]
This is a particular example of the equality in Theorem 7.4. To verify it, use Lemma 7.6 for \(\Phi\), the node \(i=2\), and \(W^{\prime}=W^{2}\), to show that it is enough to prove the corresponding equality of residues at \(x_{2}=1/u\). The equality of the residues is equivalent to
\[\operatorname*{Res}_{x_{2}=1/u}Z_{\Phi}^{[2]}(\mathbf{x};u)=\frac{1}{(1-x_{1} x_{3})},\]
which is precisely the claim of Theorem 6.1 for this case.
_Example 7.8_.: Let \(\Phi\) be the root system of type \(A_{3}\), and let \(\theta\) denote its highest root. Then,
\[(1-u^{2}\mathbf{x}^{\theta})\cdot Z_{\Phi}(\mathbf{x};u)=\frac{\sum_{w\in W^{ 1,3}}\frac{1}{(1-ux_{1})(1-ux_{3})}\bigg{|}w}{\Delta_{\Phi^{1,3}}(\mathbf{x})}\.\]
Indeed, Lemma 7.6 for \(\Phi\), the node \(i=3\), and \(W^{\prime}=W^{1,3}\), is used to show that it is enough to prove the corresponding equality for the residues at \(x_{3}=1/u\). This equality of the residues follows from the application of Theorem 6.1.
_Example 7.9_.: Let \(\Phi\) be the root system of type \(A_{5}\), and let \(\theta\) denote its highest root. Then,
\[(1-u^{2}\mathbf{x}^{\theta})(1-u^{2}\mathbf{x}^{\theta-\alpha_{1}})\cdot Z_{ \Phi}(\mathbf{x};u)=\frac{\sum_{w\in W^{2,5}}\frac{1}{(1-ux_{2})(1-x_{1}x_{3}) (1-ux_{5})}\bigg{|}w}{\Delta_{\Phi^{2,5}}(\mathbf{x})}\.\]
To see this, we apply Lemma 7.6 for \(A_{5}\), the node \(i=5\), and \(W^{\prime}=W^{2,5}\), to conclude that it is enough to prove the corresponding equality for the residues at \(x_{5}=1/u\). The equality of residues reduces precisely to the equality considered above in Example 7.7.
We use Lemma 7.6 to show that Theorem 7.4 reduces to proving the equality of the residues at \(x_{i}=1/u\) of both sides of (7.4). Before presenting the argument we need some technical preparation.
As before, fix an index \(i\) such that \(\alpha_{i}\) is short. By Remark 7.3, we can restrict to nodes \(i\) such that \(n_{i}(\theta)\leqslant 2\), for \(\theta\) the highest root in \(\Phi\), since this assumption is implied by the hypothesis of Theorem 7.4. Denote
\[U=\{\alpha\in\Phi^{+}:n_{i}(\alpha)=0\},\quad S=\{\alpha\in\Phi^{+}:n_{i}( \alpha)=1\},\quad T=\{\alpha\in\Phi^{+}:n_{i}(\alpha)=2\}. \tag{7.5}\]
Since \(n_{i}(\theta)\leqslant 2\), we have \(\Phi^{+}=U\cup S\cup T\). For \(e\in\{0,\pm 1,\pm 2\}\), denote
\[A_{e}=\{\alpha\in A:\langle\alpha_{i},\alpha\rangle=e\},\]
where \(A\) is any subset of \(\Phi\). By \(A^{s}\), respectively \(A^{\ell}\) we denote the short, respectively long roots in the set \(A\).
**Lemma 7.10**.: _Let \(\alpha_{i}\) be a short root such that \(n_{i}(\theta)\leqslant 2\). We have_
1. \(U_{1}=U_{2}=\emptyset\)_,_ \(S_{-2}=\emptyset\)_,_ \(S_{2}=\{\alpha_{i}\}\)_,_ \(T_{-1}=T_{-2}=\emptyset\)_;_
2. \(U_{-1}=(\Phi^{i,+}\setminus\Phi^{(i),+})^{s}\)_,_ \(U_{-2}=(\Phi^{i,+}\setminus\Phi^{(i),+})^{\ell}\)_,_ \(U_{0}=\Phi^{(i),+}\)_;_
3. \(S_{1}=\{\alpha_{i}+\alpha:\alpha\in U_{-1}\}\)_,_ \(S_{-1}=\{\alpha-\alpha_{i}:\alpha\in T_{1}\}\)_,_ \(T_{2}=\{\alpha+2\alpha_{i}:\alpha\in U_{-2}\}\)_._
Proof.: We use the fact that if \(\beta,\beta^{\prime}\in\Phi\) with \(\langle\beta,\beta^{\prime}\rangle=-1\) then \(\beta+\beta^{\prime}\in\Phi\), and if \(\langle\beta,\beta^{\prime}\rangle=1\), then \(\beta-\beta^{\prime}\in\Phi\). Part (i) immediately follows, taking into account that there are no roots \(\alpha\) with \(n_{i}(\alpha)>2\).
Clearly \(\Phi^{(i),+}\subset U_{0}\), and since \(n_{i}(\beta)>0\) for \(\beta\in\Pi_{\rm new}(\Phi_{0})\), the other inclusion also holds. We have \(U=\Phi^{i,+}\) and, from part (i), we have a disjoint union \(U=U_{0}\cup U_{-1}\cup U_{-2}\). Since \(U_{-1}\), respectively \(U_{-2}\) are the short, respectively long roots in \(U\setminus U_{0}\), the proof of part (ii) is finished.
The reflection \(\sigma_{i}\) gives bijections \(U_{-1}\simeq S_{1}\), \(S_{-1}\simeq T_{1}\), \(U_{-2}\simeq T_{2}\), proving part (iii).
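As an illustration of (7.5) and of the lemma, take \(\Phi\) of type \(A_{3}\) and \(i=2\), so that \(n_{2}(\theta)=1\). Then

\[U=U_{-1}=\{\alpha_{1},\alpha_{3}\},\qquad S=\{\alpha_{2},\,\alpha_{1}+\alpha_{2},\,\alpha_{2}+\alpha_{3},\,\theta\},\qquad T=\emptyset,\]

with \(S_{2}=\{\alpha_{2}\}\), \(S_{1}=\alpha_{i}+U_{-1}\), \(S_{0}=\{\theta\}\), and \(U_{0}=\Phi^{(i),+}=\emptyset\).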
Some subsets of the sets \(U\), \(S\), \(T\) above are orbits under the parabolic groups \(W^{i}\) or \(W^{(i)}\).
**Lemma 7.11**.: _Let \(\alpha_{i}\) be a short root. Then,_
1. \(W^{i}\alpha_{i}=S^{s}\)_;_
2. \(U_{-2}=W^{(i)}(\gamma-\alpha_{i})\)_, where_ \(\gamma\in\Pi_{\rm new}(\Phi_{0})\) _is the unique element such that_ \(\gamma-\alpha_{i}\in(\Phi^{i})^{\ell}\)_;_
3. _If_ \(n_{i}(\theta^{s})=2\)_, then_ \(W^{i}\theta^{s}=T^{s}\)_, where_ \(\theta^{s}\) _is the dominant short root in_ \(\Phi\)_._
Proof.: (i) Since \(n_{i}(w\alpha_{i})=n_{i}(\alpha_{i})\) for \(w\in W^{i}\), the inclusion \(W^{i}\alpha_{i}\subseteq S^{s}\) is clear. For the reverse inclusion, let \(\alpha\in S^{s}\), and let \(\beta\in W^{i}\alpha\) be of minimal height. We show that \(\beta\in W^{i}\alpha_{i}\). If \(\beta\) is a simple root, then \(\beta=\alpha_{i}\) as \(n_{i}(\beta)=1\), and we are done. If \(\beta\) is not simple, let \(\alpha_{j}\) be a simple root such that \(\sigma_{\beta}\alpha_{j}\in\Phi^{-}\). Therefore, \(\langle\beta,\alpha_{j}\rangle>0\), and if \(j\neq i\), it follows that \(\beta>\sigma_{j}\beta\in W^{i}\beta=W^{i}\alpha\), contradicting the minimality of \(\beta\). In consequence, \(j=i\), so \(\langle\beta,\alpha_{i}\rangle=1\) (since \(\beta\) is a short root), and \(\sigma_{i}\beta=\beta-\alpha_{i}\) has \(n_{i}(\beta-\alpha_{i})=0\). But, in this situation, \(\sigma_{\beta-\alpha_{i}}\in W^{i}\) and \(\sigma_{\beta-\alpha_{i}}\alpha_{i}=\beta\), showing that \(\beta\in W^{i}\alpha_{i}\). In conclusion, \(W^{i}\alpha_{i}=S^{s}\), finishing the proof of (i).
(ii) Let \(\alpha\in U_{-2}\). Since \(\alpha+2\alpha_{i}\in T_{2}\) is also a root, it follows that \(\alpha+\alpha_{i}\in S^{s}_{0}\). The root \(\alpha+\alpha_{i}\) is short because both \(\alpha,\alpha+2\alpha_{i}\) are long. We consider two cases.
If \(\Phi\) is of type \(B_{r}\) or \(F_{4}\), the root system \(\Phi_{0}\) is irreducible, and there is a unique short root \(\gamma\in\Pi_{\rm new}(\Phi_{0})\). The set \(S^{s}_{0}\) consists of those roots \(\gamma\) in \(\Phi_{0}\) having \(n_{\gamma}=1\) and, by part (i), we deduce \(S^{s}_{0}=W^{(i)}\gamma\). Therefore, \(\alpha+\alpha_{i}=w\gamma\) for some \(w\in W^{(i)}\), so \(\gamma=w^{-1}\alpha+\alpha_{i}\in U_{-2}+\alpha_{i}\), as \(W^{(i)}\) permutes both the sets \(U_{-2}\) and \(S^{s}_{0}\). It follows that \(U_{-2}=W^{(i)}(\gamma-\alpha_{i})\).
If \(\Phi\) is of type \(C_{r}\), the root system \(\Phi_{0}\) has an irreducible component of type \(A_{1}\) generated by a short root \(\gamma\in\Pi_{\rm new}(\Phi_{0})\). If \(i=1\), or \(i=r-1\), we have that \(S^{s}_{0}=\{\gamma\}\); otherwise, part (i) implies that \(S^{s}_{0}=W^{(i)}\beta\cup\{\gamma\}\), for \(\beta\in\Pi^{s}_{\rm new}(\Phi_{0})\). Since \(\gamma\) satisfies \(\gamma-\alpha_{i}\in(\Phi^{i})^{\ell}\), but \(\beta\) does not, it follows that \(U_{-2}=W^{(i)}(\gamma-\alpha_{i})=\{\gamma-\alpha_{i}\}\) is a set with one element. This proves part (ii).
(iii) The inclusion \(W^{i}\theta^{s}\subseteq T^{s}\) is clear. The reverse inclusion follows if we show that if \(\beta\in T^{s}\) has the largest height in its \(W^{i}\)-orbit, then \(\beta\) is dominant. This is indeed the case. For \(j\neq i\), we have \(\langle\beta,\alpha_{j}\rangle\geqslant 0\), otherwise \(\sigma_{j}(\beta)\in W^{i}\beta\) has larger height than \(\beta\). Also, \(\langle\beta,\alpha_{i}\rangle\geqslant 0\), otherwise \(n_{i}(\sigma_{i}(\beta))>2=n_{i}(\theta^{s})\).
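Continuing the \(A_{3}\) example with \(i=2\): we have \(W^{i}=\langle\sigma_{1},\sigma_{3}\rangle\), and

\[W^{i}\alpha_{2}=\{\alpha_{2},\,\alpha_{1}+\alpha_{2},\,\alpha_{2}+\alpha_{3},\,\alpha_{1}+\alpha_{2}+\alpha_{3}\}=S^{s},\]

as predicted by part (i) (in a simply-laced system all roots are short).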
As a consequence of Lemmas 7.10 and 7.11, we obtain the following explicit description of the evaluation at \(x_{i}=1/u\) of a zeta average in a special situation. This formula will be used in the proof of Theorem 7.4. To ease notation, we write \(\Delta_{\Phi^{i}}^{\ell}({\bf x})\) for \(\Delta_{(\Phi^{i})^{\ell}}({\bf x})\), and \(\Delta_{\Phi^{(i)}}^{\ell}({\bf x})\) for \(\Delta_{(\Phi^{(i)})^{\ell}}({\bf x})\).
**Lemma 7.12**.: _Assume \(\Phi\) is double-laced, and \(\alpha_{i}\) is a simple short root such that \(n_{i}(\theta)\leqslant 2\). Let \(\gamma\) be the unique root in \(\Pi_{\mathrm{new}}(\Phi_{0})\setminus\Pi_{\mathrm{new}}^{*}(\Phi_{0})\), and let \(\Phi_{0}^{\prime}\) be the irreducible component of \(\Phi_{0}\) which contains \(\gamma\). Then,_
\[Z_{\Phi_{0}^{\prime}}(\mathbf{x};u)\big{|}_{x_{i}=1/u}=\frac{\Delta_{\Phi^{(i)} }^{\ell}(\mathbf{x})}{\Delta_{\Phi^{i}}^{\ell}(\mathbf{x})}\.\]
Proof.: By Lemma 7.10 (ii), we have \(U_{-2}=(\Phi^{i,+}\setminus\Phi^{(i),+})^{\ell}\). It follows that
\[\frac{\Delta_{\Phi^{i}}^{\ell}(\mathbf{x})}{\Delta_{\Phi^{(i)}}^{\ell}( \mathbf{x})}=\prod_{\alpha\in U_{-2}}(1-\mathbf{x}^{\alpha})\ =\prod_{\alpha\in\alpha_{i}+U_{-2}}(1-u\underline{x}^{\alpha}).\]
By Lemma 7.11 (ii), we have \(U_{-2}+\alpha_{i}=W^{(i)}\gamma\), as the unique \(\gamma\in\Pi_{\mathrm{new}}(\Phi_{0})\setminus\Pi_{\mathrm{new}}^{*}(\Phi_{0})\) satisfies \(\gamma-\alpha_{i}\in U_{-2}\).
If \(\Phi\) is of type \(C_{r}\) with \(i\leqslant r-1\), then \(\Phi_{0}^{\prime}=\{\pm\gamma\}\) is of type \(A_{1}\), and \(W^{(i)}\gamma=\{\gamma\}\). It follows that
\[Z_{\Phi_{0}^{\prime}}(\mathbf{x};u)\big{|}_{x_{i}=1/u}=\frac{1}{1-u\underline {x}^{\gamma}}=\frac{\Delta_{\Phi^{(i)}}^{\ell}(\mathbf{x})}{\Delta_{\Phi^{i}} ^{\ell}(\mathbf{x})}.\]
If \(\Phi\) is of type \(F_{4}\) with \(i=4\), or of type \(B_{r}\) with \(i=r\), then \(\Phi_{0}=\Phi_{0}^{\prime}\) is irreducible of type \(B_{3}\) or, respectively, \(B_{r-1}\), with \(W^{(i)}\) generated by the simple reflections associated to long roots. Proposition 4.1 implies that
\[Z_{\Phi_{0}^{\prime}}(\mathbf{x};u)\big{|}_{x_{i}=1/u}=\prod_{\alpha\in W^{(i) }\gamma}\frac{1}{1-u\underline{x}^{\alpha}}=\frac{\Delta_{\Phi^{(i)}}^{\ell}( \mathbf{x})}{\Delta_{\Phi^{i}}^{\ell}(\mathbf{x})}.\qed\]
We are now ready to show that Theorem 7.4 reduces to proving an equality of the residues at \(x_{i}=1/u\).
**Proposition 7.13**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(i\) be one of the admissible nodes specified in Table 1. Then the identity (7.4) in Theorem 7.4 is equivalent to_
\[\frac{\Delta_{\Phi^{i}}(\mathbf{x})}{\Delta_{\Phi^{(i)}}(\mathbf{x})}\ \underset{x_{i}=1/u}{\mathrm{Res}}Z_{\Phi}^{(i)}=\frac{\sum_{w\in W^{(i)}}K_{ \Phi,\alpha_{i}}(\mathbf{x})|w}{\Delta_{\Phi^{(i)}}(\mathbf{x})}. \tag{7.6}\]
Proof.: We show that both sides of (7.4) satisfy the assumptions of Lemma 7.6. We use the notation in §7.7. Since \(n_{i}(\theta)\leqslant 2\), and \(W^{i}\) permutes the elements of \(T^{s}\), we have that \(Z_{\Phi}^{(i)}(\mathbf{x})\) is invariant under the Chinta-Gunnells action of \(W^{i}\). The right-hand side of (7.4) is also invariant under \(W^{i}\).
The poles involving \(x_{i}\) of the right-hand side of (7.4) are precisely \(\mathbf{x}^{\alpha}=\pm 1/u\) for \(\alpha\) in the orbit \(W^{i}\alpha_{i}\), while the poles of the left-hand side occur at \(\mathbf{x}^{\alpha}=\pm 1/u\) for \(\alpha\in S^{s}\). By Lemma 7.11 (i), we have \(W^{i}\alpha_{i}=S^{s}\). Therefore condition (b) in Lemma 7.6 is satisfied.
The degree in \(x_{i}\) of the right-hand side of (7.4) is clearly negative and, by Corollary 3.7, we have
\[\deg_{x_{i}}Z_{\Phi}^{(i)}=4|T^{s}|-n_{i}(2\rho)+|S^{\ell}|+2|T^{\ell}|=4|T^{s} |-|S|-2|T|+|S^{\ell}|+2|T^{\ell}|=2|T^{s}|-|S^{s}|.\]
If \(n_{i}(\theta^{s})=1\), the inequality \(|S^{s}|>2|T^{s}|\) is trivial (as \(T^{s}=\emptyset\)). If \(n_{i}(\theta^{s})=2\), by Lemma 7.11, the same inequality reduces to
\[\frac{|W^{i}|}{|W^{(i)}|}>2\frac{|W^{i}|}{|\operatorname{Stab}_{W^{i}}\theta^{s}|}.\]
This inequality can be directly verified in all cases in Table 1. For example, we include here the verification for \(D_{r}\) and \(2i\leqslant r+1\), \(i\neq 1\). In this case, we have \(\operatorname{Stab}_{W^{i}}\theta^{s}=W^{2,i}\), the Weyl group of the parabolic root
sub-system obtained by excluding the nodes \(2\) and \(i\) from the Dynkin diagram of \(\Phi\). The inequality above is equivalent to
\[|W_{A_{1}}\times W_{A_{i-3}}\times W_{D_{r-i}}|>2|W_{A_{i-2}}\times W_{D_{r-i-1}}|.\]
Since \(|W_{A_{r}}|=(r+1)!\), \(|W_{D_{r}}|=2^{r-1}r!\), the inequality reduces to \(3i<2r+1\), which is satisfied in the range \(2i\leqslant r+1\).
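For example, for \(r=6\) and \(i=3\) the inequality reads

\[|W_{A_{1}}\times W_{A_{0}}\times W_{D_{3}}|=2\cdot 1\cdot 24=48>16=2\cdot 2\cdot 4=2\,|W_{A_{1}}\times W_{D_{2}}|\]

(with \(W_{A_{0}}\) the trivial group), consistent with \(3i=9<13=2r+1\).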
The residues of the right-hand side of (7.4) at \(x_{i}=\pm 1/u\) involve only the terms in the sum that correspond to \(w\in W^{(i)}=\operatorname{Stab}_{W^{i}}\alpha_{i}\), and the residues at \(x_{i}=-1/u\) of both sides clearly vanish. Formula (7.6) expresses the equality of the residues at \(x_{i}=1/u\) of the two sides of (7.4), and our conclusion follows from Lemma 7.6.
Before presenting the proof of Theorem 7.4, we restate Theorem 6.1 in terms of the residue of \(Z^{(i)}_{\Phi}\). To ease notation, we write \(\Delta^{s}_{\Phi^{i}}(\mathbf{x})\) for \(\Delta_{(\Phi^{i})^{s}}(\mathbf{x})\), and \(\Delta^{s}_{\Phi^{(i)}}(\mathbf{x})\) for \(\Delta_{(\Phi^{(i)})^{s}}(\mathbf{x})\).
**Proposition 7.14**.: _Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and \(\alpha_{i}\) be a short simple root for which \(n_{i}(\theta)\leqslant 2\). We have,_
\[\frac{\Delta^{s}_{\Phi^{i}}(\mathbf{x})}{\Delta^{s}_{\Phi^{(i)}}(\mathbf{x})} \operatorname*{Res}_{x_{i}=1/u}Z^{(i)}_{\Phi}(\mathbf{x};u)=Z^{(i)}_{\Phi_{0} }(\mathbf{x};u)|_{x_{i}=1/u}\]
Proof.: Lemma 7.10 implies that \(\Phi_{1}^{+}=S_{1}\cup T_{1}\). From the description of \(S_{1}\) and \(U_{-1}\) in Lemma 7.10 (ii) and (iii) it follows that
\[\frac{Z^{[i]}_{\Phi}(\mathbf{x};u)}{Z^{(i)}_{\Phi}(\mathbf{x};u)}\bigg{|}_{x_{i}=1/u}=\frac{\prod_{\alpha\in S_{1}\cup T_{1}}(1-u^{2}\underline{x}^{2\alpha})}{\prod_{\alpha\in T^{s}_{0}\cup T_{1}}(1-u^{2}\underline{x}^{2\alpha})}=\frac{\Delta^{s}_{\Phi^{i}}(\mathbf{x})}{\Delta^{s}_{\Phi^{(i)}}(\mathbf{x})}\frac{1}{\prod_{\alpha\in T^{s}_{0}}(1-u^{2}\underline{x}^{2\alpha})}\.\]
The conclusion follows from Theorem 6.1.
**Proof of Theorem 7.4.** We are now ready to assemble all the results in this section to prove Theorem 7.4. By Proposition 7.13 and Proposition 7.14, the identity (7.4) reduces to
\[Z^{(i)}_{\Phi_{0}}(\mathbf{x};u)\Big{|}_{x_{i}=1/u}=\frac{\sum_{w\in W^{(i)}}K _{\Phi,\alpha_{i}}(\mathbf{x})|w}{\Delta_{\Phi^{(i)}}(\mathbf{x})}\cdot\frac{ \Delta^{\ell}_{\Phi^{(i)}}(\mathbf{x})}{\Delta^{\ell}_{\Phi^{i}}(\mathbf{x})}, \tag{7.7}\]
where \(\Delta^{\ell}_{\Phi}(\mathbf{x}):=\Delta_{\Phi^{\ell}}(\mathbf{x})=\prod_{\alpha\in(\Phi^{\ell})^{+}}(1-\mathbf{x}^{\alpha})\), as in Lemma 7.12. We prove (7.4) by induction on the rank of \(\Phi\), by showing that the identity (7.7) is of the same type but for a smaller rank root system. For the base cases, (7.7) is verified directly.
Throughout the proof, it is useful to refer to the tables in §5.4 and in Appendix A for the structure of \(\Phi_{0}\). Note that when \(\Phi_{0}\) is reducible, we have an orthogonal root system decomposition
\[\Phi_{0}=\Psi\oplus\Psi^{\prime},\]
with \(\Psi\) irreducible and \(\Psi^{\prime}\) of type \(A_{1}\), except when \(\Phi\) is of type \(D_{4}\), in which case \(\Psi\) is of type \(A_{1}\times A_{1}\). Therefore, if \(\Phi_{0}\) is reducible, \(Z_{\Phi_{0}}(\mathbf{x};u)\) factors as
\[Z_{\Phi_{0}}(\mathbf{x};u)=Z_{\Psi}(\mathbf{x};u)\cdot Z_{\Psi^{\prime}}( \mathbf{x};u). \tag{7.8}\]
#### 7.12.1. Simply-laced root systems
In this case, the fraction involving long roots in (7.7) is not present. We have three cases.
* \(\Phi\) is of type \(A_{r}\). For \(i=1\) or \(i=r\), we have \(\Phi_{0}=\Phi^{(i)}\), and \(K_{\Phi,\alpha_{i}}({\bf x})=1\), so (7.7) holds by the definition of \(Z_{\Phi^{(i)}}\). If \(1<i<r\), we have \(\Pi_{\rm new}(\Phi_{0})=\{\beta\}\), \[K_{\Phi,\alpha_{i}}({\bf x})=\frac{1}{1-u\underline{x}^{\beta}}K_{\Phi_{0},\beta}({\bf x}), \tag{7.9}\] and the root system \(\Phi^{(i)}\) is the parabolic sub-system of \(\Phi_{0}\) obtained by removing the node \(\beta\) from its Dynkin diagram. Therefore, formula (7.7) follows by induction on \(r\), the base cases being \(i=1\) or \(i=r\).
* \(\Phi\) is of type \(D_{r}\), \(r\geqslant 4\). For \(i=1\), we have \[\Psi=\Phi^{(i)},\quad\Psi^{\prime}=\{\pm\beta\},\quad K_{\Phi,\alpha_{i}}({\bf x })=\frac{1}{1-u\underline{x}^{\beta}}=Z_{\Psi^{\prime}}({\bf x})|_{x_{i}=1/u}.\] Therefore, (7.7) follows from the definition of \(Z_{\Psi}({\bf x};u)\) and the factorization (7.8).
For \(i=r\) (and, similarly, for \(i=r-1\)), we have \(\Psi^{\prime}=\{\pm\alpha_{r-1}\}\), and \(\Psi\) is of type \(D_{r-2}\), with \(\beta\) playing the role of the node \(r-2\). Therefore (7.7) follows by induction on \(r\), with the base case being \(r=4\), \(i=1\).
For \(1<i<r-2\), we have \(\Psi^{\prime}=\{\pm\beta^{\prime}\}\) and \(\Psi\) is of type \(D_{r-2}\), with \(\beta\) playing the role of the node \(i-1\). Since \(0\leqslant r+1-2i=r-2+1-2(i-1)\), and
\[K_{\Phi,\alpha_{i}}({\bf x})=\frac{1}{1-u\underline{x}^{\beta}}K_{\Phi_{0}, \beta}({\bf x})\cdot Z_{\Psi^{\prime}}({\bf x};u)|_{x_{i}=1/u}, \tag{7.10}\]
formula (7.7) follows again by induction on \(r\). The base cases are \(i=1\), which was already proved, and \(r=4,5\) with \(i=r-2\).
For \(r=4\), \(i=2\), we have that \(W^{(i)}\) is trivial, \(\Phi_{0}^{+}=\{\pm\beta\}\oplus\{\pm\beta^{\prime}\}\oplus\{\pm\beta^{\prime \prime}\}\), and (7.7) follows from the definition of \(K_{\Phi,\alpha_{i}}({\bf x})\). The case of \(D_{5}\) with \(i=3\) is different from those encountered so far, since \(\Psi\) contains two roots \(\beta,\beta^{\prime}\in\Pi_{\rm new}(\Phi_{0})\). In this case, \(\Psi\) is of type \(A_{3}\) and \(\beta\), \(\beta^{\prime}\) play the role of nodes \(1\) and \(3\). Therefore, formula (7.7) reduces to the identity discussed in Example 7.8.
* \(\Phi\) is of type \(E_{r}\), \(6\leqslant r\leqslant 8\). If \(i\) is an extremal node, then \(\Pi_{\rm new}(\Phi_{0})=\{\beta\}\). The root system \(\Phi_{0}\) is irreducible, and \(\Phi^{(i)}\) is the parabolic sub-system of \(\Phi_{0}\) obtained by removing node \(\beta\) from its Dynkin diagram. By definition, \(K_{\Phi,\alpha_{i}}\) satisfies (7.9), and (7.7) follows by induction.
The only case when the node \(i\) is not extremal is for \(E_{6}\) and \(i=3\). In this case, \(\Phi_{0}\) is of type \(A_{5}\) and \(\beta\), \(\beta^{\prime}\in\Pi_{\rm new}(\Phi_{0})\) play the role of nodes \(2\) and \(5\). Therefore, formula (7.7) reduces to the identity proved in Example 7.9. This concludes the proof of Theorem 7.4 for simply-laced root systems.
#### 7.12.2. Double-laced root systems
We distinguish two cases.
* \(\Phi\) is of type \(B_{r}\), \(r\geqslant 3\), or \(F_{4}\). In this case \(i=r\), or \(i=4\), respectively. Then, \(\Phi^{(i)}\) consists of long roots only, and \(K_{\Phi,\alpha_{i}}({\bf x})=1\). The first fraction in (7.7) equals \(1\), by the Weyl denominator formula for \(A_{r-2}\) and, respectively, for \(A_{2}\). Formula (7.7) is then precisely the formula proved in Lemma 7.12.
* \(\Phi\) is of type \(C_{r}\), \(r\geqslant 2\). In this case, \(\Psi\) is of type \(C_{r-2}\), and \(\Psi^{\prime}=\Phi^{\prime}_{0}=\{\pm\gamma\}\) with the notation of Lemma 7.12. If \(i=1\), we have \(K_{\Phi,\alpha_{i}}({\bf x})=1\), and formula (7.7) follows from Lemma 7.12 and the definition of \(Z_{\Psi}({\bf x};u)\). If \(i>1\), then \(2i\leqslant r\) implies that \(i<r-1\). Therefore, \(\Psi\) contains a root \(\beta\in\Pi_{\rm new}^{*}(\Phi_{0})\) that plays the role of node \(i-1\) in its Dynkin diagram. Formula (7.9) holds for \(K_{\Phi,\alpha_{i}}({\bf x})\), and (7.7) follows by induction and the use of Lemma 7.12, the base case being \(i=1\). This completes the proof of Theorem 7.4.
## Appendix A Exceptional simply-laced root systems
We consider the root systems of type \(E_{6}\), \(E_{7}\), and \(E_{8}\), and describe the orthogonal complement \(\Phi_{0}\) to a simple root \(\alpha_{i}\), as defined in Section 5. We will use the notation set up at the beginning of §5.4. The description of \(\Pi_{\mathrm{new}}(\Phi_{0})\) and the type of \(\Phi_{0}\) can be found in the relevant table below. We mark in boldface the indices satisfying the conditions in Lemma 7.2. Using the information in the tables, one can verify in these cases the equivalence of conditions (i) and (ii) in Lemma 7.2, as well as Remark 7.3. We first recall the standard labeling of the Dynkin diagrams and the formula for the longest root \(\theta\).
\begin{table}
\begin{tabular}{c|l|c} Node \(i\) & \(\Pi_{\mathrm{new}}(\Phi_{0})\) & \(\Phi_{0}\) \\ \hline \(\mathbf{i=1}\) & \(\beta=\theta_{\{1,\ldots,5\}}\) & \(D_{6}\) \\ \hline \(\mathbf{i=2}\) & \(\beta=\theta_{\{2,\ldots,5\}}\) & \(D_{6}\) \\ \hline \(i=3\) & \(\beta=\theta_{\{1,3,4\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,5\}}\) & \(D_{6}\) \\ \hline \(i=4\) & \(\beta=\theta_{\{3,4,5\}}\), \(\beta^{\prime}=\theta_{\{2,3,4\}}\), \(\beta^{\prime\prime}=\theta_{\{2,4,5\}}\) & \(D_{6}\) \\ \hline \(i=5\) & \(\beta=\theta_{\{4,5,6\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,5\}}\) & \(D_{6}\) \\ \hline \(i=6\) & \(\beta=\theta_{\{5,6,7\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,6\}}\) & \(D_{6}\) \\ \hline \(\mathbf{i=7}\) & \(\beta=\theta_{\{2,\ldots,7\}}\) & \(D_{6}\) \\ \hline \end{tabular}
\end{table}
Table 7. The orthogonal root system in type \(E_{7}\)
\begin{table}
\begin{tabular}{c|l|c} Node \(i\) & \(\Pi_{\mathrm{new}}(\Phi_{0})\) & \(\Phi_{0}\) \\ \hline \(\mathbf{i=1}\) & \(\beta=\theta_{\{1,\ldots,5\}}\) & \(A_{5}\) \\ \hline \(\mathbf{i=2}\) & \(\beta=\theta_{\{2,\ldots,5\}}\) & \(A_{5}\) \\ \hline \(\mathbf{i=3}\) & \(\beta=\theta_{\{1,3,4\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,5\}}\) & \(A_{5}\) \\ \hline \(i=4\) & \(\beta=\theta_{\{3,4,5\}}\), \(\beta^{\prime}=\theta_{\{2,3,4\}}\), \(\beta^{\prime\prime}=\theta_{\{2,4,5\}}\) & \(A_{5}\) \\ \hline \end{tabular}
\end{table}
Table 8. The orthogonal root system in type \(E_{6}\)
\begin{table}
\begin{tabular}{c|l|c} Node \(i\) & \(\Pi_{\mathrm{new}}(\Phi_{0})\) & \(\Phi_{0}\) \\ \hline \(\mathbf{i=1}\) & \(\beta=\theta_{\{1,\ldots,5\}}\) & \(E_{7}\) \\ \hline \(i=2\) & \(\beta=\theta_{\{2,\ldots,5\}}\) & \(E_{7}\) \\ \hline \(i=3\) & \(\beta=\theta_{\{1,3,4\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,5\}}\) & \(E_{7}\) \\ \hline \(i=4\) & \(\beta=\theta_{\{3,4,5\}}\), \(\beta^{\prime}=\theta_{\{2,3,4\}}\), \(\beta^{\prime\prime}=\theta_{\{2,4,5\}}\) & \(E_{7}\) \\ \hline \(i=5\) & \(\beta=\theta_{\{4,5,6\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,5\}}\) & \(E_{7}\) \\ \hline \(i=6\) & \(\beta=\theta_{\{5,6,7\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,6\}}\) & \(E_{7}\) \\ \hline \(i=7\) & \(\beta=\theta_{\{6,7,8\}}\), \(\beta^{\prime}=\theta_{\{2,\ldots,7\}}\) & \(E_{7}\) \\ \hline \(\mathbf{i=8}\) & \(\beta=\theta_{\{2,\ldots,8\}}\) & \(E_{7}\) \\ \hline \end{tabular}
\end{table}
Table 9. The orthogonal root system in type \(E_{8}\)
## Appendix B The root system of type \(G_{2}\)
Throughout this section we assume that the root system \(\Phi\) is of type \(G_{2}\).
The dominant short root is \(\theta_{s}=2\alpha_{1}+\alpha_{2}\) and the dominant long root is \(\theta_{\ell}=3\alpha_{1}+2\alpha_{2}\). As in Section 5, we fix a node \(i\) and consider the orthogonal complement \(\Phi_{0}=\alpha_{i}^{\perp}:=\{\alpha\in\Phi:\langle\alpha_{i},\alpha\rangle=0\}.\) The Dynkin diagram of \(\Phi_{0}\) is of rank one, with basis \(\Pi(\Phi_{0})=\{\beta\}\) as described in Table 10.
We define
\[Z_{\Phi}^{[i]}(\mathbf{x};u)=Z_{\Phi}(\mathbf{x};u)\cdot\prod_{\begin{subarray} {c}\alpha\in\Phi_{>0}^{+}\\ m_{\alpha}=2\end{subarray}}(1-u^{2}\mathbf{x}^{2\alpha}),\]
where \(\Phi_{>0}^{+}=\{\alpha\in\Phi^{+}\mid\langle\alpha,\alpha_{i}\rangle>0\}\). The condition \(m_{\alpha}=2\) is superfluous, as it holds for all \(\alpha\in\Phi\), but we include it since this definition is consistent with the one for simply-laced and double-laced root systems.
The zeta average \(Z_{\Phi_{0}}(\mathbf{x};u)\) defined using the basis \(\Pi(\Phi_{0})\) will henceforth be regarded as an element of \(\mathbb{F}(Q)\) via \(\mathbb{F}(Q_{0})\subset\mathbb{F}(Q)\). Since \(\Phi_{0}=\{\pm\beta\}\) is of type \(A_{1}\) in both cases, we have \(Z_{\Phi_{0}}(\mathbf{x};u)=1/(1-u\mathbf{x}^{\beta})\).
**Theorem B.1**.: _We have_
\[\operatorname*{Res}_{x_{1}=1/u}Z_{\Phi}^{[1]}(\mathbf{x};u)=\left.Z_{\Phi_{0} }(\mathbf{x};u^{3})\right|_{x_{1}=1/u}\quad\text{and}\quad\operatorname*{Res} _{x_{2}=1/u}Z_{\Phi}^{[2]}(\mathbf{x};u)=\left.Z_{\Phi_{0}}(\mathbf{x};u) \right|_{x_{2}=1/u}.\]
Note the extra change of variables in the case \(i=1\), which is a singular feature of the \(G_{2}\) case.
We adopt the notation in Section 5 with respect to the parabolic sub-system \(\Phi^{i}\) and its Weyl group \(W^{i}\). Also, for \(\alpha\in\Phi\), \(n_{i}(\alpha)\in\mathbb{Z}\) denotes the coefficient of \(\alpha_{i}\) in the expansion of \(\alpha\) in the basis \(\Pi(\Phi)\). Let
\[Z_{\Phi}^{(i)}(\mathbf{x};u)=Z_{\Phi}(\mathbf{x};u)\cdot\prod_{\begin{subarray} {c}\alpha\in\Phi\\ n_{i}(\alpha)\geqslant 2\end{subarray}}(1-u^{2}\mathbf{x}^{2\alpha}).\]
To state the analogue of Theorem 7.4, note that \(\alpha_{2}\) is the only simple root \(\alpha_{i}\) for which \(n_{i}(\beta)=1\) (see Lemma 7.2). This is also the only simple root for which \(n_{i}(\theta_{\ell})\leqslant 2\) (see Remark 7.5). The product in the definition of \(Z_{\Phi}^{(2)}(\mathbf{x};u)\) contains only one term, for \(\alpha=\theta_{\ell}\).
**Theorem B.2**.: _We have_
\[Z_{\Phi}^{(2)}(\mathbf{x};u)=\frac{\sum_{w\in W^{2}}\frac{1}{1-ux_{2}}\frac{1} {1-u\mathbf{x}^{\theta_{s}}}\bigg{|}\,w}{\Delta_{\Phi^{2}}(\mathbf{x};u)}\.\]
\begin{table}
\begin{tabular}{l|l|l} Node \(i\) & \(\Pi(\Phi_{0})\) & \(\Phi_{0}\) \\ \hline \(i=1\) & \(\beta=\theta_{\ell}\) & \(A_{1}^{\ell}\) \\ \hline \(i=2\) & \(\beta=\theta_{s}\) & \(A_{1}^{s}\) \\ \hline \end{tabular}
\end{table}
Table 10. The orthogonal root system in type \(G_{2}\)
The presence of two terms involving \(u\) in the average above is explained by the fact that the roots \(\alpha\) with \(n_{2}(\alpha)=1\) form two orbits under the group \(W^{2}=\langle\sigma_{1}\rangle\): one orbit consisting of long roots with representative \(\alpha_{2}\), and one orbit consisting of short roots with representative \(\theta_{s}\).
Theorems B.1 and B.2 can be proved along the same lines as Theorems 6.1 and 7.4. However, they can be verified directly, using the explicit formula
\[Z_{\Phi}(x_{1},x_{2};u)=\frac{u^{5}x_{1}^{7}x_{2}^{4}-u^{3}x_{1}^{6}x_{2}^{3}-u ^{3}x_{1}^{4}x_{2}^{3}+u^{2}x_{1}^{4}x_{2}^{2}+u^{3}x_{1}^{3}x_{2}^{2}-u^{2}x_ {1}^{3}x_{2}-u^{2}x_{1}x_{2}+1}{D_{\Phi}(\mathbf{x};u)/\left[(1+ux_{1})(1+ux_{2 })(1+u\mathbf{x}^{\theta_{s}})\right]},\]
where \(D_{\Phi}(\mathbf{x};u)=\prod_{\alpha\in\Phi^{+}}(1-u^{2}\mathbf{x}^{2\alpha})\).
## Appendix C Proof of Theorem C
In this appendix we give a proof of Theorem C. We let \(\mathbb{K}=\mathbb{Q}(\sqrt{-1})\) and let \(\Phi\) be an irreducible root system not of type \(G_{2}\). The argument given here applies, with obvious modifications, to give an alternative proof of Theorem B over \(\mathbb{F}_{q}(T)\) with \(q\equiv 1\pmod{4}\). What simplifies the argument, and guides our choice of number field and the congruence condition in Theorem B, is the fact that the quadratic reciprocity law takes the simple shape \(\left(\frac{a}{b}\right)=\left(\frac{b}{a}\right)\) under these assumptions, for \(a,b\) coprime ideals of odd norm in \(\mathbb{Q}(\sqrt{-1})\), or coprime monic polynomials in \(\mathbb{F}_{q}(T)\) with \(q\equiv 1\pmod{4}\). We emphasize that these assumptions are made only to simplify the arguments, and similar results hold over arbitrary number fields. However, in general, one needs to consider MDS twisted by characters, as introduced in [13], and the statements are more involved.
The idea of the proof is straightforward: we show that both sides of (1.4) are multiple Dirichlet series with the same \(p\)-part, and they satisfy the same twisted multiplicativity property. First, in Lemma C.1 we derive a formula for the residue of \(\mathcal{Z}_{\Phi}(\mathbf{s})\) as an MDS in \(s_{j}\) for \(j\neq i\). Using this formula, we show that both sides of (1.4) have the same \(p\)-part; it is here that we crucially use Theorem A, which is the main difficulty in the argument. Using again the formula in Lemma C.1, we show that both sides of (1.4) satisfy the same twisted multiplicativity, inherited from the root system \(\Phi_{0}\).
We recall the definition of the MDS \(\mathcal{Z}_{\Phi}(\mathbf{s})\), following [13]. We have
\[\mathcal{Z}_{\Phi}(\mathbf{s})=\sum\frac{H(m_{1},\ldots,m_{r})}{|m_{1}|^{s_{1 }}\cdot\ldots\cdot|m_{r}|^{s_{r}}},\]
where the sum is over integers \(m_{j}\) in \(\mathbb{K}\) of odd norm, modulo units, and the norms are the norms of the principal ideals generated by \(m_{j}\). In what follows we use the language of ideals, and we regard the \(m_{j}\) as integral ideals in \(\mathbb{K}\) of odd norm. The coefficients \(H\) satisfy the following properties, which uniquely determine \(\mathcal{Z}_{\Phi}\).
* Twisted multiplicativity: if the ideals \(\prod m_{j}\) and \(\prod m_{j}^{\prime}\) are coprime, then
\[H(m_{1}m_{1}^{\prime},\ldots,m_{r}m_{r}^{\prime})=H(m_{1},\ldots,m_{r})H(m_{1}^{\prime},\ldots,m_{r}^{\prime})\cdot\prod_{\begin{subarray}{c}k<j\\ \langle\alpha_{k},\alpha_{j}\rangle=-1\end{subarray}}\left(\frac{m_{k}}{m_{j}^{\prime}}\right)\left(\frac{m_{k}^{\prime}}{m_{j}}\right); \tag{C.1}\]
Footnote 4: Here we assume that \(\Phi\) is not of type \(G_{2}\); for \(G_{2}\) the condition in the product would be \(\langle\alpha_{k},\alpha_{j}\rangle<0\). Recall also that the Weyl invariant pairing is normalized as in §2.1.
* Determination of \(p\)-part: for a prime \(p\) and \(\lambda=\sum n_{j}\alpha_{j}\in Q^{+}\), we have \[H(p^{n_{1}},\ldots,p^{n_{r}})=a_{\lambda}(|p|^{-1/2}), \tag{C.2}\] where \(a_{\lambda}(u)\) are the coefficients of the zeta average \(Z_{\Phi}(\mathbf{x};u)=\sum_{\lambda}a_{\lambda}(u)\mathbf{x}^{\lambda}\) defined in §2.6.
The analytic properties of \(\mathcal{Z}_{\Phi}(\mathbf{s})\) have been established in [13]. In particular, it has meromorphic continuation to \(\mathbb{C}^{r}\) and satisfies a group of functional equations isomorphic to the Weyl group of \(\Phi\).
In this subsection, we prove the following formula for the residue of the MDS over \(\mathbb{K}\).
**Lemma C.1**.: _Let \(\mathbb{K}=\mathbb{Q}(\sqrt{-1})\). Let \(\Phi\) be an irreducible root system not of type \(G_{2}\), and let \(\alpha_{i}\) be a short simple root. Then,_
\[\operatorname*{Res}_{s_{i}=1/2}\mathcal{Z}_{\Phi}(\mathbf{s})=\frac{\pi}{8}\sum_{\begin{subarray}{c}m_{j},\,j\neq i\\ \prod_{\langle\alpha_{j},\alpha_{i}\rangle=-1}m_{j}=\square\end{subarray}}\frac{1}{\prod_{j\neq i}|m_{j}|^{s_{j}}}\prod_{p\mid\prod_{j\neq i}m_{j}}(1-|p|^{-1})\sum_{\begin{subarray}{c}m\\ p\mid m\Rightarrow p\mid\prod_{j\neq i}m_{j}\end{subarray}}\frac{H(m_{1},\ldots,m,\ldots,m_{r})}{|m|^{1/2}},\]
_where the ideal \(m\) is on position \(i\) in the argument of \(H\), and the sums are over integral ideals of odd norm. The series converges for \(\Re s_{j}\) large enough for \(j\neq i\)._
By essentially the same argument, the same residue formula, but without the factor \(\pi/8\), holds over \(\mathbb{K}=\mathbb{F}_{q}(T)\) for \(q\equiv 1\,(\mathrm{mod}\ 4)\).
Proof.: One sums first over \(m_{i}\), keeping \(m_{j}\) fixed for \(j\neq i\), as in [13, §5]. From this sum one extracts a Dirichlet series with quadratic character, whose residue is \(0\) unless the character is trivial. In the latter case, we use the Dirichlet class number formula to compute the residue
\[\operatorname*{Res}_{s=1}\zeta_{\mathbb{K}}^{(2\prod_{j\neq i}m_{j})}(s)=\frac{\pi}{4}\prod_{p\mid 2\prod_{j\neq i}m_{j}}(1-|p|^{-1}),\]
where \(\zeta_{\mathbb{K}}^{(c)}\) is the Dedekind zeta function of \(\mathbb{K}\) with the Euler factors at the primes dividing \(c=2\prod_{j\neq i}m_{j}\) removed. The conclusion immediately follows.
Using the formula in Lemma C.1, we now show that the \(p\)-parts of both sides in (1.4) match. We denote by \(\mathcal{L}(\underline{s})\) the series in Lemma C.1, regarded as an MDS in the multivariable \(\underline{s}=(s_{1},\ldots,s_{i-1},s_{i+1},\ldots,s_{r})\):
\[\mathcal{L}(\underline{s}):=\sum_{\begin{subarray}{c}\underline{m}\\ \prod_{\langle\alpha_{j},\alpha_{i}\rangle=-1}m_{j}=\square\end{subarray}}\frac{H^{\prime}(\underline{m})}{\prod_{j\neq i}|m_{j}|^{s_{j}}}, \tag{C.3}\]
with \(H^{\prime}(\underline{m})\) as resulting from Lemma C.1 and \(\underline{m}=(m_{1},\ldots,m_{i-1},m_{i+1},\ldots,m_{r})\). We leave aside for now the question of which root system the MDS \(\mathcal{L}(\underline{s})\) is attached to.
To compute its \(p\)-part for \(p\) a prime of odd norm, let \(m_{j}=p^{k_{j}}\) for \(j\neq i\), \(m=p^{k_{i}}\) and make the change of variables \(x_{j}=|p|^{-s_{j}}\) for \(j\neq i\), \(x_{i}=|p|^{-1/2}\). Because of the \(p\)-part property (C.2), we also denote \(u=|p|^{-1/2}\). Let \(L_{p}(\underline{x};u)\) be the \(p\)-part of \(\mathcal{L}(\underline{s})\), after the substitutions above, where \(\underline{x}\) denotes, as before, the multivariable \((x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{r})\). Let \(R_{p}(\underline{x};u)\) denote the \(p\)-part of the right-hand side of (1.4) (without the factor \(\pi/8\)), after the same substitutions.
**Lemma C.2**.: _With the notation above, we have_
\[L_{p}(\underline{x};u)=R_{p}(\underline{x};u).\]
Proof.: By definition, we have
\[L_{p}(\underline{x};u) =\sum_{\begin{subarray}{c}\lambda\in Q^{+}\\ \langle\lambda,\alpha_{i}\rangle\text{ even}\end{subarray}}(1-u^{2})a_{\lambda}(u) \mathbf{x}^{\lambda}|_{x_{i}=u}\] \[=(1-u^{2})(Z_{\Phi})_{i}^{+}(\mathbf{x};u)|_{x_{i}=u},\]
where the notation \(f_{i}^{+}\) for a function \(f(\mathbf{x};u)\) is introduced in §2.5.
On the other hand, the \(p\)-part of \(\mathcal{Z}_{\Phi_{0}}(\mathbf{s})|_{s_{i}=1/2}\) is, by definition and after the substitutions above, \(Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=u}\), and we can write
\[R_{p}(\underline{x};u)=\prod_{\begin{subarray}{c}\alpha\in\Phi^{+}\\ \langle\alpha,\alpha_{i}\rangle=1\end{subarray}}\frac{1}{1-\mathbf{x}^{2 \alpha}|_{x_{i}=u}}\cdot Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=u}.\]
We now use Theorem A to express the term \(Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=u}\). To apply the theorem, we need to replace the evaluation of \(x_{i}\) at \(u\) with an evaluation at \(1/u\). We use repeatedly the following simple change of variables formula: if \(f(\mathbf{x};u)\) is any function such that the evaluations below are well-defined, then
\[f(\sigma_{i}\mathbf{x};u)\big{|}_{\begin{subarray}{c}x_{i}=1/u\\ x_{j}\mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}\end{subarray}}=f(\mathbf{x};u)\big{|}_{x_{i}=u}, \tag{C.4}\]
where here and below the substitution \(x_{j}\mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}\) takes place for all \(j\neq i\). It follows that
\[Z_{\Phi_{0}}(\mathbf{x};u)|_{x_{i}=u}=Z_{\Phi_{0}}(\sigma_{i}\mathbf{x};u)|_{ \begin{subarray}{c}x_{i}=1/u\\ x_{j}\mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}\end{subarray}}=Z_{\Phi_{0}}( \mathbf{x};u)|_{\begin{subarray}{c}x_{i}=1/u\\ x_{j}\mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}\end{subarray}},\]
where the second equality uses the fact that \(Z_{\Phi_{0}}(\mathbf{x};u)\in\mathbb{F}(Q_{0})\subset\mathbb{F}(Q)\). By (C.4), we also have
\[\prod_{\langle\alpha,\alpha_{i}\rangle=1}(1-\mathbf{x}^{2\alpha})|_{x_{i}=u}= \prod_{\langle\alpha,\alpha_{i}\rangle=1}(1-u^{2}\mathbf{x}^{2\sigma_{i} \alpha})|_{x_{i}=u}=\prod_{\langle\alpha,\alpha_{i}\rangle=1}(1-u^{2}\mathbf{ x}^{2\alpha})|_{\begin{subarray}{c}x_{i}=1/u\\ x_{j}\mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}\end{subarray}}.\]
Using this identity and applying Theorem A, we obtain the first equality below
\[R_{p}(\underline{x};u) =\operatorname*{Res}_{x_{i}=1/u}Z_{\Phi}(\mathbf{x};u)|_{x_{j} \mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}}\] \[=(1-u^{2})(Z_{\Phi})_{i}^{+}(\sigma_{i}\mathbf{x};u)|_{ \begin{subarray}{c}x_{i}=1/u\\ x_{j}\mapsto u^{-(\alpha_{j},\alpha_{i})}x_{j}\end{subarray}}\] \[=(1-u^{2})(Z_{\Phi})_{i}^{+}(\mathbf{x};u)|_{x_{i}=u}.\]
The second equality follows from (6.11), while in the third we use again (C.4). Comparing with the formula for \(L_{p}(\underline{x};u)\) above, we conclude that \(R_{p}(\underline{x};u)=L_{p}(\underline{x};u)\).
Using Lemma C.1, the identity (1.4) becomes
\[\prod_{\begin{subarray}{c}\alpha\in\Phi^{+}\\ \langle\alpha,\alpha_{i}\rangle=1\end{subarray}}\zeta_{\mathbb{K}}^{(2)}(2\mathbf{s}_{\alpha})^{-1}|_{s_{i}=1/2}\cdot\mathcal{L}(\underline{s})=\mathcal{Z}_{\Phi_{0}}(\mathbf{s})|_{s_{i}=1/2}, \tag{C.5}\]
with \(\mathcal{L}(\underline{s})\) defined in (C.3). In the previous subsection we have shown that the \(p\)-parts of both sides match, and now we show that both sides satisfy the same twisted multiplicativity property. The product of zeta functions on the left does not affect the twisted multiplicativity, so we concentrate on the coefficients \(H^{\prime}(\underline{m})\) of \(\mathcal{L}(\underline{s})\).
Let \(\underline{m}\), \(\underline{m}^{\prime}\) be tuples as in the summation defining \(\mathcal{L}(\underline{s})\), and \(m,m^{\prime}\) ideals of odd norm such that \(m\prod_{j\neq i}m_{j}\) and \(m^{\prime}\prod_{j\neq i}m^{\prime}_{j}\) are coprime. Using twisted multiplicativity for \(H(m_{1}m_{1}^{\prime},\dots,mm^{\prime},\dots,m_{r}m^{\prime}_{r})\)
under the condition that \(\prod_{\langle\alpha_{j},\alpha_{i}\rangle=-1}m_{j}\) and \(\prod_{\langle\alpha_{j},\alpha_{i}\rangle=-1}m_{j}^{\prime}\) are squares, one checks that the residue symbols involving \(m,m^{\prime}\) multiply to \(1\), so the formula for \(H^{\prime}\) gives
\[H^{\prime}(\underline{m}\cdot\underline{m}^{\prime})=H^{\prime}(\underline{m})\cdot H^{\prime}(\underline{m}^{\prime})\cdot\prod_{\begin{subarray}{c}k<j\\ \langle\alpha_{k},\alpha_{j}\rangle=-1\end{subarray}}\left(\frac{m_{k}}{m_{j}^{\prime}}\right)\left(\frac{m_{k}^{\prime}}{m_{j}}\right). \tag{C.6}\]
To illustrate the last part of the argument in a concrete situation, and to simplify the notation, let us assume that \(\Phi=A_{r}\), and \(1<i<r\). In this case,
\[\Pi(\Phi_{0})=\{\alpha_{j}\mid|j-i|>1\}\cup\{\beta\},\quad\text{with }\beta= \alpha_{i-1}+\alpha_{i}+\alpha_{i+1}.\]
Denote by \(H^{\prime\prime}(\underline{m})\) the coefficients of the left-hand side of (C.5) when written as an MDS. They satisfy the same twisted multiplicativity as \(H^{\prime}(\underline{m})\), and the matching of \(p\)-parts of both sides in (C.5) shows that \(H^{\prime\prime}(\underline{m})=0\) unless \(m_{i-1}=m_{i+1}\). Property (C.6) then reduces to the twisted multiplicativity satisfied by the coefficients of \(\mathcal{Z}_{\Phi_{0}}(\mathbf{s})|_{s_{i}=1/2}\) with respect to the system \(\Phi_{0}\). Together with Lemma C.2, this finishes the proof of (1.4) in this particular case.
The general case is entirely similar, but it requires heavier notation, so we leave the verification to the interested reader.
|
2305.10238 | Energy Loss Prediction in IoT Energy Services | We propose a novel Energy Loss Prediction(ELP) framework that estimates the
energy loss in sharing crowdsourced energy services. Crowdsourcing wireless
energy services is a novel and convenient solution to enable the ubiquitous
charging of nearby IoT devices. Therefore, capturing the wireless energy
sharing loss is essential for the successful deployment of efficient energy
service composition techniques. We propose Easeformer, a novel attention-based
algorithm to predict the battery levels of IoT devices in a crowdsourced energy
sharing environment. The predicted battery levels are used to estimate the
energy loss. A set of experiments were conducted to demonstrate the feasibility
and effectiveness of the proposed framework. We conducted extensive experiments
on real wireless energy datasets to demonstrate that our framework
significantly outperforms existing methods. | Pengwei Yang, Amani Abusafia, Abdallah Lakhdari, Athman Bouguettaya | 2023-05-16T09:07:08Z | http://arxiv.org/abs/2305.10238v1 | # Energy Loss Prediction in IoT Energy Services
###### Abstract
We propose a novel _Energy Loss Prediction (ELP)_ framework that estimates the energy loss in sharing crowdsourced energy services. Crowdsourcing wireless energy services is a novel and convenient solution to enable the ubiquitous charging of nearby IoT devices. Therefore, capturing the wireless _energy sharing loss_ is essential for the successful deployment of efficient energy service composition techniques. We propose _Easeformer_, a novel attention-based algorithm to predict the battery levels of IoT devices in a crowdsourced energy sharing environment. The predicted battery levels are used to estimate the energy loss. A set of experiments was conducted to demonstrate the feasibility and effectiveness of the proposed framework. We conducted extensive experiments on real wireless energy datasets to demonstrate that our framework significantly outperforms existing methods.
Wireless Energy, Wireless Power Transfer, Energy Services, Energy Loss, IoT, Crowdsourcing, Informer
## I Introduction
Internet of things (IoT) is a paradigm that enables everyday objects (i.e., things) to connect to the Internet and exchange data [1]. IoT devices, such as smartphones and wearables, typically have augmented capabilities, including sensing, networking, and processing [2]. Abstracting the capabilities of IoT devices using the _service paradigm_ may yield a multitude of novel IoT services [3][4]. These IoT services may be exchanged between IoT devices as _crowdsourced_ IoT services. IoT services are defined by their functional and non-functional attributes [5]. The functional attributes are the tasks performed by an IoT device such as WiFi hotspot access. The non-functional attributes of IoT services are the Quality of Service (QoS) surrounding the delivery of the service, which includes trust and reliability [6]. Crowdsourced IoT services refer to the delivery of services from nearby IoT devices [5]. IoT devices can crowdsource a variety of services, including computing resources [7], energy sharing [8][9], and environmental monitoring [10]. For instance, in energy sharing, IoT devices (service providers) may deliver energy _wirelessly_ to another nearby device with a low battery (service consumer) [9]. This paper focuses on energy services.
_Energy-as-a-Service (EaaS)_, refers to the wireless delivery of energy among nearby IoT devices [3][11]. Energy providers such as smart textiles or solar watches may _harvest_ energy from natural resources (e.g., body heat or physical activity) [12][13][14]. For instance, the PowerWalk kinetic energy harvester produces 10-12 watts of on-the-move power1. The harvested spare energy may be shared with nearby IoT devices as a service. Energy providers may deliver their services using the recently developed wireless charging technologies [15][16][17]. For instance, several mobile applications have been developed in recent studies as a first attempt to _enable peer-to-peer wireless energy services over a distance_[15][16][17]. These applications allow smartphones to request energy services from nearby smartphones by size, e.g., to be charged by 1000 mAh, or by time, e.g., to be charged for the next 10 minutes. Although the present technology may not offer efficient energy delivery [18], technological advancements are anticipated to facilitate devices to exchange greater quantities of energy [5][19]. Moreover, several companies, such as Xiaomi (mi.com) and Energous (energous.com), are working on developing technologies to increase both the distance and amount of energy that can be transferred. For instance, Energous developed a technology to enable wireless charging of up to 3 Watts of power within a 5-meter distance2.
Footnote 2: energous.com
A crowdsourced EaaS ecosystem is a dynamic environment where _providers_ and _consumers_ gather in _microcells_, such as restaurants. IoT users may share spare energy or request energy from nearby devices. An energy service composition framework has been proposed to manage the allocation of services to requests [5][20]. The framework assumes that energy requests are fulfilled by an exact amount of advertised energy. However, in reality, consumers receive less energy than expected [15][17]. Several factors contribute to _energy loss_ during wireless energy transfer, including the type of technology employed, the distance between devices, and the consumption behavior of IoT devices [21][22].
Estimating the _energy loss_ is crucial in assessing energy sharing efficiency. Capturing the wireless energy sharing loss is essential for the successful deployment of efficient energy service composition techniques. Energy loss may inform the service composition to optimize the allocation of energy services. For example, choosing a smaller energy service could be better than selecting a larger service due to distance. Furthermore, considering energy loss in the composition ensures meeting consumers' expectations [9]. Consequently, selecting services with low energy loss encourages consumer participation in this ecosystem [20].
We propose a novel Energy Loss Prediction framework (_ELP_) to _estimate energy services transfer loss_. Our
framework estimates energy loss by considering the following factors: (1) history of wireless charging data between two IoT users, including battery level history during wireless energy sharing, (2) history of IoT users' device energy usage, i.e., self-consumption, represented by battery levels in the idle state, and (3) wireless charging parameters such as distance and time. Our framework predicts the future battery levels of the provider and consumer (See Fig.1). The predicted values were then used to estimate the energy loss. To achieve this, our framework initially identifies abnormal charging behaviors such as outlier battery levels. Subsequently, it predicts the battery levels of providers and consumers under various charging states, including charging and idle. We utilized Informer, an efficient time series model [23], in the prediction phase. However, Informer resulted in low accuracy due to the fully zero initialization of the generative inference within its decoder. To address this issue, we extended the Informer model to _Easeformer_. The Easeformer model achieves a higher prediction accuracy as it effectively captures the features of IoT users and their energy sharing preferences. These features form the prior knowledge of the partial generative inference. We employed a real-world wireless energy sharing dataset to demonstrate the effectiveness of the proposed framework. To the best of our knowledge, existing research has not considered the energy loss that occurs during wireless energy delivery [5][20]. This work represents one of the first attempts to quantify energy loss in the context of wireless energy sharing services. The main contributions of this paper are as follows:
* A novel Energy Loss Prediction (ELP) framework to predict energy services transfer loss.
* A novel prediction module (Easeformer) that effectively predicts the energy provider's loss and the energy consumer's gain.
* An Encoder Input Transformer (EIT) to identify the wireless charging patterns according to the distance between IoT users.
* A decoder with partial generative inference to study the impact of various start token lengths.
* A comprehensive experiment to demonstrate that our energy loss prediction framework outperforms a state-of-the-art time series prediction model on real collected energy services datasets.
### _Motivating Scenario_
We describe a scenario in a confined area (i.e., a microcell), where people typically gather (See Fig.2 (A)). Each microcell may have several IoT devices that act as energy consumers or providers (See Fig.2 (B)). Consumers and providers may submit their requests or service advertisements to a local _edge_, e.g., a router in a microcell. Assume that a _device owner_, e.g., a _smartphone_, requires energy to execute some critical tasks, such as making a call. In this scenario, the consumer is assumed to be able to receive service advertisements from multiple energy providers through the edge. We assume that consumers can receive energy from one provider at a time. Several resource-allocation frameworks have been proposed to select the optimal set of energy services for a consumer [9][11]. However, none of the existing frameworks considers the energy loss that occurs during wireless energy delivery. In reality, consumers often receive less energy than expected [15][17]. In this respect, it is important to estimate the energy loss while exchanging services. Indeed, services with high energy loss do not deliver the expected amount of energy. As a result, selecting services with high energy loss will discourage consumers from participating in this ecosystem [20].
Figure 3 shows an example of the received energy services with different energy amounts. A traditional energy service composition allocates services with a higher capacity. For instance, using traditional composition, the first (\(S1\)) and third (\(S3\)) services are selected with the expectation of fulfilling 92% of the required energy. However, in real-time charging, energy loss occurs based on several factors, such as distance. The energy composition fulfills 47.5% of the required energy due to the energy loss of both services. On the other hand, informing the composition framework with the predicted energy loss will enhance the selection of services. For instance, given the energy loss of shared services, the energy-loss-aware composition will guarantee fulfilling 65% of the required energy. In this case, the energy loss plays a key role in the development of composition algorithms [20]. Therefore, estimating the energy loss before selecting and composing energy services ensures the efficient composition of the services.
We focus on estimating the energy loss based on the history data of IoT users stored on the edge. Based on these history data, we designed and developed a machine-learning-based
Fig. 1: High-level Energy Loss Prediction (ELP) framework
Fig. 2: The energy services environment
framework to predict the energy loss. The predicted energy loss may be considered as a key indicator for selecting services in the energy-sharing process. In the future, we aim to design an energy loss-aware service composition. To the best of our knowledge, _this work is among the first attempts to capture energy loss while exchanging wireless energy services_.
## II Related Work
The background of our work comes from three areas: energy sharing services, energy consumption analysis, and machine learning-based energy consumption prediction. We present the related work to our research in the three domains.
Energy sharing services have emerged as a novel approach for charging nearby IoT devices [5][20]. Several studies have proposed solutions to meet the requirements of energy consumers [11][24][25]. A temporal composition was proposed to maximize the energy provided using a fractional knapsack [11]. An elastic composition algorithm was proposed to address the highly fluctuating reliability of the energy providers [24]. This algorithm selects more reliable services by prolonging a consumer's stay using the concepts of soft and hard deadlines. The fluid approach leverages crowd mobility patterns to predict intermittent disconnections in energy services and then replaces or tolerates such disconnections [25]. Another study proposed the use of energy services as a tool for increasing consumer satisfaction [14][26]. Other studies have addressed challenges from the provider's perspective [27][28]. A context-aware incentive model was introduced to overcome resistance to providing energy services [27]. An energy-composition framework considering consumer reliability has been suggested to encourage providers to share their energy [28]. To the best of our knowledge, none of the existing research considers the energy loss that occurs while transferring energy [5][20]. _This work is among the first attempts to capture energy loss in the context of wireless energy sharing services_.
Traditionally, energy consumption analysis has been categorized into hardware-based profiling and software-based profiling [29]. Hardware-based profiling employs external hardware equipment such as a multimeter to measure the current and voltage, thereby estimating the power consumed by a smartphone during system activities. However, the experimental results may vary due to differences in the experimental devices and research methodologies. For example, in a study involving Google Nexus S, researchers removed the lithium-ion battery and used an external DC power source with a fixed voltage to focus on current. In this case, the screen display, GPS, and Wi-Fi were identified as the most power-consuming modules [30]. In contrast, another study using Nokia N95 found that wireless technologies consume the most energy [31]. Software-based profiling primarily involves measurements obtained through software programs. One such example is a mobile phone energy-monitoring system designed to analyze energy consumption at the application level [32]. This system was developed to accurately record the energy consumption of each application and rank them accordingly.
Machine learning-based prediction methods employ cutting-edge machine learning (ML) algorithms to forecast energy consumption. For example, one study utilized _GreenMiner_, an Android device energy usage recorder, to gather datasets for subsequent profiling [33]. Given that the collected data were time series, researchers developed a model using recurrent neural networks (RNNs) and implemented a long short-term memory (LSTM) algorithm. Additionally, they employed a support vector machine (SVM), shallow multi-layer perceptron (MLP), and linear regression for comparison. The results indicated that time series models, such as LSTM, were more effective in predicting the energy consumption. With the ongoing advancements in transformer-based algorithms, Informer, a state-of-the-art time series model, has been introduced [23]. The Informer boasts superior algorithmic efficiency compared to previous time series models.
## III System Model
We present definitions of energy consumers and providers. We then introduce a formal presentation of the energy loss. This study considers a provisioning framework for stationary services and requests. The goal is to accurately predict the energy loss that may occur during the energy sharing process. We use the below definitions to formulate the problem.
Fig. 3: Example of the motivating scenario
### _Preliminaries_
**Definition 1**: _Energy History Record (EHR)_ _is the history of an IoT user, i.e., consumer or provider, where the history includes the battery levels during a _single_ period of time. The history can be for the device while sharing energy or in daily usage, i.e., without sharing energy. The history is _defined as a tuple of \(<rid,uid,state,d,user\_type,BL>\) where:_
* \(rid\) _is a unique record identifier,_
* \(uid\) _is a unique user identifier,_
* \(state\) _is the energy state of the user which can be idle, i.e., any state of usage except wireless sharing, or sharing, i.e., during an energy sharing process,_
* \(d\) _the distance between the consumer and provider in the state of wireless energy sharing,_
* \(user\_type\) _is the type of user, i.e., consumer or provider in the state of wireless energy sharing,_
* \(BL\) _is a set of_ \(\{BL_{t_{0}},BL_{t_{1}},\ldots,BL_{t_{k}}\}\) _where_ \(BL_{t_{i}}\) _is the IoT battery level at time_ \(t_{i}\)_,_ \(t_{0}\) _is the start time and_ \(t_{k}\) _is the end time._
**Definition 2**: _EaaS Consumer (C)_ _represents the user history profile as a consumer and is formulated using \(n\) energy history records \(EHR\). _EaaS Consumer is defined as a tuple of \(<\mathcal{CB},\mathcal{NCB},\mathcal{CG},\mathcal{CU}>\) where:_
* \(\mathcal{CB}\) _is a set of_ \(\{CB_{1}^{t},CB_{2}^{t},\ldots,CB_{n}^{t}\}\) _where_ \(n\) _is the number of wireless energy sharing history records_ \(EHR\) _and_ \(CB_{i}^{t}\) _is the consumer battery level at time_ \(t\) _from the history record_ \(EHR_{i}\)_,_
* \(\mathcal{NCB}\) _is a set of_ \(\{NCB_{1}^{t},NCB_{2}^{t},\ldots,NCB_{n}^{t}\}\) _where_ \(NCB_{i}^{t}\) _is the consumer battery level in idle state (not charging) at time_ \(t\) _from the history record_ \(EHR_{i}\)_,_
* \(\mathcal{CG}\) _is a set of_ \(\{CG_{1}^{t},CG_{2}^{t},\ldots,CG_{n}^{t}\}\) _where_ \(CG_{i}^{t}\) _is the consumer gain from the energy sharing process at time_ \(t\) _from the history record_ \(EHR_{i}\)_,_ \(CG_{i}^{t}\) _is computed using the consumer battery level in sharing state_ \(CB\in\mathcal{CB}\) _as follows:_ \[CG_{i}^{t}=CB_{i}^{t}-CB_{i}^{t_{0}}\] (1)
* \(\mathcal{CU}\) _is a set of_ \(\{CU_{1}^{t},CU_{2}^{t},\ldots,CU_{n}^{t}\}\) _where_ \(CU_{i}^{t}\) _is the consumer usage, i.e., self-consumption, at time_ \(t\) _from the history record_ \(EHR_{i}\)_,_ \(CU_{i}^{t}\) _is computed using the consumer battery level in idle state_ \(NCB\in\mathcal{NCB}\) _as follows:_ \[CU_{i}^{t}=NCB_{i}^{t_{0}}-NCB_{i}^{t}\] (2)
**Definition 3**: _EaaS Provider (P)_ _represents the user history profile as a provider and is formulated using \(n\) energy history records _EHR.__EaaS Provider is defined as a tuple of \(<\mathcal{PB},\mathcal{NPB},\mathcal{PL},\mathcal{PU}>\) where:_
* \(\mathcal{P}\mathcal{B}\) _is a set of_ \(\{PB_{1}^{t},PB_{2}^{t},\ldots,PB_{n}^{t}\}\) _where_ \(n\) _is the number of energy sharing history records (EHR) and_ \(PB_{i}^{t}\) _is the provider battery level at time_ \(t\) _from the history record_ \(EHR_{i}\)_,_
* \(\mathcal{NPB}\) _is a set of_ \(\{NPB_{1}^{t},NPB_{2}^{t},\ldots,NPB_{n}^{t}\}\) _where_ \(NPB_{i}^{t}\) _is the provider battery level in idle state (not charging) at time_ \(t\) _from history record_ \(EHR_{i}\)_,_
* \(\mathcal{PL}\) _is a set of_ \(\{PL_{1}^{t},PL_{2}^{t},\ldots,PL_{n}^{t}\}\) _where_ \(PL_{i}^{t}\) _is the provider loss from the energy sharing process at time_ \(t\) _from history record_ \(EHR_{i}\)_,_ \(PL_{i}^{t}\) _is computed using the provider battery level in sharing state_ \(PB\in\mathcal{PB}\) _as follows:_ \[PL_{i}^{t}=PB_{i}^{t_{0}}-PB_{i}^{t}\] (3)
* \(\mathcal{PU}\) _is a set of_ \(\{PU_{1}^{t},PU_{2}^{t},\ldots,PU_{n}^{t}\}\) _where_ \(PU_{i}^{t}\) _is the provider usage, i.e., self-consumption, at time_ \(t\) _from history record_ \(EHR_{i}\)_,_ \(PU_{i}^{t}\) _is computed using the provider battery level in idle state_ \(NPB\in\mathcal{NPB}\) _as follows:_ \[PU_{i}^{t}=NPB_{i}^{t_{0}}-NPB_{i}^{t}\] (4)
**Definition 4**: _Energy Loss (\(\mathcal{EL}\)) is a set of \(\{EL_{1}^{t},EL_{2}^{t},\ldots,EL_{n}^{t}\}\), \(EL_{i}^{t}\) is the energy loss of the wireless transfer without the energy consumed by the device for other purposes. \(EL_{i}^{t}\) is computed as follows:_
\[EL_{i}^{t}=RT_{i}^{t}-RR_{i}^{t} \tag{5}\]
* \(RT_{i}^{t}\) _represents the actual transferred energy service at time_ \(t\) _from the energy history record_ \(EHR_{i}\)_._ \(RT_{i}^{t}\) _is computed using the provider loss_ \(PL\in\mathcal{PL}\) _and usage_ \(PU\in\mathcal{PU}\) _as follows:_ \[RT_{i}^{t}=PL_{i}^{t}-PU_{i}^{t}\] (6)
* \(RR_{i}^{t}\) _represents the actual received energy service at time_ \(t\) _from the energy history record_ \(EHR_{i}\)_._ \(RR_{i}^{t}\) _is computed using the consumer gain_ \(CG\in\mathcal{CG}\) _and usage_ \(CU\in\mathcal{CU}\) _as follows:_ \[RR_{i}^{t}=CG_{i}^{t}+CU_{i}^{t}\] (7)
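To make Eqs. (1)-(7) concrete, the following minimal sketch computes the energy loss of a single sharing record from the four recorded battery-level series; the function name and the toy battery levels are hypothetical and only illustrate how the definitions combine.

```
# Illustrative sketch of Eqs. (1)-(7): energy loss of one sharing record.
# Battery levels are percentages; index 0 corresponds to the start time t_0.
def energy_loss(cb, ncb, pb, npb, t):
    """cb/pb: consumer/provider battery levels while sharing (CB, PB);
    ncb/npb: consumer/provider battery levels in the idle state (NCB, NPB)."""
    cg = cb[t] - cb[0]    # consumer gain, Eq. (1)
    cu = ncb[0] - ncb[t]  # consumer usage (self-consumption), Eq. (2)
    pl = pb[0] - pb[t]    # provider loss, Eq. (3)
    pu = npb[0] - npb[t]  # provider usage (self-consumption), Eq. (4)
    rt = pl - pu          # actual transferred energy RT, Eq. (6)
    rr = cg + cu          # actual received energy RR, Eq. (7)
    return rt - rr        # energy loss EL, Eq. (5)

# Toy example: the provider drops 12% (1% of it self-consumed) while the
# consumer gains 7% and self-consumes 1%, so the estimated loss is 3%.
print(energy_loss(cb=[50, 57], ncb=[50, 49], pb=[90, 78], npb=[90, 89], t=1))
```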
### _Problem formulation_
We are given the history of two IoT users, i.e., a consumer and a provider, where the history of each user consists of \(m\) energy _usage_ history records and \(n\) energy _sharing_ history records. Each history record \(EHR_{i}\) includes the battery levels of the IoT user over a period of time \(T_{i}^{t}\), where \(T_{i}^{t}\) is a time series \(<T_{i}^{t_{0}},T_{i}^{t_{1}},\ldots,T_{i}^{t_{k}}>\) and \(T_{i}^{t_{j}}\) is the \(j\)-th timestamp in the energy history record \(EHR_{i}\). The \(n\) energy sharing records thus define a set \(\mathcal{T}=\{T_{1}^{t},T_{2}^{t},\ldots,T_{n}^{t}\}\) of \(n\) time series. For simplicity, we write \(t_{0}\) instead of \(T_{i}^{t_{0}}\) and \(t_{k}\) instead of \(T_{i}^{t_{k}}\) in the following definitions. Moreover, we are given the energy sharing distances \(\mathcal{D}=\{d_{1},d_{2},\ldots,d_{n}\}\), where \(d_{i}\) is the energy sharing distance between the provider and the consumer in \(EHR_{i}\). We transform the problem of energy loss estimation into the integration of four parallel time series forecasting problems [23][34][35], whose goals are to predict the provider loss \(\mathcal{PL}\), the provider usage \(\mathcal{PU}\), the consumer gain \(\mathcal{CG}\), and the consumer usage \(\mathcal{CU}\). Each predicted time series is later used to compute the energy loss. Each time series forecasting problem can be formulated as follows:
* Given the input \(\mathcal{X}^{t}=\{x_{1}^{t},...,x_{L_{x}}^{t}\mid x_{i}^{t}\in\mathbb{R}^{d_{x}}\}\) at time \(t\) where \(L_{x}\) is the length of the input, and \(\mathcal{X}^{t}\) is the flattened sequence from the \(n\) number of \(EHR\).
* The objective is to predict the corresponding output sequence \(\mathcal{Y}^{t}=\{y_{1}^{t},...,y_{L_{y}}^{t}\mid y_{i}^{t}\in\mathbb{R}^{d_{y}}\}\) where \(L_{y}\) is the length of the output.
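For illustration, here is a minimal sketch of how the flattened input/output sequences \(\mathcal{X}^{t}\) and \(\mathcal{Y}^{t}\) could be sliced from the concatenated history records; the helper function, window stride, and toy records are assumptions for the example only.

```
# Sketch: slide a window over the flattened history to produce (input, target)
# pairs of lengths L_x and L_y for the forecasting models.
def make_windows(sequence, L_x, L_y, stride=1):
    pairs = []
    for start in range(0, len(sequence) - L_x - L_y + 1, stride):
        x = sequence[start:start + L_x]               # input X^t of length L_x
        y = sequence[start + L_x:start + L_x + L_y]   # target Y^t of length L_y
        pairs.append((x, y))
    return pairs

# Flatten two toy EHR battery-level records and build windows.
flat = [bl for record in [[50, 52, 54], [60, 63, 66, 69]] for bl in record]
print(len(make_windows(flat, L_x=3, L_y=2)))  # 3 windows from 7 points
```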
We use the following assumptions to formulate the problem.
* Consumers and providers are committed to the energy sharing process, i.e., the history records are for completed energy services delivery.
* Providers are motivated to share their energy using an incentive model [5][27].
* Providers and consumers are static during energy sharing.
* The energy sharing distance between the provider and consumer is prior knowledge, i.e., the energy sharing distance is known in advance.
* The self-consumption of providers and consumers is stable and may not change a lot in both conditions of charging or idle [36].
* IoT users prefer longer energy sharing distances.
* A secure and trustworthy framework is utilized to preserve the privacy and security of IoT devices [6].
## IV Energy Loss Prediction Framework
We present an Energy Loss Prediction \(ELP\) framework to predict the energy loss that occurs in sharing energy services. As previously mentioned, our framework estimates energy loss by predicting the future battery levels of the provider and consumer. The predicted values were then utilized to estimate the energy loss (See Fig.1). In detail, the framework consists of four phases (See Fig.4): (1) Filter, (2) Prediction, (3) Integration, and (4) Estimation. In the first phase, the history energy sharing data are filtered, and abnormal battery-level data are removed. The second phase uses the Easeformer and linear regression models to predict provider usage, provider loss, consumer usage, and consumer gain. The third phase integrates the predicted values from the previous phase to compute the actual transferred and received energy. The final phase uses the computed actual transferred and received energy to estimate the energy loss. Algorithm 1 presents the four phases for implementing the ELP framework. In what follows, we present each phase in detail.
```
Require: \(P\): provider history profile, \(C\): consumer history profile
Ensure: \(\hat{\mathcal{EL}}\): predicted energy loss
Phase 1: Filter
1: \(\hat{\mathcal{PL}},\hat{\mathcal{CG}}\leftarrow\mathbf{Compute\_Energy}(P.\hat{\mathcal{PB}},C.\hat{\mathcal{CB}})\)
2: \(\mathcal{PL},\mathcal{CG}\leftarrow\mathbf{DBSCAN}(\hat{\mathcal{PL}},\hat{\mathcal{CG}})\)
Phase 2: Prediction
3: \(\hat{\mathcal{PL}},\hat{\mathcal{CG}}\leftarrow\mathbf{Easeformer}(\mathcal{PL},\mathcal{CG})\)
4: \(\hat{\mathcal{PU}},\hat{\mathcal{CU}}\leftarrow\mathbf{LinearRegression}(\mathcal{PU},\mathcal{CU})\)
Phase 3: Integration
5: \(\hat{\mathcal{RT}},\hat{\mathcal{RR}}\leftarrow\) Integration of \(\hat{\mathcal{PL}},\hat{\mathcal{PU}},\hat{\mathcal{CG}},\hat{\mathcal{CU}}\)
Phase 4: Estimation
6: \(\hat{\mathcal{EL}}\leftarrow\) Energy loss estimation based on \(\hat{\mathcal{RT}},\hat{\mathcal{RR}}\)
7: return \(\hat{\mathcal{EL}}\)
```
**Algorithm 1** Energy Loss Prediction Framework (ELP)
### _ELP Filter Phase_
This phase involves the following two steps: (1) compute the consumer gain and the provider loss from the raw history data of the provider and consumer (2) remove the abnormal data, i.e., outliers, from the computed consumer gain and provider loss. Typically, outliers are defined as values that deviate significantly from the majority of the battery level data points [37]. In the first step, given energy sharing raw history data \(\mathcal{\hat{CB}}\) and \(\mathcal{\hat{PB}}\), we use Eq.1 to compute the consumer gain \(\mathcal{\hat{C}G}\) and Eq.3 to compute the provider loss \(\mathcal{\hat{PL}}\) (See Algorithm 1, Line 1). In the second step, we filter the computed consumer gain \(\mathcal{\hat{C}G}\) and provider loss \(\mathcal{\hat{PL}}\). The reason for filtering is that, in the prediction phase, we used the Mean Squared Error (MSE) as the loss function. It is important to note that \(MSE\) is vulnerable to outliers [38]. As a result, we utilize an outlier detection technique called Density-Based Spatial Clustering of Applications with Noise (DBSCAN) as a filter to eliminate outliers [37]. Upon examining our collected dataset, we observed that the likelihood of errors in the wireless energy sharing process is low, leading to rare occurrences of abnormal battery-level data. Consequently, these anomalous data points exhibit a lower density than normal data points. Unlike statistical methods that detect anomalous points above or below a specific threshold, i.e., extremes, DBSCAN effectively identifies infrequently occurring data i.e., detecting outlier points with lower density [39]. Therefore, we employ DBSCAN to detect and remove outliers. The inputs for the filter phase are the raw energy sharing history data of the provider \(P\) and consumer \(C\). The outputs of this phase are the cleaned provider loss \(\mathcal{PL}\) and consumer gain \(\mathcal{CG}\) (See Algorithm 1, Lines 1-2).
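As an illustration, the filter step can be realized with an off-the-shelf DBSCAN implementation; the sketch below uses scikit-learn, and the \(eps\), \(min\_samples\), and toy series values are assumptions, since the concrete parameters are not fixed here.

```
# Sketch of the filter phase: drop low-density (outlier) points in the
# computed provider-loss / consumer-gain series; DBSCAN labels outliers as -1.
import numpy as np
from sklearn.cluster import DBSCAN

def filter_outliers(series, eps=2.0, min_samples=5):
    X = np.asarray(series, dtype=float).reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return X[labels != -1].ravel()

# Toy provider-loss series with one abnormal jump (40) removed by the filter.
print(filter_outliers([0, 1, 2, 2, 3, 4, 4, 5, 40, 6, 7]))
```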
Fig. 4: Energy Loss Prediction (ELP) framework overview
### _ELP Prediction Phase_
This phase aims to predict the future consumer gain, consumer usage, provider loss, and provider usage using the calculated values of these attributes from the previous phase. As previously mentioned, the predicted values will be used in the following phase to compute energy loss. In this phase, we employ two types of models, i.e., a Transformer-based time series model and linear regression, due to the distinct nature of data between the energy sharing and idle states. Consequently, we utilize a multi-head attention-based time series model, Informer, as our baseline to capture the consumer gain and the provider loss values separately in the energy sharing state [23]. The Informer algorithm employs ProbSparse multi-head self-attention, which reduces the time and space complexity to \(\mathcal{O}(L\log L)\). This is significantly lower than the standard Transformer's \(\mathcal{O}(L^{2})\) for each layer, where \(L\) represents the length of inputs/outputs. As a result, Informer is more suitable for handling time series forecasting problems [23]. However, the Informer delivered low accuracy due to the fully zero initialization of the generative inference within its decoder. To address this issue, we extend Informer to _Easeformer_, which achieves higher prediction accuracy by effectively capturing the features of IoT users and their energy sharing preferences. Thus, Easeformer is used to predict the values of provider loss and consumer gain in the energy sharing state. On the other hand, we employ linear regression for predicting consumer and provider usage in the idle state (See dotted lines in Fig.4) [40]. This is based on our assumption that battery levels in the idle state, i.e., self-consumption, remain stable and do not change significantly. We formulated this assumption in light of the battery level variation trend discussed in [36]. In summary, the prediction phase utilizes two Easeformer models and two linear regression models to predict the energy value changes of providers and consumers in each state, i.e., energy sharing and idle states, respectively. In the following subsections, we present each model in detail.
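Before turning to the Easeformer, here is a minimal sketch of the idle-state predictor; the toy drain rate and the time-only feature are assumptions, since the exact feature setup of the regression is not detailed here.

```
# Sketch of the idle-state predictor: regress battery level against time and
# extrapolate, reflecting the assumption that self-consumption is stable.
import numpy as np
from sklearn.linear_model import LinearRegression

minutes = np.arange(10).reshape(-1, 1)        # observed idle timestamps
idle_levels = 80.0 - 0.2 * minutes.ravel()    # toy NCB/NPB: ~0.2%/min drain
model = LinearRegression().fit(minutes, idle_levels)

future = np.arange(10, 40).reshape(-1, 1)     # next 30 minutes
predicted = model.predict(future)
usage = idle_levels[0] - predicted            # predicted CU/PU via Eqs. (2)/(4)
print(usage[-1])                              # expected drain after 39 minutes
```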
#### IV-B1 Easeformer model for sharing state prediction
The Easeformer model overview is depicted in Fig.5. We consider the consumer gain \(\mathcal{CG}\) and the provider loss \(\mathcal{PL}\) as the model targets. We then use the energy sharing distance \(\mathcal{D}\) and the energy sharing time \(\mathcal{T}\) as input features. Therefore, \(\mathcal{CG}/\mathcal{PL}\), \(\mathcal{D}\), and \(\mathcal{T}\) are concatenated as \(\mathbf{X}_{en}^{t}=\{x_{1}^{t},...,x_{L_{x}}^{t}\mid x_{i}^{t}\in\mathbb{R} ^{L_{x}\times d_{model}}\}\), i.e., the input data of the encoder (See Fig.5).
**Easeformer Encoder**: In the encoder structure of the Easeformer, we assume that IoT users prefer longer energy sharing distances. Moreover, based on the human decision model used in [41], we propose an Encoder Input Transformer (_EIT_). EIT applies the softmax function to the feature \(\mathcal{D}\) to obtain the probabilities \(\mathbf{P}_{\text{user}}\) for various distances that IoT users are likely to select (See Fig.5). We then normalize the distance set to a new set ranging from 0 to 1. This allows IoT users to choose the longest energy sharing distance with higher probability. Additionally, we employ the softmax temperature \(\tau\) to fine-tune the smoothness of the output probability distribution [42]. When \(\tau\to 1\), the function becomes equivalent to a traditional softmax function. When \(\tau\to 0\), the output distribution converges to a mass point. When \(\tau\rightarrow\infty\), all the elements in \(\mathbf{P}_{\text{user}}\) become equal, resulting in a smooth approximated distribution. Consequently, the probability of an IoT user selecting the \(i\)-th distance among \(k\) different distances can be computed as follows:
\[\mathbf{P}_{user}(d_{i}\mid d_{1}...d_{k})=\frac{\mathbf{exp}(d_{i}/\tau)}{ \sum_{j=1}^{k}\mathbf{exp}(d_{j}/\tau)} \tag{8}\]
Algorithm 2 presents the aforementioned process in detail.
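Since Algorithm 2 is not reproduced here, the following minimal sketch shows the core of the EIT transformation in Eq. (8); the min-max rescaling to \([0,1]\) is our assumption of how the normalization step is implemented.

```
# Sketch of the Encoder Input Transformer (EIT): rescale the sharing distances
# to [0, 1] and apply a temperature-scaled softmax (Eq. 8), so that longer
# distances receive higher selection probability.
import numpy as np

def eit_probabilities(distances, tau=0.1):
    d = np.asarray(distances, dtype=float)
    d = (d - d.min()) / (d.max() - d.min())  # normalize distances to [0, 1]
    e = np.exp(d / tau)                      # temperature-scaled exponentials
    return e / e.sum()                       # P_user over the k distances

# For small tau, the longest distance receives most of the probability mass.
print(eit_probabilities([0.5, 1.0, 2.0, 4.0], tau=0.1))
```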
Moreover, in this paper, we do not focus on the local temporal context of time series inputs. Therefore, we remove the time representation from the input representation part compared to the Informer embedding structure and retain the value and positional representation (See Fig.5). After embedding, all the input data are projected into 512 dimensions and then concatenated. Consequently, the input representations \(\mathbf{X}_{en}\) are encoded into the hidden state representations \(H^{t}=h_{1}^{t},...,h_{L_{x}}^{t}\). Subsequently, \(H^{t}\) serves as an input for the ProbSparse multi-head attention mechanism, which is defined as:
\[head_{i}=\mathbf{Softmax}\left(\frac{H^{t}\overline{W}_{i}^{Q}\left(H^{t}W_{i}^{K}\right)^{T}}{\sqrt{d}}\right)H^{t}W_{i}^{V} \tag{9}\]

\[\mathcal{A}(H^{t})=\mathbf{Concat}(head_{1},...,head_{h})\]

where \(W_{i}^{Q}\), \(W_{i}^{K}\), and \(W_{i}^{V}\in\mathbb{R}^{d\times d}\) are three projection matrices. The multi-head attention for \(H^{t}\) is denoted by \(\mathcal{A}(H^{t})\), where \(h\) is the number of heads. \(\overline{W}_{i}^{Q}\) represents the sparse matrix that contains only the Top-\(u\) queries under the sparsity measurement \(M(\mathbf{q}_{i},\mathbf{K})\). This query sparsity measurement is called the max-mean measurement, where \(\mathbf{q}_{i}\) denotes the \(i\)-th row in \(\mathbf{Q}\) [23]. Furthermore, we use the self-attention distilling technique to handle the redundancy of the value \(\mathbf{V}\) in the encoder's feature map. The technique privileges the superior values with dominant features and creates a focused self-attention feature map in the next layer [23][43].

Fig. 5: Overview of the Easeformer model. On the left: The encoder processes an extended, long sequence of multidimensional inputs, where each input dimension is represented by unique colors (for example, the orange series signifies the energy sharing distance). The blue trapezoid represents a self-attention distilling operation that extracts dominant attention while reducing the network size. On the right: The decoder accepts a long-sequence input, padding target elements to zero for dimensions lacking prior knowledge. In the case of dimensions with prior knowledge, such as energy sharing distance, the decoder makes use of this prior information. The decoder subsequently computes the weighted attention composition of the feature map and generates output elements through a generative process [23].
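For intuition, the single-head NumPy sketch below mimics the ProbSparse selection behind Eq. (9): each query is scored with the max-mean measurement \(M(\mathbf{q}_{i},\mathbf{K})\), only the Top-\(u\) queries attend, and the remaining ("lazy") queries fall back to the mean of \(\mathbf{V}\), following the Informer design [23]. The shapes and helper name are illustrative:

```python
import numpy as np

def probsparse_attention(Q, K, V, u):
    """Single-head ProbSparse attention sketch (cf. Eq. 9 and [23])."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)            # (L, L) scaled dot products
    # Max-mean sparsity measurement M(q_i, K) for every query row.
    M = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(M)[-u:]                 # indices of the Top-u queries
    # Lazy queries receive the mean of V; dominant queries attend normally.
    out = np.tile(V.mean(axis=0), (L, 1))
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    out[top] = (w / w.sum(axis=1, keepdims=True)) @ V
    return out

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(30, 64))
print(probsparse_attention(Q, K, V, u=8).shape)  # (30, 64)
```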
**Easeformer Decoder**: In the decoder structure of the Easeformer, we employ a standard decoder composed of a stack of two identical multi-head attention layers (See Fig.5) [44]. Unlike the Informer, which fully initializes the target sequence with zeros, we propose a partial generative inference that makes use of prior knowledge, such as the energy sharing distance. Note that the energy sharing distance can be computed using the technique described in [45]. Specifically, the partial generative inference samples an \(L_{token}\)-long sequence from the input sequence for features without prior knowledge. This sequence then serves as a clue for the inference of the \(L_{y}\)-length prediction. Therefore, we use the prior knowledge, which is the energy sharing distance in our context, as a partial input of the \(L_{y}\) prediction, and we initialize the other features to 0. For example, when predicting 30 points, i.e., a 30-minute energy sharing process, we use two known 30-minute energy sharing histories, i.e., 60 points, together with the prior knowledge, i.e., the energy sharing distance, as the start token. We then feed the generative-style inference decoder with the following vectors:
\[\mathbf{X}^{t}_{de_{partial}}=\mathbf{Concat}(\mathbf{X}^{t}_{token_{partial} },\mathbf{X}^{t}_{0}) \tag{10}\]
\[\mathbf{X}^{t}_{de_{prior}}=\mathbf{Concat}(\mathbf{X}^{t}_{token_{prior}}, \mathbf{X}^{t}_{prior}) \tag{11}\]
\[\mathbf{X}^{t}_{de}=\mathbf{Concat}(\mathbf{X}^{t}_{de_{partial}},\mathbf{X}^{t}_{de_{prior}}) \tag{12}\]
where \(\mathbf{X}^{t}_{0}\in\mathbb{R}^{L_{y}\times(d_{model}-d_{prior})}\) represents the zero-initialized input tokens of the decoder, \(\mathbf{X}^{t}_{prior}\in\mathbb{R}^{L_{y}\times d_{prior}}\) represents the prior knowledge-initialized input tokens of the decoder, and \(\mathbf{X}^{t}_{token_{partial}}\in\mathbb{R}^{L_{token}\times(d_{model}-d_{prior})}\) and \(\mathbf{X}^{t}_{token_{prior}}\in\mathbb{R}^{L_{token}\times d_{prior}}\) denote the history data that serve as a clue. Furthermore, \(\mathbf{X}^{t}_{de_{partial}}\in\mathbb{R}^{(L_{token}+L_{y})\times(d_{model}-d_{prior})}\) corresponds to features that are not known in advance, while \(\mathbf{X}^{t}_{de_{prior}}\in\mathbb{R}^{(L_{token}+L_{y})\times d_{prior}}\) represents features with prior knowledge. In the context of energy sharing, \(\mathbf{X}^{t}_{de_{partial}}\) comprises the features \(\mathcal{CG}\)/\(\mathcal{PL}\) and \(\mathcal{T}\). Feature \(\mathcal{D}\) is assumed to be a fixed value that is known in advance, i.e., prior knowledge. Given the transformed value \(\mathbf{P}_{user}\) obtained via Algorithm 2, we employ a \(dict\) function to transform the prior and subsequently combine it with \(\mathbf{X}^{t}_{de_{partial}}\). This results in the decoder input \(\mathbf{X}^{t}_{de}\in\mathbb{R}^{(L_{token}+L_{y})\times d_{model}}\).
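As a dimension check for Eqs. (10)-(12), the sketch below assembles the decoder input from the start token and the zero- and prior-initialized placeholders; the function name and the toy sizes (\(L_{token}=60\), \(L_{y}=30\), \(d_{prior}=1\)) are ours:

```python
import numpy as np

def build_decoder_input(token_partial, token_prior, prior, L_y):
    """Assemble the decoder input of Eqs. (10)-(12).

    token_partial: (L_token, d_model - d_prior) history of features
                   without prior knowledge (CG/PL and sharing time T).
    token_prior:   (L_token, d_prior) history of the prior feature
                   (the energy sharing distance D).
    prior:         (L_y, d_prior) known-in-advance values over the
                   prediction horizon.
    """
    d_partial = token_partial.shape[1]
    X0 = np.zeros((L_y, d_partial))                      # zero placeholders
    X_de_partial = np.concatenate([token_partial, X0])   # Eq. (10)
    X_de_prior = np.concatenate([token_prior, prior])    # Eq. (11)
    return np.concatenate([X_de_partial, X_de_prior], axis=1)  # Eq. (12)

# Example: 60-point start token, 30-point horizon, 1 prior dimension.
L_token, L_y, d_prior, d_model = 60, 30, 1, 3
X_de = build_decoder_input(np.ones((L_token, d_model - d_prior)),
                           np.full((L_token, d_prior), 0.5),
                           np.full((L_y, d_prior), 0.5), L_y)
print(X_de.shape)  # (90, 3), i.e., (L_token + L_y, d_model)
```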
**Loss Function**: For the loss function of Easeformer, we choose the mean squared error, and the loss is propagated back from the decoder's outputs across the entire model. Our two Easeformer models then output two time series predictions, i.e., \(\dot{\mathcal{PL}}=\{PL^{t}_{1},PL^{t}_{2},\dots,PL^{t}_{L_{y}}\}\) and \(\dot{\mathcal{CG}}=\{CG^{t}_{1},CG^{t}_{2},\dots,CG^{t}_{L_{y}}\}\) (See Algorithm 1, Line 3).
_2) Linear regression model for idle state prediction:_ Given the battery level history data in the idle state, i.e., \(\mathcal{NPB}\) and \(\mathcal{NCB}\), we use Eq.2 to compute the consumer usage \(\mathcal{CU}\) and Eq.4 to compute the provider usage \(\mathcal{PU}\). We use \(\mathcal{CU}\)/\(\mathcal{PU}\) as the model's target. The input of the linear regression models includes the aforementioned features and the time \(\mathcal{T}\). We use two linear regression models, one to predict the time series set \(\dot{\mathcal{PU}}=\{PU^{t}_{1},PU^{t}_{2},\dots,PU^{t}_{L_{y}}\}\) and another to predict the time series set \(\dot{\mathcal{CU}}=\{CU^{t}_{1},CU^{t}_{2},\dots,CU^{t}_{L_{y}}\}\) (See Algorithm 1, Line 4).
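A minimal sketch of the idle-state predictor follows, assuming scikit-learn's `LinearRegression` and a synthetic, near-linear usage history in place of the \(\mathcal{NPB}\)/\(\mathcal{NCB}\)-derived data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for idle-state usage history: battery drain in mAh
# sampled once per minute (self-consumption is assumed near-linear).
t_hist = np.arange(60).reshape(-1, 1)            # minutes 0..59
usage_hist = 0.8 * t_hist.ravel() \
    + np.random.default_rng(0).normal(0, 0.5, 60)

model = LinearRegression().fit(t_hist, usage_hist)

# Predict the next L_y = 30 minutes of provider (or consumer) usage.
t_future = np.arange(60, 90).reshape(-1, 1)
usage_pred = model.predict(t_future)
print(usage_pred[:3])
```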
_3) ELP Integration Phase:_ This phase computes the real received and transferred energy using the predicted consumer gain \(\dot{\mathcal{CG}}\), provider loss \(\dot{\mathcal{PL}}\), consumer usage \(\dot{\mathcal{CU}}\), and provider usage \(\dot{\mathcal{PU}}\). The real energy transferred by the provider, denoted as \(\dot{\mathcal{RT}}\), is computed using Eq.6, while the real energy received by the consumer, denoted as \(\dot{\mathcal{RR}}\), is computed using Eq.7. Overall, this phase computes \(\dot{\mathcal{RT}}\) and \(\dot{\mathcal{RR}}\) as shown in Algorithm 1 (Line 5).
_4) ELP Estimation Phase:_ The ELP Estimation Phase provides the output of the entire ELP framework. Specifically, the inputs to this phase are \(\dot{\mathcal{RT}}\) and \(\dot{\mathcal{RR}}\). The predicted energy loss, \(\dot{\mathcal{EL}}\), is computed using Eq.5, as demonstrated in Algorithm 1 (Lines 6-7).
## V Experiments and Results
In this section, we first present the conducted experiment and our energy sharing datasets. We then evaluate the effectiveness of the framework in terms of the Mean Squared Error (MSE) and Mean Absolute Error (MAE). Specifically, we examine the MSE and MAE of the Easeformer and linear regression models. Furthermore, we assess the MSE and MAE of the entire ELP framework.
### _Dataset Description_
We collected a set of real-world wireless energy transfer datasets to train our machine learning models. Data collection was conducted using the mobile application developed by [17]. The application enables consumers to connect to a provider via Bluetooth and request energy based on size or duration. Additionally, the app monitors the energy-sharing process between a consumer and a provider by recording their battery levels at specific time intervals (\(mt\)), such as every 5 seconds. The granularity of the monitoring time interval \(mt\)
is determined by the consumer. A fine-grained time interval yields more detailed records of both consumer and provider battery levels. Furthermore, a timestamp is employed to synchronize the monitoring and recording of the energy transfer between the provider and consumer. The current version of the energy-sharing application supports a one-to-one energy transfer mode, meaning that a _single_ energy provider can deliver energy to only a _single_ energy consumer.
In our experiments, we used a Google Pixel 5 smartphone as the provider and a Google Pixel 3 as the consumer. The experiments were conducted based on the design of [15]. Both the consumer and provider were connected to wireless charging coils. To ensure accurate results, all experiments were carried out in a laboratory setting at an approximate temperature of 25\({}^{\circ}\)C. We also used the aforementioned application to request energy by time, setting the charging duration to 30 minutes and the monitoring interval \(mt\) to 1 minute. For the usage (self-consumption) data collection experiments, only Bluetooth and Wi-Fi functions were activated, and the brightness level for both devices was set to the lowest setting.
For the wireless energy sharing data collection experiments, we collected datasets over different wireless energy sharing distances, i.e., 1 cm, 1.5 cm, and 2 cm. In order to simulate the IoT user preference for energy sharing, we repeated the experiments at the aforementioned distances 7, 14, and 21 times, respectively. Therefore, we obtained 1260 data points, i.e., (7+14+21)\(\times\)30, for the provider and for the consumer, respectively; in total, we collected 2520 data points. In summary, the dataset consists of three attributes: (1) the distance between the coils, (2) the charging duration, and (3) the battery levels of the consumer and provider at every \(mt\) time interval in mAh. In our experiments, \(mt\) was 1 minute. For the usage experiment, we repeated the measurement five times and collected 300 data records in total. The statistics for the experimental environment are listed in Table I. The datasets are shown in Fig.6 and Fig.7.
From Fig.8 and Fig.9, we can observe that the outliers are accurately detected. Specifically, we detected six abnormal charging processes, comprising 360 data points (See Table I). We then eliminated all the detected outliers, after which we conducted the following experiments based mainly on datasets collected from normal energy sharing processes (blue dots in Fig.6 and Fig.7). After removing the outliers, we used 2160 data points as input for the Easeformer. These consist of 1080 data points of provider loss \(\mathcal{PL}\) and 1080 data points of consumer gain \(\mathcal{CG}\) (See Table I).
### _Evaluation of the ELP Framework_
In this section, we evaluate the effectiveness of our proposed ELP framework. Specifically, we conducted a set of experiments to evaluate the performance of the prediction and estimation phases (See Fig.4). We used the following settings to analyze the performance of our proposed ELP framework:
* **Metrics:** We use MSE and MAE as performance metrics (see the sketch after this list), which can be respectively computed as follows: \[\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(\hat{y_{i}}-y_{i})^{2}\quad\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}\lvert\hat{y_{i}}-y_{i}\rvert\] (13) where \(\hat{y_{i}}\) represents the predictions and \(y_{i}\) represents the ground truth.
* **Baseline:** We compared our proposed Easeformer with the standard Informer using collected datasets. The Informer is a multi-head attention model that uses the ProbSparse attention and self-attention distilling technique to enhance its efficiency [23].
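For reference, a compact sketch of Eq. (13):

```python
import numpy as np

def mse_mae(y_pred, y_true):
    """Compute the two evaluation metrics of Eq. (13)."""
    err = np.asarray(y_pred) - np.asarray(y_true)
    return np.mean(err ** 2), np.mean(np.abs(err))

print(mse_mae([1.0, 2.0, 3.5], [1.0, 2.5, 3.0]))  # (0.1667, 0.3333)
```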
In the prediction phase, we first conducted a series of comparison experiments to demonstrate that our Easeformer surpasses the state-of-the-art algorithm in handling energy-sharing datasets. We selected a time series forecasting method, Informer, as our baseline. Next, we performed a grid search over the hyperparameters. Easeformer was optimized using the Adam optimizer, with an initial learning rate of 1\(e^{-4}\), decaying by a factor of 0.5 every epoch. The total number of epochs was set to 10 and early stopping was applied as needed. The batch size was set to 8. The training, validation, and test datasets comprised 50%, 25%, and 25% of the dataset, respectively. The input of each dataset was zero-mean normalized, except for the feature distance \(\mathcal{D}\). The softmax temperature in EIT was set to 0.85. The head number of the multi-head attention mechanism \(h\) (See Fig.5) was set to 8. Furthermore, as the length of the start token, i.e., \(L_{token}\), plays a crucial role in partial generative inference and can influence the performance of Easeformer, we carried out comprehensive experiments to demonstrate the impact of \(L_{token}\) on the effectiveness of Easeformer. Specifically, we progressively extended the length of the start token, i.e., \(L_{token}=30\), \(L_{token}=60\), and \(L_{token}=90\). Figures 10 and 11 separately display the provider loss and consumer gain predictions when the energy sharing distance is equal to 1.5 cm and \(L_{token}\) is set to 90. Table II presents the results of the comparison experiments between the Informer and Easeformer under varying start token lengths, with the best outcome for each start token length emphasized in bold font. From Table II, we observe that the proposed Easeformer significantly enhances the prediction effectiveness, demonstrating its success in improving the predictive capacity for energy sharing data. We also conducted supplementary ablation experiments to illustrate the effectiveness of our encoder input transformer EIT mechanism (See Algorithm 2). In the ablation experiment, we used a variant of Easeformer with the EIT mechanism removed as the comparison method, so as to isolate the additional effect of EIT. According to Table II, the full Easeformer completes all the experiments and achieves superior performance, particularly when the start token length is short, i.e., \(L_{token}=30\) and \(L_{token}=60\). Considering the advantages of incorporating the user distance preference probability as encoder input in the energy sharing prediction problem, we conclude that adopting EIT is worthwhile, particularly when distance serves as a crucial indicator and \(L_{token}\) is set to a small value.

Fig. 10: Provider loss prediction

Fig. 11: Consumer gain prediction

Fig. 12: Provider self-consumption (usage) prediction
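As an illustration of the training setup above, the following PyTorch sketch wires together the Adam optimizer, the per-epoch learning rate decay, and a simple early stopping loop; the linear stand-in model, the random tensors, and the patience value are placeholders rather than the actual Easeformer training code:

```python
import torch

# Stand-in module; the actual Easeformer encoder-decoder would go here.
model = torch.nn.Linear(3, 1)

criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate after every epoch, as described above.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)

best_val, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(10):                       # 10 epochs in total
    optimizer.zero_grad()
    # One mini-batch of size 8 with random stand-in tensors.
    loss = criterion(model(torch.randn(8, 3)), torch.randn(8, 1))
    loss.backward()
    optimizer.step()
    scheduler.step()

    val_loss = loss.item()                    # stand-in for a validation pass
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:            # early stopping
            break
```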
Additionally, we evaluated the performance of linear regression models. Figures 12 and 13 illustrate the effectiveness of linear regression models. We observe that the self-consumption of both the provider and consumer is stable over time, and the linear regression model is proficient at fitting usage (self-consumption) data. The performance of the linear regression models is shown in Table III.
In the final experiment, we assessed the effectiveness of the ELP framework. The ELP framework aims to predict the energy loss derived from the energy sharing process. The performance of the ELP framework is displayed in Table III. Because the energy loss was computed based on the outputs of the two Easeformer and two linear regression models, the MSE and MAE were higher than the individual results of the aforementioned models. Figure 14 illustrates the effectiveness of the ELP framework. From Fig.14, we observe that our ELP framework is adept at fitting the ground truth of energy loss.
2308.07960 | Interaction of H$_2$S with H atoms on grain surfaces under molecular
cloud conditions | Hydrogen sulfide (H$_2$S) is thought to be efficiently formed on grain
surfaces through the successive hydrogenation of S atoms. Its non-detection so
far in astronomical observations of icy dust mantles thus indicates that
effective destruction pathways must play a significant role in its interstellar
abundance. While chemical desorption has been shown to remove H$_2$S very
efficiently from the ice, in line with H$_2$S gas-phase detections, possible
solid-state chemistry triggered by the related HS radical have been largely
disregarded so far -- despite it being an essential intermediate in the H$_2$S
+ H reaction scheme. We aim to thoroughly investigate the fate of H$_2$S upon
H-atom impact under molecular cloud conditions, providing a comprehensive
analysis combined with detailed quantification of both the chemical desorption
and ice chemistry that ensues. Experiments are performed in an ultrahigh vacuum
chamber at temperatures between 10--16 K. The changes in the solid phase during
H-atom bombardment are monitored in situ by means of reflection absorption
infrared spectroscopy (RAIRS), and desorbed species are measured with a
quadrupole mass spectrometer (QMS). We confirm the formation of H$_2$S$_2$ via
reactions involving H$_2$S + H, and quantify its formation cross section under
the employed experimental conditions. Additionally, we directly assess the
chemical desorption of H$_2$S by measuring the gas-phase desorption signals
with the QMS, providing unambiguous desorption cross sections. Chemical
desorption of H$_2$S$_2$ was not observed. The relative decrease of H$_2$S ices
by chemical desorption changes from ~85% to ~74% between temperatures of 10 and
16 K, while the decrease as the result of H$_2$S$_2$ formation is enhanced from
~15% to ~26%, suggesting an increasingly relevant sulfur chemistry induced by HS
radicals at warmer environments. The astronomical implications are further
discussed. | Julia C. Santos, Harold Linnartz, Ko-Ju Chuang | 2023-08-15T18:00:13Z | http://arxiv.org/abs/2308.07960v1 | # Interaction of H\({}_{2}\)S with H atoms on grain surfaces under molecular cloud conditions
###### Abstract
Context:Hydrogen sulfide (H\({}_{2}\)S) is thought to be efficiently formed on grain surfaces through the successive hydrogenation of sulfur atoms. Its non-detection so far in astronomical observations of icy dust mantles thus indicates that effective destruction pathways must play a significant role in its interstellar abundance. While chemical desorption has been shown to remove H\({}_{2}\)S very efficiently from the ice, in line with H\({}_{2}\)S gas-phase detections, possible solid-state chemistry triggered by the related HS radical has been largely disregarded so far--despite it being an essential intermediate in the H\({}_{2}\)S + H reaction scheme.
Aims:We aim to thoroughly investigate the fate of H\({}_{2}\)S upon H-atom impact under molecular cloud conditions, providing a comprehensive analysis combined with detailed quantification of both the chemical desorption and ice chemistry that ensues.
Methods:Experiments are performed in an ultrahigh vacuum chamber at temperatures between \(10-16\) K to investigate the reactions between H\({}_{2}\)S molecules and H atoms on interstellar ice analogues. The changes in the solid phase during H-atom bombardment are monitored in situ by means of reflection absorption infrared spectroscopy (RAIRS), and desorbed species are complementarily measured with a quadrupole mass spectrometer (QMS).
Results:We confirm the formation of H\({}_{2}\)S\({}_{2}\) via reactions involving H\({}_{2}\)S + H, and quantify its formation cross section under the employed experimental conditions. Additionally, we directly assess the chemical desorption of H\({}_{2}\)S by measuring the gas-phase desorption signals with the QMS, providing unambiguous desorption cross sections. Chemical desorption of H\({}_{2}\)S\({}_{2}\) was not observed. The relative decrease of H\({}_{2}\)S ices by chemical desorption changes from \(\sim 85\%\) to \(\sim 74\%\) between temperatures of 10 and 16 K, while the decrease as the result of H\({}_{2}\)S\({}_{2}\) formation is enhanced from \(\sim 15\%\) to \(\sim 26\%\), suggesting an increasingly relevant sulfur chemistry induced by HS radicals at warmer environments. The astronomical implications are further discussed.
## 1 Introduction
Interstellar dense clouds are known for harboring a lavish chemical inventory, spanning from simple ions and radicals to a large variety of complex organic molecules (COMs). At the temperatures and densities typical of these environments (T = \(10-20\) K and \(\rho=10^{4}-10^{5}\) cm\({}^{-3}\), respectively; van Dishoeck et al. 2013), thermal desorption cannot take place, and most species--except for H\({}_{2}\) and He--should be fully depleted into interstellar icy dust grains (Collings et al. 2004). Yet, observations with radio-astronomical facilities have detected copious amounts of COMs such as methanol (CH\({}_{3}\)OH), acetaldehyde (CH\({}_{3}\)CHO), methyl formate (CH\({}_{3}\)OCHO), and more, in the gas phase toward dense and cold clouds (see, e.g., Oberg et al. 2010; Bacmann et al. 2012; Cernicharo et al. 2012; Jimenez-Serra et al. 2016; Scibelli & Shirley 2020). Especially given that these hydrogen-rich species are most likely formed in the ice mantles that shroud interstellar dust grains, such observations reveal that non-thermal desorption mechanisms must play a significant role in balancing gas- and solid-phase chemical abundances. For smaller species, such as CO, photodesorption induced by UV photons through the (in-)direct DIET (desorption induced by electronic transitions) mechanism is an efficient desorption process that could explain in part the observed abundances of gaseous species (Oberg et al. 2007; Munoz Caro et al. 2010; Fayolle et al. 2011; Chen et al. 2014; Paardekooper et al. 2016; Sie et al. 2022). However, larger molecules are increasingly susceptible to fragmentation upon UV photon impact, which can then be followed by photochemical desorption (Bertin et al. 2016; Cruz-Diaz et al. 2016). Moreover, recent studies have shown that the photodesorption of CO and CH\({}_{3}\)OH ices induced by IR photons might be astronomically relevant (Santos et al. 2023), shedding light on potential new processes to help explain gas-phase abundances of COMs.
Complementarily, another promising non-thermal desorption mechanism that proceeds without fragmentation is the so-called "chemical desorption" or "reactive desorption": the ejection of products upon formation in an exothermic reaction. This phenomenon has been consistently shown to improve gas-phase abundances predicted by chemical models (Garrod et al. 2006, 2007; Cazaux et al. 2010; Vasyunin & Herbst 2013; Vidal et al. 2017; Cuppen et al. 2017; Fredon et al. 2021), and has been explored in the laboratory for a range of astronomically-relevant species and substrates (Dulieu et al. 2013; Minissale & Dulieu 2014; Minissale et al. 2016; He et al. 2017; Chuang et al. 2018; Oba et al. 2018, 2019; Nguyen et al. 2020, 2021). Yet, efforts to experimentally quantify chemical desorption efficiencies are still limited, and modelers typically assume a universal input value between 0.01 and 0.1 (Garrod et al. 2007; Cuppen et al. 2017).
Among the species whose observed abundances cannot be explained by gas-phase processes alone, hydrogen sulfide (H\({}_{2}\)S) is perhaps one of the most broadly studied in the recent literature.
It has been detected toward various interstellar sources and in the comae of comets (Thaddeus et al., 1972; Minh et al., 1989; van Dishoeck et al., 1995; Hatchell et al., 1998; Vastel et al., 2003; Wakelam et al., 2004; Neufeld et al., 2015; Le Roy et al., 2015; Biver et al., 2015; Calmonte et al., 2016; Phuong et al., 2018; Navarro-Almaida et al., 2020). It was also tentatively identified on the surface of the Galilean satellites Io, Ganymede, and Callisto (Nash & Howell, 1989; McCord et al., 1998). However, solid-phase interstellar H\({}_{2}\)S has not been unequivocally detected yet, and only upper limits are available in ices so far (Smith, 1991; van der Tak et al., 2003; Jimenez-Escobar & Munoz Caro, 2011).
The main proposed route to form H\({}_{2}\)S is through the successive hydrogenation of sulfur on icy grains (S \(\stackrel{{+\rm H}}{{\longrightarrow}}\) HS \(\stackrel{{+\rm H}}{{\longrightarrow}}\) H\({}_{2}\)S). Once formed, H\({}_{2}\)S can undergo an H-induced abstraction reaction to form the radical HS:
\[\rm H_{2}S\stackrel{{+\rm H}}{{\longrightarrow}}HS+H_{2} \tag{1}\]
by quantum tunneling through an effective barrier of \(\sim 1500\) K (Lamberts & Kastner, 2017). The HS radical can subsequently be hydrogenated to reform H\({}_{2}\)S. Alternatively, H\({}_{2}\)S can also be energetically processed to form species such as H\({}_{2}\)S\({}_{2}\) and a wide range of S allotropes (Moore et al., 2007; Garozzo et al., 2010; Jimenez-Escobar & Munoz Caro, 2011; Jimenez-Escobar et al., 2014; Chen et al., 2015; Shingledecker et al., 2020; Cazaux et al., 2022; Mifsud et al., 2022).
Laboratory studies have reported the hydrogenation of a thin layer (0.7 monolayers, ML) of H\({}_{2}\)S on top of both porous and non-porous amorphous solid water, as well as polycrystalline water ice (Oba et al., 2018, 2019). The experimental data demonstrated that the excess energy generated by the cycle of H-induced abstraction and H\({}_{2}\)S reformation results in chemical desorption with high effectiveness. Kinetic Monte Carlo simulations of such experiments suggest the chemical desorption efficiency to be \(\sim\)3% per hydrogenation event (Furuya et al., 2022). Contrary to energetically-processed ices, however, new species formed by the HS radicals were not reported--possibly due to the relatively low abundance of H\({}_{2}\)S species in their experiments. In this work, we aim to further constrain the chemical desorption efficiency of H\({}_{2}\)S by incorporating the chemistry involving HS radicals resulting from the (de-)hydrogenation of hydrogen sulfide, in particular to form disulfane (H\({}_{2}\)S\({}_{2}\)). Moreover, we present for the first time a comprehensive experimental analysis of the H\({}_{2}\)S chemical desorption phenomenon supported by a strong gas-solid correlation using infrared spectroscopy and mass spectrometry techniques concomitantly.
The experimental setup and techniques employed are described in Section 2. The results are shown and discussed in Section 3, where we provide effective cross sections for the chemical desorption of H\({}_{2}\)S and H\({}_{2}\)S\({}_{2}\) formation. In Section 4, the astrochemical implications of this work are considered, and our main findings are summarized in Section 5.
## 2 Experimental Methods
Experiments are performed using the ultrahigh vacuum (UHV) setup SURFRESIDE\({}^{3}\), which has been described in detail elsewhere (Ioppolo et al., 2013; Qasim et al., 2020). Here, the relevant information is summarized. The main chamber operates at a base pressure of \(\sim 5\times 10^{-10}\) mbar. In its center, a gold-plated copper substrate is mounted on the tip of a closed-cycle He cryostat. The temperature of the substrate can vary between 8 and 450 K through resistive heating, and is monitored by two silicon diode sensors with a relative accuracy of 0.5 K. Ices of H\({}_{2}\)S (Linde, purity 99.5%) are deposited either prior to or simultaneously with H atoms generated by a hydrogen atom beam source (HABS, Tschersich, 2000) during what is referred to here as predeposition and codeposition experiments, respectively. The hydrogen atoms are cooled to room temperature by colliding with a nose-shaped quartz pipe before reaching the substrate. As described in detail by Ioppolo et al. (2013), the determination of the absolute H-atom flux is done by placing a quadrupole mass spectrometer (QMS) at the exact position of the substrate and monitoring its signal in a series of systematic experiments with varying filament temperatures and inlet gas flow. Such a measurement is not a trivial procedure, but serves as a reference guide for regular calibrations of the relative H flux at different operation conditions through the HO\({}_{2}\) peak intensity formed in the barrierless reaction H + O\({}_{2}\)\(\rightarrow\) HO\({}_{2}\). In order to infer the temperature-dependent kinetics of the processes explored in this work, we perform predeposition experiments at a range of temperatures of relevance to interstellar molecular clouds (10, 12, 14, and 16 K). Due to its low sticking coefficient at the studied temperatures, the presence of H\({}_{2}\) molecules on the ice (either incoming from the atom source or formed through H recombination) is not expected to significantly affect the outcome of our experiments (Watanabe & Kouchi, 2002; Ioppolo et al., 2010).
Ice growth through vapor deposition is monitored by Fourier-transform reflection-absorption infrared spectroscopy (FT-RAIRS). The IR spectra are acquired in the range of 700 to 4000 cm\({}^{-1}\), with a resolution of 1 cm\({}^{-1}\). Concurrently, species in the gas phase are ionized upon electron impact with 70 eV and recorded by a quadrupole mass spectrometer (QMS). Once the depositions are finished, temperature-programmed desorption experiments (TPD) are performed by heating the sample at a ramping rate of 5 K min\({}^{-1}\) whilst concomitantly monitoring the solid and gas phases with the RAIRS and QMS techniques, respectively.
The column densities (\(N_{X}\)) of the species in the ice are derived by converting the IR integrated absorbance (\(\int Abs(\nu)d\nu\)) to absolute abundance using a modified Beer-Lambert law:
\[N_{X}=\ln 10\frac{\int Abs(\nu)d\nu}{A^{\prime}(X)} \tag{2}\]
where \(A^{\prime}(X)\) is the apparent absorption band strength of a given species. For H\({}_{2}\)S, band strength values measured by infrared transmission spectroscopy are available in the literature. However, signals obtained in reflection mode are systematically higher than transmission counterparts due to substrate dipole couplings and a typically longer IR pathway in the ice. Thus, to ensure high accuracy in the derivation of the H\({}_{2}\)S ice column density, we performed calibration experiments using the laser interference technique that yield a band strength value of \(A^{\prime}(\rm H_{2}S)_{-2553\,cm^{-1}}\sim(4.7\pm 0.1)\times 10^{-17}\) cm molecule\({}^{-1}\) for our specific experimental settings (see Appendix A).
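For illustration, the following sketch applies Eq. (2) to a synthetic absorbance band, using the band strength calibrated in this work; the Gaussian profile and its amplitude are arbitrary placeholders:

```python
import numpy as np

def column_density(wavenumber, absorbance, band_strength):
    """Modified Beer-Lambert conversion of Eq. (2).

    wavenumber: frequency grid in cm^-1 across the band
    absorbance: RAIRS absorbance (log10-based) on that grid
    band_strength: apparent band strength A'(X) in cm molecule^-1
    returns: column density N_X in molecules cm^-2
    """
    # Trapezoidal integration of the absorbance over the band.
    integrated = np.sum(0.5 * (absorbance[1:] + absorbance[:-1])
                        * np.diff(wavenumber))
    return np.log(10) * integrated / band_strength

# Illustrative only: a Gaussian-shaped stand-in for the H2S feature at
# ~2553 cm^-1 and the band strength calibrated in this work.
nu = np.linspace(2500, 2600, 500)
band = 2e-3 * np.exp(-((nu - 2553) / 15.0) ** 2)
print(f"N(H2S) ~ {column_density(nu, band, 4.7e-17):.2e} molecules cm^-2")
```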
Since direct determination of the H\({}_{2}\)S\({}_{2}\) band strength is challenging, we estimate \(A^{\prime}(\rm H_{2}S_{2})\) in a similar way as described by Cazaux et al. (2022). The column density ratio (\(N_{\rm H_{2}S_{2}})/(N_{\rm H_{2}S})\) can be derived from the QMS data by the expression (Martin-Domenech et al., 2015):
\[\frac{N_{\rm H_{2}S_{2}}}{N_{\rm H_{2}S}}=\frac{A(66)}{A(34)}\cdot\frac{\sigma^{ +}(\rm H_{2}S)}{\sigma^{+}(\rm H_{2}S_{2})}\cdot\frac{I_{F}(\rm[H_{2}S]^{+})}{I _{F}(\rm[H_{2}S_{2}]^{+})}\cdot\frac{F_{F}(34)}{F_{F}(66)}\cdot\frac{S(34)}{S(66)} \tag{3}\]
where \(A(\rm m/z)\) is the integrated area of a given mass fragment; \(\sigma^{+}(X)\) is the molecule's electronic ionization cross-section; \(I_{F}(\rm z)\)
is the ionization fraction of charge z (here corresponding to unity); \(F_{F}\)(m/z) is the fragmentation fraction; and \(S\)(m/z) is the sensitivity of the QMS at a specific mass. As there are no values for \(\sigma^{+}\)(H\({}_{2}\)S\({}_{2}\)) reported in the literature, it is estimated based on the molecule's polarizability volume (\(\alpha(X)\)) by the empirical correlation (Hudson et al. 2006; Bull et al. 2012):
\[\sigma^{+}_{\rm max}(X)=c\cdot\alpha(X) \tag{4}\]
where \(X\) denotes a given species and \(c\) is a correlation constant of 1.48 Å\({}^{-1}\). The maximum ionization cross section (\(\sigma^{+}_{\rm max}\)) of organic species occurs typically around 90 eV, and varies only slightly (\(<5\)%) in intensity from ionizations with 70 eV (Hudson et al. 2003; Bull & Harland 2008). Thus, we utilize this method to derive both \(\sigma^{+}\)(H\({}_{2}\)S\({}_{2}\)) and \(\sigma^{+}\)(H\({}_{2}\)S) from \(\alpha\)(H\({}_{2}\)S\({}_{2}\)) and \(\alpha\)(H\({}_{2}\)S) as calculated by group additivity\({}^{1}\).
Footnote 1: Values taken from the NIST Computational Chemistry Comparison and Benchmark Database (CCCBDB), NIST Standard Reference Database Number 101, [http://cccbdb.nist.gov/](http://cccbdb.nist.gov/)
By combining \((N_{\rm H_{2}S_{2}})/(N_{\rm H_{2}S})\) from Equation 3 and \(N_{\rm H_{2}S}\) from Equation 2 one can obtain \(N_{\rm H_{2}S_{2}}\), which in turn can be used to estimate \(A^{\prime}\)(H\({}_{2}\)S\({}_{2}\)) from the integrated absorbance area of the IR spectra:
\[A^{\prime}({\rm H_{2}S_{2}})=\frac{\int Abs(v)d\nu}{N_{\rm H_{2}S_{2}}}. \tag{5}\]
The average between two independent experiments yields an estimated \(A^{\prime}\)(H\({}_{2}\)S\({}_{2}\))\({}_{-2490\;{\rm cm^{-1}}}\sim(9.9\pm 0.2)\times 10^{-17}\) cm molecule\({}^{-1}\).
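The chain of Eqs. (3)-(5) can be illustrated with the short sketch below; the Table 1 parameters are taken as given, while the QMS areas, \(N_{\rm H_{2}S}\), and the integrated IR absorbance are dummy placeholders that would come from the actual TPD and RAIRS measurements:

```python
# Parameters from Table 1; A34/A66, N_H2S, and ir_area_H2S2 below are
# dummy placeholders for measured TPD/RAIRS values.
alpha = {"H2S": 3.776, "H2S2": 6.828}   # polarizability volumes [Angstrom^3]
FF = {"H2S": 0.52, "H2S2": 0.31}        # fragmentation fractions (mol. ions)
S = {"H2S": 0.28, "H2S2": 0.08}         # QMS sensitivities
A34, A66 = 1.0e-7, 2.0e-8               # integrated QMS areas (placeholders)

# Eq. (4): ionization cross sections from polarizability, c = 1.48 per Angstrom.
sigma = {k: 1.48 * a for k, a in alpha.items()}

# Eq. (3): column density ratio; the ionization fractions I_F are unity.
ratio = (A66 / A34) * (sigma["H2S"] / sigma["H2S2"]) \
        * (FF["H2S"] / FF["H2S2"]) * (S["H2S"] / S["H2S2"])

# Eq. (5): band strength from the integrated H2S2 absorbance and N(H2S2).
N_H2S = 1.0e16                          # from Eq. (2), placeholder
ir_area_H2S2 = 0.4                      # integrated absorbance, placeholder
N_H2S2 = ratio * N_H2S
print(f"A'(H2S2) ~ {ir_area_H2S2 / N_H2S2:.2e} cm molecule^-1")
```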
The details of the experiments performed in this work are summarized in Table 2. The relative errors of both H\({}_{2}\)S and H fluxes are estimated to be \(\sim 5\)%.
## 3 Results and Discussion
### H\({}_{2}\)S + H ice chemistry
The left panel of Figure 1 shows the spectra obtained after deposition of pure H\({}_{2}\)S and codeposition of H\({}_{2}\)S + H at 10 K in the frequency region characteristic of SH-stretching modes. A strong IR feature is observed at \(\sim 2553\) cm\({}^{-1}\), corresponding to the \(\nu_{1}\) (symmetric) and \(\nu_{3}\) (anti-symmetric) SH-stretching modes of H\({}_{2}\)S. In comparison, when H atoms are also present, a new feature peaking at \(\sim 2490\) cm\({}^{-1}\) appears on the red wing of the \(\nu_{1,3}\) mode of H\({}_{2}\)S--consistent with the SH-stretching band (\(\nu_{1}\), sym.; and \(\nu_{5}\), anti-sym.) of H\({}_{2}\)S\({}_{2}\) (Isoniemi et al. 1999). During the TPD experiment performed after codeposition of H\({}_{2}\)S + H, the main bands at \(\sim 2553\) cm\({}^{-1}\) and \(\sim 2490\) cm\({}^{-1}\) fully disappear in the temperature ranges of \(10-100\) K and \(100-140\) K, respectively (Figure 1, right panel), which coincides with previously measured desorption temperatures of H\({}_{2}\)S and H\({}_{2}\)S\({}_{2}\) (Jimenez-Escobar & Munoz Caro 2011; Chen et al. 2015; Cazaux et al. 2022).
The assignments of H\({}_{2}\)S and H\({}_{2}\)S\({}_{2}\) are substantiated by their respective mass fragments induced by electron impact during the TPD experiments in Figures 2 and 3, respectively. As shown in Figure 2a, a desorption peak of fragments m/z = 32 and 34 is observed at \(\sim 85\) K in both H\({}_{2}\)S and H\({}_{2}\)S + H cases, amounting to relative intensities consistent with the standard for H\({}_{2}\)S as provided by the NIST database\({}^{2}\). This desorption temperature matches the disappearance of the \(\sim 2553\) cm\({}^{-1}\) bands in the IR spectra. In Figure 3, the desorption peak of the mass fragments associated with H\({}_{2}\)S\({}_{2}\) is detected solely in the H\({}_{2}\)S + H experiment, at 126 K--coinciding with the disappearance of the feature at \(\sim 2490\) cm\({}^{-1}\) in the IR spectra.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Species & \(\alpha\) [Å\({}^{3}\)] \({}^{a}\) & \(F_{F}\) (m/z) \({}^{b}\) & \(S\) (m/z) \({}^{b}\) \\ \hline H\({}_{2}\)S & 3.776 & 0.52 & 0.28 \\ H\({}_{2}\)S\({}_{2}\) & 6.828 & 0.31 & 0.08 \\ \hline \hline \multicolumn{4}{l}{\({}^{a}\) CCCBDB} \\ \multicolumn{4}{l}{\({}^{b}\) Values are given for the molecular ions.} \\ \end{tabular}
\end{table}
Table 1: List of parameters used in the estimation of \(A^{\prime}\)(H\({}_{2}\)S\({}_{2}\)).
Figure 1: Panel a) Comparison between the final infrared spectra after deposition of a pure H\({}_{2}\)S ice (black) superimposed by the final spectrum after codeposition of H\({}_{2}\)S and H atoms (red) with analogous experimental conditions. Panel b) Infrared spectra acquired during the warming up of the H\({}_{2}\)S ice codeposited with H atoms, offset for clarity. In both panels, the assignments of the H\({}_{2}\)S and H\({}_{2}\)S\({}_{2}\) vibrational bands are shown with dashed lines.
Thus, the assignment of the new peak as H\({}_{2}\)S\({}_{2}\) is confirmed by both RAIRS and QMS techniques combined with TPD experiments. Given the lack of laboratory data on its mass fragmentation pattern, we provide for the first time--to the best of our knowledge--the relative intensities of m/z = 32, 34, 64, 65 and 66 generated by 70 eV electron ionization of H\({}_{2}\)S\({}_{2}\) and corrected for the sensitivity of the QMS in the right panel of Figure 3. The contribution from the \({}^{34}\)S isotope (natural abundance of 4.29%) is included in the fragmentation pattern.
When H\({}_{2}\)S is deposited simultaneously with H atoms, HS radicals formed by the hydrogen abstraction of H\({}_{2}\)S (Reaction 1) can thus further associate either with H atoms, reforming H\({}_{2}\)S, or with HS radicals, forming H\({}_{2}\)S\({}_{2}\):
\[\mathrm{HS}\stackrel{{\mathrm{+H}}}{{\longrightarrow}}\mathrm{H _{2}S}\ \mathrm{(solid\ or\ gas)} \tag{6a}\] \[\mathrm{HS}+\mathrm{HS}\rightarrow\mathrm{H_{2}S_{2}}. \tag{6b}\]
Reaction 6a proceeds barrierlessly and can result in chemical desorption due to its high exothermicity (\(\sim\) 45000 K, based on the gas-phase enthalpies of formation of reactants and products). Reaction 6b is also barrierless and has been proposed in previous studies on the energetic processing of H\({}_{2}\)S-containing ices (Jimenez-Escobar and Munoz Caro, 2011; Jimenez-Escobar et al., 2014; Chen et al., 2015; Cazaux et al., 2022; Mifsud et al., 2022).
### H-atom bombardment on H\({}_{2}\)S ice
In both Figures 1 and 2 (left panels), it is shown that the amount of H\({}_{2}\)S ice after the codeposition experiment with H atoms is smaller than that of the pure ice deposition at the same experimental conditions, thus signaling that the interaction of H\({}_{2}\)S with hydrogen leads to a net loss of material as a result of both Reactions 6a and 6b. While the efficiency of the former reaction has been explored in detail (Oba et al., 2018, 2019; Furuya et al., 2022), the contribution from H\({}_{2}\)S\({}_{2}\) formation to depleting H\({}_{2}\)S from the solid phase has not been considered so far. Here, we explore the effectiveness of both reactions thoroughly, and assess their respective relevance to the destruction of the H\({}_{2}\)S ice.
In order to quantify the efficiencies of Reactions 6a and 6b, the abundance of H\({}_{2}\)S and H\({}_{2}\)S\({}_{2}\) is monitored as a function of H-atom fluence during predeposition experiments--in which a deposited H\({}_{2}\)S ice is subsequently bombarded by a constant H-atom flux. The difference spectra after H-atom bombardment for 20, 40, and 60 minutes at 10 K are shown in Figure 4, together with the pure H\({}_{2}\)S sample prior to hydrogenation. Both H\({}_{2}\)S and H\({}_{2}\)S\({}_{2}\) features can be resolved in the difference spectra by decomposition using Gaussian profiles, as shown by the superimposing lines. The interaction with H atoms leads to a loss of H\({}_{2}\)S, as evinced by the decrease in its SH-stretching band at \(\sim\) 2553 cm\({}^{-1}\) (purple dashed line). Concomitantly, a feature due to the SH-stretching modes of H\({}_{2}\)S\({}_{2}\) appears on the red wing of the H\({}_{2}\)S band, and becomes increasingly evident at longer H-atom exposure times (yellow dashed line). The results of the predeposition experiments are therefore consistent with the codeposition counterparts, and indicate a non-negligible contribution to the H\({}_{2}\)S depletion from Reaction 6b. In contrast, neither Oba et al. (2018) nor Oba et al. (2019) detected any other sulfur-bearing species apart from hydrogen sulfide during similar H\({}_{2}\)S + H predeposition experiments at \(10-30\) K followed by TPD experiments. Such a discrepancy might be due to the limited abundance of H\({}_{2}\)S in the aforementioned works (0.7 ML), compared to the present experiments (\(\sim\)20 ML)--which might not yield product amounts above the instrumental detection limit.
To directly probe the chemical desorption of H\({}_{2}\)S as a result of reactions with H-atoms, its gas-phase signals are monitored via the relevant mass fragments (m/z = 34, [H\({}_{2}\)S]\({}^{+}\); m/z = 33, [HS]\({}^{+}\)) with a QMS during the H-exposure experiments. In Figure 5, data acquired by both the RAIRS and QMS techniques while intermittently (i.e., in three intervals of 20 minutes finalizing with 60 minutes) bombarding the predeposited H\({}_{2}\)S ice with H atoms are presented in the upper and lower panels, respectively. In the first 20 minutes of bombardment, a steep decrease in the H\({}_{2}\)S IR absorbance area is observed, coinciding with an abrupt increase in the m/z = 34 readout by the QMS. Once bombardment is stopped, the area of the H\({}_{2}\)S band remains fairly constant, and the QMS signal drops to the base value. Such results provide unambiguous evidence of the effective chemical desorption of H\({}_{2}\)S upon H-atom exposure. Following the first bombardment, a similar behavior is observed by both RAIRS and QMS techniques for the rest of the exposure periods, albeit to a diminishing extent of H\({}_{2}\)S loss due to saturation of the ice layer within
the penetration depth of the hydrogen atoms--typically of a few monolayers (see, e.g., Watanabe & Kouchi 2008; Fuchs et al. 2009). No increase in signal is detected for m/z = 66 ([H\({}_{2}\)S\({}_{2}\)]\({}^{+}\)), indicating that, relative to H\({}_{2}\)S, disulfane does not undergo chemical desorption effectively upon formation. This is rather expected, as H\({}_{2}\)S\({}_{2}\) contains more degrees of freedom and, as inferred from its higher desorption temperature, a higher binding energy than H\({}_{2}\)S. Consequently, H\({}_{2}\)S\({}_{2}\) does not contribute significantly to the measurement of m/z = 34 during H-atom exposure, which can therefore be solely attributed to H\({}_{2}\)S.

Figure 2: Panel a) TPD-QMS spectra of m/z = 32 (blue) and m/z = 34 (red) after deposition of a pure H\({}_{2}\)S ice and codeposition of H\({}_{2}\)S + H with analogous experimental conditions. Spectra are offset for clarity and shown in the temperature range relevant to H\({}_{2}\)S thermal desorption. Panel b) Comparison between the relative intensities of m/z = 32 and 34 desorbing at 85 K in both H\({}_{2}\)S and H\({}_{2}\)S + H experiments, together with the standard fragmentation pattern of H\({}_{2}\)S from NIST.

Figure 3: Panel a) TPD-QMS spectra of m/z = 32 (blue), 34 (red), 64 (green), 65 (purple), and 66 (yellow) after deposition of a pure H\({}_{2}\)S ice and codeposition of H\({}_{2}\)S + H with analogous experimental conditions. Spectra are offset for clarity and shown in the temperature range relevant to H\({}_{2}\)S\({}_{2}\) thermal desorption. Panel b) Mass fragmentation pattern of H\({}_{2}\)S\({}_{2}\) generated by 70 eV electron ionization as measured in this work.
The intensity of the m/z = 33 signal relative to m/z = 34 is measured to be \(\sim 0.55\) throughout the H-atom exposure, whereas the expected fragmentation pattern of H\({}_{2}\)S corresponds to \(33/34\sim 0.42\). The excess of [HS]\({}^{+}\) fragments detected during the bombardment is consistent with the transfer of HS radicals to the gas phase through chemical desorption as a result of Reaction 1. This fraction, however, is significantly smaller than the detected gaseous H\({}_{2}\)S, and therefore can be neglected. Indeed, due to the high exothermicity of Reaction 6a, and the fact that its excess energy is concentrated in a single product, H\({}_{2}\)S is expected to be the most susceptible species to chemical desorption during the hydrogenation sequence--as was also suggested by Oba et al. (2018).
In addition to 10 K, predeposition experiments with analogous conditions are performed at 12 K, 14 K, and 16 K to investigate the effects of different temperatures on H\({}_{2}\)S\({}_{2}\) formation and H\({}_{2}\)S chemical desorption. The percentage of H\({}_{2}\)S lost either to chemical desorption or H\({}_{2}\)S\({}_{2}\) formation by the end of the predeposition experiments can be derived by comparing the final \(\Delta N\) of both species, assuming that other potential processes have a minor contribution in decreasing the H\({}_{2}\)S band. The derived efficiencies are temperature dependent, as shown in Figure 6; the overall H\({}_{2}\)S loss due to chemical desorption varies from \(\sim 85\%\) to \(\sim 74\%\) when the ice temperature increases from 10 K to 16 K. Accordingly, the percentage loss due to the formation of H\({}_{2}\)S\({}_{2}\) varies from \(\sim 15\%\) to \(\sim 26\%\). It should be noted that these values are relative to the H\({}_{2}\)S loss at each specific temperature, and not the absolute amount of formed H\({}_{2}\)S\({}_{2}\) or chemically-desorbed H\({}_{2}\)S in each experiment. At higher temperatures, the fraction of H\({}_{2}\)S consumed to form H\({}_{2}\)S\({}_{2}\) increases relative to the loss due to chemical desorption, suggesting that the former process becomes increasingly relevant in warmer environments. This observation is possibly related to a significant increase in diffusion rates of HS radicals enhancing the overall H\({}_{2}\)S\({}_{2}\) formation, at the expense of chemical desorption by H\({}_{2}\)S formation. In summary, by taking into account this chemical loss channel, it is possible to further constrain the fate of H\({}_{2}\)S molecules upon H-atom bombardment--thus expanding the results from previous works in which H\({}_{2}\)S\({}_{2}\) formation was not observed.
### Kinetic analysis
Information on the kinetics of H\({}_{2}\)S\({}_{2}\) formation and H\({}_{2}\)S consumption can be derived from predeposition experiments. In the upper panel of Figure 7, the variation in column density (\(\Delta N\)) of H\({}_{2}\)S\({}_{2}\) as a function of H-atom fluence measured from the IR spectra at 10 K is shown. The curve is fitted by a single exponential function:
\[\Delta[X]_{t}=[{\rm H_{2}S}]_{0}\cdot a\,(1-\exp(-\sigma\cdot F)), \tag{7}\]
where \(\Delta[X]_{t}\) is the variation in the abundance of species \(X\) at a given time and \([{\rm H_{2}S}]_{0}\) is the initial abundance of H\({}_{2}\)S. Here, \(a\) is the saturation value, \(F\) is the incident H-atom fluence, and \(\sigma\) is the effective formation cross section of H\({}_{2}\)S\({}_{2}\). From this fitting we derive \(\sigma\sim(9.8\pm 0.9)\times 10^{-17}\) cm\({}^{2}\) for H\({}_{2}\)S\({}_{2}\) formation at 10 K. It should be noted, however, that the rate law of H\({}_{2}\)S\({}_{2}\) formation is far from trivial: both Reactions 1 and 6b contribute to the effective cross section, with the latter requiring two HS radicals to occur. Therefore, it cannot be simplified by the pseudo first-order approximation. Moreover, the accurate amount of H atoms available on the surface of the ice is highly difficult to quantify, as a fraction will recombine to form H\({}_{2}\)--hence the use of the term "effective". The \(\sigma\) value derived here is thus not suited to be directly employed in chemical models as a rate constant, but rather very useful for comparison purposes with other effective cross sections derived under similar conditions.

Figure 4: Infrared spectrum after deposition of a pure H\({}_{2}\)S ice (black), and the difference spectra after exposure to H atoms for 20 minutes (red), 40 minutes (blue), and 60 minutes (green). Superimposed to the difference spectra are the corresponding gaussian fittings of the H\({}_{2}\)S band (purple), H\({}_{2}\)S\({}_{2}\) band (yellow), and the resulting convoluted feature (brown). Spectra are offset for clarity.

Figure 5: Upper panel: variation in H\({}_{2}\)S column density measured from the \(\sim 2553\) cm\({}^{-1}\) band in the IR spectra as a function of time. Lower panel: Scan of the m/z = 34 ([H\({}_{2}\)S]\({}^{+}\)) as measured by the QMS as a function of time. The shadowed areas denote the periods during which the H-atom flux was stopped.
In the lower panel of Figure 7, the effective variation in the column density of H\({}_{2}\)S as a function of H-atom fluence measured from the infrared spectra is shown. In this case, the plot is better fitted by a two-term exponential function:
\[\Delta[\mathrm{H_{2}S}]_{\mathrm{r}}=[\mathrm{H_{2}S}]_{0}(a_{1}(1-\exp(- \sigma_{1}\cdot F))+a_{2}(1-\exp(-\sigma_{2}\cdot F))), \tag{8}\]
where \(a_{n}\) is the saturation value and \(\sigma_{n}\) is the effective destruction cross-section. The interpretation of such a fitting is not straightforward, as it incorporates the contribution from all the processes leading to a decrease in \(N(\mathrm{H_{2}S})\). Nonetheless, the double exponential fitting suggests that the processes dominating the observed decrease in \(N(\mathrm{H_{2}S})\) can be separated into two different timescales, with \(\sigma_{1}\sim 10^{-16}\) cm\({}^{2}\) and \(\sigma_{2}\sim 10^{-17}\) cm\({}^{2}\).
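For completeness, the sketch below shows how the single- and two-term exponentials of Eqs. (7) and (8) can be fitted with SciPy to extract effective cross sections; the fluence grid and mock data are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(F, a, sigma):
    """Eq. (7), normalized by the initial H2S abundance [H2S]_0."""
    return a * (1.0 - np.exp(-sigma * F))

def double_exp(F, a1, s1, a2, s2):
    """Eq. (8), normalized by the initial H2S abundance [H2S]_0."""
    return single_exp(F, a1, s1) + single_exp(F, a2, s2)

# Synthetic fluence grid (H atoms cm^-2) and a mock normalized loss curve.
F = np.linspace(0.0, 4e17, 40)
dN = double_exp(F, 0.3, 1e-16, 0.6, 2e-17)

popt, _ = curve_fit(double_exp, F, dN, p0=(0.3, 1e-16, 0.5, 1e-17))
print(popt)  # recovers (a1, sigma1, a2, sigma2) on this noiseless mock data
```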
The fast process with \(\sigma_{1}\sim 10^{-16}\) cm\({}^{2}\) is likely due to startup effects, such as collision-induced desorption of the weakly-bound topmost molecules (Chuang et al., 2018). Accordingly, the effective destruction cross section of H\({}_{2}\)S can be approximated as the second exponential term, with \(\sigma_{2}\sim 10^{-17}\) cm\({}^{2}\). Control experiments with neutral helium bombardment of H\({}_{2}\)S ices show that material loss due to collisional impact should account for \(\lesssim 10\%\) of the total H\({}_{2}\)S desorption from the QMS. In comparison, the saturation point of the fast exponential curve (blue line in the lower panel of Figure 7) corresponds to \(\sim 0.3\) of the total loss of H\({}_{2}\)S, and should thus be regarded as an upper limit to the real value.
Given that the interaction of H\({}_{2}\)S with H atoms mostly results in chemical desorption via reaction 6a and H\({}_{2}\)S\({}_{2}\) formation via reaction 6b, it is possible to isolate the H\({}_{2}\)S chemical desorption curve by subtracting the minimum amount of H\({}_{2}\)S consumed to form H\({}_{2}\)S\({}_{2}\) (i.e., twice the column density of H\({}_{2}\)S\({}_{2}\)). The resulting isolated H\({}_{2}\)S chemical desorption curve is shown in the upper panel of Figure 8, and yields an effective cross section of \(\sigma\sim(1.7\pm 0.2)\times 10^{-17}\) cm\({}^{2}\). It should be emphasized, however, that this value is derived using a series of assumptions, and is therefore only a rough estimation.
In addition to the IR approach, it is possible to directly probe the chemical desorption of hydrogen sulfide by utilizing the mass spectrometry data acquired during hydrogen exposure. The lower panel of Figure 8 shows the integrated signal for the m/z = 34 ([H\({}_{2}\)S]\({}^{+}\)) fragment as a function of H-atom fluence (i.e., the area of the plot in the lower panel of Figure 5). Similarly to H\({}_{2}\)S\({}_{2}\), this curve can be fitted by an exponential function as described in Equation 7, yielding \(\sigma\sim(3.7\pm 0.3)\times 10^{-17}\) cm\({}^{2}\)--in good agreement with the IR approach. Assuming similar chemical desorption efficiencies for both \({}^{32}\)S and \({}^{34}\)S isotopes of H\({}_{2}\)S, the contribution from [\({}^{34}\)S]\({}^{+}\) to m/z = 34 does not affect the exponential factor in the fitting and can therefore be neglected. It is important to note that the cross section from the QMS data is likely more accurate than the IR counterpart, as the former is a direct fitting of the measurements, whereas the latter involves a number of assumptions. Both values are similar to the chemical desorption cross section of \((2.1\pm 0.2)\times 10^{-17}\) cm\({}^{2}\) derived by Oba et al. (2019) from the exposure of H\({}_{2}\)S ices to H atoms at 10 K, and reinforce the relevance of H\({}_{2}\)S chemical desorption to interstellar gas-grain chemistry. Small discrepancies between the two studies are expected due to the different experimental conditions, such as ice thicknesses, growth surfaces, and H-atom fluxes.

Figure 6: Derived contributions from H\({}_{2}\)S\({}_{2}\) formation and H\({}_{2}\)S chemical desorption to the measured loss in \(N(\mathrm{H_{2}S})\) after 120 minutes of H-atom exposure at 10, 12, 14, and 16 K.

Figure 7: Upper panel: Variation in H\({}_{2}\)S\({}_{2}\) column density during H-atom exposure of an H\({}_{2}\)S ice at 10 K. Lower panel: variation in H\({}_{2}\)S column density as a function of H-atom fluence during bombardment of a H\({}_{2}\)S ice at 10 K. The two-term exponential fitting to the points is shown in red, with the fast and slow components of the fitting plotted in blue and green, respectively.
Similar experiments were performed at 12, 14, and 16 K, and the derived effective cross sections are summarized in Table 3. The estimated \(\sigma\)(H\({}_{2}\)S\({}_{2}\)) values suggest that the effectiveness of H\({}_{2}\)S\({}_{2}\) formation remains fairly consistent (within the uncertainty range) for temperatures between 10 K and 14 K. At 16 K, the cross section is slightly reduced. This behavior is likely the outcome of competing elementary processes involved in synthesizing H\({}_{2}\)S\({}_{2}\) on ice: while diffusion can be facilitated at higher temperatures--thus enhancing encounters between two HS radicals and favoring Reaction 6b--the sticking coefficient of H atoms on ices diminishes, thus hindering the formation of reactants in the first place. Moreover, faster diffusion rates also imply that H atoms might not have enough available time in the vicinity of a H\({}_{2}\)S molecule to overcome the \(\sim\)1500 K barrier in Reaction 1. Similar findings were described in other H-atom addition experiments (e.g., in the hydrogenation of O\({}_{2}\); Ioppolo et al. 2008; Ioppolo et al. 2010; Cuppen et al. 2010).
The effective cross sections of H\({}_{2}\)S chemical desorption are obtained from the QMS data and show a slight decreasing trend between temperatures of 10, 12, 14, and 16 K. A similar behavior was also observed by Oba et al. (2019) with measurements at 10, 20, and 30 K, which they attribute to a combination of the H atom availability at \(T>20\) K and the true efficiency of H\({}_{2}\)S chemical desorption at higher temperatures. The slightly lower effective cross sections, they argue, would in reality indicate an increase of the true value at warmer environments, balancing out the considerably diminishing sticking coefficient of H. In the present work, we probe a much smaller temperature range, in which case the availability of H atoms on the surface is not expected to drop as significantly. Nonetheless, some effect of the smaller sticking coefficient of hydrogen at higher temperatures could in principle influence the measured effective cross sections--albeit to a smaller extent than in Oba et al. (2019). Although it is challenging to speculate on the effect of the ice temperature on the real \(\sigma_{CD}\)(H\({}_{2}\)S), it seems that a measurable change occurs only from 10 K to 12 K within the range explored here.
## 4 Astrophysical implications
Hydrogen sulfide is thought to be efficiently formed on the surface of interstellar dust through the hydrogenation of S atoms (see, e.g., Tielens & Hagen 1982; Laas & Caselli 2019). It is also the major sulfur-bearing species found in the comae of comets (Calmonte et al. 2016 and references therein), which in turn are thought to harbor the content of pre-stellar ices. The (so far) non-detection of solid-phase H\({}_{2}\)S in interstellar clouds, thus, poses a question regarding the fate of H\({}_{2}\)S in interstellar icy mantles. One likely explanation for its absence in observations is that solid-phase H\({}_{2}\)S is effectively destroyed by, for instance, energetic processing--which is known to result in solid-phase sulfur chemistry (e.g., Moore et al. 2007; Garozzo et al. 2010; Jimenez-Escobar et al. 2014; Chen et al. 2015; Shingledecker et al. 2020; Cazaux et al. 2022; Mifsud et al. 2022). In fact, the photochemistry of H\({}_{2}\)S induced by UV photons has been suggested as a potential sulfur sink, as it has been shown to produce allotropic forms of S (S\({}_{n}\)) that are largely refractory (especially for \(n>4\)). In addition to energetic processing, non-energetic routes to remove H\({}_{2}\)S from the solid phase are also essential, as those are the dominant processes taking place within dense clouds. Indeed, recent observations with the James Webb Space Telescope aimed at highly shielded regions within interstellar clouds (with \(A_{V}>50\)) still could not detect H\({}_{2}\)S ices, providing upper limits of 0.6% with respect to H\({}_{2}\)O (McClure et al. 2023). Especially in such environments, chemical desorption due to hydrogenation seems to be a particularly prominent mechanism to transfer H\({}_{2}\)S to the gas phase (Oba et al. 2018, 2019). The cross sections derived in this work directly from the chemically-desorbed H\({}_{2}\)S as measured by the QMS--and thus not influenced by additional H\({}_{2}\)S destruction phenomena such as chemical reactions--are fully in line with this proposition.
Another relevant value that can be derived from predeposition experiments is the efficiency of chemical desorption per incident H atom. The reason for deriving a value per incident atom instead of per reactive event is that the true number of H atoms involved in the reactions under our experimental conditions is unknown, as a fraction of them will recombine into H\({}_{2}\) molecules through diffusion. The efficiency derived per incident atom can therefore be regarded as a lower limit to the value per reaction event.
Figure 8: Upper panel: estimated contribution from chemical desorption to the decrease in \(N\)(H\({}_{2}\)S) as a function of fluence. The simple exponential fitting to the points is shown in red, and the linear fitting to the first 55 minutes of bombardment is shown in blue (dashed line). Lower panel: Integrated intensity of the m/z = 34 signal measured by the QMS as a function of H-atom fluence during the same experiment. The red line shows the exponential fitting to the points.
\begin{table}
\begin{tabular}{c c c} \hline \hline Temperature & \(\sigma\)(H\({}_{2}\)S\({}_{2}\)) & \(\sigma_{CD}\)(H\({}_{2}\)S) \\ (K) & (\(\times 10^{-17}\)cm\({}^{2}\)) & (\(\times 10^{-17}\)cm\({}^{2}\)) \\ \hline
10 & 9.8 \(\pm\) 0.9 & 3.7 \(\pm\) 0.3 \\
12 & 7.8 \(\pm\) 0.9 & 2.8 \(\pm\) 0.1 \\
14 & 8.3 \(\pm\) 0.7 & 2.7 \(\pm\) 0.2 \\
16 & 5.2 \(\pm\) 0.6 & 2.6 \(\pm\) 0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effective cross sections of H\({}_{2}\)S\({}_{2}\) formation (\(\sigma\)(H\({}_{2}\)S\({}_{2}\))) and H\({}_{2}\)S chemical desorption (\(\sigma_{CD}\)(H\({}_{2}\)S)) derived from the predeposition experiments performed at 10, 12, 14, and 16 K.
After isolating the variation in H\({}_{2}\)S column density due to chemical desorption (as described in Section 3.3; see also Figure 8), a linear fit to the points within the first 55 minutes of bombardment at 10 K (blue dashed line in the lower panel of Figure 8) yields an efficiency of \(\sim 0.019\pm 0.001\), around 4 times higher than the values reported by Oba et al. (2018) and Oba et al. (2019), and consistent with the calculated value per reaction event (i.e., \((3\pm 1.5)\%\)) in Furuya et al. (2022). As with the cross sections, such a discrepancy could be due to the different ice compositions (pure H\({}_{2}\)S versus H\({}_{2}\)S on top of amorphous solid water) and thicknesses (\(\sim\)20 ML versus 0.7 ML). Nonetheless, this estimated efficiency reinforces the key role of H\({}_{2}\)S chemical desorption as a non-thermal mechanism of transferring hydrogen sulfide to the gas phase within dark clouds. Indeed, by combining gas-grain chemical models with millimeter observations, Navarro-Almaida et al. (2020) find that chemical desorption is the main mechanism responsible for gas-phase H\({}_{2}\)S formation.
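In the initial linear regime the per-incident-atom efficiency is simply the slope of the chemically desorbed column density against fluence. A hypothetical sketch of that step (again with synthetic numbers, assuming the linear model used for the blue dashed line in Figure 8):

```python
import numpy as np

# delta_N_cd: H2S column-density loss attributed to chemical desorption
# (molecules cm^-2); F: H-atom fluence (atoms cm^-2). Synthetic values
# mimicking the first ~55 minutes of bombardment at 10 K.
F = np.linspace(0.0, 1.2e17, 12)
rng = np.random.default_rng(1)
delta_N_cd = 0.019 * F * (1.0 + 0.03 * rng.normal(size=F.size))

slope = np.polyfit(F, delta_N_cd, 1)[0]  # molecules desorbed per incident atom
print(f"CD efficiency per incident H atom ~ {slope:.3f}")
```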
Complementary to chemical desorption, the interaction of H\({}_{2}\)S with H atoms can also kick-start non-energetic chemistry to form larger sulfur-bearing molecules. The detection of H\({}_{2}\)S\({}_{2}\) under our experimental conditions is one example of how HS radicals produced by Reaction 1 can lead to a higher sulfur-bearing chemical complexity. In fully representative interstellar ices, the probability that two HS radicals meet is rather low, given the small abundance of H\({}_{2}\)S relative to other ice components such as H\({}_{2}\)O or CO. However, these radicals can react with more widespread ice species, potentially leading to the formation of sulfur-bearing COMs. The present work therefore serves as a proof of concept that non-energetic sulfur chemistry can be initiated by the formation of HS radicals through Reaction 1, with the simplest example being H\({}_{2}\)S\({}_{2}\). It is also noteworthy that the contributions from each process to the consumption of H\({}_{2}\)S vary significantly with the temperature, with an appreciable increase in sulfur-bearing species formed at 16 K compared to 10 K. This is likely due to the enhanced radical diffusion within warmer ices, and signifies that sulfur chemistry could be significantly intensified at regions closer to the edges of dark clouds, where temperatures can approach 20 K.
## 5 Conclusions
In the present work, we experimentally investigate the interaction of H\({}_{2}\)S ices with H atoms under ultrahigh vacuum pressures and astronomically-relevant temperatures (10 - 16 K). Our main findings are summarized below:
* We verified that solid-phase hydrogen sulfide is destroyed and H\({}_{2}\)S\({}_{2}\) is formed as a result of the interaction between H\({}_{2}\)S and H atoms.
* The chemical desorption of H\({}_{2}\)S is directly probed by quantifying the material ejected into the gas phase during H-atom exposure experiments. The calculated effective cross sections for ice temperatures of 10, 12, 14, and 16 K are, respectively, \((3.7\pm 0.3)\times 10^{-17}\) cm\({}^{2}\), \((2.8\pm 0.1)\times 10^{-17}\) cm\({}^{2}\), \((2.7\pm 0.2)\times 10^{-17}\) cm\({}^{2}\), and \((2.6\pm 0.2)\times 10^{-17}\) cm\({}^{2}\).
* From the RAIRS data, we estimate the chemical desorption efficiency per incident H atom at 10 K to be \(\sim 0.019\pm 0.001\).
* The derived values for the effective chemical desorption cross sections and efficiency per incident H strengthen the argument that H\({}_{2}\)S ice is effectively transferred to the gas phase through the excess energy generated by reactions with hydrogen atoms.
* The confirmation of H\({}_{2}\)S\({}_{2}\) formation as a result of HS radical recombination proves that non-energetic sulfur chemistry can take place at temperatures as low as 10 K through radical-radical reactions, which could potentially lead to the formation of sulfur-bearing COMs in more representative interstellar ice mixtures.
* We derive effective formation cross sections for H\({}_{2}\)S\({}_{2}\) of \((9.8\pm 0.9)\times 10^{-17}\) cm\({}^{2}\), \((7.8\pm 0.9)\times 10^{-17}\) cm\({}^{2}\), \((8.3\pm 0.7)\times 10^{-17}\) cm\({}^{2}\), and \((5.2\pm 0.6)\times 10^{-17}\) cm\({}^{2}\) at 10, 12, 14, and 16 K, respectively.
* No chemical desorption was observed upon formation of H\({}_{2}\)S\({}_{2}\) above the current detection limit.
* Between approximately 74% and 85% of the H\({}_{2}\)S ice destruction observed under our experimental conditions can be associated with chemical desorption (decreasing from 10 K to 16 K), whereas the remaining \(\sim 15-26\%\) is due to H\({}_{2}\)S\({}_{2}\) formation. The relative consumption of H\({}_{2}\)S by the latter process grows with temperature, implying that sulfur chemistry induced by HS radicals becomes increasingly more relevant in warmer environments.
###### Acknowledgements.
This work has been supported by the Danish National Research Foundation through the Center of Excellence "InterCat" (Grant agreement no.: DNRF150); the Netherlands Research School for Astronomy (NOVA); and the Dutch Astrochemistry Network II (DANI). KJC is grateful for support from NWO via a VENI fellowship (VL.Veni.212.296).
|
2302.03585 | The size of the Betti table of Binomial Edge Ideals | Let $G$ be a finite simple graph with $n$ non isolated vertices, and let
$J_G$ its binomial edge ideal. We determine all pairs
$(\mbox{projdim}(J_G),\mbox{reg}(J_G))$, where $G$ ranges over all finite
simple graphs with $n$ non isolated vertices, for any $n$. | Antonino Ficarra, Emanuele Sgroi | 2023-02-07T16:51:45Z | http://arxiv.org/abs/2302.03585v1 | # The size of the Betti table of binomial edge ideals
###### Abstract.
Let \(G\) be a finite simple graph with \(n\) non isolated vertices, and let \(J_{G}\) be its binomial edge ideal. We determine all pairs \((\operatorname{proj\,dim}(J_{G}),\operatorname{reg}(J_{G}))\), where \(G\) ranges over all finite simple graphs with \(n\) non isolated vertices, for any \(n\).
Key words and phrases:binomial edge ideals, betti tables, projective dimension, regularity 2020 Mathematics Subject Classification: Primary 13F20; Secondary 13H10
## Introduction
One never-ending source of inspiration in Combinatorial Commutative Algebra is the study of the minimal free resolutions of graded ideals. Let \(R=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring, with \(K\) a field, and let \(I\subset R\) a graded ideal. The behaviour of the minimal resolution of \(I\) is hard to predict. Two important homological invariants of \(I\), that provide a measure of the complexity of its minimal resolution, are the _projective dimension_, \(\operatorname{proj\,dim}(I)\), and the _regularity_, \(\operatorname{reg}(I)\). The pair \((\operatorname{proj\,dim}(I),\operatorname{reg}(I))\) determines the size of the _Betti table_ of \(I\).
A central question is the following. For a given class \(\mathcal{C}\) of graded ideals of \(R\), can we determine the set of the sizes of the Betti tables of the ideals in \(\mathcal{C}\)? That is, can we determine all pairs \((\operatorname{proj\,dim}(I),\operatorname{reg}(I))\), \(I\in\mathcal{C}\)? Such a problem is very difficult, and the behaviour of these pairs is quite mysterious. On the other hand, if the graded ideals in \(\mathcal{C}\) arise from combinatorics, as it happens in many cases, then their combinatorial nature helps us to better understand and sometimes also answer such a question.
A problem of this type is considered in [8]. Hereafter, by a graph \(G\) we mean a finite simple graph. Recall that the _edge ideal_\(I(G)\) of \(G\) is the ideal generated by the monomials \(x_{i}x_{j}\) where \(\{i,j\}\) is an edge of \(G\). In [8], Hà and Hibi studied the question of determining all admissible pairs \((\operatorname{proj\,dim}(I(G)),\operatorname{reg}(I(G)))\), as \(G\) ranges over all graphs with a given number of vertices. This question is related to the search for a max min vertex cover and a min max independent set. In turn, these classical problems of graph theory are known to be NP-hard, and they have received a lot of attention lately [2, 3, 4, 9]. On the other hand, the combinatorics of \(G\) yields the following surprising lower bound: \(\operatorname{proj\,dim}(I(G))\geq 2\sqrt{n}-3\). The authors determined all pairs \((\operatorname{proj\,dim}(I(G)),\operatorname{reg}(I(G)))\) when the projective dimension reaches this lower bound, and also when the regularity reaches its minimal possible value, namely \(\operatorname{reg}(I(G))=2\). The same question was posed and completely solved by Erey and Hibi in [6], for the class of connected bipartite graphs. Similar problems are treated in [12, 13, 14, 15, 16, 17, 19] and the references therein.
Another family of graded ideals arising from graphs is that of _binomial edge ideals_. In 2010, Herzog, Hibi, Hreinsdóttir, Kahle and Rauh in [10], and independently, Ohtani in [24], introduced the _binomial edge ideal_. Let \(G\) be a graph with \(n\) vertices \(1,\ldots,n\). Then \(J_{G}\) is defined to be the graded ideal of \(S=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}]\) generated by the binomials \(x_{i}y_{j}-x_{j}y_{i}\) for all edges \(\{i,j\}\) of \(G\). This class of ideals generalizes the classical determinantal ideals. Indeed, \(J_{G}\) may be seen as the ideal generated by an arbitrary set of maximal minors of a \((2\times n)\)-matrix of indeterminates. A huge effort has been made to understand the homological properties of binomial edge ideals. Consult the survey [23] for the current state of the art.
In this article, we address the problem of Hà and Hibi for the class of binomial edge ideals. One would guess, as in the case of edge ideals, that this problem is difficult, and that answering this question in an explicit fashion may not be possible. Quite surprisingly, we succeed in delivering a fairly comprehensive and explicit solution.
Let \(n\geq 1\) be an integer. Denote by \({\rm Graphs}(n)\) the class of all finite simple graphs with \(n\) non isolated vertices. Then, we define
\[{\rm pdreg}(n)\ =\ \big{\{}({\rm proj}\dim(J_{G}),{\rm reg}(J_{G})):G\in{\rm Graphs }(n)\big{\}},\]
which is the set of the sizes of the Betti tables of \(J_{G}\), as \(G\) ranges over all graphs with \(n\) non isolated vertices. Our main result in the article is the following theorem.
**Theorem 3.1**_For all \(n\geq 3\),_
\[{\rm pdreg}(n)\ =\ \big{\{}(n-2,2),(n-2,n)\big{\}}\cup\bigcup_{r=3}^{ \lfloor\frac{n}{2}\rfloor+1}\big{(}\bigcup_{p=n-r}^{2n-5}\{(p,r)\}\big{)}\ \cup\\ \cup\bigcup_{r=\lceil\frac{n}{2}\rceil+1}^{n-2}\big{(}\bigcup_{p=r -2}^{2n-5}\{(p,r)\}\big{)}\cup A_{n},\]
_where \(A_{n}=\{(p,r)\in{\rm pdreg}(n):r=n-1\}\)._
The reader may notice that when the regularity is \(n-1\) we leave the set \(A_{n}\) undetermined. Indeed, our experiments show a quite unexpected behaviour.
**Conjecture 3.4** Let \(G\) be a graph with \(n\geq 7\) non isolated vertices. Suppose that \({\rm reg}(J_{G})=n-1\). Then \({\rm proj}\dim(J_{G})\leq n\).
The article is structured as follows. In Section 1, we state some general bounds for the projective dimension and the regularity of binomial edge ideals. The proof of Theorem 3.1 is by induction on the number of vertices of the graph. On the other hand, there are some special graphs giving some of the pairs \((p,r)\in{\rm pdreg}(n)\) that we did not obtain by inductive arguments. In Section 2, we discuss these special classes of graphs. They give the pairs \((p,r)\in{\rm pdreg}(n)\) with \(p=2n-5,2n-6\), or \(r=3,n-2\). Section 3 contains the main result of the article. Our answer is nearly complete. Indeed, only for the graphs with almost maximal regularity, namely \({\rm reg}(J_{G})=n-1\), do we not yet know the projective dimension. It would be interesting to classify the binomial edge ideals with almost maximal regularity.
We gratefully acknowledge the use of _Macaulay2_[7] and in particular of the package NautyGraphs [21].
## 1. General bounds for the Betti table of binomial edge ideals
Let \(I\) be a graded ideal of a standard graded polynomial ring \(R=K[x_{1},\ldots,x_{n}]\), where \(K\) is a field. Then \(I\) possesses a unique minimal graded free resolution
\[\mathbb{F} : \cdots\to F_{i}\to F_{i-1}\to\cdots\to F_{1}\to F_{0}\to I\to 0,\]
with \(F_{i}=\bigoplus_{j}R(-j)^{\beta_{i,j}(I)}\), where the \(\beta_{i,j}(I)\) are the _graded Betti numbers_ of \(I\). The _projective dimension_ and the _regularity_ of \(I\), are, respectively,
\[\operatorname{proj}\dim(I) =\ \max\{i:\beta_{i,j}(I)\neq 0,\text{ for some }j\},\] \[\operatorname{reg}(I) =\ \max\{j-i:\beta_{i,j}(I)\neq 0,\text{ for some }i\text{ and }j\}.\]
Throughout the article, we consider only finite simple graphs. Hence we will refer to them simply as graphs. Let \(G\) be a graph. By \(V(G)\) we denote the _vertex set_ of \(G\), and by \(E(G)\) the _edge set_ of \(G\). Two vertices \(u\) and \(v\) are _adjacent_ if \(\{u,v\}\in E(G)\). A vertex \(u\) is called _isolated_ if \(\{u,v\}\notin E(G)\) for all \(v\in V(G)\setminus\{u\}\).
Let \(K\) be a field and \(G\) be a graph with vertex set \(\{1,\ldots,n\}\). The _binomial edge ideal_\(J_{G}\) of \(G\) is the following binomial ideal of \(S=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}]\):
\[J_{G}=(x_{i}y_{j}-x_{j}y_{i}\ :\ \{i,j\}\in E(G)).\]
Let \(G\) be a graph. If \(W\) is a subset of \(V(G)\) we denote by \(G_{W}\) the _induced subgraph of \(G\) on \(W\)_, that is \(V(G_{W})=W\) and \(E(G_{W})=\{\{u,v\}\in E(G):u,v\in W\}\). It is known by [20, Corollary 2.2] that
\[\operatorname{proj}\dim(J_{G})\geq\operatorname{proj}\dim(J_{G_{W}})\ \text{ and }\ \operatorname{reg}(J_{G})\geq\operatorname{reg}(J_{G_{W}}),\]
for all subsets \(W\) of \(V(G)\). We will use freely this fact.
Suppose that \(G\) is connected. Then, we say that \(G\) is \(\ell\)_-vertex-connected_ if for all subsets \(W\) of \(V(G)\) with \(|W|<\ell\), the induced subgraph \(G_{V(G)\setminus W}\) is connected. The _vertex-connectivity_ of \(G\), denoted by \(\ell(G)\), is the maximum integer \(\ell\) such that \(G\) is \(\ell\)-vertex-connected. It is clear that \(\ell(G)\geq 1\).
**Theorem 1.1**.: _Let \(G\) be a graph with \(n\geq 3\) non isolated vertices. Then_
\[\operatorname{proj}\dim(J_{G})\leq 2n-5.\]
_Furthermore, if \(G\) is connected, then \(\operatorname{proj}\dim(J_{G})\geq n-2\)._
Proof.: By the work of [26, Theorem 5.2] it is known that \(\operatorname{depth}(S/J_{G})\geq 4\) for all graphs \(G\). Therefore, by the Auslander-Buchsbaum formula we have
\[\operatorname{proj}\dim(J_{G})=\operatorname{proj}\dim(S/J_{G})-1=2n- \operatorname{depth}(S/J_{G})-1\leq 2n-5.\]
Suppose now that \(G\) is connected. Then, by [1, Theorems 3.19 and 3.20] we have \(\operatorname{proj}\dim(J_{G})=\operatorname{proj}\dim(S/J_{G})-1\geq n+\ell( G)-3\). Since \(\ell(G)\geq 1\), we obtain \(\operatorname{proj}\dim(J_{G})\geq n-2\), as desired.
We denote by \(K_{n}\) the _complete graph_ with \(n\) vertices, that is \(V(K_{n})=\{1,\ldots,n\}\) and \(\{i,j\}\in E(K_{n})\) for all \(i,j\in V(K_{n})\), \(i\neq j\). Whereas, by \(P_{n}\) we denote the _path_ of length \(n\), that is \(V(P_{n})=\{1,\ldots,n\}\) and \(E(P_{n})=\{\{1,2\},\{2,3\},\ldots,\{n-1,n\}\}\).
**Theorem 1.2**.: _Let \(G\) be a graph with \(n\geq 3\) non isolated vertices. Then_
\[2\leq\operatorname{reg}(J_{G})\leq n.\]
_Moreover,_
* \(\operatorname{reg}(J_{G})=2\) _if and only if_ \(G=K_{n}\) _and, in this case,_ \(\operatorname{proj}\dim(J_{K_{n}})=n-2\)_._
* \(\operatorname{reg}(J_{G})=n\) _if and only if_ \(G=P_{n}\) _and, in this case,_ \(\operatorname{proj}\dim(J_{P_{n}})=n-2\)_._
Proof.: Since \(J_{G}\) is generated in degree two, \(\operatorname{reg}(J_{G})\geq 2\). By [20, Theorem 1.1], \(\operatorname{reg}(J_{G})\leq|V(G)|=n\). Statement (i) follows from [28, Theorem 2.1], while statement (ii) follows from [18, Theorem 3.2].
The following observation will be used several times.
**Remark 1.3**.: Suppose \(G=G_{1}\sqcup G_{2}\sqcup\dots\sqcup G_{c}\) is a graph without isolated vertices and with \(c\) connected components \(G_{i}\), \(i=1,\dots,c\). Let \(S_{i}=K[x_{v},y_{v}:v\in V(G_{i})]\) and \(n_{i}=|V(G_{i})|\), for all \(i\). Then \(\sum_{i=1}^{c}n_{i}=n\) and \(J_{G_{i}}\) is a binomial ideal of \(S_{i}\). Since the polynomial rings \(S_{i}\) are in pairwise disjoint sets of variables, we have
\[S/J_{G}\cong\bigotimes_{i=1}^{c}S_{i}/J_{G_{i}}.\]
Hence, \(\operatorname{proj}\dim(S/J_{G})=\sum_{i=1}^{c}\operatorname{proj}\dim(S_{i}/ J_{G_{i}})\) and \(\operatorname{reg}(S/J_{G})=\sum_{i=1}^{c}\operatorname{reg}(S_{i}/J_{G_{i}})\). Taking into account that for a graded ideal \(I\) of a polynomial ring \(R\) we have \(\operatorname{proj}\dim(R/I)=\operatorname{proj}\dim(I)+1\) and \(\operatorname{reg}(R/I)=\operatorname{reg}(I)-1\), we obtain the following useful identities,
\[\operatorname{proj}\dim(J_{G}) = \sum_{i=1}^{c}\operatorname{proj}\dim(J_{G_{i}})+(c-1), \tag{1}\] \[\operatorname{reg}(J_{G}) = \sum_{i=1}^{c}\operatorname{reg}(J_{G_{i}})-(c-1). \tag{2}\]
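Formulas (1) and (2) make the pair \((\operatorname{proj}\dim(J_{G}),\operatorname{reg}(J_{G}))\) of a disjoint union directly computable from the pairs of its components. The following small Python sketch (illustrative only; the component pairs are taken as given) encodes this bookkeeping, which is used repeatedly below:

```python
def pdreg_disjoint_union(components):
    """(proj dim, reg) of J_G for G = G_1 ⊔ ... ⊔ G_c, computed from the
    pairs of the J_{G_i} via formulas (1) and (2) of Remark 1.3."""
    c = len(components)
    pd = sum(p for p, _ in components) + (c - 1)
    reg = sum(r for _, r in components) - (c - 1)
    return pd, reg

# Example: K_3 has pair (1, 2) and P_7 has pair (5, 7) (Theorem 1.2), so
# K_3 ⊔ P_7 (n = 10 vertices) gets (7, 8) = (n - 3, n - 2).
print(pdreg_disjoint_union([(1, 2), (5, 7)]))
```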
The lower bound given in Theorem 1.1 for the projective dimension, can be refined in terms of the regularity.
**Proposition 1.4**.: _Let \(n\geq 3\), \(r\) be positive integers such that \(3\leq r\leq\lfloor\frac{n}{2}\rfloor+1\). Then_
\[n-r\leq\operatorname{proj}\dim(J_{G})\leq 2n-5,\]
_for all graphs \(G\) with \(n\) non isolated vertices._
Proof.: The upper bound for \(\operatorname{proj}\dim(J_{G})\) is stated in Theorem 1.1. To prove the lower bound, assume the notation of Remark 1.3. By formula (2) and Theorem 1.2,
\[r = \sum_{i=1}^{c}\operatorname{reg}(J_{G_{i}})-(c-1)\geq\sum_{i=1}^{ c}2-(c-1)\] \[= 2c-c+1=c+1.\]
Therefore, \(c\leq r-1\). The upper bound \(c\leq r-1\leq\lfloor\frac{n}{2}\rfloor\) that we determined agrees with the fact that the number of connected components of \(G\) is at most \(\lfloor\frac{n}{2}\rfloor\).
Now, by formula (1) and Theorem 1.1,
\[\begin{split}\operatorname{proj}\dim(J_{G})&=\ \sum_{i=1}^{c} \operatorname{proj}\dim(J_{G_{i}})+(c-1)\\ &\geq\ \sum_{i=1}^{c}(n_{i}-2)+(c-1)\\ &=\ \sum_{i=1}^{c}n_{i}-2c+c-1\\ &=\ n-c-1\geq n-r,\end{split} \tag{3}\]
since \(c\leq r-1\).
**Proposition 1.5**.: _Let \(n\geq 3\), \(r\) be positive integers such that \(\lceil\frac{n}{2}\rceil+1\leq r\leq n-1\). Then_
\[r-2\leq\operatorname{proj}\dim(J_{G})\leq 2n-5,\]
_for all graphs \(G\) with \(n\) non isolated vertices._
Proof.: The upper bound follows from Theorem 1.1. For the proof of the lower bound, assume the notation of Remark 1.3. By formula (2) and Theorem 1.2,
\[\begin{split} r&=\ \sum_{i=1}^{c}\operatorname{reg}(J_{G_{i} })-(c-1)\leq\sum_{i=1}^{c}n_{i}-(c-1)\\ &=\ n-c+1.\end{split}\]
Thus, \(c\leq n-r+1\). Note that \(c\leq n-r+1\leq n-(\lceil\frac{n}{2}\rceil+1)+1=n-\lceil\frac{n}{2}\rceil= \lfloor\frac{n}{2}\rfloor\). Therefore, the upper bound \(c\leq n-r+1\) agrees with the fact that the number of connected components of \(G\) is at most \(\lfloor\frac{n}{2}\rfloor\).
Using the computation in (3), we obtain
\[\operatorname{proj}\dim(J_{G})\ \geq\ n-c-1\geq r-2,\]
since \(c\leq n-r+1\).
## 2. Special classes of graphs
In this section, we determine some classes of graphs that have a given projective dimension or a given regularity. These families will be used to get our main result.
Let \(G_{1},G_{2}\) be two graphs with disjoint vertex sets \(V_{1}\) and \(V_{2}\) and edge sets \(E_{1}\) and \(E_{2}\). The _join_ of \(G_{1}\) and \(G_{2}\) is defined to be the graph \(G_{1}*G_{2}\) with vertex set \(V_{1}\cup V_{2}\) and edge set \(E_{1}\cup E_{2}\cup\{\{v_{1},v_{2}\}:v_{1}\in V_{1},v_{2}\in V_{2}\}\).
The following formula due to Madani and Kiani ([29, Theorem 2.1]) plays a pivotal role in our article. Suppose \(G_{1}\) and \(G_{2}\) are graphs with disjoint vertex sets \(V_{1}\) and \(V_{2}\), and that not both of them are complete. Then,
\[\operatorname{reg}(J_{G_{1}*G_{2}})=\max\{\operatorname{reg}(J_{G_{1}}), \operatorname{reg}(J_{G_{2}}),3\}. \tag{4}\]
It is worth mentioning that, even if \(G_{1}\) and \(G_{2}\) may have isolated vertices, all vertices in \(G_{1}*G_{2}\) are non isolated, and \(G_{1}*G_{2}\) is always a connected graph. Moreover, if \(G_{1}\neq K_{1}\) and \(G_{2}=K_{1}\), then \(G_{1}*G_{2}\) is called a _cone_.
### Graphs with projective dimension \(2n-5\)
In [26, Theorem 5.3], Malayeri, Madani and Kiani have characterized all graphs \(G\) that have the minimal possible depth, _i.e._, \(\operatorname{depth}(S/J_{G})=4\). For such graphs, by the Auslander-Buchsbaum formula we have maximal projective dimension:
\[\operatorname{proj}\dim(J_{G})=\operatorname{proj}\dim(S/J_{G})-1=2n- \operatorname{depth}(S/J_{G})-1=2n-5.\]
Hereafter, for a positive integer \(m\geq 1\), we denote by \(mK_{1}\) a graph consisting of \(m\) isolated vertices.
**Theorem 2.1**.: _Let \(G\) be a graph with \(n\geq 5\) non isolated vertices. Then, the following conditions are equivalent._
1. \(\operatorname{proj}\dim(J_{G})=2n-5\)_._
2. \(G=\widetilde{G}*2K_{1}\) _for some graph_ \(\widetilde{G}\) _with_ \(n-2\) _vertices._
**Corollary 2.2**.: _Let \(G\) be a graph with \(n\geq 5\) non isolated vertices such that \(\operatorname{proj}\dim(J_{G})=2n-5\). Then,_
\[3\leq\operatorname{reg}(J_{G})\leq n-2.\]
_Furthermore, given \(r\in\{3,\ldots,n-2\}\), there exists a graph \(G\) with \(n\) non isolated vertices such that \(\operatorname{proj}\dim(J_{G})=2n-5\) and \(\operatorname{reg}(J_{G})=r\)._
Proof.: By the previous theorem, \(G=\widetilde{G}*2K_{1}\) for some graph \(\widetilde{G}\) with \(n-2\) vertices. Since \(2K_{1}\) is not complete and \(J_{2K_{1}}=(0)\), by formula (4),
\[\operatorname{reg}(J_{G})=\max\{\operatorname{reg}(J_{\widetilde{G}}), \operatorname{reg}(J_{2K_{1}}),3\}=\max\{\operatorname{reg}(J_{\widetilde{G}} ),3\}.\]
Hence \(\operatorname{reg}(J_{G})\geq 3\). Moreover, by Theorem 1.2, \(\operatorname{reg}(J_{\widetilde{G}})\leq|V(\widetilde{G})|=n-2\) and so \(\operatorname{reg}(J_{G})\leq n-2\), since \(n-2\geq 3\).
Now, let \(r\in\{3,\ldots,n-2\}\). Set \(\widetilde{G}=P_{r}\sqcup(n-2-r)K_{1}\) and \(G=\widetilde{G}*2K_{1}\). Then \(\operatorname{proj}\dim(J_{G})=2n-5\), by the previous theorem. Moreover,
\[\operatorname{reg}(J_{G})=\max\{\operatorname{reg}(J_{\widetilde{G}}), \operatorname{reg}(J_{2K_{1}}),3\}=\max\{r,3\}=r,\]
since \(J_{\widetilde{G}}=J_{P_{r}}\) has regularity \(r\) by Theorem 1.2(ii).
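The witnesses in this corollary are assembled from paths, isolated vertices, disjoint unions and joins. As an illustration, here is a short Python sketch (hypothetical helper names; edges represented as 2-element frozensets) that builds the edge set of \(G=(P_{r}\sqcup(n-2-r)K_{1})*2K_{1}\):

```python
def path(vertices):
    # Edge set of the path on the given vertex sequence.
    return {frozenset(e) for e in zip(vertices, vertices[1:])}

def join(E1, V1, E2, V2):
    # G1 * G2: the edges of both graphs plus every edge between V1 and V2.
    return E1 | E2 | {frozenset((u, v)) for u in V1 for v in V2}

# G = (P_r ⊔ (n-2-r)K_1) * 2K_1, the witness for
# (proj dim(J_G), reg(J_G)) = (2n - 5, r) in the proof above.
n, r = 8, 4
V_tilde = list(range(1, n - 1))     # the n-2 vertices of G~; 1..r carry P_r
E = join(path(V_tilde[:r]), V_tilde, set(), [n - 1, n])
print(sorted(tuple(sorted(e)) for e in E))
```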
### Graphs with projective dimension \(2n-6\)
After the case of minimal depth, Malayeri, Madani and Kiani classified in [27, Theorem 5, Section 5] the graphs \(G\) with \(\operatorname{depth}(S/J_{G})=5\), that is \(\operatorname{proj}\dim(J_{G})=2n-6\).
To state their result, we introduce the following class of graphs. Hereafter, if \(n\) is a positive integer, we denote by \([n]\) the set \(\{1,2,\ldots,n\}\). If \(v\) is a vertex of a graph \(G\), \(N_{G}(v)=\{u\in V(G)\setminus\{v\}:\{u,v\}\in E(G)\}\) is the _neighbourhood of \(v\) in \(G\)_.
**Definition 2.3**.: Let \(T\subset[n]\) with \(|T|=n-2\). The family \(\mathcal{G}_{T}\) consists of all graphs \(G\) with vertex set \([n]\) such that there exist two non adjacent vertices \(u\) and \(v\) of \(G\) with \(u,v\in[n]\setminus T\), and three disjoint subsets of \(T\), say \(V_{0}\), \(V_{1}\) and \(V_{2}\) with \(V_{1},V_{2}\neq\emptyset\) and \(V_{0}\cup V_{1}\cup V_{2}=T\), such that the following conditions hold:
1. \(N_{G}(u)=V_{0}\cup V_{1}\) and \(N_{G}(v)=V_{0}\cup V_{2}\)
2. \(\{v_{1},v_{2}\}\in E(G)\), for every \(v_{1}\in V_{1}\) and \(v_{2}\in V_{2}\).
Now, we introduce the class of \(D_{5}\)-type graphs.
**Definition 2.4**.: Let \(G\) be a graph with \(n\) vertices such that \(G\neq H*2K_{1}\), for all graphs \(H\). \(G\) is called a _\(D_{5}\)-type graph_, if one of the following conditions holds:
1. \(G=\widetilde{G}*3K_{1}\), for some graph \(\widetilde{G}\).
2. \(G=\widetilde{G}*(K_{1}\sqcup K_{2})\), for some graph \(\widetilde{G}\).
3. \(G\in\mathcal{G}_{T}\), for some \(T\subset V(G)\) with \(|T|=n-2\).
**Theorem 2.5**.: _Let \(G\) be a graph with \(n\geq 5\) non isolated vertices. Then, the following conditions are equivalent._
1. \(\operatorname{proj}\dim(J_{G})=2n-6\)_._
2. \(G\) _is a_ \(D_{5}\)_-type graph._
The following observation will be useful later.
**Remark 2.6**.: Let \(n\geq 5\). By Theorems 2.1 and 2.5, it follows that all graphs \(G\) with \(n\geq 5\) non isolated vertices such that \(\operatorname{proj}\dim(J_{G})=2n-5\) or \(\operatorname{proj}\dim(J_{G})=2n-6\) are connected.
For the proof of the next result, we need a lemma of Ohtani [24, Lemma 4.8], see also [19, Lemma 3.1] and formula (1) in the same article.
We recall that a _clique_ of a graph \(G\) is a subset \(W\) of \(V(G)\) such that \(G_{W}\) is an induced complete subgraph of \(G\). A _maximal clique_ of \(G\) is a clique of \(G\) that is not contained in any other clique of \(G\). A vertex \(v\in V(G)\) is called _simplicial_ if \(N_{G}(v)\) is a clique of \(G\); otherwise it is called _internal_. Let \(v\in V(G)\). We denote \(G_{V(G)\setminus\{v\}}\) by \(G\setminus v\). Whereas, by \(G_{v}\) we denote the graph with vertex set \(V(G)\) and edge set \(E(G)\cup\{\{w_{1},w_{2}\}:w_{1},w_{2}\in N_{G}(v)\}\).
Let \(v\) be an internal vertex of a graph \(G\). Then Ohtani's lemma, see also formula (1) in [19], implies that the following short sequence is exact:
\[0\to\frac{S}{J_{G}}\longrightarrow\frac{S}{(x_{v},y_{v},J_{G\setminus v})} \oplus\frac{S}{J_{G_{v}}}\longrightarrow\frac{S}{(x_{v},y_{v},J_{G_{v} \setminus v})}\to 0. \tag{5}\]
In the next proposition we also use freely the following upper bound for the regularity proved in [5, Theorem 2.1], see also formula (3) in the same article. For a connected graph \(G\) with \(n\) vertices,
\[\operatorname{reg}(J_{G})\leq n+2-|W|,\quad\text{for any clique $W$ of $G$.}\]
**Proposition 2.7**.: _Let \(G\) be a graph with \(n\geq 6\) non isolated vertices such that \(\operatorname{proj}\dim(J_{G})=2n-6\). Then,_
\[3\leq\operatorname{reg}(J_{G})\leq n-2.\]
Proof.: By the previous theorem, \(G\) is a \(D_{5}\)-type graph. Therefore \(G\) is not a complete graph. By Theorem 1.2(i), \(\operatorname{reg}(J_{G})\geq 3\).
Let us prove the upper bound. Firstly, suppose \(G=\widetilde{G}*3K_{1}\) or \(G=\widetilde{G}*(K_{1}\sqcup K_{2})\), for some graph \(\widetilde{G}\) with \(n-3\) vertices. Then, arguing as in Corollary 2.2 we obtain \(\operatorname{reg}(J_{G})\leq n-3\) in this case.
Suppose now that \(G\in\mathcal{G}_{T}\) for some \(T\subset V(G)\) with \(|T|=n-2\). Then \(V(G)=\{u,v\}\cup V_{0}\cup V_{1}\cup V_{2}\) where the union is disjoint, \(V_{1},V_{2}\neq\emptyset\), \(u\) and \(v\) are non adjacent, \(N_{G}(u)=V_{0}\cup V_{1}\), \(N_{G}(v)=V_{0}\cup V_{2}\) and \(\{v_{1},v_{2}\}\in E(G)\) for all \(v_{1}\in V_{1}\) and \(v_{2}\in V_{2}\).
Let us show that \(\operatorname{reg}(J_{G})\leq n-2\). We distinguish three cases.
**Case 1.** Suppose both \(u\) and \(v\) are simplicial vertices of \(G\). Then, \(T=V_{0}\cup V_{1}\cup V_{2}\) is a maximal clique of \(G\). Therefore, \(\operatorname{reg}(J_{G})\leq n+2-|T|=4\leq n-2\) as \(n\geq 6\).
**Case 2.** Suppose that \(u\) is a simplicial vertex, but \(v\) is internal. Then \(\{u\}\cup V_{0}\cup V_{1}\) is a clique of \(G\). Since \(v\) is internal, we can apply Ohtani's lemma. By the short exact sequence (5) we have
\[\operatorname{reg}(J_{G})\leq\max\{\operatorname{reg}(x_{v},y_{v},J_{G\setminus v }),\operatorname{reg}(J_{G_{v}}),\operatorname{reg}(x_{v},y_{v},J_{G_{v} \setminus v})+1\}.\]
Since \(x_{v},y_{v}\) do not divide any generator of \(J_{G\setminus v}\) and \(J_{G_{v}\setminus v}\), we obtain
\[\operatorname{reg}(J_{G})\leq\max\{\operatorname{reg}(J_{G\setminus v}), \operatorname{reg}(J_{G_{v}}),\operatorname{reg}(J_{G_{v}\setminus v})+1\}. \tag{6}\]
Note that \(G\setminus v\) is not a path. Indeed \(N_{G}(u)=V_{0}\cup V_{1}\) is non empty, as \(V_{1}\neq\emptyset\). Hence, if \(N_{G}(u)=\{w\}\) then \(|V_{2}|=|T|-1=n-3\geq 3\). Recall that all vertices of \(V_{1}\) and \(V_{2}\) are adjacent. Then \(w\) would be adjacent to at least three vertices, which cannot happen in a path. Otherwise, if \(|N_{G}(u)|\geq 2\) and \(w_{1},w_{2}\in N_{G}(u)\), then \(G\) has an induced cycle with vertices \(u,w_{1},w_{2}\). But a path does not have induced cycles. It follows that \(G\setminus v\) is not a path. Thus, by Theorem 1.2(ii),
\[\operatorname{reg}(J_{G\setminus v})\ \leq\ |V(G\setminus v)|-1=n-2. \tag{7}\]
Note that in the graphs \(G_{v}\) and \(G_{v}\setminus v\), the set \(T=V_{0}\cup V_{1}\cup V_{2}\) is a clique of size \(n-2\). Therefore, since \(n\geq 6\) we have
\[\operatorname{reg}(J_{G_{v}})\ \leq\ n+2-|T|=4\leq n-2, \tag{8}\] \[\operatorname{reg}(J_{G_{v}\setminus v})+1\ \leq\ (n-1)+2-|T|+1=4\leq n-2. \tag{9}\]
Combining (6) with (7), (8) and (9) we obtain \(\operatorname{reg}(J_{G})\leq n-2\), as desired.
**Case 3.** Suppose that both \(u\) and \(v\) are internal vertices. We first apply Ohtani's lemma to \(v\) and get the inequality,
\[\operatorname{reg}(J_{G})\leq\max\{\operatorname{reg}(J_{G\setminus v}), \operatorname{reg}(J_{G_{v}}),\operatorname{reg}(J_{G_{v}\setminus v})+1\}. \tag{10}\]
As in the previous case, \(G\setminus v\) is not a path. Indeed \(N_{G}(u)=V_{0}\cup V_{1}\) is non empty. If \(N_{G}(u)=\{w\}\) then \(|V_{2}|=|T|-1=n-3\geq 3\). Then \(w\) would be adjacent to at least three vertices, which cannot happen in a path. Otherwise, if \(|N_{G}(u)|\geq 2\) let \(w_{1},w_{2}\in N_{G}(u)\). If \(w_{1},w_{2}\) are adjacent, then \(G\) has an induced cycle with vertices \(u,w_{1},w_{2}\). Otherwise, let \(w_{3}\in V_{2}\), then \(G\) has an induced cycle with vertices \(u,w_{1},w_{2},w_{3}\). But a path does not have induced cycles. Thus, by Theorem 1.2(ii),
\[\operatorname{reg}(J_{G\setminus v})\leq n-2. \tag{11}\]
Since \(u\) and \(v\) are non adjacent in \(G\) and in \(G_{v}\), \(u\) is internal (simplicial) in \(G_{v}\) if and only if \(u\) is internal (simplicial) in \(G_{v}\setminus v\). We distinguish the two cases.
**Subcase 3.1**.: Suppose \(u\) is simplicial in \(G_{v}\) and \(G_{v}\setminus v\). Then \(T=V_{0}\cup V_{1}\cup V_{2}\) is a clique of size \(n-2\) in both graphs. Therefore,
\[\operatorname{reg}(J_{G_{v}})\ \leq\ n+2-|T|=4\leq n-2, \tag{12}\] \[\operatorname{reg}(J_{G_{v}\setminus v})+1\ \leq\ (n-1)+2-|T|+1=4\leq n-2. \tag{13}\]
Combining (10) with (11), (12) and (13), we obtain \(\operatorname{reg}(J_{G})\leq n-2\), as desired.
**Subcase 3.2**.: Suppose \(u\) is internal in \(G_{v}\) and \(G_{v}\setminus v\). We apply Ohtani's lemma to get
\[\operatorname{reg}(J_{G_{v}}) \leq\ \max\{\operatorname{reg}(J_{G_{v}\setminus u}),\operatorname{reg}(J_{(G_{v})_{u}}),\operatorname{reg}(J_{(G_{v})_{u}\setminus u})+1\}, \tag{14}\] \[\operatorname{reg}(J_{G_{v}\setminus v}) \leq\ \max\{\operatorname{reg}(J_{G_{v}\setminus\{u,v\}}),\operatorname{reg}(J_{(G_{v}\setminus v)_{u}}),\operatorname{reg}(J_{(G_{v}\setminus v)_{u}\setminus u})+1\}. \tag{15}\]
The graph \(G_{v}\setminus u\) is easily seen to be not a path. Hence, \(\operatorname{reg}(J_{G_{v}\setminus u})\leq|V(G_{v}\setminus u)|-1=n-2\). Moreover, in \((G_{v})_{u}\) the set \(T\) is a clique of size \(n-2\), and the same calculation as in (12) gives \(\operatorname{reg}(J_{(G_{v})_{u}})\leq n-2\). Finally, in \((G_{v})_{u}\setminus u\), \(T\) is a clique. The same calculation as in (13) gives \(\operatorname{reg}(J_{(G_{v})_{u}\setminus u})+1\leq n-2\). These calculations and equation (14) yield
\[\operatorname{reg}(J_{G_{v}})\leq n-2. \tag{16}\]
We have \(\operatorname{reg}(J_{G_{v}\setminus\{u,v\}})\leq|V(G_{v}\setminus\{u,v\})|=n-2\). Note that in \((G_{v}\setminus v)_{u}\), \(T\) is a clique. As before, we have \(\operatorname{reg}(J_{(G_{v}\setminus v)_{u}})\leq n-2\). Moreover, \((G_{v}\setminus v)_{u}\setminus u\) is a complete graph. Consequently, \(\operatorname{reg}(J_{(G_{v}\setminus v)_{u}\setminus u})+1=3\leq n-2\). Thus,
\[\operatorname{reg}(J_{G_{v}\setminus v})\leq n-2. \tag{17}\]
Finally, combining (10) with (11), (16) and (17) we obtain that \(\operatorname{reg}(J_{G})\leq n-2\), as desired. The proof is complete.
**Corollary 2.8**.: _Let \(n\geq 5\) and \(3\leq r\leq n-2\) be positive integers. Then, there exists a graph \(G\) with \(n\) non isolated vertices such that_
\[\operatorname{proj}\dim(J_{G})=2n-6\quad\text{and}\quad\operatorname{reg}(J_{ G})=r.\]
Proof.: Let \(3\leq r\leq n-3\), and set \(\widetilde{G}=P_{r}\sqcup(n-r-3)K_{1}\) and \(G=\widetilde{G}*3K_{1}\). Then, by Theorem 2.5, \(\operatorname{proj}\dim(J_{G})=2n-6\), and by formula (4), \(\operatorname{reg}(J_{G})=\operatorname{reg}(J_{\widetilde{G}})= \operatorname{reg}(J_{P_{r}})=r\).
Let now \(r=n-2\). Let \(G\) be the graph with vertex set \(V(G)=\{u,v,1,\ldots,n-2\}\) and edge set
\[E(G)=\big{\{}\{i,i+1\},\{i,u\}:i=1,\ldots,n-3\big{\}}\cup\big{\{}\{i,v\}:i=1, \ldots,n-4\big{\}}\cup\{\{n-2,v\}\}.\]
Then, setting \(T=\{1,\ldots,n-2\}\), we have that \(G\in\mathcal{G}_{T}\). To see why this is true, using the same notation as in Definition 2.3, just take \(V_{0}=\{1,\ldots,n-4\}\), \(V_{1}=\{n-3\}\) and \(V_{2}=\{n-2\}\). Therefore, by Theorem 2.5, \(\operatorname{proj}\dim(J_{G})=2n-6\). Note that the induced subgraph \(P=G_{\{1,\ldots,n-2\}}\) is a path with \(n-2\) vertices. Hence, \(\operatorname{reg}(J_{G})\geq\operatorname{reg}(J_{P})=n-2\). On the other hand, by Proposition 2.7, \(\operatorname{reg}(J_{G})\leq n-2\). Consequently, \(\operatorname{reg}(J_{G})=n-2\) and \(G\) is the graph we are looking for.
### Graphs with regularity 3
We quote the following result [29, Theorem 3.2].
**Theorem 2.9**.: _Let \(G\) be a non-complete graph with \(n\) non isolated vertices. Then \(\operatorname{reg}(J_{G})=3\) if and only if one of the following conditions holds:_
1. \(G=K_{r}\sqcup K_{s}\) _with_ \(r,s\geq 2\) _and_ \(r+s=n\)_, or_
2. \(G=G_{1}*G_{2}\)_, where_ \(G_{i}\) _is a graph with_ \(n_{i}<n\) _vertices such that_ \(n_{1}+n_{2}=n\) _and_ \(\operatorname{reg}(J_{G_{i}})\leq 3\)_, for_ \(i=1,2\)_._
**Remark 2.10**.: If \(G\) is a graph with \(\operatorname{reg}(J_{G})=3\) and \(\operatorname{proj}\dim(J_{G})\geq n-2\), then \(G\) must be connected. Suppose by contradiction that there exists a disconnected graph \(G=G_{1}\sqcup\cdots\sqcup G_{c}\) with regularity \(3\) and projective dimension \(\operatorname{proj}\dim(J_{G})\geq n-2\). Arguing as in Proposition 1.4 we obtain \(c\leq r-1=2\). Thus \(c=2\) and by formula (2) we must have \(\operatorname{reg}(J_{G_{1}})+\operatorname{reg}(J_{G_{2}})-1=3\). This formula holds if and only if \(\operatorname{reg}(J_{G_{1}})=\operatorname{reg}(J_{G_{2}})=2\). Thus \(G=K_{r}\sqcup K_{s}\) as in Theorem 2.9(i). But then \(\operatorname{proj}\dim(J_{G})=n-3\), a contradiction.
**Corollary 2.11**.: _Let \(n\geq 4\) be an integer. Then, for all \(n-3\leq p\leq 2n-5\), there exists a graph \(G\) with \(n\) non isolated vertices such that_
\[\operatorname{proj}\dim(J_{G})=p\quad\text{and}\quad\operatorname{reg}(J_{G} )=3.\]
Proof.: We proceed by induction on \(n\geq 4\). Suppose \(n=4\). Then, the binomial edge ideals of the graphs \(2K_{2}\), \((P_{2}\sqcup K_{1})*K_{1}\), \(P_{2}*2K_{1}\) have regularity \(3\) and projective dimension, respectively, \(1\), \(2\) and \(3\).
Suppose now \(n>4\). If \(p=2n-5\) or \(p=2n-6\), then a graph \(G\) with \(n\) non isolated vertices such that \(\operatorname{proj}\dim(J_{G})=p\) and \(\operatorname{reg}(J_{G})=3\) exists by Corollaries 2.2, 2.8. Therefore, we can assume \(n-3\leq p\leq 2n-7\). If \(p=n-3\), then the binomial edge ideal of \(K_{r}\sqcup K_{s}\) with \(r,s\geq 2\) and \(r+s=n\) has projective dimension \(n-3\) and regularity \(3\), by using Theorem 2.9(i). Now, let \(n-2\leq p\leq 2n-7\). Then \((n-1)-3\leq p-2\leq 2(n-1)-7<2(n-1)-5\). Hence, by induction there exists a graph \(\widetilde{G}\) with \(n-1\) non isolated vertices such that \(\operatorname{proj}\dim(J_{\widetilde{G}})=p-2\) and \(\operatorname{reg}(J_{\widetilde{G}})=3\). Let \(G=\widetilde{G}*K_{1}\). If \(p-2=(n-1)-3\), then \(\widetilde{G}\) is disconnected. By Lemma 3.2 in the next section,
\[\operatorname{proj}\dim(J_{G}) =\max\{\operatorname{proj}\dim(J_{\widetilde{G}})+2,n-3\}\] \[=\max\{p,n-3\}=\max\{n-2,n-3\}=n-2.\]
Otherwise, if \(p-2\geq(n-1)-2\), then \(\widetilde{G}\) is connected by Remark 2.10. Then, by Lemma 3.2, \(\operatorname{proj}\dim(J_{G})=\operatorname{proj}\dim(J_{\widetilde{G}})+2=p\). The induction is complete and the assertion follows.
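The induction in this proof translates into a simple recursion: peel off cone vertices until one of the explicit constructions applies. A sketch of that recursion (illustrative; graphs are returned as descriptive strings, and the guards encode the branch conditions used in the proof):

```python
def reg3_graph(n, p):
    """A graph G with n non-isolated vertices, proj dim(J_G) = p and
    reg(J_G) = 3, following Corollary 2.11; assumes n >= 4 and
    n - 3 <= p <= 2n - 5."""
    if n == 4:
        return {1: "2K_2", 2: "(P_2 ⊔ K_1) * K_1", 3: "P_2 * 2K_1"}[p]
    if p == 2 * n - 5:
        return f"(P_3 ⊔ {n - 5}K_1) * 2K_1"        # Corollary 2.2, r = 3
    if p == 2 * n - 6 and n >= 6:
        return f"(P_3 ⊔ {n - 6}K_1) * 3K_1"        # Corollary 2.8, r = 3
    if p == n - 3:
        return f"K_2 ⊔ K_{n - 2}"                   # Theorem 2.9(i)
    return f"({reg3_graph(n - 1, p - 2)}) * K_1"    # cone step, Lemma 3.2

print(reg3_graph(7, 6))   # ((K_2 ⊔ K_3) * K_1) * K_1
```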
### Graphs with regularity \(n-2\)
In the next result, we consider graphs \(G\) with \(n\) non isolated vertices having regularity \(r=\operatorname{reg}(J_{G})=n-2\). If \(n=5\), then \(r=3\) and we can apply Corollary 2.11. Therefore, we assume \(n\geq 6\). In this case \(r=n-2\geq\lceil\frac{n}{2}\rceil+1\). Thus, by Proposition 1.5, the projective dimension for such graphs varies as follows: \(r-2=n-4\leq\operatorname{proj}\dim(J_{G})\leq 2n-5\).
For the next result, we need the concept of _decomposable graph_ introduced by Rauf and Rinaldo in [25]. A graph \(G\) is called _decomposable_ if there exist two graphs \(G_{1}\) and \(G_{2}\) such that \(V(G)=V(G_{1})\cup V(G_{2})\), \(V(G_{1})\cap V(G_{2})=\{v\}\) where \(v\) is a simplicial vertex for both \(G_{1}\) and \(G_{2}\), and such that \(E(G)=E(G_{1})\cup E(G_{2})\). In such a case, we write \(G=G_{1}\cup_{v}G_{2}\) and say that \(G\)_is obtained by gluing \(G_{1}\) and \(G_{2}\) along \(v\)_. By [11, Proposition 1.3] it follows that
\[\operatorname{proj}\dim(J_{G}) =\ \operatorname{proj}\dim(J_{G_{1}})+\operatorname{proj}\dim(J_{G_ {2}})+1,\] \[\operatorname{reg}(J_{G}) =\ \operatorname{reg}(J_{G_{1}})+\operatorname{reg}(J_{G_{2}})-1.\]
**Lemma 2.12**.: _Let \(G\) be the graph with vertex set \(V(G)=[n]\) and edge set_
\[E(G)=\big{\{}\{i,j\}:1\leq i<j\leq n-1\big{\}}\cup\big{\{}\{m,n\},\{m+1,n\},\ldots, \{n-1,n\}\big{\}},\]
_for some \(m\in[n-1]\). Then \(\operatorname{projdim}(J_{G})\leq 2n-m-3\) and \(\operatorname{reg}(J_{G})\leq 3\)._
Proof.: For the regularity, note that in \(G\) the set \(W=[n-1]\) is a clique. Therefore, \(\operatorname{reg}(J_{G})\leq n+2-|W|=3\). For the projective dimension, we proceed by descending induction on \(m\leq n-1\), starting with \(m=n-1\). In such a case, \(G\) is decomposable: \(G=G_{[n-1]}\cup_{n-1}G_{\{n-1,n\}}\). Note that \(G_{[n-1]}\) is a complete graph and \(J_{G_{\{n-1,n\}}}\) is a principal ideal. Thus,
\[\operatorname{projdim}(J_{G}) =\operatorname{projdim}(J_{G_{[n-1]}})+\operatorname{projdim}(J_ {G_{\{n-1,n\}}})+1\] \[=(n-1)-2+0+1=n-2.\]
Since \(m=n-1\), then \(2n-m-3=n-2\). Hence the base case is verified.
Suppose now \(m<n-1\). Then, \(m\) is an internal vertex of \(G\), because it belongs to two different maximal cliques, namely \([n-1]\) and \(\{m,m+1,\ldots,n\}\). Applying Ohtani's lemma to \(m\), by the short exact sequence (5) we obtain
\[\operatorname{projdim}(J_{G})\leq\max\{\operatorname{projdim}(x_{m},y_{m},J_{G\setminus m}),\operatorname{projdim}(J_{G_{m}}),\operatorname{projdim}(x_{m},y_{m},J_{G_{m}\setminus m})-1\}.\]
By induction, \(\operatorname{projdim}(J_{G\setminus m})\leq 2(n-1)-m-3=2n-m-5\). Therefore,
\[\operatorname{projdim}(x_{m},y_{m},J_{G\setminus m})=\operatorname{projdim}(J _{G\setminus m})+2\leq 2n-m-3.\]
For the other two inequalities, note that \(G_{m}\) and \(G_{m}\setminus m\) are complete graphs with \(n\) and \(n-1\) vertices, respectively. Hence,
\[\operatorname{projdim}(J_{G_{m}}) =n-2,\] \[\operatorname{projdim}(x_{m},y_{m},J_{G_{m}\setminus m})-1 =\operatorname{projdim}(J_{G_{m}\setminus m})+2-1=n-3+1=n-2.\]
But \(n-2\leq 2n-m-3\) because \(m\leq n-1\), by construction. Finally, all the inequalities obtained show that \(\operatorname{projdim}(J_{G})\leq 2n-m-3\), as desired.
We need the following technique. Let \(e=\{u,v\}\in E(G)\). By \(G\setminus e\) we denote the graph with \(V(G\setminus e)=V(G)\) and \(E(G\setminus e)=E(G)\setminus\{e\}\). Whereas, by \(G_{e}\) we denote the graph with \(V(G_{e})=V(G)\) and \(E(G_{e})=E(G)\cup\{\{w_{1},w_{2}\}:w_{1},w_{2}\in N_{G}(u)\text{ or }w_{1},w_{2}\in N_{G}(v)\}\). Set \(f_{e}=x_{u}y_{v}-x_{v}y_{u}\). Then, we have a short exact sequence
\[0\to\frac{S}{(J_{G\setminus e}:f_{e})}(-2)\longrightarrow\frac{S}{J_{G \setminus e}}\longrightarrow\frac{S}{J_{G}}\to 0. \tag{18}\]
By [22, Theorem 3.7], we have
\[J_{G\setminus e}:f_{e}=J_{(G\setminus e)_{e}}+I_{G}, \tag{19}\]
where
\[I_{G}=(g_{P,t}:P:u,u_{1},\ldots,u_{s},v\text{ is a path between }u\text{ and }v\text{ in }G\text{ and }0\leq t\leq s),\]
with \(g_{P,0}=x_{u_{1}}\cdots x_{u_{s}}\) and \(g_{P,t}=y_{u_{1}}\cdots y_{u_{t}}x_{u_{t+1}}\cdots x_{u_{s}}\) for every \(1\leq t\leq s\).
**Proposition 2.13**.: _Let \(n\geq 6\) be a positive integer. Then, for all \(n-4\leq p\leq 2n-5\), there exists a graph \(G\) with \(n\) non isolated vertices, such that_
\[\operatorname{proj}\dim(J_{G})=p\quad\text{and}\quad\operatorname{reg}(J_{G})=n -2.\]
Proof.: If \(p=n-4\), then \(G=P_{2}\sqcup P_{2}\sqcup P_{n-4}\) has \(\operatorname{proj}\dim(J_{G})=n-4\) and \(\operatorname{reg}(J_{G})=n-2\). If \(p=n-3\), then \(G=K_{3}\sqcup P_{n-3}\) has \(\operatorname{proj}\dim(J_{G})=n-3\) and \(\operatorname{reg}(J_{G})=n-2\). If \(p=n-2\), then consider the graph depicted below
It is clear that \(G\) is decomposable as \((G_{\{1,2,n-1\}}\cup_{2}G_{\{2,3,n\}})\cup_{3}G_{\{3,4,\ldots,n-2\}}\). We set \(G_{1}=G_{\{1,2,n-1\}}\), \(G_{2}=G_{\{2,3,n\}}\) and \(G_{3}=G_{\{3,4,\ldots,n-2\}}\). \(G_{1}\) and \(G_{2}\) are complete graphs with three vertices each, while \(G_{3}\) is a path with \(n-4\) vertices. Therefore,
\[\operatorname{proj}\dim(J_{G})=\sum_{i=1}^{3}\operatorname{proj}\dim(J_{G_{i} })+2=1+1+n-6+2=n-2,\]
\[\operatorname{reg}(J_{G})=\sum_{i=1}^{3}\operatorname{reg}(J_{G_{i}})-2=2+2+n-4 -2=n-2.\]
It remains to consider the case \(n-1\leq p\leq 2n-5\). For this purpose, let \(m\in\{1,\ldots,n-3\}\) and consider the graph \(G\) depicted below.
That is, \(V(G)=[n]\) and
\[E(G) =\big{\{}\{i,i+1\}:i=1,\ldots,n-3\big{\}}\] \[\cup\big{\{}\{i,n-1\}:i=m,\ldots,n-2\big{\}}\] \[\cup\big{\{}\{i,n\}:i=1,\ldots,n-2\big{\}}.\]
We claim that \(\operatorname{proj}\dim(J_{G})=2n-m-4\) and \(\operatorname{reg}(J_{G})=n-2\). Since
\[\{2n-m-4:m=1,\ldots,n-3\}=\{n-1,n,\ldots,2n-5\},\]
the claim will conclude the proof.
Firstly, we note that if \(m=1\), then \(G=G_{[n-2]}*G_{\{n-1,n\}}\). Since \(G_{[n-2]}\) is a path with \(n-2\) vertices and \(G_{\{n-1,n\}}\) consists of two isolated vertices, by Theorem 2.1 and formula (4) we obtain
\[\operatorname{proj}\dim(J_{G})=2n-5,\quad\operatorname{reg}(J_{G})=n-2,\]
in this case.
If \(m=2\), then \(G\) is a \(D_{5}\)-type graph. Indeed, \(G\in\mathcal{G}_{[n-2]}\). To see why this is true, using the notation of Definition 2.3, it is enough to set \(u=n-1\), \(v=1\), \(V_{0}=\{2\}\), \(V_{1}=\{3,\ldots,n-2\}\) and \(V_{2}=\{n\}\). Hence \(\operatorname{proj}\dim(J_{G})=2n-6\). Furthermore, \(\operatorname{reg}(J_{G})\geq\operatorname{reg}(J_{G_{[n-2]}})=n-2\), but also \(\operatorname{reg}(J_{G})\leq n-2\) by Proposition 2.7.
Thus our claim holds for \(m=1,2\). Now, we proceed by induction on \(n\geq 5\). If \(n=5\), then \(m\in\{1,2\}\) and there is nothing to prove. Suppose \(n\geq 6\) and let \(m\in\{3,\ldots,n-3\}\). Set \(e=\{1,n\}\) and let \(f_{e}=x_{1}y_{n}-x_{n}y_{1}\). Then, by the short exact sequence (18), see also [18, Proposition 2.1(a)], we have
\[\operatorname{reg}(J_{G})\leq\max\{\operatorname{reg}(J_{G\setminus e}), \operatorname{reg}(J_{G\setminus e}:f_{e})+1\}. \tag{20}\]
Note that \(G\setminus e\) is decomposable as \(G\setminus e=G_{\{1,2\}}\cup_{2}G_{\{2,\ldots,n\}}\). Using the induction on \(G_{\{2,\ldots,n\}}\) we have
\[\operatorname{proj}\dim(J_{G\setminus e}) =\operatorname{proj}\dim(J_{G_{\{1,2\}}})+\operatorname{proj}\dim(J_{G_{\{2,\ldots,n\}}})+1\] \[=2(n-1)-(m-1)-4+1\] \[=2n-m-4, \tag{21}\] \[\operatorname{reg}(J_{G\setminus e}) =\operatorname{reg}(J_{G_{\{1,2\}}})+\operatorname{reg}(J_{G_{\{2,\ldots,n\}}})-1=2+n-3-1=n-2. \tag{22}\]
By equation (19), \(J_{G\setminus e}:f_{e}=J_{(G\setminus e)_{e}}+I_{G}\). Note that any path in \(G\) from \(1\) to \(n\) different from the path \(1,n\) must pass through the vertex \(2\). Hence \(I_{G}=(x_{2},y_{2})\). Consequently, \(J_{G\setminus e}:f_{e}=(J_{\widetilde{G}},x_{2},y_{2})\), where \(\widetilde{G}\) is the graph \((G\setminus e)_{e}\setminus\{2\}\) with \(V(\widetilde{G})=\{3,\ldots,n\}\) and
\[E(\widetilde{G}) =\big{\{}\{i,j\}:3\leq i<j\leq n-2\big{\}}\] \[\cup\big{\{}\{i,n\}:i=3,\ldots,n-2\big{\}}\] \[\cup\big{\{}\{j,n-1\}:j=m,\ldots,n-2\big{\}}.\]
Therefore, using Lemma 2.12 applied to \(\widetilde{G}\) we get
\[\operatorname{proj}\dim(J_{G\setminus e}:f_{e}) =\operatorname{proj}\dim(J_{\widetilde{G}},x_{2},y_{2})=\operatorname{proj}\dim(J_{\widetilde{G}})+2\] \[\leq 2(n-2)-m-3+2\] \[=2n-m-5, \tag{23}\] \[\operatorname{reg}(J_{G\setminus e}:f_{e})+1 =\operatorname{reg}(J_{\widetilde{G}})+1\leq 4\leq n-2, \tag{24}\]
as \(n\geq 6\). By (21) and (23) we have \(\operatorname{proj}\dim(J_{G\setminus e}:f_{e})<\operatorname{proj}\dim(J_{G \setminus e})\). Thus, using the short exact sequence (18) we obtain
\[\operatorname{proj}\dim(J_{G})=\operatorname{proj}\dim(J_{G\setminus e})=2n- m-4.\]
Whereas, combining (20) with (22) and (24) we obtain \(\operatorname{reg}(J_{G})\leq n-2\). Since also \(\operatorname{reg}(J_{G})\geq\operatorname{reg}(J_{G_{[n-2]}})=n-2\), as \(G_{[n-2]}\) is a path with \(n-2\) vertices, we obtain the equality. The inductive proof is complete.
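The family of graphs used in the last part of the proof can be written down explicitly. A hypothetical Python helper producing its edge set (the invariants stated in the claim are those established above, not recomputed here):

```python
def prop_graph(n, m):
    """Edge set of the graph G in the proof above (1 <= m <= n - 3):
    a path on 1..n-2, vertex n joined to 1..n-2, and vertex n-1 joined
    to m..n-2. Expected: proj dim(J_G) = 2n - m - 4, reg(J_G) = n - 2."""
    E = [(i, i + 1) for i in range(1, n - 2)]      # path edges {i, i+1}
    E += [(i, n - 1) for i in range(m, n - 1)]     # edges {i, n-1}
    E += [(i, n) for i in range(1, n - 1)]         # edges {i, n}
    return E

print(prop_graph(7, 3))
```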
**Remark 2.14**.: Note that the graphs constructed in the previous proposition, for \(p\geq n-2\), are all connected.
### The size of Betti tables of binomial edge ideals of small graphs
By putting together all results in this section we can determine the set
\[\operatorname{pdreg}(n)\ =\ \bigl{\{}(\operatorname{proj}\dim(J_{G}),\operatorname{ reg}(J_{G})):G\in\operatorname{Graphs}(n)\bigr{\}},\]
for small values of \(n\), where \(\operatorname{Graphs}(n)\) denotes the class of all finite simple graphs with \(n\) non isolated vertices.
**Example 2.15**.: We determine the set \(\operatorname{pdreg}(n)\) for \(n=3,4,5\) and \(6\).
**(\(\mathbf{n=3}\))**: We have \(\operatorname{pdreg}(3)=\{(1,2),(1,3)\}\). In the following list we display all the graphs \(G\) with three non isolated vertices and below each of them the pair \((\operatorname{proj}\dim(J_{G}),\operatorname{reg}(J_{G}))\).
**(\(\mathbf{n=4}\))**: We have \(\operatorname{pdreg}(4)=\{(2,2),(1,3),(2,3),(3,3),(2,4)\}\). The following is a list of graphs with four non isolated vertices that gives such pairs.
Note that the second, third and fourth graph are, respectively, \(K_{2}\sqcup K_{2}\), \((P_{2}\sqcup K_{1})*K_{1}\), \(P_{2}*2K_{1}\). These graphs have regularity \(3\), by Theorem 2.9.
**(\(\mathbf{n=5}\))**: It is \(\operatorname{pdreg}(5)=\{(3,2),(2,3),(3,3),(4,3),(5,3),(2,4),(3,4),(4,4),(3,5)\}\). Furthermore, a list of graphs giving such pairs is given below.
Note that each graph \(G\) displayed, with \(\operatorname{proj}\dim(J_{G})\geq n-2=3\), is connected. The graphs with regularity \(3\) are constructed as in Corollary 2.11. They are, in the given order: \(K_{2}\sqcup K_{3}\), \((K_{2}\sqcup K_{2})*K_{1}\), \(((P_{2}\sqcup K_{1})*K_{1})*K_{1}\), and \((P_{2}*2K_{1})*K_{1}\). Moreover, since \(n=5\), by Corollary 2.2, if \(\operatorname{reg}(J_{G})=n-1=4\) then \(\operatorname{proj}\dim(J_{G})\leq 2n-6=4\).
**(\(\mathbf{n=6}\))**: In the following, we list all pairs of the set \(\operatorname{pdreg}(6)\), and for each pair \((p,r)\) in the set a graph \(G\) with \((\operatorname{proj}\dim(J_{G}),\operatorname{reg}(J_{G}))=(p,r)\).
## 3. The size of the Betti table of binomial edge ideals
Let \(n\geq 1\) be an integer. Denote by \(\operatorname{Graphs}(n)\) the class of all finite simple graphs with \(n\) non isolated vertices.
We define
\[\operatorname{pdreg}(n)\ =\ \big{\{}(\operatorname{proj}\dim(J_{G}),\operatorname{ reg}(J_{G})):G\in\operatorname{Graphs}(n)\big{\}},\]
which is the set of the sizes of the Betti tables of \(J_{G}\subset S=K[x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}]\), as \(G\) ranges over all graphs with \(n\) non isolated vertices. Note that we are allowing \(K\) to be any field.
Finally, we can state our main result in the article.
**Theorem 3.1**.: _For all \(n\geq 3\),_
\[\begin{split}\operatorname{pdreg}(n)\ =\ \big{\{}(n-2,2),(n-2,n) \big{\}}\cup\bigcup_{r=3}^{\lfloor\frac{n}{2}\rfloor+1}\big{(}\bigcup_{p=n-r} ^{2n-5}\{(p,r)\}\big{)}\ \cup\\ \cup\bigcup_{r=\lceil\frac{n}{2}\rceil+1}^{n-2}\big{(}\bigcup_{p =r-2}^{2n-5}\{(p,r)\}\big{)}\cup A_{n},\end{split} \tag{25}\]
_where \(A_{n}=\{(p,r)\in\operatorname{pdreg}(n):r=n-1\}\)._
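For concreteness, the right-hand side of (25), minus the undetermined row \(A_{n}\), can be enumerated directly. A short Python sketch (illustrative; it merely lists the pairs predicted by the theorem and does not compute any resolution):

```python
from math import ceil, floor

def pdreg_predicted(n):
    """All pairs (p, r) on the right-hand side of (25), except the set A_n."""
    pairs = {(n - 2, 2), (n - 2, n)}
    for r in range(3, floor(n / 2) + 2):        # n - r <= p <= 2n - 5
        pairs |= {(p, r) for p in range(n - r, 2 * n - 4)}
    for r in range(ceil(n / 2) + 1, n - 1):     # r - 2 <= p <= 2n - 5
        pairs |= {(p, r) for p in range(r - 2, 2 * n - 4)}
    return pairs

# For n = 4 this reproduces {(2,2), (1,3), (2,3), (3,3), (2,4)}, the set
# pdreg(4) listed in Example 2.15.
print(sorted(pdreg_predicted(4), key=lambda t: (t[1], t[0])))
```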
For the proof of the theorem, we need the following lemma which is an immediate consequence of [19, Theorems 3.4 and 3.9].
**Lemma 3.2**.: _Let \(\widetilde{G}\) be a graph with \(n-1\) vertices, and set \(G=\widetilde{G}*K_{1}\). Then,_
\[\operatorname{proj}\dim(J_{G})=\begin{cases}\operatorname{proj}\dim(J_{ \widetilde{G}})+2,&\text{if $\widetilde{G}$ is connected},\\ \max\{\operatorname{proj}\dim(J_{\widetilde{G}})+2,n-3\},&\text{if $\widetilde{G}$ is disconnected}.\end{cases}\]
Proof.: If \(\widetilde{G}\) is connected, by [19, Theorem 3.4], \(\operatorname{depth}_{S}(S/J_{G})=\operatorname{depth}_{\widetilde{S}}( \widetilde{S}/J_{\widetilde{G}})\), where \(\widetilde{S}=K[x_{v},y_{v}:v\in V(\widetilde{G})]\). Using the Auslander-Buchsbaum formula, we obtain \(2n-\operatorname{proj}\dim(J_{G})=2(n-1)-\operatorname{proj}\dim(J_{ \widetilde{G}})\), and consequently,
\[\operatorname{proj}\dim(J_{G})=\operatorname{proj}\dim(J_{\widetilde{G}})+2.\]
Suppose now that \(\widetilde{G}\) is disconnected. By [19, Theorem 3.9],
\[\operatorname{depth}_{S}(S/J_{G})=\min\{\operatorname{depth}_{\widetilde{S}} (\widetilde{S}/J_{\widetilde{G}}),n+2\}.\]
Using the Auslander-Buchsbaum formula we obtain
\[2n-\operatorname{proj}\dim(J_{G})-1=\min\{2(n-1)-\operatorname{proj}\dim(J_{ \widetilde{G}})-1,n+2\}.\]
Therefore,
\[\operatorname{proj}\dim(J_{G}) =\ 2n-1-\min\{2(n-1)-1-\operatorname{proj}\dim(J_{\widetilde{G}}),n+2\}\] \[=\ \max\{\operatorname{proj}\dim(J_{\widetilde{G}})+2,n-3\},\]
as desired.
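Together with formula (4), Lemma 3.2 gives a one-line update rule for the pair of a cone, in the same spirit as the disjoint-union bookkeeping after Remark 1.3. A sketch (illustrative; it assumes \(\widetilde{G}\) is not complete, so that formula (4) applies):

```python
def cone_pair(pd, reg, n, connected):
    """(proj dim, reg) of J_{G~ * K_1} from the pair of J_{G~}, where G~
    has n - 1 vertices and is not complete: Lemma 3.2 plus formula (4)."""
    new_pd = pd + 2 if connected else max(pd + 2, n - 3)
    return new_pd, max(reg, 3)

# Cone over the disconnected graph K_2 ⊔ K_3 (pair (2, 3), five vertices):
print(cone_pair(2, 3, 6, connected=False))   # -> (4, 3)
```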
The following picture describes the set \(\operatorname{pdreg}(n)\) for \(n\geq 3\). In the \((p,r)\)th position of the diagram we place the pair \((p,r)\) if there exists \(G\in\operatorname{Graphs}(n)\) such that \(\operatorname{proj}\dim(J_{G})=p\) and \(\operatorname{reg}(J_{G})=r\).
Note that the lattice points in the \((n-1)\)th row are empty, because we do not specify the set \(A_{n}\) in Theorem 3.1.
Now, we are ready to prove our main result in the article.
Proof of Theorem 3.1.: By Propositions 1.4, 1.5 the set \(\operatorname{pdreg}(n)\) is contained in the second set written in (25). Therefore, we only need to prove the other inclusion.
We proceed by induction on \(n\geq 3\). By induction we also prove the following
**Claim**\((*)\) For all pairs \((p,r)\in\operatorname{pdreg}(n)\setminus A_{n}\) with \(p\geq n-2\), there exists a connected graph \(G\in\operatorname{Graphs}(n)\) such that \((\operatorname{projdim}(J_{G}),\operatorname{reg}(J_{G}))=(p,r)\).
Now, we start with our inductive proof. If \(n=3,4,5,6\), by Example 2.15, the theorem and the **Claim**\((*)\) hold true.
Suppose now \(n\geq 7\). Let \((p,r)\in\operatorname{pdreg}(n)\). If \(r=2\) or \(r=n\), then \(p=n-2\) and the pairs \((n-2,2)\), \((n-2,n)\) belongs to \(\operatorname{pdreg}(n)\), by virtue of Theorem 1.2(i) and (ii). Hence, we can consider \(3\leq r\leq n-1\). If \(p=2n-5,2n-6\), then \(3\leq r\leq n-2\) and \((p,r)\in\operatorname{pdreg}(n)\) for all such values of \(p\) and \(r\), by using Corollaries 2.2 and 2.8 and Proposition 2.7. If \(r=3\) or \(r=n-2\), all possible pairs \((p,3)\) and \((p,n-2)\) belong to \(\operatorname{pdreg}(n)\), by Corollary 2.11 and Proposition 2.13.
It remains to construct \(G\in\operatorname{Graphs}(n)\) such that \((\operatorname{projdim}(J_{G}),\operatorname{reg}(J_{G}))=(p,r)\) for all \(4\leq r\leq n-3\) and all admissible values that \(p\leq 2n-7\) can assume.
Suppose \(n-2\leq p\leq 2n-7\) and \(4\leq r\leq n-3\). Then \((p-2,r)\in\operatorname{pdreg}(n-1)\). To prove this, note that \((n-1)-3\leq p-2\leq 2(n-1)-7\) and \(4\leq r\leq(n-1)-2\). Therefore, by induction there exists \(\widetilde{G}\in\operatorname{Graphs}(n-1)\) such that
\[(\operatorname{projdim}(J_{\widetilde{G}}),\operatorname{reg}(J_{\widetilde{G }}))=(p-2,r).\]
Set \(G=\widetilde{G}*K_{1}\). If \(p=n-2\), then \(p-2=(n-1)-3<(n-1)-2\) and so \(\widetilde{G}\) is disconnected by Theorem 1.1. By Lemma 3.2,
\[\operatorname{projdim}(J_{G})=\max\{\operatorname{projdim}(J_{\widetilde{G}}) +2,n-3\}=\max\{n-2,n-3\}=n-2=p\]
and by formula (4), \(\operatorname{reg}(J_{G})=\operatorname{reg}(J_{\widetilde{G}})=r\). Hence, \(G\in\operatorname{Graphs}(n)\) and
\[(\operatorname{projdim}(J_{G}),\operatorname{reg}(J_{G}))=(n-2,r),\]
and so \((n-2,r)\in\operatorname{pdreg}(n)\). If \(p>n-2\), then \(p-2\geq(n-1)-2\) and by induction and **Claim**\((*)\) we may assume that \(\widetilde{G}\) is connected. By Lemma 3.2, \(\operatorname{projdim}(J_{G})=\operatorname{projdim}(J_{\widetilde{G}})+2=p\) and by formula (4), \(\operatorname{reg}(J_{G})=\operatorname{reg}(J_{\widetilde{G}})=r\). Once again, \((p,r)\in\operatorname{pdreg}(n)\).
Now the inductive proof of the **Claim**\((*)\) is completed. Indeed, by Remarks 2.6, 2.10, 2.14 the claim holds for all pairs \((p,r)\in\operatorname{pdreg}(n)\setminus A_{n}\), \(p\geq n-2\), with \(p=2n-5\) or \(p=2n-6\) or \(r=3\) or \(r=n-2\). For all other pairs \((p,r)\in\operatorname{pdreg}(n)\setminus A_{n}\) with \(p\geq n-2\), the claim also holds because the various graphs \(\widetilde{G}*K_{1}\) constructed are connected.
Suppose now \(4\leq r\leq\lfloor\frac{n}{2}\rfloor+1\) and \(n-r\leq p\leq n-3\). Then \((p-1,r-1)\) belongs to \(\operatorname{pdreg}(n-2)\). To see why this is true, note that \(3\leq r-1\leq\lfloor\frac{n-2}{2}\rfloor+1\) and also \((n-2)-(r-1)=n-1-r\leq p-1\leq(n-2)-2\). Therefore, by induction there exists \(\widetilde{G}\in\operatorname{Graphs}(n-2)\) such that
\[(\operatorname{projdim}(J_{\widetilde{G}}),\operatorname{reg}(J_{\widetilde{G }}))=(p-1,r-1).\]
Set \(G=\widetilde{G}\sqcup K_{2}\). Then \(G\in\operatorname{Graphs}(n)\) and
\[(\operatorname{proj}\dim(J_{G}),\operatorname{reg}(J_{G})) =(\operatorname{proj}\dim(J_{\widetilde{G}})+\operatorname{proj }\dim(J_{K_{2}})+1,\operatorname{reg}(J_{\widetilde{G}})+\operatorname{reg}(J_ {K_{2}})-1)\] \[=(p-1+0+1,r-1+2-1)=(p,r).\]
Consequently, \((p,r)\in\operatorname{pdreg}(n)\).
Similarly, for the last case, suppose \(\lceil\frac{n}{2}\rceil+1\leq r\leq n-3\) and \(r-2\leq p\leq n-3\). Then \((p-1,r-1)\in\operatorname{pdreg}(n-2)\). Indeed, \(\lceil\frac{n-2}{2}\rceil+1\leq r-1\leq(n-2)-2\) and so \((r-1)-2\leq p-1\leq(n-2)-2\). Hence, by induction there exists \(\widetilde{G}\in\operatorname{Graphs}(n-2)\) such that
\[(\operatorname{proj}\dim(J_{\widetilde{G}}),\operatorname{reg}(J_{\widetilde {G}}))=(p-1,r-1).\]
Let \(G=\widetilde{G}\sqcup K_{2}\). Arguing as before, we obtain \((p,r)\in\operatorname{pdreg}(n)\), as desired.
The inductive proof is complete, and the theorem is proved.
Denote by \(\operatorname{CGraphs}(n)\) the set of all connected graphs with \(n\) non isolated vertices. We define the set
\[\operatorname{pdreg}_{\operatorname{C}}(n)\ =\ \big{\{}(\operatorname{proj}\dim(J_{G}),\operatorname{reg}(J_{G})):G\in\operatorname{CGraphs}(n)\big{\}}.\]
**Corollary 3.3**.: _For all \(n\geq 3\),_
\[\operatorname{pdreg}_{\operatorname{C}}(n)\ =\ \big{\{}(n-2,2),(n-2,n)\big{\}} \cup\bigcup_{r=3}^{n-2}\big{(}\bigcup_{p=n-2}^{2n-5}\{(p,r)\}\big{)}\cup A_{ \operatorname{C},n},\]
_where \(A_{\operatorname{C},n}=\{(p,r)\in\operatorname{pdreg}_{\operatorname{C}}(n):r=n-1\}\)._
Proof.: For any \(G\in\operatorname{CGraphs}(n)\) we have \(\operatorname{proj}\dim(J_{G})\geq n-2\) by Theorem 1.1. Thus, our statement follows from **Claim**\((*)\) proved in the previous theorem.
At present, we do not yet know the sets \(A_{n}\) and \(A_{\operatorname{C},n}\), for \(n\geq 7\). Note that by Corollary 2.2 and Proposition 2.7, for all \(n\geq 6\), if \(G\) is a graph with \(n\) non isolated vertices and with \(\operatorname{reg}(J_{G})=n-1\), then \(\operatorname{proj}\dim(J_{G})\leq 2n-7\). On the other hand, if \(n\geq 7\) and \(\operatorname{reg}(J_{G})=n-1\), a much stronger bound for the projective dimension of \(J_{G}\) seems to hold. Indeed, our experiments using [21] suggest the following
**Conjecture 3.4**.: Let \(G\) be a graph with \(n\geq 7\) non isolated vertices. Suppose that \(\operatorname{reg}(J_{G})=n-1\). Then \(\operatorname{proj}\dim(J_{G})\leq n\).
Using [21] we could verify our conjecture for \(n=7,8,9\).
|
2307.10441 | Exact formula for 1-lower run overpartitions | We are going to show an exact formula for lower $1$-run overpartitions. The
generating function is of mixed mock-modular type with an overall weight $0.$
We will apply an extended version of the classical Circle Method. The approach
requires bounding modified Kloosterman sums and Mordell integrals. | Lukas Mauth | 2023-07-19T20:15:10Z | http://arxiv.org/abs/2307.10441v1 | # Exact formula for 1-lower run overpartitions
###### Abstract.
We are going to show an exact formula for lower 1-run overpartitions. The generating function is of mixed mock-modular type with an overall weight \(0.\) We will apply an extended version of the classical Circle Method. The approach requires bounding modified Kloosterman sums and Mordell integrals.
Key words and phrases: Circle Method, \(\eta\)-function, partitions. 2020 Mathematics Subject Classification: 11B57, 11F03, 11F20, 11F30, 11F37, 11P82.
## 1. Introduction and statement of results
An _overpartition_ of a non-negative integer \(n\) is a partition of \(n\) in which the final occurrence of a number may be overlined. We denote the number of partitions of \(n\) by \(p(n)\) and the number of overpartitions of \(n\) by \(\overline{p}(n).\) It is well known that their generating functions are
\[P(q) \coloneqq\sum_{n=0}^{\infty}p(n)q^{n}=\prod_{n=1}^{\infty}\frac{1 }{1-q^{n}}=\frac{1}{(q;q)_{\infty}},\] \[\overline{P}(q) \coloneqq\sum_{n=0}^{\infty}\overline{p}(n)q^{n}=\prod_{n=1}^{ \infty}\frac{1+q^{n}}{1-q^{n}}=\frac{(-q;q)_{\infty}}{(q;q)_{\infty}},\]
where for \(a\in\mathbb{C}\) and \(n\in\mathbb{N}\cup\{\infty\}\) we define the \(q\)-Pochhammer symbol \((a)_{n}\coloneqq(a;q)_{n}\coloneqq\prod_{k=0}^{n-1}(1-aq^{k}).\) These generating functions are essentially special cases of meromorphic modular forms, so-called _eta-quotients_. Classical questions are to determine asymptotics or exact formulas for the Fourier coefficients of modular forms. The Hardy\(-\)Ramanujan Tauberian Theorem [8] for eta-quotients gives the following asymptotics as \(n\to\infty\)
\[p(n) \sim\frac{1}{4\sqrt{3}n}e^{\pi\sqrt{\frac{2n}{3}}},\] \[\overline{p}(n) \sim\frac{1}{8n}e^{\pi\sqrt{n}}.\]
Later Rademacher perfected the Circle Method developed by Hardy and Ramanujan and obtained an exact formula for \(p(n).\) To state Rademacher's result we define the _Kloosterman sums_
\[A_{k}(n):=\sum_{h\ (\mathrm{mod}\ k)^{*}}\omega_{h,k}e^{\frac{-2\pi inh}{k}},\]
where \(*\) indicates that \(h\) only runs over those residue classes that are coprime to \(k\) and \(\omega_{h,k}\) is defined in Section 2. Let \(I_{\kappa}\) denote the modified Bessel function of order \(\kappa.\) Rademacher's exact formula for \(p(n)\) then reads as follows
\[p(n)=\frac{2\pi}{(24n-1)^{3/4}}\sum_{k=1}^{\infty}\frac{A_{k}(n)}{k}I_{\frac{3}{2}} \left(\frac{\pi\sqrt{24n-1}}{6k}\right).\]
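Rademacher's series converges very rapidly. As a quick numerical illustration (our addition, not part of the original text), one can compare the exact value of \(p(n)\), computed from Euler's pentagonal number recurrence, with the Hardy\(-\)Ramanujan asymptotic above and with the single \(k=1\) term of Rademacher's formula, for which \(A_{1}(n)=1\); the following is a minimal Python sketch, assuming scipy is available.

```python
# Sketch (our addition): compare exact p(n) with the Hardy-Ramanujan
# asymptotic and with the k = 1 term of Rademacher's series (A_1(n) = 1).
import math
from scipy.special import iv  # modified Bessel function I_kappa(x)

def partition_numbers(N):
    """p(0), ..., p(N) via Euler's pentagonal number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            sign = 1 if k % 2 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

n = 100
exact = partition_numbers(n)[n]
asymptotic = math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * math.sqrt(3) * n)
rademacher_1 = 2 * math.pi / (24 * n - 1) ** 0.75 \
    * iv(1.5, math.pi * math.sqrt(24 * n - 1) / 6)
print(exact, asymptotic / exact, rademacher_1 / exact)
```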
In [15] Zuckerman generalized the Circle Method to provide exact formulas for the Fourier coefficients at any cusp for arbitrary modular forms of negative weight for finite index subgroups of the modular group \(\mathrm{SL}_{2}(\mathbb{Z}).\) It is evident that modularity is the work horse in those methods. However, one could ask if the Circle Method can be used to find exact formulae for the Fourier coefficients of objects that still have some automorphic structure. Natural candidates are _mock theta functions_. Important examples are Ramanujan's third order mock theta functions
\[f(q) \coloneqq\sum_{n\geq 0}\frac{q^{n^{2}}}{(-q;q)_{n}^{2}},\] \[\varphi(q) \coloneqq\sum_{n\geq 0}\frac{q^{n^{2}}}{(-q^{2};q^{2})_{n}}.\]
Those are not modular forms, but can be understood as holomorphic parts of harmonic Maass forms of half-integral weight. Bringmann and Ono have found exact formulas for the coefficients of harmonic Maass forms of weight \(<\frac{1}{2}.\) This includes all the above examples but their technique does not use the Circle Method and rather relies on Poincaré series. Another class of automorphic objects which is not covered by the above cases is the space of modular forms tensored with harmonic Maass forms. It is unlikely that a basis of Poincaré series exists for this space, so the Circle Method is the method of choice to study the coefficients of these objects. In [5] Bringmann and Mahlburg worked out a technique based on the Circle Method to obtain asymptotics for the Fourier coefficients, which was later refined by Bridges and Bringmann [3] to provide exact formulas of Rademacher type. They illustrated the method for the function \(p_{2}(n)\) which counts partitions without sequences in its parts and it is the first example of an exact formula for the Fourier coefficients of a mixed-mock modular form of weight \(0.\) We follow their developed method [3, 5] closely to study the Fourier coefficients of _lower \(1\)-run overpartitions_, which are those overpartitions in which any overlined part must occur within a run of exactly \(k\) consecutive overlined parts (here \(k=1\)) that ends below with a gap, where an overpartition is said to have a _gap_ at \(m\) if there are no parts of size \(m.\)
**Example.** The lower \(1\)-run overpartitions of size \(4\) are
\[\overline{4},\quad\overline{3}+1,\quad 3+\overline{1},\quad 2+\overline{2}, \quad\overline{2}+1+1,\quad 2+1+\overline{1},\quad 1+1+1+\overline{1},\]
together with the \(5\) partitions of \(4,\) thus \(\overline{p_{1}}(4)=7+5=12.\)
We will denote the number of lower \(1\)-run overpartitions of \(n\) by \(\overline{p_{1}}(n).\) The generating function for \(\overline{p_{1}}(n)\) was worked out by Bringmann, Holroyd, Mahlburg, and Vlasenko [4] (note that there is a typo in their statement)
\[G_{1}(q)\coloneqq\frac{(q^{4};q^{4})_{\infty}}{(q;q)_{\infty}(q^{2};q^{2})_{ \infty}}\varphi(q).\]
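As a sanity check of this generating function and of the example above (our addition, using plain coefficient-list arithmetic; no claims beyond the stated \(q\)-series), the following sketch expands \(G_{1}(q)\) to low order and should print the coefficients \(1,2,4,6,12\), confirming in particular \(\overline{p_{1}}(4)=12\).

```python
# Sketch (our addition): expand G_1(q) as a truncated power series using
# plain coefficient lists; prints [1, 2, 4, 6, 12], so p1bar(4) = 12.
N = 12  # truncation order

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    """Power series inverse, assuming a[0] == 1."""
    b = [0] * (N + 1)
    b[0] = 1
    for n in range(1, N + 1):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def euler(step):
    """(q^step; q^step)_infinity truncated at order N."""
    out = [1] + [0] * N
    for k in range(step, N + 1, step):
        factor = [0] * (N + 1)
        factor[0], factor[k] = 1, -1
        out = mul(out, factor)
    return out

# phi(q) = sum_{n >= 0} q^{n^2} / (-q^2; q^2)_n
phi = [0] * (N + 1)
n = 0
while n * n <= N:
    den = [1] + [0] * N                 # (-q^2; q^2)_n
    for k in range(1, n + 1):
        factor = [0] * (N + 1)
        factor[0] = 1
        if 2 * k <= N:
            factor[2 * k] = 1
        den = mul(den, factor)
    qn2 = [0] * (N + 1)
    qn2[n * n] = 1                      # q^{n^2}
    term = mul(qn2, inv(den))
    phi = [x + y for x, y in zip(phi, term)]
    n += 1

G1 = mul(mul(euler(4), inv(mul(euler(1), euler(2)))), phi)
print(G1[:5])
```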
Using Ingham's Tauberian Theorem they showed the following asymptotic
\[\overline{p_{1}}(n)\sim\frac{\sqrt{5}}{4\sqrt{6}n}e^{\pi\sqrt{\frac{5}{6}n}}, \quad(n\to\infty).\]
The generating function \(G_{1}(q)\) is a mixed-mock modular form of overall weight \(0\) and thus falls into the previously mentioned framework developed by Bridges and Bringmann [3]. We will follow their method closely to obtain the following exact formula (Theorem 4.1)
**Theorem**.: _We have, for \(n\in\mathbb{N},\)_
\[\overline{p_{1}}(n) =\frac{\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(4,k)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{n+\nu}K_{k}^{[12]}(\nu,n) \mathcal{I}_{\frac{1}{24},k,\nu}(n)\] \[+\frac{5\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(4,k)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{n+\nu}K_{k}^{[22]}(\nu,n) \mathcal{I}_{\frac{5}{12},k,\nu}(n).\]
Here we have set for \(b\in\mathbb{R},k\in\mathbb{N},\) and \(\nu\in\mathbb{Z}\)
\[\mathcal{I}_{b,k,\nu}(n):=\int_{-1}^{1}\frac{\sqrt{1-x^{2}}I_{1}\left(\frac{2 \pi}{k}\sqrt{2bn(1-x^{2})}\right)}{\cosh\left(\frac{\pi i}{k}\left(\nu-\frac{ 1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x\right)}dx.\]
These integrals should be viewed as a natural result of mixing the Mordell integrals and modular factors in the Circle Method. The Mordell integrals appear in the transformation laws of mock theta functions [7] and Bessel functions do appear in the exact formulas for eta-quotients [15]. In that sense \(\mathcal{I}_{b,k,\nu}\) is a natural result of combining Mordell integrals and Bessel functions. As an immediate corollary, we will give another proof of (1). Our strategy is as follows. In Section 2 we will work out all modular transformation laws needed for the Circle Method. It is worthwhile to note that we will not work with \(G_{1}(q)\) directly. Instead we are going to twist the generating function by a root of unity and work instead with
\[\overline{G_{1}}(q)\coloneqq\sum_{n=0}^{\infty}(-1)^{n}\overline{p_{1}}(n)q^ {n}=\frac{(q^{4};q^{4})_{\infty}}{(-q;-q)_{\infty}(q^{2};q^{2})_{\infty}}\varphi (-q). \tag{1.1}\]
The main reason for this choice is that the modular properties of (1.1) are easier to work with than those of the other generating function, see Section 2. In Section 3 we are going to give bounds for the Kloosterman sums and the Mordell integrals that appear in the transformation laws for mock theta functions. Finally, in Section 4 we are going to apply the Circle Method to prove Theorem 4.1.
## Acknowledgements
The author wishes to thank Kathrin Bringmann for suggesting this problem and helpful discussions. Furthermore, the author thanks Giulia Cesana for advice on rewriting Kloosterman sums and aid in verifying the main result numerically. The author received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001179).
## 2. Modularity Properties
As stated in the introduction we define
\[P(q):=(q;q)_{\infty}^{-1}.\]
For coprime integers \(h\) and \(k\), we find \(h^{\prime}\) such that \(hh^{\prime}\equiv-1\ (\mathrm{mod}\ k).\) We will assume that for even \(k\) this congruence holds \(\ (\mathrm{mod}\ 4k)\) and for odd \(k\) that \(8|h^{\prime}.\) Furthermore, we define by \(z\) a complex variable that satisfies \(\mathrm{Re}\left(z\right)>0\) and \(q=e^{\frac{2\pi i}{k}(h+iz)},\) and set \(q_{1}:=e^{\frac{2\pi i}{k}(h^{\prime}+iz^{-1})}.\) The classical modular transformation law is [1]
\[P(q)=\omega_{h,k}z^{\frac{1}{2}}e^{\frac{\pi(z^{-1}-z)}{12k}}P(q_{1}), \tag{2.1}\]
where \(\omega_{h,k}\) is the \(24k\)-th root of unity defined by
\[\omega_{h,k}\coloneqq\begin{cases}\left(\frac{-k}{h}\right)e^{-\pi i\left( \frac{1}{4}(2-hk-h)+\frac{1}{12}\left(k-\frac{1}{k}\right)\left(2h-h^{\prime} +h^{2}h^{\prime}\right)\right)}&\text{if $h$ is odd},\\ \left(\frac{-h}{k}\right)e^{-\pi i\left(\frac{1}{4}(k-1)+\frac{1}{12}\left(k- \frac{1}{k}\right)\left(2h-h^{\prime}+h^{2}h^{\prime}\right)\right)}&\text{if $k$ is odd}, \end{cases}\]
and \(\left(\tfrac{\cdot}{\cdot}\right)\) denotes the Kronecker symbol. From (2.1) we easily obtain a transformation law for \(P(q^{r}),\) where \(r\) is a positive integer. To state the transformation law in a convenient form, we introduce some notation. First, we set \(g_{r}:=\gcd(r,k),\) and define the quantities \(\rho_{r}:=\frac{r}{g_{r}}\) and \(k_{r}:=\frac{k}{g_{r}}.\) Note that \(\gcd(\rho_{r},k_{r})=1.\) Furthermore, we define \(q_{r}:=e^{\frac{2\pi i\left(h_{r}^{\prime}+\frac{i}{\rho_{r}z}\right)}{k_{r}}}.\) It follows that up to a root of unity \(q_{r}=q_{1}^{\frac{g_{r}^{2}}{r}}.\) We will ignore this technicality as it is not relevant for our estimates and just assume that \(q_{r}=q_{1}^{\frac{g_{r}^{2}}{r}}.\) Finally, we have the transformation law
\[P(q^{r})=\omega_{h\rho_{r},k_{r}}(\rho_{r}z)^{\frac{1}{2}}e^{\frac{\pi}{12k_{ r}}\left(\frac{z^{-1}}{\rho_{r}}-\rho_{r}z\right)}P(q_{r}). \tag{2.2}\]
We are now going to study the necessary transformation laws to study (1.1). We exploit the following linear relation [13] between the two third-order mock theta functions \(\varphi(q)\) and \(f(q):\)
\[2\varphi(-q)-f(q)=\frac{(q;q)_{\infty}^{2}}{(-q;q)_{\infty}(q^{2};q^{2})_{ \infty}}.\]
We can then rewrite the generating function (1.1) as
\[\overline{G_{1}}(q)=\underbrace{\frac{(q^{4};q^{4})_{\infty}^{2}(q;q)_{\infty }}{2(q^{2};q^{2})_{\infty}^{4}}f(q)}_{=g_{1}(q)}+\underbrace{\frac{(q^{4};q^{ 4})_{\infty}^{2}(q;q)_{\infty}^{4}}{2(q^{2};q^{2})_{\infty}^{6}}}_{=g_{2}(q)}.\]
We start with the transformation law for \(g_{1}(q).\) Define \(\xi(q):=\frac{P(q^{2})^{4}}{P(q^{4})^{2}P(q)}.\) We will have to distinguish three cases, depending on the different values of \(\gcd(4,k).\) These transformations all follow from iterated application of (2.1) and (2.2).
First, assume that \(4|k.\) Then,
\[\xi(q)=\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h,k}\omega_{h,\frac{k}{4}}^{ 2}}z^{\frac{1}{2}}e^{-\frac{\pi}{12k}(z^{-1}-z)}\xi(q_{1}).\]
Next, if \(\gcd(4,k)=2,\) we have
\[\xi(q)=\frac{\omega_{h,\frac{k}{2}}^{4}}{2\omega_{h,k}\omega_{2h,\frac{k}{2}}^ {2}}z^{\frac{1}{2}}e^{\frac{5\pi}{12kz}+\frac{\pi z}{12k}}\frac{P(q_{1}^{2})^ {4}}{P(q_{1})P(q_{1}^{4})^{2}}.\]
Finally, if \(\gcd(4,k)=1,\)
\[\xi(q)=\frac{\omega_{2h,k}^{4}}{\omega_{h,k}\omega_{4h,k}^{2}}z^{\frac{1}{2}}e^{ \frac{\pi}{24kx}+\frac{\pi z}{12k}}\frac{P\left(q_{1}^{\frac{1}{2}}\right)^{4}}{ P(q_{1})P\left(q_{1}^{\frac{1}{4}}\right)^{2}}.\]
Now we turn to the third order mock theta function \(f(q)\). Andrews [2] has shown that \(f(q)\) transforms essentially like a modular form of level \(2\). If \(k\) is even we have
\[f(q) =(-1)^{\frac{k}{2}+1}e^{\pi i\left(\frac{h^{\prime}}{2}-\frac{3h^ {\prime}k}{4}\right)}\omega_{h,k}z^{-\frac{1}{2}}e^{\frac{\pi(z^{-1}-z)}{12k}} f(q_{1})\] \[+\omega_{h,k}\frac{2}{k}z^{\frac{1}{2}}e^{-\frac{\pi z}{12k}}\sum \limits_{\nu\;(\mathrm{mod}\;k)}(-1)^{\nu}e^{\frac{\pi ih^{\prime}(-3\nu^{2}+\nu)}{k}} I_{k,\nu}(z),\]
where \(I_{k,\nu}(z)\) is the Mordell-integral defined by
\[I_{k,\nu}(z)\coloneqq\int_{-\infty}^{\infty}\frac{e^{-\frac{3\pi zx^{2}}{k}}} {\cosh\left(\frac{\pi i\left(\nu-\frac{1}{6}\right)}{k}-\frac{\pi zx}{k} \right)}dx.\]
Note that there is a typo regarding the term \(e^{\frac{\pi(z^{-1}-z)}{12k}}\) in the statement of Theorem 2.2 in [2]. If \(k\) is odd we have
\[f(q) =2(-1)^{\frac{1}{2}(k-1)}e^{\frac{3\pi ih^{\prime}}{4k}}\omega_{h,k}z^{-\frac{1}{2}}e^{-\frac{2\pi}{3kz}-\frac{\pi z}{12k}}\omega\left(q_{1}^{ \frac{1}{2}}\right)\] \[+\frac{2}{k}\omega_{h,k}z^{\frac{1}{2}}e^{-\frac{\pi z}{12k}}\sum \limits_{\nu\;(\mathrm{mod}\;k)}(-1)^{\nu}e^{\frac{\pi ih^{\prime}(-3\nu^{2}- \nu)}{k}}I_{k,\nu}(z),\]
where \(\omega(q)\) denotes the third order mock theta function
\[\omega(q)\coloneqq\sum\limits_{n\geq 0}\frac{q^{2n(n+1)}}{\left(q;q^{2} \right)_{n+1}^{2}}.\]
Finally, we are going to derive the transformation laws for \(g_{2}(q)\) depending on \(\gcd(4,k)\). If \(4|k\) we have
\[g_{2}(q)=\frac{\omega_{h,\frac{k}{2}}^{6}}{2\omega_{h,k}^{4}\omega_{h,\frac{k }{4}}^{2}}g_{2}(q_{1}).\]
Next, if \(\gcd(4,k)=2\) we have
\[g_{2}(q)=\frac{\omega_{h,\frac{k}{2}}^{6}}{4\omega_{h,k}^{4}\omega_{2h,\frac{k }{2}}^{2}}e^{\frac{\pi}{2kz}}\frac{P(q_{1}^{2})^{6}}{P(q_{1})^{6}}.\]
Finally, if \(\gcd(4,k)=1\) we have
\[g_{2}(q)=\frac{\omega_{2h,k}^{6}}{4\omega_{h,k}^{4}\omega_{4h,k}^{2}}e^{-\frac{ \pi}{8kz}}\frac{P\left(q_{1}^{\frac{1}{2}}\right)^{6}}{P(q_{1})^{4}P\left(q_{1}^ {\frac{1}{4}}\right)^{2}}.\]
## 3. Bounds on Kloosterman sums and Mordell Integrals
In [12] Rademacher proved the following bound for Kloosterman sums. Recall that \(k_{1}\) is the denominator of the fraction preceding \(\frac{h}{k}\) in the Farey sequence of order \(N\in\mathbb{N}\).
**Lemma 3.1**.: _We have for \(k\in\mathbb{N},n,m,\ell\in\mathbb{Z},n\neq 0\) with \(N+1\leq\ell\leq N+k-1,\) for \(\varepsilon>0,\)_
\[K_{k}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}e^{-\frac{2\pi i}{k}(nh-mh^{\prime})}=O_{\varepsilon }\left(k^{\frac{2}{3}+\varepsilon}\gcd(|n|,k)^{\frac{1}{3}}\right),\] \[\mathbb{K}_{k,\ell}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}e^{-\frac{2\pi i}{k}(nh-mh^{\prime})}=O_{ \varepsilon}\left(k^{\frac{2}{3}+\varepsilon}\gcd(|n|,k)^{\frac{1}{3}}\right).\]
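For illustration (our addition), the complete sum \(K_{k}(n,m)\) can be evaluated directly from its definition; the sketch below takes \(h^{\prime}\) with \(hh^{\prime}\equiv-1\ (\mathrm{mod}\ k)\) and ignores the finer normalization of \(h^{\prime}\) (modulo \(4k\), resp. \(8|h^{\prime}\)) imposed in Section 2, so it computes the plain sum to which the lemma applies.

```python
# Sketch (our addition): direct evaluation of the plain Kloosterman sum
# K_k(n, m) from its definition, with h' defined by h h' = -1 (mod k).
import cmath
from math import gcd

def kloosterman(k, n, m):
    total = 0.0
    for h in range(k):
        if gcd(h, k) != 1:
            continue
        h_prime = (-pow(h, -1, k)) % k   # h * h' = -1 (mod k); Python >= 3.8
        total += cmath.exp(-2j * cmath.pi * (n * h - m * h_prime) / k)
    return total

k, n, m = 101, 7, 3
print(abs(kloosterman(k, n, m)), k ** (2 / 3))  # |K_k| vs the k^(2/3) scale
```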
In the case that \(4|k\) define the following Kloosterman sums
\[K_{k}^{[41]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h,\frac{ k}{4}}^{2}}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}e^{\frac{2\pi i }{k}(-nh+mh^{\prime})},\] \[K_{k}^{[42]}(\nu,n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h,\frac{ k}{4}}^{2}}e^{\frac{\pi ih^{\prime}}{k}(-3\nu^{2}+\nu)}e^{\frac{2\pi i}{k}(-nh+ mh^{\prime})},\] \[K_{k}^{[43]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4} \omega_{h,\frac{k}{4}}^{2}}e^{\frac{2\pi i}{k}(-nh+mh^{\prime})}.\]
Furthermore, we define for \(N+1\leq\ell\leq N+k-1,\)
\[\mathbb{K}_{k,\ell}^{[41]}(n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h, \frac{k}{4}}^{2}}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}e^{ \frac{2\pi i}{k}(-nh+mh^{\prime})},\] \[\mathbb{K}_{k,\ell}^{[42]}(\nu,n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h, \frac{k}{4}}^{2}}e^{\frac{\pi ih^{\prime}}{k}(-3\nu^{2}+\nu)}e^{\frac{2\pi i}{ k}(-nh+mh^{\prime})},\] \[\mathbb{K}_{k,\ell}^{[43]}(n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4}\omega_{h,\frac{k}{4}}^{2}}e^{\frac{2\pi i}{k}(-nh+mh^{\prime})}.\]
**Lemma 3.2**.: _We have for \(\varepsilon>0,\)_
\[K_{k}^{[41]}(n,m),K_{k}^{[42]}(\nu,n,m),K_{k}^{[43]}(n,m),\mathbb{K}_{k,\ell}^ {[41]}(n,m),\mathbb{K}_{k,\ell}^{[42]}(\nu,n,m),\mathbb{K}_{k,\ell}^{[43]}(n,m )\ll_{\varepsilon}n^{\frac{1}{3}}k^{\frac{2}{3}+\varepsilon}.\]
Proof.: We only give a complete proof for \(K_{k}^{[41]}(n,m)\) as the idea is the same for all other Kloosterman sums. The only hard part is to rewrite the multiplier, so we just have to bound an ordinary Kloosterman sum, which we can do with Lemma 3.1.
We have keeping in mind that \(hh^{\prime}\equiv-1\ (\text{mod}\ 4k)\)
\[\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h,\frac{k}{4}}^{2}}=-e^{\frac{\pi i }{8}(-h^{\prime}k+h^{2}h^{\prime}k-h(4+k))}=-e^{\frac{\pi i}{8}\left(h^{\prime}k+(4 +2k)h\right)}.\]
Hence,
\[K_{k}^{[41]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\text{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h, \frac{k}{4}}^{2}}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}e^{ \frac{2\pi i}{k}(-nh+mh^{\prime})}\] \[=-\sum_{\begin{subarray}{c}h\ (\text{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}e^{-\frac{2\pi i}{k}\left(\left(n-\frac{2k^{2}+4k}{ 16}\right)h-\left(m+\frac{5k^{2}-4k}{16}\right)h^{\prime}\right)}.\]
Since \(4|k\) we have
\[\frac{2k^{2}+4k}{16}\in\mathbb{Z},\quad\frac{5k^{2}-4k}{16}\in\mathbb{Z}.\]
Therefore, we can use Lemma 3.1 to obtain as desired
\[\left|K_{k}^{[41]}(n,m)\right|=\left|K_{k}\left(n-\frac{2k^{2}+4k}{16},m+ \frac{5k^{2}-4k}{16}\right)\right|\ll_{\varepsilon}k^{\frac{2}{3}+\varepsilon }\gcd\left(\left|n-\frac{2k^{2}+4k}{16}\right|,k\right)^{\frac{1}{3}}\ll_{ \varepsilon}n^{\frac{1}{3}}k^{\frac{2}{3}+\varepsilon}.\]
For the Kloosterman sum \(K_{k}^{[43]}\) we distinguish two cases. If \(8|k\) we write
\[\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4}\omega_{h,\frac{k}{4}}^{2}}=e^{ \frac{2\pi i}{k}\left(-\frac{4k}{16}h-\frac{2k}{16}h^{\prime}\right)}.\]
Thus, since \(8|k\) we have
\[\frac{4k}{16}\in\mathbb{Z},\quad\frac{2k}{16}\in\mathbb{Z}.\]
In the case that \(k\equiv 4\ (\mathrm{mod}\ 8)\) we find, using that \(h^{2}\equiv 1\ (\mathrm{mod}\ 4)\) (since \(k\) is even, \(h\) is odd),
\[\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4}\omega_{h,\frac{k}{4}}^{2} }=ie^{\frac{2\pi ik}{16}}e^{\frac{2\pi i}{k}\frac{4kh}{16}}.\]
In the case that \(\gcd(4,k)=2\) define the following Kloosterman sums
\[K_{k}^{[21]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac {k}{2}}^{2}}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}e^{\frac{2 \pi i}{k}\left(-nh+mh^{\prime}\right)},\] \[K_{k}^{[22]}(\nu,n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac {k}{2}}^{2}}e^{\frac{\pi ih^{\prime}}{k}\left(-3\nu^{2}+\nu\right)}e^{\frac{2 \pi i}{k}\left(-nh+mh^{\prime}\right)},\] \[K_{k}^{[23]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4} \omega_{2h,\frac{k}{2}}^{2}}e^{\frac{2\pi i}{k}\left(-nh+mh^{\prime}\right)}.\]
Furthermore, we need the incomplete Kloosterman sums for \(N+1\leq\ell\leq N+k-1\), define
\[\mathbb{K}_{k,\ell}^{[21]}(n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac{k}{2}}^{2}}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}e^{ \frac{2\pi i}{k}\left(-nh+mh^{\prime}\right)},\] \[\mathbb{K}_{k,\ell}^{[22]}(\nu,n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac{k}{2}}^{2}}e^{\frac{\pi ih^{\prime}}{k}\left(-3\nu^{2}+\nu\right)}e^{ \frac{2\pi i}{k}\left(-nh+mh^{\prime}\right)},\] \[\mathbb{K}_{k,\ell}^{[23]}(n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k }^{4}\omega_{2h,\frac{k}{2}}^{2}}e^{\frac{2\pi i}{k}\left(-nh+mh^{\prime} \right)}.\]
**Lemma 3.3**.: _We have for \(\varepsilon>0,\)_
\[K_{k}^{[21]}(n,m),K_{k}^{[22]}(\nu,n,m),K_{k}^{[23]}(n,m),\mathbb{K}_{k,\ell}^ {[21]}(n,m),\mathbb{K}_{k,\ell}^{[22]}(\nu,n,m),\mathbb{K}_{k,\ell}^{[23]}(n,m) \ll_{\varepsilon}n^{\frac{1}{3}}k^{\frac{2}{3}+\varepsilon}.\]
Proof.: We will later show \(K_{k}^{[21]}(n,m)=-K_{k}^{[23]}(n,m).\) Thus, it suffices to consider \(K_{k}^{[21]}(n,m).\) Since \(h\) is odd we have \(h^{2}\equiv 1\ (\text{mod}\ 4)\) and thus,
\[\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac{k}{2}}^{2}}=ie^{-\frac{\pi ik}{4 }}e^{\frac{2\pi i}{k}\left(\frac{-12+3k^{2}}{24}h^{\prime}\right)}.\]
Now it suffices to note that since \(2|k\) we have
\[\frac{-12+3k^{2}}{24}\in\mathbb{Z}.\]
In the case that \(\gcd(4,k)=1\) define the following Kloosterman sums
\[K_{k}^{[11]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\text{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}e^{ \frac{3\pi ih^{\prime}}{4k}}e^{\frac{2\pi i}{k}(-nh+mh^{\prime})},\] \[K_{k}^{[12]}(\nu,n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\text{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}e^{ \frac{\pi ih^{\prime}}{k}(-3\nu^{2}-\nu)}e^{\frac{2\pi i}{k}(-nh+mh^{ \prime})},\] \[K_{k}^{[13]}(n,m) \coloneqq\sum_{\begin{subarray}{c}h\ (\text{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{2h,k}^{6}}{\omega_{h,k}^{4}\omega_{4h,k}^{2}}e^{\frac{2\pi i}{k}(-nh+mh^{\prime})}.\]
Furthermore, we need the incomplete Kloosterman sums for \(N+1\leq\ell\leq N+k-1,\) define
\[\mathbb{K}_{k,\ell}^{[11]}(n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}e^ {\frac{3\pi ih^{\prime}}{4k}}e^{\frac{2\pi i}{k}(-nh+mh^{\prime})},\] \[\mathbb{K}_{k,\ell}^{[12]}(\nu,n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}e^ {\frac{\pi ih^{\prime}}{k}(-3\nu^{2}-\nu)}e^{\frac{2\pi i}{k}(-nh+mh^{ \prime})},\] \[\mathbb{K}_{k,\ell}^{[13]}(n,m) \coloneqq\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\frac{\omega_{2h,k}^{6}}{\omega_{h,k}^{4} \omega_{4h,k}^{2}}e^{\frac{2\pi i}{k}(-nh+mh^{\prime})}.\]
**Lemma 3.4**.: _We have for \(\varepsilon>0,\)_
\[K_{k}^{[11]}(n,m),K_{k}^{[12]}(\nu,n,m),K_{k}^{[13]}(n,m),\mathbb{K}_{k,\ell}^ {[11]}(n,m),\mathbb{K}_{k,\ell}^{[12]}(\nu,n,m),\mathbb{K}_{k,\ell}^{[13]}(n,m )\ll_{\varepsilon}n^{\frac{1}{3}}k^{\frac{2}{3}+\varepsilon}.\]
Proof.: We start with \(K_{k}^{[11]}(n,m).\) We will use the notation \([a]_{k}\) to denote the inverse of \(a\ (\text{mod}\ k).\) The multiplier evaluates to
\[\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}=e^{\frac{2\pi i}{k}\left(\frac{(k^{2}-1 )(8h^{2}+1)h^{\prime}}{12}\right)}.\]
In the case that \(3\nmid k\) we can write this as
\[\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}=e^{\frac{2\pi i}{k}12[12]_{k}(8h-h ^{\prime})}.\]
In this case
\[K_{k}^{[11]}(n,m)=\sum_{\begin{subarray}{c}h\ (\mathrm{mod}\ k)\\ \gcd(h,k)=1\end{subarray}}e^{\frac{2\pi i}{k}\left(-(n-96[12]_{k})h+\left(m-12 [12]_{k}+\frac{3}{8}\right)h^{\prime}\right)}\]
In the beginning we chose \(h^{\prime}\) such that \(8|h^{\prime}.\) To get rid of this condition we perform the change of variables \(h^{\prime}\mapsto 8h^{\prime}\) and \(h\mapsto[8]_{k}h\) and find
\[K_{k}^{[11]}(n,m)=K_{k}\left([8]_{k}n-96[96]_{k},8m-96[12]_{k}+3\right).\]
In the case that \(3|k\) we can write
\[\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}=e^{\frac{2\pi i}{3k}\left(\frac{\left(k^{2}-1\right)(-8+8ak)}{4}h+\frac{\left(k^{2}-1\right)}{4}h^{\prime}\right)}\]
for some \(a\in\mathbb{Z}.\) It is easy to see that the factors in front of \(h\) and \(h^{\prime}\) are both integers. Since \(3|k\) we can extend the modulus of the Kloosterman sum from \(k\) to \(3k\) and proceeding as before yields
\[K_{k}^{[11]}(n,m)=\frac{1}{3}K_{3k}\left([8]_{k}n-[8]_{k}\left(k^{2}-1\right)( -8+8ak),8m+8\left(k^{2}-1\right)+3\right).\]
For \(K_{k}^{[13]}(n,m)\) we write
\[\frac{\omega_{2h,k}^{6}}{\omega_{h,k}^{4}\omega_{4h,k}^{2}}=e^{\frac{\pi i}{k }(h^{2}h^{\prime}(k-1))}\]
Since \(hh^{\prime}\equiv-1\ (\mathrm{mod}\ k)\) we find that this equals
\[=e^{-2\pi i\frac{k^{2}-1}{2}}.\]
The claim follows after recognizing that since \(k\) is odd \(\frac{k^{2}-1}{2}\in\mathbb{Z}.\)
We agree on the convention that for any Kloosterman sum above we set \(K_{k}(\nu,n)\coloneqq K_{k}(\nu,n,0).\)
For \(b\in\mathbb{R},k\in\mathbb{N}\) and \(0<\nu\leq k\) define \(\mathcal{J}_{b,k,\nu}(z)\coloneqq ze^{\frac{\pi b}{kz}}I_{k,\nu}(z)\) and define the principal part truncation of \(\mathcal{J}_{b,k,\nu}\) by
\[\mathcal{J}_{b,k,\nu}^{*}(z)\coloneqq\sqrt{\frac{b}{3}}\int_{-1}^{1}\frac{e^{ \frac{\pi b}{kz}(1-x^{2})}}{\cosh\left(\frac{\pi i}{k}\left(\nu-\frac{1}{6}\right)- \frac{\pi}{k}\sqrt{\frac{b}{3}}x\right)}dx.\]
We have the following estimate for the principal part integrals [5].
**Lemma 3.5**.: _Let \(b\in\mathbb{R},k\in\mathbb{N}\) and \(0<\nu\leq k,\) then we have as \(z\to 0:\)_
1. _If_ \(b\leq 0,\) _then we have_ \[|\mathcal{J}_{b,k,\nu}(z)|\ll\frac{1}{\left|\frac{\pi}{2}-\frac{\pi}{k}\left( \nu-\frac{1}{6}\right)\right|}.\]
2. _If_ \(b>0,\) _then_ \(\mathcal{J}_{b,k,\nu}(z)=\mathcal{J}_{b,k,\nu}^{*}(z)+\mathcal{E}_{b,k,\nu},\) _where_ \[|\mathcal{E}_{b,k,\nu}(z)|\ll\frac{1}{\left|\frac{\pi}{2}-\frac{\pi}{k}\left( \nu-\frac{1}{6}\right)\right|}.\]
## 4. Circle Method
In this section we are going to derive an exact formula for \(\overline{p_{1}}(n)\) using the Circle Method. For all \(n\geq 1,\) Cauchy's Theorem yields
\[(-1)^{n}\overline{p_{1}}(n)=\frac{1}{2\pi i}\int_{C}\frac{\overline{G_{1}}(q) }{q^{n+1}}dq,\]
where we choose \(C\) as the circle with radius \(r=e^{-\frac{2\pi}{N^{2}}}\) with \(N\in\mathbb{N}\), where we eventually take the limit \(N\to\infty.\) Furthermore, we parametrize the circle with \(q=e^{-\frac{2\pi}{N^{2}}+2\pi it}\) for \(0\leq t\leq 1.\) This yields
\[(-1)^{n}\overline{p_{1}}(n)=\int_{0}^{1}\overline{G_{1}}\left(e^{-\frac{2\pi} {N^{2}}+2\pi it}\right)\cdot e^{\frac{2\pi n}{N^{2}}-2\pi int}dt.\]
We are now going to decompose the circle into standard Farey arcs. Throughout we let \(0\leq h<k\leq N\) with \(\gcd(h,k)=1.\) We define
\[\vartheta_{h,k}^{\prime}\coloneqq\frac{1}{k(k+k_{1})},\quad\vartheta_{h,k}^{ \prime\prime}\coloneqq\frac{1}{k(k+k_{2})},\]
where \(\frac{h_{1}}{k_{1}}<\frac{h}{k}<\frac{h_{2}}{k_{2}}\) are neighbouring Farey fractions in the Farey sequence of order \(N.\) For notational convenience write \(z=k(N^{-2}-i\Phi)\) with \(\Phi=t-\frac{h}{k},\) where we have that \(-\vartheta_{h,k}^{\prime}\leq\Phi\leq\vartheta_{h,k}^{\prime\prime}.\) Splitting the circle along the Farey arcs then yields
\[(-1)^{n}\overline{p_{1}}(n) =\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\end{subarray}}e^{-\frac{2\pi inh}{k}}\int_{-\vartheta_{h,k}^{ \prime}}^{\vartheta_{h,k}^{\prime\prime}}\overline{G_{1}}\left(e^{\frac{2\pi i }{k}(h+iz)}\right)\cdot e^{\frac{2\pi nz}{k}}d\Phi \tag{4.1}\] \[=\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\end{subarray}}e^{-\frac{2\pi inh}{k}}\int_{-\vartheta_{h,k}^{ \prime}}^{\vartheta_{h,k}^{\prime\prime}}\left[g_{1}\left(e^{\frac{2\pi i}{ k}(h+iz)}\right)+g_{2}\left(e^{\frac{2\pi i}{k}(h+iz)}\right)\right]\cdot e^{\frac{2 \pi nz}{k}}d\Phi.\]
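The arc endpoints \(\vartheta_{h,k}^{\prime}\) and \(\vartheta_{h,k}^{\prime\prime}\) come from the classical mediant property of neighbouring Farey fractions: since \(hk_{1}-h_{1}k=1\), the mediant \(\frac{h_{1}+h}{k_{1}+k}\) lies at distance \(\frac{1}{k(k+k_{1})}\) below \(\frac{h}{k}\), and similarly above. A short verification sketch (our addition):

```python
# Sketch (our addition): verify theta'_{h,k} = 1/(k(k+k1)) and
# theta''_{h,k} = 1/(k(k+k2)) from the mediants of Farey neighbours.
from fractions import Fraction

def farey(N):
    """The Farey sequence of order N as a sorted list of Fractions."""
    return sorted({Fraction(h, k) for k in range(1, N + 1)
                   for h in range(k + 1)})

N = 7
seq = farey(N)
for left, mid, right in zip(seq, seq[1:], seq[2:]):
    med_l = Fraction(left.numerator + mid.numerator,
                     left.denominator + mid.denominator)
    med_r = Fraction(mid.numerator + right.numerator,
                     mid.denominator + right.denominator)
    k, k1, k2 = mid.denominator, left.denominator, right.denominator
    assert mid - med_l == Fraction(1, k * (k + k1))
    assert med_r - mid == Fraction(1, k * (k + k2))
print("mediant identities verified for N =", N)
```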
We will often use in subsequent calculations the well-known estimate
\[\frac{1}{k+k_{j}}\leq\frac{1}{N+1}, \tag{4.2}\]
for \(j=1,2,\) as well as the bound
\[\operatorname{Re}\left(z^{-1}\right)=\frac{N^{-2}}{kN^{-4}+k\Phi^{2}}\geq \frac{N^{2}}{k+N^{2}k^{-1}}\geq\frac{k}{2}. \tag{4.3}\]
In the following we are going to split the sum in (4.1) according to the different values of \(\gcd(4,k)\) and we will estimate these contributions individually. We write
\[(-1)^{n}\overline{p_{1}}(n)=\sum_{4}+\sum_{2}+\sum_{1},\]
where \(\sum_{d}\) denotes the part of the sum (4.1) where \(\gcd(4,k)=d.\) To improve our estimates we will further decompose the integral along the Farey arc as follows
\[\int_{-\vartheta^{\prime}_{h,k}}^{\vartheta^{\prime\prime}_{h,k}}=\int_{- \frac{1}{k(k+N)}}^{\frac{1}{k(k+N)}}+\int_{-\frac{1}{k(k+k_{1})}}^{-\frac{1}{k(k+ N)}}+\int_{\frac{1}{k(k+N)}}^{\frac{1}{k(k+k_{2})}}. \tag{4.4}\]
Furthermore, we use the refined decomposition
\[\int_{-\frac{1}{k(k+k_{1})}}^{-\frac{1}{k(k+N)}}=\sum_{\ell=k+k_{1}}^{k+N-1}\int_ {-\frac{1}{k\ell}}^{-\frac{1}{k(\ell+1)}}.\]
Thus, we obtain that
\[\sum_{d}=\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\\ \gcd(4,k)=d\end{subarray}}\int_{-\frac{1}{k(k+k_{1})}}^{-\frac{1}{k(k+N)} }=\sum_{\begin{subarray}{c}1\leq k\leq N\\ \gcd(4,k)=d\end{subarray}}\sum_{\ell=N+1}^{k+N-1}\sum_{\begin{subarray}{c}0 \leq h<k\\ \gcd(h,k)=1\\ N<k+k_{1}\leq\ell\end{subarray}}\int_{-\frac{1}{k\ell}}^{-\frac{1}{k(\ell+1)}}. \tag{4.5}\]
Before we start estimating we make three final definitions to simplify notation. We define \(a(n),b(n)\) and \(r(n)\) by
\[\sum_{n\geq 0}a(n)q^{n}=g_{1}(q),\quad\sum_{n\geq 0}b(n)q^{n}=g_{2}(q),\quad \sum_{n\geq 0}r(n)q^{n}=\xi(q).\]
We start with \(\sum_{4}.\) We have three sums, the first two coming from \(g_{1}(q)\) and the last one from \(g_{2}(q).\)
\[S_{41}\coloneqq\frac{1}{2}\sum_{\begin{subarray}{c}0\leq h<k \leq N\\ \gcd(h,k)=1\\ 4|k\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h,\frac{k}{4}}^{2} }(-1)^{\frac{k}{2}+1}e^{\pi i\left(\frac{h^{\prime}}{2}-\frac{3h^{\prime}k}{4 }\right)-\frac{2\pi inh}{k}}\int_{-\vartheta^{\prime}_{h,k}}^{\vartheta^{ \prime\prime}_{h,k}}e^{\frac{2\pi nz}{k}}g_{1}(q_{1})d\Phi,\] \[S_{42}\coloneqq\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\\ 4|k\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{k\omega_{h,\frac{k}{4}}^{2} }e^{-\frac{2\pi inh}{k}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}e^{\frac{\pi ih^{ \prime}(-3\nu^{2}+\nu)}{k}}\int_{-\vartheta^{\prime}_{h,k}}^{\vartheta^{\prime\prime}_{h,k}}e^{\frac{2\pi nz}{k}}e^{-\frac{\pi}{12kz}}\xi(q_{1})zI_{k,\nu}(z)d\Phi,\] \[S_{43}\coloneqq\frac{1}{2}\sum_{\begin{subarray}{c}0\leq h<k \leq N\\ \gcd(h,k)=1\\ 4|k\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4}\omega_{h, \frac{k}{4}}^{2}}e^{-\frac{2\pi inh}{k}}\int_{-\vartheta^{\prime}_{h,k}}^{ \vartheta^{\prime\prime}_{h,k}}e^{\frac{2\pi nz}{k}}g_{2}(q_{1})d\Phi.\]
We start by estimating \(S_{41}.\) We use the splittings (4.4) and (4.5) to decompose \(S_{41}\) into three sums \(S_{41}^{[1]},S_{41}^{[2]}\) and \(S_{41}^{[3]}.\) We find
\[S_{41}^{[1]}=\frac{1}{2}\sum_{\begin{subarray}{c}1\leq k\leq N\\ 4|k\end{subarray}}(-1)^{\frac{k}{2}+1}\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{h,\frac{k}{4}}^{2} }e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)-\frac{2\pi inh}{k}} \int_{-\frac{1}{k(k+N)}}^{\frac{1}{k(k+N)}}e^{\frac{2\pi nz}{k}}\sum_{m\geq 0}a(m)e^{ \frac{2\pi im}{k}\left(h^{\prime}+\frac{i}{z}\right)}d\Phi.\]
Bounding the Kloosterman sums with Lemma 3.2 and estimating the remaining integrals trivially shows that \(S_{41}^{[1]}\to 0\) as \(N\to\infty\); the pieces \(S_{41}^{[2]}\) and \(S_{41}^{[3]}\) vanish in the limit by the same argument, with \(K_{k}^{[41]}\) replaced by \(\mathbb{K}_{k,\ell}^{[41]}\). For \(S_{42}^{[1]}\), Lemma 3.2 together with Lemma 3.5 yields
\[S_{42}^{[1]} \ll\sum_{\begin{subarray}{c}1\leq k\leq N\\ 4|k\end{subarray}}\frac{1}{k}e^{\frac{2\pi n}{N^{2}}}\sum_{\nu\ (\mathrm{mod}\ k)}n^{\frac{1}{3}}k^{\frac{2}{3}+ \varepsilon}\int_{-\frac{1}{k(k+N)}}^{\frac{1}{k(k+N)}}\left|\mathcal{J}_{- \frac{1}{12},k,\nu}(z)\right|d\Phi\] \[\ll n^{\frac{1}{3}}\sum_{1\leq k\leq N}\frac{k^{-\frac{1}{3}+ \varepsilon}}{k(k+N)}\sum_{\nu\ (\mathrm{mod}\ k)}\frac{1}{\left|\frac{\pi}{2}-\frac{\pi}{k}\left(\nu-\frac{ 1}{6}\right)\right|}\ll\frac{n^{\frac{1}{3}}}{N}\sum_{k=1}^{N}k^{-\frac{1}{3} +\varepsilon}\log(k)\] \[\ll n^{\frac{1}{3}}N^{-\frac{1}{3}+\varepsilon}\log(N)\to 0.\]
We bound \(S_{42}^{[2]}\) in the same way by replacing the Kloosterman sum \(K_{k}^{[42]}\) with the incomplete Kloosterman sums \(\mathbb{K}_{k,\ell}^{[42]}(\nu,n,m)\). The sum \(S_{42}^{[3]}\) is bounded exactly like \(S_{42}^{[1]}\).
Finally, the contribution \(S_{43}\to 0\) as \(N\to\infty\) by the same argument as used for \(S_{41}\), changing the Kloosterman sums \(K_{k}^{[41]}\) and \(\mathbb{K}_{k,\ell}^{[41]}\) to the Kloosterman sums \(K_{k}^{[43]}\) and \(\mathbb{K}_{k,\ell}^{[43]}\). In total, we obtain that \(\sum_{4}\to 0\), as \(N\to\infty\).
We continue to estimate \(\sum_{2}\). Thus, suppose that \(\gcd(4,k)=2\). We decompose \(\sum_{2}\) into three sums
\[S_{21}\coloneqq\frac{1}{4}\sum_{\begin{subarray}{c}0\leq h<k\leq N \\ \gcd(h,k)=1\\ \gcd(4,k)=2\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac{k }{2}}^{2}}(-1)^{\frac{k}{2}+1}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2} \right)-\frac{2\pi inh}{k}}\int_{-\theta_{h,k}^{\prime}}^{\theta_{h,k}^{ \prime\prime}}e^{\frac{2\pi nz}{k}+\frac{\pi}{2kz}}\frac{P(q_{1}^{2})^{4}}{P(q _{1})P(q_{1}^{4})^{2}}f(q_{1})d\Phi,\] \[S_{22}\coloneqq\frac{1}{2}\sum_{\begin{subarray}{c}0\leq h<k\leq N \\ \gcd(h,k)=1\\ \gcd(4,k)=2\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{k\omega_{2h,\frac {k}{2}}^{2}}e^{-\frac{2\pi inh}{k}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}e^{\frac{\pi ih^{\prime}(-3\nu^{2}+\nu)}{k}}\int_{- \theta_{h,k}^{\prime}}^{\theta_{h,k}^{\prime\prime}}e^{\frac{2\pi nz}{k}+ \frac{5\pi}{12kz}}\frac{P(q_{1}^{2})^{4}}{P(q_{1})P(q_{1}^{4})^{2}}zI_{k,\nu }(z)d\Phi,\] \[S_{23}\coloneqq\frac{1}{4}\sum_{\begin{subarray}{c}0\leq h<k\leq N \\ \gcd(h,k)=1\\ \gcd(4,k)=2\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4} \omega_{2h,\frac{k}{2}}^{2}}e^{-\frac{2\pi inh}{k}}\int_{-\theta_{h,k}^{\prime }}^{\theta_{h,k}^{\prime\prime}}e^{\frac{2\pi nz}{k}+\frac{\pi}{2kz}}\frac{P( q_{1}^{2})^{6}}{P(q_{1})^{6}}d\Phi.\]
Even though, at first glance, \(S_{21}\) and \(S_{23}\) individually make a significant contribution to the overall sum, they do in fact cancel each other. This follows from the fact that they grow with the same factor of \(e^{\frac{\pi}{2kz}}\) and
\[\frac{\omega_{h,\frac{k}{2}}^{4}}{\omega_{2h,\frac{k}{2}}^{2}}(-1)^{\frac{k}{ 2}+1}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}=-\frac{\omega_{ h,\frac{k}{2}}^{6}}{\omega_{h,k}^{4}\omega_{2h,\frac{k}{2}}^{2}}. \tag{4.6}\]
To see this, rewrite the equation as
\[(-1)^{\frac{k}{2}}e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2}\right)}= \frac{\omega_{h,\frac{k}{2}}^{2}}{\omega_{h,k}^{4}}.\]
Since \(\gcd(4,k)=2\) we use the definition of \(\omega_{h,k}\) and obtain that
\[\frac{\omega_{h,\frac{k}{2}}^{2}}{\omega_{h,k}^{4}}=e^{\frac{\pi i}{4}(4-h^{ \prime}k+h^{2}h^{\prime}k-h(2+k))},\]
which further simplifies, by using \(hh^{\prime}\equiv-1\ (\mathrm{mod}\ 4k)\), to
\[=(-1)e^{-\frac{\pi i}{4}(h(2+k))}.\]
Using again that \(\gcd(4,k)=2\) and \(hh^{\prime}\equiv-1\ (\mathrm{mod}\ 4k)\) we recognize \((-1)=(-1)^{\frac{k}{2}}\) and furthermore \(e^{-\frac{\pi i}{4}(h(2+k))}=e^{\frac{\pi ih^{\prime}}{2}\left(1-\frac{3k}{2} \right)}\), thus proving (4.6).
We now continue to estimate \(S_{22}.\) As for \(S_{42}\), the non-principal part vanishes in the limit \(N\rightarrow\infty.\) We are left with
\[\mathcal{S}_{22}\coloneqq\frac{1}{2}\sum_{\begin{subarray}{c}0\leq h<k\leq N \\ \gcd(h,k)=1\\ \gcd(4,k)=2\end{subarray}}\frac{\omega_{h,\frac{k}{2}}^{4}}{k\omega_{2h, \frac{k}{2}}^{2}}e^{-\frac{2\pi inh}{k}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}e^{\frac{\pi ih^{\prime}(-3\nu^{2}+\nu)}{k}}\int_{- \theta_{h,k}^{\prime}}^{\theta_{h,k}^{\prime\prime}}e^{\frac{2\pi nz}{k}} \mathcal{J}_{\frac{5}{12},k,\nu}(z)d\Phi.\]
By Lemma 3.5 (ii) we can write \(\mathcal{J}_{\frac{5}{12},k,\nu}(z)=\mathcal{J}_{\frac{5}{12},k,\nu}^{*}(z)+ \mathcal{E}_{\frac{5}{12},k,\nu},\) where the contribution from \(\mathcal{E}_{\frac{5}{12},k,\nu}\) vanishes as before when \(N\rightarrow\infty.\) We decompose \(\mathcal{S}_{22}\) into \(\mathcal{S}_{22}^{[1]},\mathcal{S}_{22}^{[2]}\) and \(\mathcal{S}_{22}^{[3]}\) using the splittings (4.4) and (4.5).
Recall that \(z=k(N^{-2}-i\Phi).\) We have for \(b>0\)
\[\int_{-\frac{1}{k(k+N)}}^{\frac{1}{k(k+N)}}e^{\frac{2\pi nz}{k}} \mathcal{J}_{b,k,\nu}^{*}(z)d\Phi=\int_{-\frac{1}{k(k+N)}}^{\frac{1}{k(k+N)}}e^ {\frac{2\pi nz}{k}}\sqrt{\frac{b}{3}}\int_{-1}^{1}\frac{e^{\frac{\pi b}{kz}(1- x^{2})}}{\cosh\left(\frac{\pi i}{k}\left(\nu-\frac{1}{6}\right)-\frac{\pi}{k}\sqrt{ \frac{b}{3}}x\right)}dxd\Phi\] \[=\frac{\sqrt{b}}{ik\sqrt{3}}\int_{-1}^{1}\frac{1}{\cosh\left( \frac{\pi i}{k}\left(\nu-\frac{1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x \right)}\int_{\frac{k}{N^{2}}-\frac{i}{k+N}}^{\frac{k}{N^{2}}+\frac{i}{k+N}}e ^{\frac{\pi b}{kz}(1-x^{2})+\frac{2\pi nz}{k}}dzdx.\]
After the change of variables \(w=\frac{z}{k}\) the last expression equals
\[=\frac{\sqrt{b}}{i\sqrt{3}}\int_{-1}^{1}\frac{1}{\cosh\left( \frac{\pi i}{k}\left(\nu-\frac{1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x \right)}\int_{\frac{1}{N^{2}}-\frac{i}{k(k+N)}}^{\frac{1}{N^{2}}+\frac{i}{k(k+ N)}}e^{2\pi nw+\frac{2\pi}{k^{2}w}\cdot\frac{b}{2}(1-x^{2})}dwdx.\]
Thus, we find that
\[\mathcal{S}_{22}^{[1]} =\frac{\sqrt{5}\pi}{6}\sum_{\begin{subarray}{c}1\leq k\leq N\\ \gcd(4,k)=2\end{subarray}}\frac{1}{k}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[22]}(\nu,n)\int_{-1}^{1} \frac{L_{k}\left(n,\frac{5}{24}\left(1-x^{2}\right)\right)}{\cosh\left(\frac {\pi i}{k}\left(\nu-\frac{1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x \right)}dx\] \[+\frac{\sqrt{5}\pi}{6}\sum_{\begin{subarray}{c}1\leq k\leq N\\ \gcd(4,k)=2\end{subarray}}\frac{1}{k}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[22]}(\nu,n)\int_{-1}^{1} \frac{1}{\cosh\left(\frac{\pi i}{k}\left(\nu-\frac{1}{6}\right)-\frac{\pi}{k} \sqrt{\frac{b}{3}}x\right)}\] \[\times\left(\mathcal{E}_{k}^{[1]}\left(n,\frac{5}{24}\left(1-x^{ 2}\right)\right)+\mathcal{E}_{k}^{[2]}\left(n,\frac{5}{24}\left(1-x^{2}\right) \right)+\mathcal{E}_{k}^{[3]}\left(n,\frac{5}{24}\left(1-x^{2}\right)\right) \right)dx,\]
where we denote by \(R\) the rectangle with vertices \(\pm\frac{1}{N^{2}}\pm\frac{i}{k(k+N)}\) around \(0\) with counterclockwise orientation and set
\[L_{k}(n,y) :=\frac{1}{2\pi i}\int_{R}e^{2\pi nw+\frac{2\pi y}{k^{2}w}}dw, \quad\mathcal{E}_{k}^{[1]}(n,y):=\frac{1}{2\pi i}\int_{\frac{1}{N^{2}}+\frac{ i}{k(k+N)}}^{-\frac{1}{N^{2}}+\frac{i}{k(k+N)}}e^{2\pi nw+\frac{2\pi y}{k^{2}w}}dw,\] \[\mathcal{E}_{k}^{[2]}(n,y) :=\frac{1}{2\pi i}\int_{-\frac{1}{N^{2}}+\frac{i}{k(k+N)}}^{- \frac{1}{N^{2}}-\frac{i}{k(k+N)}}e^{2\pi nw+\frac{2\pi y}{k^{2}w}}dw,\quad \mathcal{E}_{k}^{[3]}(n,y):=\frac{1}{2\pi i}\int_{-\frac{1}{N^{2}}-\frac{i}{k( k+N)}}^{\frac{1}{N^{2}}-\frac{i}{k(k+N)}}e^{2\pi nw+\frac{2\pi y}{k^{2}w}}dw.\]
We first bound \(\mathcal{E}_{k}^{[1]}\) and \(\mathcal{E}_{k}^{[3]}.\) On the horizontal edges of \(R\) we have, following [12]
\[w=u\pm\frac{i}{k(k+N)},\quad-\frac{1}{N^{2}}\leq u\leq\frac{1}{N^{2}},\quad \mathrm{Re}\left(w\right)=u\leq\frac{1}{N^{2}},\quad\mathrm{Re}\left(\frac{1 }{w}\right)\leq 4k^{2}.\]
Hence,
\[\left|\mathcal{E}_{k}^{[1]}\left(n,\frac{5}{24}\left(1-x^{2}\right)\right) \right|,\left|\mathcal{E}_{k}^{[3]}\left(n,\frac{5}{24}\left(1-x^{2}\right) \right)\right|\leq\frac{1}{N^{2}\pi}e^{\frac{5\pi}{3}\left(1-x^{2}\right)+ \frac{2\pi n}{N^{2}}}.\]
Following again [12] we have on the vertical part of \(R\)
\[w=-\frac{1}{N^{2}}+iv,\quad-\frac{1}{k(k+N)}\leq v\leq\frac{1}{k(k+N)},\quad \operatorname{Re}\left(w\right)<0,\quad\operatorname{Re}\left(\frac{1}{w} \right)<0.\]
Thus,
\[\left|\mathcal{E}_{k}^{[2]}\left(n,\frac{5}{24}\left(1-x^{2}\right)\right) \right|\leq\frac{1}{\pi kN}.\]
These estimates together with Lemma 3.3 imply \(\mathcal{E}_{k}^{[1]},\mathcal{E}_{k}^{[2]}\) and \(\mathcal{E}_{k}^{[3]}\) contribute at most
\[\ll e^{\frac{2\pi n}{N^{2}}}\sum_{k=1}^{N}\frac{1}{k}\sum_{\nu=1}^{k}k^{\frac{ 2}{3}+\varepsilon}n^{\frac{1}{3}}\int_{-1}^{1}\frac{1}{\left|\cosh\left(\frac{ \pi i}{k}\left(\nu-\frac{1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x\right) \right|}\frac{1}{kN}e^{\frac{5\pi}{3}\left(1-x^{2}\right)}dx.\]
For \(\alpha\geq 0\) and \(0<\beta<\pi\) we have the bound
\[\left|\cosh(\alpha+i\beta)\right|\geq\left|\sin\left(\frac{\pi}{2}-\beta \right)\right|\gg\left|\frac{\pi}{2}-\beta\right|.\]
Since \(\nu-\frac{1}{6}>0\) we can bound the above as \(N\to\infty\) by
\[\ll\frac{n^{\frac{1}{3}}}{N}\sum_{k=1}^{N}k^{-\frac{4}{3}+ \varepsilon}\sum_{\nu=1}^{k}\frac{1}{\left|\frac{\pi}{2}-\frac{\pi}{k}\left( \nu-\frac{1}{6}\right)\right|}\int_{-1}^{1}e^{\frac{5\pi}{6}\left(1-x^{2} \right)}dx\ll\frac{n^{\frac{1}{3}}}{N}\sum_{k=1}^{N}k^{-\frac{1}{3}+ \varepsilon}\log(k)\] \[\ll n^{\frac{1}{3}}N^{-\frac{1}{3}+\varepsilon}\log(N)\to 0.\]
We now show that
\[L_{k}(n,y)=\frac{1}{k}\sqrt{\frac{y}{n}}I_{1}\left(\frac{4\pi\sqrt{ny}}{k} \right).\]
By the Residue Theorem we have
\[L_{k}(n,y)=\operatorname{Res}_{w=0}\,e^{2\pi nw+\frac{2\pi y}{k^{2}w}}.\]
Using the series expansion of the exponential function we find that
\[\operatorname{Res}_{w=0}\,e^{2\pi nw+\frac{2\pi y}{k^{2}w}} =\sum_{m\geq 0}\frac{1}{m!(m+1)!}(2\pi)^{2m+1}\frac{n^{m}y^{m+1 }}{k^{2m+2}}\] \[=\frac{1}{k}\sqrt{\frac{y}{n}}\sum_{m\geq 0}\frac{1}{m!(m+1)!}(2 \pi)^{2m+1}\frac{n^{m+\frac{1}{2}}y^{m+\frac{1}{2}}}{k^{2m+1}}.\]
We recognize the last sum as the series representation of the Bessel function [14]
\[I_{\ell}(z)\coloneqq\sum_{m\geq 0}\frac{1}{m!\Gamma(m+\ell+1)}\left(\frac{z}{2} \right)^{2m+\ell},\]
which proves the claim. Therefore, as \(N\to\infty\) we find
\[\mathcal{S}_{22}^{[1]}=\frac{5\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1 \\ \gcd(4,k)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[22]}(\nu,n) \mathcal{I}_{\frac{5}{12},k,\nu}(n).\]
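The residue identity \(L_{k}(n,y)=\frac{1}{k}\sqrt{\frac{y}{n}}I_{1}\left(\frac{4\pi\sqrt{ny}}{k}\right)\) used in this step is easy to confirm numerically by summing the exponential series directly; a small sketch (our addition, assuming scipy is available):

```python
# Sketch (our addition): numerical check of
# L_k(n, y) = (1/k) sqrt(y/n) I_1(4 pi sqrt(n y) / k).
import math
from scipy.special import iv

def L_residue(k, n, y, terms=60):
    """Sum the residue series for Res_{w=0} e^{2 pi n w + 2 pi y/(k^2 w)}."""
    return sum((2 * math.pi) ** (2 * m + 1) * n ** m * y ** (m + 1)
               / (math.factorial(m) * math.factorial(m + 1) * k ** (2 * m + 2))
               for m in range(terms))

k, n, y = 2, 10, 5 / 24
closed = math.sqrt(y / n) / k * iv(1, 4 * math.pi * math.sqrt(n * y) / k)
print(L_residue(k, n, y), closed)   # the two values agree
```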
We continue to bound \(\mathcal{S}_{22}^{[2]}.\) The contribution of \(\mathcal{S}_{22}^{[3]}\) can be bounded in the exact same way. We find
\[\mathcal{S}_{22}^{[2]} =\frac{1}{2}\sum_{\begin{subarray}{c}1\leq k\leq N\\ \gcd(4,k)=2\end{subarray}}\frac{1}{k}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}\sum_{\ell=N+1}^{k+N-1} \mathbb{K}_{k,\ell}^{[22]}(\nu,n)\sqrt{\frac{5}{12\cdot 3}}\] \[\times\int_{-1}^{1}\frac{1}{\cosh\left(\frac{\pi i}{k}\left(\nu- \frac{1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x\right)}\int_{-\frac{1}{k \ell}}^{-\frac{1}{k(\ell+1)}}e^{2\pi nw+\frac{5\pi}{12k^{2}w}\left(1-x^{2}\right) }d\Phi dx.\]
Following [12] we bound
\[\mathrm{Re}\left(2\pi nw+\frac{5\pi}{12k^{2}w}\left(1-x^{2}\right)\right)\leq \frac{2\pi n}{N^{2}}+\frac{5\pi}{3}\left(1-x^{2}\right).\]
Thus, using Lemma 3.3 we find as \(N\rightarrow\infty,\)
\[\mathcal{S}_{22}^{[2]} \ll e^{\frac{2\pi n}{N^{2}}}\sum_{k=1}^{N}\sum_{\nu=1}^{k}\sum_{ \ell=N+1}^{k+N-1}k^{\frac{2}{3}+\varepsilon}n^{\frac{1}{3}}\int_{-1}^{1}\frac {e^{\frac{5\pi}{3}\left(1-x^{2}\right)}}{\cosh\left(\frac{\pi i}{k}\left(\nu- \frac{1}{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}}x\right)}dx\int_{-\frac{1}{k \ell}}^{-\frac{1}{k(\ell+1)}}d\Phi\] \[\ll n^{\frac{1}{3}}\sum_{k=1}^{N}k^{-\frac{1}{3}+\varepsilon}k \log(k)\frac{1}{k(N+k)}\ll n^{\frac{1}{3}}N^{-\frac{1}{3}+\varepsilon}\log(N )\to 0.\]
Therefore, in total as \(N\rightarrow\infty\) we find that
\[\Sigma_{2}=\frac{5\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(4,k)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[22]}(\nu,n) \mathcal{I}_{\frac{5}{12},k,\nu}(n).\]
Finally, we have to evaluate \(\sum_{1}.\) Thus, we assume that \(\gcd(4,k)=1.\) We have the following three sums
\[S_{11}\coloneqq\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\\ \gcd(4,k)=1\end{subarray}}\frac{\omega_{2h,k}^{4}}{\omega_{4h,k}^{2}}(-1)^{ \frac{k-1}{2}}e^{\frac{3\pi ih^{\prime}}{4k}-\frac{2\pi inh}{k}}\int_{-\vartheta _{h,k}^{\prime}}^{\vartheta_{h,k}^{\prime\prime}}e^{\frac{2\pi nz}{k}-\frac{5 \pi}{8kz}}\frac{P\left(q_{1}^{\frac{1}{2}}\right)^{4}}{P(q_{1})P\left(q_{1}^{ \frac{1}{4}}\right)^{2}}\cdot\omega\left(q_{1}^{\frac{1}{2}}\right)d\Phi,\]
\[S_{12}\coloneqq\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\\ \gcd(4,k)=1\end{subarray}}\frac{\omega_{2h,k}^{4}}{k\omega_{4h,k}^{2}}e^{- \frac{2\pi inh}{k}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}e^{\frac{\pi ih^{\prime}(-3 \nu^{2}-\nu)}{k}}\int_{-\vartheta_{h,k}^{\prime}}^{\vartheta_{h,k}^{\prime \prime}}e^{\frac{2\pi nz}{k}+\frac{\pi}{24kz}}\frac{P\left(q_{1}^{\frac{1}{2} }\right)^{4}}{P(q_{1})P\left(q_{1}^{\frac{1}{4}}\right)^{2}}zI_{k,\nu}(z)d\Phi,\]
\[S_{13}\coloneqq\frac{1}{4}\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\\ \gcd(4,k)=1\end{subarray}}\frac{\omega_{2h,k}^{6}}{\omega_{h,k}^{4}\omega_{4h,k}^{2}}e^{-\frac{2\pi inh}{k}}\int_{-\vartheta_{h,k}^{\prime}}^{\vartheta_{h,k}^{\prime\prime}}e^{\frac{2\pi nz}{k}-\frac{\pi}{8kz}}\frac{P\left(q_{1}^{ \frac{1}{2}}\right)^{6}}{P(q_{1})^{4}P\left(q_{1}^{\frac{1}{4}}\right)^{2}}d\Phi.\]
Similar to \(S_{41}\) and \(S_{43}\) we can show that as \(N\to\infty\) we have \(S_{11}\to 0\) and \(S_{13}\to 0.\) Furthermore, for \(S_{12}\) the non-principal part vanishes as \(N\to\infty\) and we are left with
\[\mathcal{S}_{12}=\sum_{\begin{subarray}{c}0\leq h<k\leq N\\ \gcd(h,k)=1\\ \gcd(4,k)=1\end{subarray}}\frac{\omega_{2h,k}^{4}}{k\omega_{4h,k}^{2}}e^{- \frac{2\pi inh}{k}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}e^{\frac{\pi ih^{\prime}(-3\nu^{2}-\nu)}{k}}\int_{ -\vartheta_{h,k}^{\prime}}^{\vartheta_{h,k}^{\prime\prime}}e^{\frac{2\pi nz}{ k}}\mathcal{J}_{\frac{1}{24},k,\nu}(z)d\Phi.\]
Similar to \(\mathcal{S}_{22}\), using Lemma 3.4, one can show that as \(N\to\infty\)
\[\Sigma_{1}=\frac{\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(4,k)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[12]}(\nu,n) \mathcal{I}_{\frac{1}{24},k,\nu}(n).\]
Combining our results we have the following
**Theorem 4.1**.: _We have for \(n\geq 1,\)_
\[\overline{p_{1}}(n) =\frac{\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(4,k)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{n+\nu}K_{k}^{[12]}(\nu,n) \mathcal{I}_{\frac{1}{24},k,\nu}(n)\] \[+\frac{5\pi}{12\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(4,k)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{n+\nu}K_{k}^{[22]}(\nu,n) \mathcal{I}_{\frac{5}{12},k,\nu}(n).\]
It is easy to see that the summand \(k=2\) dominates the rest of the sum and thus we get as a consequence the following asymptotics as \(n\to\infty\)
\[\overline{p_{1}}(n)\sim\frac{5\pi}{48\sqrt{6n}}\sum_{\nu=0,1}(-1)^{n+\nu}K_{ 2}^{[22]}(\nu,n)\mathcal{I}_{\frac{5}{12},2,\nu}(n).\]
We can simplify things quite a bit, by recognizing that
\[K_{2}^{[22]}(0,n)=(-1)^{n},\quad K_{2}^{[22]}(1,n)=(-1)^{n+1}.\]
Hence, we see that this removes the twist by the root of unity introduced at the beginning and we find
\[\overline{p_{1}}(n)\sim\frac{5\pi}{48\sqrt{6n}}\left(\mathcal{I}_{\frac{5}{12 },2,0}(n)+\mathcal{I}_{\frac{5}{12},2,1}(n)\right),\quad(n\to\infty).\]
In [9] the author applied the saddle point method in detail to find a strong asymptotic formula for \(\mathcal{I}_{\frac{5}{18},1,0}(n).\) The same method can be applied here to find asymptotic formulas for \(\mathcal{I}_{\frac{5}{12},2,0}(n)\) and \(\mathcal{I}_{\frac{5}{12},2,1}(n).\) We leave out the details. The result is the following
**Corollary 4.2**.: _We have for \(n\to\infty\)_
\[\overline{p_{1}}(n)\sim\frac{\sqrt{5}}{4\sqrt{6}n}e^{\pi\sqrt{\frac{5}{6}n}}.\]
|
2305.09769 | Ice Rule Breakdown and frustrated antiferrotoroidicity in an artificial
colloidal Cairo ice | We combine experiments and numerical simulations to investigate the low
energy states and the emergence of topological defects in an artificial
colloidal ice in the Cairo geometry. This type of geometry is characterized by
a mixed coordination ($z$), with coexistence of both $z=3$ and $z=4$ vertices.
We realize this particle ice by confining field tunable paramagnetic colloidal
particles within a lattice of topographic double wells at a one to one filling
using optical tweezers. By raising the interaction strength via an applied
magnetic field, we find that the ice rule breaks down, and positive monopoles
with charge $q=+2$ accumulate in the $z = 4$ vertices and are screened by
negative ones ($q=-1$) in the $z = 3$. The resulting, strongly coupled state
remains disordered. Further, via analysis of the mean chirality associated to
each pentagonal plaquette, we find that the disordered ensemble for this
geometry is massively degenerate and it corresponds to a frustrated
antiferrotoroid. | Carolina Rodríguez-Gallo, Antonio Ortiz-Ambriz, Cristiano Nisoli, Pietro Tierno | 2023-05-16T19:45:04Z | http://arxiv.org/abs/2305.09769v1 | # Ice Rule Breakdown and frustrated antiferrotoroidicity in an artificial colloidal Cairo ice
###### Abstract
We combine experiments and numerical simulations to investigate the low energy states and the emergence of topological defects in an artificial colloidal ice in the Cairo geometry. This type of geometry is characterized by a mixed coordination (\(z\)), with coexistence of both \(z=3\) and \(z=4\) vertices. We realize this particle ice by confining field tunable paramagnetic colloidal particles within a lattice of topographic double wells at a one to one filling using optical tweezers. By raising the interaction strength via an applied magnetic field, we find that the ice rule breaks down, and positive monopoles with charge \(q=+2\) accumulate in the \(z=4\) vertices and are screened by negative ones (\(q=-1\)) in the \(z=3\). The resulting, strongly coupled state remains disordered. Further, via analysis of the mean chirality associated to each pentagonal plaquette, we find that the disordered ensemble for this geometry is massively degenerate and it corresponds to a frustrated antiferrotoroid.
## 1 Introduction
The Cairo geometry is a type of Euclidean plane tiling made of a sequence of connected pentagons which share vertices with two types of coordination numbers, namely \(z=3\) and \(z=4\)[1, 2]. Besides its aesthetic beauty, as testified by the presence of the Cairo geometry in numerous artistic paintings and pavements, especially in Egypt [3], such a lattice is also important in frustrated spin systems. For example, it has been experimentally found in different magnetic compounds such as the Bi\({}_{2}\)Fe\({}_{4}\)O\({}_{9}\)[4] and the Bi\({}_{4}\)Fe\({}_{5}\)O\({}_{13}\)F [5], apart from being the subject of theoretical studies on Ising-type models [6, 7, 8, 9]. Recently, such geometry has been considered as an interesting way to organize interacting dipolar nanoislands on a plane, also known as artificial spin ice systems (ASIs) [10, 11, 12]. ASIs are lattices of ferromagnetic elements that interact via in-plane dipoles and are arranged to produce geometric frustration effects [13, 14, 15, 16, 17, 18, 19, 20]. In the Cairo geometry, recent experimental works have found a rich behavior due to frustration [21, 22], while Monte Carlo simulations reported the presence of long-range order [23]. Even mechanical analogues of Cairo artificial spin ice were realized via 3D-printing [24].
Particle ice systems are soft matter analogues of ASIs but based on interacting colloids constrained to move within a lattice of double wells [25, 26]. In contrast to ASIs that feature in-plane dipoles, the colloidal particles present out-of-plane, induced dipoles and pair interactions that can be tuned by an external field. The microscopic size of the particles allows the use of optical microscopy to visualize their dynamics and thus, to extract all relevant degrees of freedom. The particle ice was originally proposed with a set of double wells generated optically [25, 27], while experiments were realized by using lithographically patterned substrates [28, 29, 30]. Moreover, it was shown theoretically that, for a lattice of single coordination number \(z\), the colloidal ice is analogous to an ASI since the low energy states fulfill similar ice-rules, i.e. minimization of the associated topological charge [31]. Such similarity, however, breaks down for lattices of mixed coordination such as decimated systems [32]. Indeed, in a colloidal ice, particles at a vertex tend to repel each other, and therefore the single vertex energy is different from
ASI. If we consider the colloid as a token of a topological charge, then each vertex wants to push away as much charge as possible. In certain geometries this is impossible and the same trade-off, corresponding to the ice rule, is realized on each vertex. Thus, the colloidal ice for an extended, single coordinated lattice behaves as a spin ice [31, 33]. In other cases that is not true [32], as we show below for the Cairo geometry, and a transfer of topological charge takes place between vertices of different coordination, breaking the ice rule.
This fact underlines that particle based ice offers the possibility of investigating a rich set of physical phenomena, different than ASIs [30, 33, 34, 35]. And indeed many other recent realizations testify to the broader phenomenology of particle-based systems as models for geometric frustration. These include confined microgel particles [36, 37, 38, 39, 40], mechanical metamaterials [41, 42, 43, 44, 45, 46], patterned superconductors [47, 48], skyrmions in liquid crystals [49, 50] and interacting macroscopic rods [51, 52].
In this article we experimentally realize a Cairo colloidal ice by confining repulsive paramagnetic colloidal particles within a lattice of lithographic double wells, as shown in Fig. 1(a). To place and move these particles within the topographic traps, we use a modified set of laser tweezers that generate an optical ring rather than a focalized spot. This strategy allows us to easily trap and move paramagnetic particles, avoiding heating due to absorbed light. By investigating the low energy states, we find that topological charges can accumulate in different sublattices, breaking locally the ice rules. We complement our findings with Brownian dynamics simulations, which show good agreement with the experimental data in terms of fraction of vertices, topological charges and net chirality associated to each pentagonal plaquette of the Cairo lattice. Finally we perform simulations on an extended system to calculate chirality correlation functions and to elucidate the system frustration at high field strength.
## 2 The artificial colloidal ice
The schematic in Fig. 1(b) and the optical microscope image in Fig. 1(c) illustrate the basic features of a Cairo colloidal ice. The system presents a lattice of lithographic elliptical traps composed of two wells of lateral elevation \(h\sim 3\mu\)m that are connected by a small central hill. These wells are filled with paramagnetic colloidal particles of diameter \(d=10\,\mu\)m and magnetic volume susceptibility \(\kappa=0.025\). A particle of volume \(V=\pi d^{3}/6\) has to overcome a gravitational potential \(U_{g}=\Delta\rho Vgh\sim 2000\,\mathrm{k_{B}T}\) to jump outside the double well due to thermal fluctuations. Here \(\Delta\rho=\rho_{p}-\rho_{w}\sim 0.6\,\mathrm{g}\,\mathrm{cm^{-3}}\) is the difference between the mass density of the particle (\(\rho_{p}\)) and the dispersing medium (\(\rho_{w}\)). Thus, the particles are essentially confined within the elliptical traps and cannot change their location from one well to another unless subjected to an external force, such as the repulsion from a neighboring colloid. We induce such repulsion by applying an external magnetic field \(\mathbf{B}\) perpendicular to the sample plane, Fig. 1(b). Once the field is applied, each particle acquires an induced dipole moment \(\mathbf{m}=V\kappa\mathbf{B}/\mu_{0}\), where \(\mu_{0}\) is the permeability of the medium. Thus, a pair of particles (\(i,j\)) placed at a relative distance
\(r=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) experiences a repulsive dipolar interaction which is isotropic and inversely proportional to \(r^{3}\), \(U_{dd}=\mu_{0}m^{2}/(4\pi r^{3})\). This interaction potential can be tuned via the applied field: for an amplitude of \(B=10\,\)mT, \(U_{dd}=122\,k_{B}T\) at the closest distance \(r=13\,\mu\)m between two particles in a \(z=3\) vertex, and \(U_{dd}=4.7\,k_{B}T\) at the farthest distance \(r=38.4\,\mu\)m in a \(z=4\) vertex.
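These energy scales can be verified with a quick back-of-the-envelope script; the sketch below is ours (the paper contains no code), using only the parameter values quoted above, and reproduces the quoted magnitudes up to rounding:

```python
import math

# Back-of-the-envelope check (ours, not from the paper) of the energy
# scales quoted above, using the stated parameter values.
kBT = 1.380649e-23 * 300.0           # thermal energy at T = 300 K [J]
mu0 = 4 * math.pi * 1e-7             # permeability [H/m]

d = 10e-6                            # particle diameter [m]
V = math.pi * d**3 / 6               # particle volume [m^3]
drho, g, h = 0.6e3, 9.81, 3e-6       # density mismatch, gravity, elevation

U_g = drho * V * g * h               # gravitational barrier of the well
print(f"U_g ~ {U_g / kBT:.0f} kBT")  # ~2200, consistent with ~2000 kBT

kappa, B = 0.025, 10e-3              # susceptibility, field amplitude [T]
m = V * kappa * B / mu0              # induced dipole moment [A m^2]
for r in (13e-6, 38.4e-6):           # closest / farthest pair distances
    U_dd = mu0 * m**2 / (4 * math.pi * r**3)
    # prints ~119 and ~5 kBT, close to the quoted 122 and 4.7 kBT
    print(f"U_dd(r = {r*1e6:.1f} um) ~ {U_dd / kBT:.1f} kBT")
```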
The mapping between the colloidal ice and an ASI [25] can be obtained by assigning an Ising-like spin to each double well, such that it points to where the particle is located, Fig. 1(d). Using this mapping, one can distinguish between different types of vertices depending on the lattice coordination. For example, for \(z=4\) (square lattice) there are 6 possible arrangements of the particles with different energetic weights, while for \(z=3\) (honeycomb lattice) these reduce to 4. Moreover, in analogy to the ASI, one can assign a topological charge to each vertex, defined here as \(q=2n-z\), where \(n\) is the number of spins that point toward the vertex center. Note that we can speak of topological charges when considering the vertices within a lattice, not isolated ones. By this notion of charge, an extended lattice is overall charge-neutral, and thus charges appear in pairs and disappear only when annihilated by other defects, in order to guarantee the charge conservation \(\sum q_{i}=0\). For example, the vertex with four colloids pointing outwards (\(q=-4\)) is characterized by the lowest energetic weight of the \(z=4\), and thus, when considered alone, it would be the natural state of repulsive colloids. However, within a
Figure 1: (a) Image of a lithographic structure made of double wells and arranged along the Cairo geometry with the different parameters overlaid: \(\theta\) is the bond angle, \(a\) is the distance between two \(z=3\) vertices, \(l\) between \(z=3\) and \(z=4\) vertices; scale bar is \(10\,\mu\)m. (b) Schematic showing the double wells filled with paramagnetic colloidal particles. The external field \(\mathbf{B}\) induces an equal dipole moment \(\mathbf{m}\) in each particle. (c) Experimental realization of a Cairo colloidal ice, where the colloidal particles appear as black disks. The shaded gray region excludes vertices from the statistical analysis, in order to minimize effects due to open boundaries. Scale bar is \(20\,\mu\)m. (d) Different configurations with corresponding topological charges \(q\) (black) and rescaled total energy \(U/U_{0}\) (blue) for vertices of coordination \(z=4\) (top row) and \(z=3\) (bottom row). Here \(U_{0}=6.2\)k\({}_{\rm B}\)T is the energy of the ground state vertex in the \(z=4\) for a field of \(B=10\) mT. The last vertex in the \(z=4\) illustrates the Ising-like spins associated to each double well. The green box shows the vertices obeying the ice rules for the coordination \(z=4\) (\(q=0\)) and \(z=3\) (\(q=\pm 1\)).
lattice the \(q=-4\) is topologically connected to the \(q=+4\) due to particle conservation and thus both are unlikely to occur. One can prove that in a lattice, lower absolute values of the topological charges, \(|q|=0,1\), corresponding to the ice rule, are favored [33]. Indeed, the ice rules, highlighted by the green box in Figure 1(d), are a prescription for the minimization of \(|q|\), given by vertices with \(q=0\) for \(z=4\) and \(q=\pm 1\) for \(z=3\).
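Since this vertex bookkeeping reduces to counting inward-pointing spins, it can be encoded in a few lines; the following minimal sketch is ours and purely illustrative, implementing the charge \(q=2n-z\) and the minimal-\(|q|\) ice rule:

```python
from typing import Sequence

def topological_charge(spins_in: Sequence[bool]) -> int:
    """q = 2n - z, where n counts the z incident spins pointing inward."""
    z, n = len(spins_in), sum(spins_in)
    return 2 * n - z

def obeys_ice_rule(spins_in: Sequence[bool]) -> bool:
    """Ice rule = minimal |q|: q = 0 for even z, q = +/-1 for odd z."""
    return abs(topological_charge(spins_in)) <= len(spins_in) % 2

print(topological_charge([True, True, False, False]))  # 2-in-2-out: q = 0
print(obeys_ice_rule([True, True, True]))              # 3-in (q = +3): False
```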
Regarding the Cairo geometry, we have a mixture of \(z=4\) and \(z=3\) vertices, the latter characterized by unequal lengths of the double wells, as shown in Fig. 1(d). Upon analysis of the total magnetic interaction energy of a vertex \(U\), we find that the \(z=4\) vertices have the same energy hierarchy as the square colloidal ice investigated in previous works [28, 29]. Here \(U=\sum_{i=1}^{N_{v}-1}\sum_{j=i+1}^{N_{v}}U_{ij}^{dd}\), where \(N_{v}\) is the number of particles in a vertex. However, the presence of the small double well in the \(z=3\) vertices, i.e. a length of \(4.53\,\mu\)m compared to \(10\,\mu\)m in the \(z=4\), induces an energetic splitting of the 1-in-2-out and 2-in-1-out vertices depending on the location of the paramagnetic colloid. This energy difference between the \(z=3\) vertices, which does not affect the associated topological charge, is particular to the Cairo geometry, and it is not present in the \(z=3\) vertices of the classical honeycomb [53, 54] and triangular [55] colloidal ice, where all traps have the same length. Even if the energy difference is relatively small, as we will show later, it induces a disordered ground state.
## 3 Experimental methods
The Cairo lattice is realized via soft lithography using polydimethylsiloxane (PDMS). The substrate was first designed using commercial software (AutoCAD, Autodesk) and fabricated on a 5-inch soda-lime glass covered with a 500 nm chromium (Cr) layer. The chosen geometric parameters, as shown in Fig. 1(a), are \(a=19.54\,\mu\)m, \(l=26.7\,\mu\)m and \(\theta=30^{\circ}\). We use laser lithography (DWL 66, Heidelberg Instruments Mikrotechnik GmbH) to write the double wells on the substrate with a 405 nm laser diode working at a speed of \(5.7\,\mathrm{mm}^{2}\,\mathrm{min}^{-1}\). After that, we replicate the double wells of the Cr mask in the PDMS in two steps. In the first one we duplicate the structure using an epoxy-based negative photoresist (SU-8) on top of a silicon wafer. Then we cover the structure with liquid PDMS by spinning the sample at 4000 rpm for 1 min with an angular speed of 2000 rpm (Spinner WS-650SZ, Laurell). With this process we obtain a layer of \(\approx 20\)\(\mu m\) thickness. The PDMS is solidified by baking for 30 min at \(95^{\circ}\)C on a leveled hot plate. After solidification of the PDMS, we peel off the structure with the help of a cover glass (MENZEL-GLASER, Deckglaser). The resulting sample is \(\sim 170\,\mu m\) thick, and sufficiently transparent to visible light.
Once fabricated, the sample is placed on the stage of an inverted optical microscope (TiU, Nikon) which is connected to a complementary metal oxide semiconductor camera (MQ013CG-E2, Ximea) able to record videos of the particle dynamics at 30 Hz. The microscope is equipped with a \(40\times\) oil immersion objective (Nikon, numerical aperture 1.3) which is used both for observation and for optical trapping purposes.
One microscope port is modified in order to accommodate an incoming beam
generated by a butterfly laser diode (wavelength \(\lambda=976\,\)nm, operated at a power of 70 mW, BL976-SAG300, Thorlabs). The optical path of the laser is composed of a series of optical elements including a spatial light modulator (SLM, Hamamatsu X10468-03) which is commanded by an LCOS-SLM controller (Hamamatsu), Figs. 2(a,b). The holograms are generated with a custom-made LabVIEW program.
The SLM is used to generate holographic optical tweezers (HOTs) whose optical train is composed of 4 lenses. The first two constitute a telescope and have focal lengths \(f=-75\) mm (Thorlabs LC 1582-B) and \(f=175\) mm (Thorlabs LA 1229-B), respectively. After the SLM there is another telescope with a lens of focal length \(f=500\) mm (Thorlabs LA1908-B), which focuses the deflected beam in order to filter the 0 mode via a diaphragm, before reaching a final lens of \(f=750\) mm (Thorlabs LA 1727-C), as shown in the detailed diagram of Fig. 2(b). Due to the optical absorption of the particles used, which are highly doped with nanoscale iron oxide grains (\(\sim 20\%\) by weight), we have implemented a novel strategy to trap the colloids without damaging them through the generated heat. Instead of using a single focalized spot, we program the SLM such that the deflected beam generates an optical ring, as shown in Figs. 2(a,c).
The external magnetic field was generated via a custom-made coil located below
Figure 2: (a) Schematic showing a colloidal particle trapped above the topographic substrate by laser tweezers. The beam is focused with a microscope objective and passes through a magnetic coil. The side image shows an enlargement of the particle and the optical ring. (b) Detailed sketch of the experimental system illustrating the different optical components used to send a near-infrared laser beam onto a spatial light modulator (SLM); the system dynamics are visualized via a complementary metal oxide semiconductor (CMOS) camera. (c) Microscope image of a particle and the corresponding optical ring.
the sample. The magnetic coil is powered by an amplifier (BOP-20 10M, KEPCO), which is computer-controlled using a digital-to-analogue card (NI 9269) with a custom-made LabVIEW program. The field was applied via a ramp at a rate of \(0.05\,\mathrm{mT}\,\mathrm{s}^{-1}\) until reaching a maximum value of 10 mT.
## 4 Numerical simulation
We complement the experiments with Brownian dynamics simulations that use the experimental data as input parameters. In particular, we use Euler's method to integrate the overdamped equations of motion for each colloidal particle \(i\) at position \(\mathbf{r}_{i}\):
\[\gamma\frac{d\mathbf{r}_{i}}{dt}=\mathbf{F}_{i}^{\mathrm{T}}+\mathbf{F}_{i}^{\mathrm{dd}} +\mathbf{\eta}\;\;, \tag{1}\]
where \(\gamma=0.033\,\mathrm{pN}\,\mathrm{s}\,\mathrm{m}^{-1}\) is the friction coefficient. Further terms in Eq. 1 are the force from the topographic double well, \(\mathbf{F}_{i}^{\mathrm{T}}\), which is modeled as a piece-wise harmonic bistable potential,
\[\mathbf{F}_{i}^{\mathrm{T}}=-\mathbf{e}_{\perp}kR_{\perp}+\mathbf{e}_{\parallel}\delta\;\;. \tag{2}\]
Here \((\mathbf{e}_{\parallel},\mathbf{e}_{\perp})\) are unit vectors oriented parallel and perpendicular with respect to the line of length \(L\) that joins the two minima of the double well, whose associated vector is \(\mathbf{R}\equiv(R_{\parallel},R_{\perp})\). Moreover \(\delta=\xi_{1}R_{\parallel}\) if \(|R_{\parallel}|\leq\frac{L}{2}\) and \(\delta=k(\frac{L}{2}-|R_{\parallel}|)\mathrm{sign}(R_{\parallel})\) otherwise. As stiffness we use \(k=1\cdot 10^{-4}\,\mathrm{pN}/\mathrm{nm}\), which keeps the particle confined to the elongated region around the center of the trap, and \(\xi_{1}=3\) pN/nm, which creates a potential hill equivalent to the gravitational hill within the double wells.
The dipolar force on a particle \(i\) is given by,
\[\mathbf{F}_{i}^{\mathrm{dd}}=\frac{3\mu_{0}}{4\pi}\sum_{j\neq i}\frac{\mathbf{m}^{2} \hat{r}_{ij}}{|r_{ij}|^{4}}\;\;, \tag{3}\]
where \(\mu_{0}=4\pi\times 10^{-7}\,\mathrm{H}/\mathrm{m}\) and \(\hat{r}_{ij}=(\mathbf{r}_{i}-\mathbf{r}_{j})/|\mathbf{r}_{i}-\mathbf{r}_{j}|\). Dipolar interactions are calculated in an iterative form such that the global field \(\mathbf{B}\) also includes that generated by all other dipoles. Moreover, we apply a large cut-off of \(200\,\mu\mathrm{m}\) to account for the effect of long-range dipolar interactions.
Finally, the last term in Eq. 1 is a random force characterized by zero mean, \(\langle\mathbf{\eta}\rangle=0\), and delta-correlated, \(\langle\mathbf{\eta}\left(t\right)\mathbf{\eta}\left(t^{\prime}\right)\rangle=2k_{\mathrm{B}}T\gamma\delta(t-t^{\prime})\), where \(k_{\mathrm{B}}\) is the Boltzmann constant and \(T=300\,\mathrm{K}\) the ambient temperature.
To reproduce the experimental results we start by solving Eq. 1 with \(N_{1}=180\) particles arranged along \(3\times 3\) unit cells with open boundary conditions, similar to Fig. 1(c). However, to consider a larger system when measuring the chirality correlation functions, we also extend the simulations to \(N_{2}=2000\) particles, where 800 are arranged along the \(z=3\) vertices and 400 in the \(z=4\). This situation corresponds to a colloidal ice of \(10\times 10\) unit cells, also with open boundary conditions. To prevent most of the particles in the \(z=3\) vertices from localizing on top of the topographic hills due to strong dipolar forces, we also reduce the particle magnetic susceptibility to \(\kappa_{2}=0.005\)
and raise the spring constant of the central hill to \(\xi_{2}=25\)pN/nm. In all cases, we numerically integrate the equation of motion using a time step of \(\Delta t=0.01\)s.
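As an illustration of this scheme, the following minimal sketch (ours, not the authors' code) implements one Euler step of Eq. 1 with the dipolar force of Eq. 3. The bistable trap force of Eq. 2 is omitted for brevity, and we interpret the friction coefficient as \(0.033\,\mathrm{pN\,s\,\mu m^{-1}}\) — an assumption on our part, consistent with Stokes drag on a \(10\,\mu\)m sphere in water:

```python
import numpy as np

kB, T = 1.380649e-23, 300.0   # Boltzmann constant [J/K], temperature [K]
gamma = 0.033e-12 / 1e-6      # friction, assumed 0.033 pN s/um, in N s/m
mu0 = 4 * np.pi * 1e-7        # vacuum permeability [H/m]
dt = 0.01                     # integration time step [s]

def dipolar_forces(r, m):
    """Pairwise repulsive dipolar forces of Eq. (3) for positions r [m]."""
    F = np.zeros_like(r)
    for i in range(len(r)):
        for j in range(len(r)):
            if i != j:
                rij = r[i] - r[j]
                dist = np.linalg.norm(rij)
                F[i] += 3 * mu0 * m**2 / (4 * np.pi) * rij / dist**5
    return F

def euler_step(r, m, rng):
    """One Euler step of Eq. (1); the trap force F_T is omitted here."""
    noise = rng.normal(size=r.shape) * np.sqrt(2 * kB * T * gamma / dt)
    return r + (dt / gamma) * (dipolar_forces(r, m) + noise)

rng = np.random.default_rng(0)
r = rng.uniform(0, 50e-6, size=(5, 2))  # five particles in a 50 um box
m = 1.0e-13                             # induced moment at B = 10 mT [A m^2]
r = euler_step(r, m, rng)
```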
## 5 Measurements of the topological charges
We start our experiments by randomly placing the particles within the double wells with the optical ring, according to a random number generator. In the initial, random configuration the highly charged monopoles \(q=\pm 4\) account for 10% of the total vertices and \(q=\pm 3\) for 15%, while 25% of the vertices correspond to the low charged \(q=\pm 2\); the rest are ice-rule vertices. Then, we slowly raise the applied field with a ramp of \(0.05\,\)mT\(\,\)s\({}^{-1}\). Fig. 3(a) shows the evolution of the fraction of topological charges, as classified in Fig. 1(d), for the Cairo ice. By increasing \(\mathbf{B}\), we find that already above \(B\sim 3\) mT the fraction of high topological charges, \(q=\pm 4\) in the \(z=4\) and \(q=\pm 3\) in the \(z=3\) vertices, reduces almost to zero in favor of the low charged ones. In particular, vertices obeying the ice selection rules in the \(z=4\) rise up to 50%, surpassed only by the \(q=-1\) in the \(z=3\) (\(\sim 60\%\)), while the \(q=+1\) reduces to \(\sim 50\%\). This reduction is accompanied by a slight increase of the charged monopoles \(q=+2\) and a decrease of the \(q=-2\).
Crucially, Fig. 3(b) plots the average topological charge, \(\bar{q}=\frac{1}{N_{z}}\sum q_{z_{i}}\), with \(N_{z}\) the number of vertices of coordination \(z\), for vertices of coordination 3 and 4. It shows a net separation of topological charges between vertices of different coordination, and thus charge transfer between sublattices, breaking the ice rule. Thus, while the total topological charge is conserved, \(\sum q=0\), at the sublattice level we observe a transfer of topological charges as the field reaches \(B=8\) mT. A net fraction of positive monopoles accumulates in the \(z=4\) vertices, while the \(z=3\) vertices are, on average, negatively charged. Note also the relatively good agreement between experimental data (open symbols) and numerical simulations (continuous lines), which are plotted together in both graphs; small deviations fall within the experimental/simulation error bars.
Importantly, the charge transfer effect between sublattices does not occur in an ASI, whose ice rule is instead robust in most geometries, as it is inscribed into the energetics of the vertices. It only occurs in a colloidal ice with a mixed coordination geometry such as the Cairo lattice. This effect results from the different nature of geometric frustration in ASI and in the colloidal system [31]. While both systems display similar behavior in terms of vertex fractions for single coordination lattices, in a mixed coordination geometry the difference at the single vertex level emerges: repulsive colloids cannot emulate in-plane ferromagnetic spins as in ASI, and topological charges tend to redistribute in order to minimize the system energy.
While charge transfer and ice-rule fragility had been predicted [31] and experimentally verified [32] in a decimated square ice, the sign of the effect in the Cairo lattice is inverted. In Ref. [32], negative monopoles form on the \(z=4\), breaking the ice rule. In that case the \(z=3\) vertices, which unlike the \(z=4\) vertices are charged even when obeying the ice rule because of their odd coordination, do not violate the ice rule
but screen the negative charge of the \(z=4\) vertices by changing their relative admixture of \(\pm 1\) charges, thus assuming a net positive charge. In the Cairo system instead, the mechanism is similar but the sign of the charges is inverted. This is because Cairo is structurally different from a decimated square lattice, with shorter and longer traps and thus a more complex energetics, as shown in Fig. 1(d). This also allows for structural transitions in the sign of the transferred charge as the Cairo lattice is deformed, which we study elsewhere; here we focus on the Cairo geometry.
### Frustrated antiferrotoroidicity in the Cairo system
To better characterize the disorder of the charge-unbalanced, low-energy state of the strongly-interacting Cairo colloidal ice, we study the chirality, or associated toroidal moment, assigned to each pentagonal plaquette. As shown in the inset of Fig. 4(a), this moment acquires a maximum value of \(\chi=-1\) (\(+1\)) for a fully counterclockwise (clockwise) cell. Two natural questions arise: first, whether the plaquettes are naturally chiral, and second, what the mutual arrangement of these chiralities is. To answer the first question, note that for a single pentagonal plaquette, a configuration that is fully chiral obeys the ice rule, which we know breaks down at high fields. At the same time, the long-range interactions among colloids in further neighboring traps favor chirality. In the Cairo geometry the plaquette chirality is also favored by the asymmetry in the vertices of coordination \(z=3\). Because the traps between two of these vertices are considerably shorter than the others, moving these colloids _in_, i.e. towards the vertex center, leads to a smaller change in energy than moving colloids along the longer traps. Thus, the particles in the two longer traps prefer to stay head-to-toe, which gives a contribution \(2/5\) to the chirality. However, those longer traps impinge on vertices of coordination \(z=4\). Assuming that these are preferentially in an antiferromagnetic configuration, as shown in the third image of Fig. 4(a), this adds
Figure 3: (a) Fraction of topological charges \(q\) for all vertex types (legend on left), and (b) mean topological charge \(\bar{q}\) for \(z=3\) (empty circles) and \(z=4\) (filled squares) vertices versus applied field strength \(B\). In both graphs the symbols are experimental data, while the lines are results from numerical simulations.
Figure 4: (a) Colormaps at different field amplitudes showing the evolution of the chirality \(\chi\) associated to each pentagonal plaquette for a system of 100 plaquettes. The small inset at the bottom of the first image illustrates a counterclockwise plaquette with \(\chi=-1\). (b-e) The different panels correspond to two system sizes and simulation parameters: left column, \(N_{1}=180\) particles, \(\xi_{1}=3\,\mathrm{pN\,nm^{-1}}\) and \(\kappa_{1}=0.025\); right column, \(N_{2}=2000\) particles, \(\xi_{2}=25\,\mathrm{pN\,nm^{-1}}\) and \(\kappa_{2}=0.005\). The first row (b) shows histograms of the positions of the colloidal particles in the long (cyan) and short (pink) traps. The second row (c) shows the mean chirality \(\bar{\chi}\), the third (d) the correlation \(\bar{\sigma}\), and the fourth (e) the correlation \(\bar{\psi}\), all versus the applied magnetic field \(B\). In all graphs of the left column, orange squares indicate experimental data while navy disks are simulation results obtained for a 10-vertex lattice. Blue triangles in the right graphs are simulation results for a 100-vertex lattice.
another 2/5 contribution to the chirality for a total of \(\chi=0.8\).
Thus we expect that at large field strengths the average absolute chirality of the system, \(\bar{\chi}=\frac{1}{N_{pl}}\sum_{i}|\chi_{i}|\), where \(N_{pl}\) is the number of plaquettes, would tend towards \(\bar{\chi}\to 0.8\). However, we found that this was not the case, as shown in Fig. 4(c), where both experiments and simulations show that at the largest applied field \(\bar{\chi}\sim 0.5\). This lower value is due to the fact that, in the Cairo geometry, the magnetic colloids were found to sit above the central hill at the largest field amplitude, \(B=10\,\)mT. Indeed, this effect is shown in the first graph of Fig. 4(b), where histograms of the particle positions from the simulations are reported for both the long and short double wells in the Cairo geometry. In this situation, since the trap bistability is lost, it is difficult to extract an accurate determination of \(\bar{\chi}\).
To circumvent this problem numerically, we have performed further simulations using a larger system size and stronger confinement, i.e. increasing the hill spring constant from \(\xi_{1}\) to \(\xi_{2}=25\,\mathrm{pN}\,\mathrm{nm}^{-1}\) and decreasing the magnetic volume susceptibility from \(\kappa_{1}\) to \(\kappa_{2}=0.005\). The resulting histograms of the particle positions, shown in the right panel of Fig. 4(b), confirm that for these new parameters the bistability is recovered even for a larger field of \(B=25\,\)mT. Thus, for the latter system, we obtain a value of \(\bar{\chi}\sim 0.7\) at the largest field, which is in the ballpark of our estimate \(\bar{\chi}=0.8\) based on a single plaquette. Note that Fig. 4(a) shows an alternation of sign among many nearest neighboring plaquettes. However, the lattice of the pentagonal plaquettes is not bipartite, and thus frustrates the anti-alignment of the plaquettes. A configuration in which all the neighboring plaquettes have opposite chirality (e.g., the checkerboard pattern observed for the square spin ice system [56]) is geometrically impossible, which suggests that the system has, at least, a disordered landscape of low energy states. As Figure 4(a) suggests, the configuration corresponds to an antiferrotoroid. Moreover, we have checked that, also with the new numerical conditions \((N_{2},\xi_{2},\kappa_{2})\), we observe a topological charge transfer similar to Fig. 3(b).
To answer the second question, concerning the mutual distribution of chiralities, we note that Fig. 4(a) shows a largely antiferrotoroidal arrangement. However, the lattice of the plaquettes is frustrated and therefore no full antiferrotoroidicity can exist. We explore this angle by extracting the following nearest neighbor correlations:
\[\bar{\psi}=\frac{1}{N_{nn}}\sum_{\langle ij\rangle}(1-\frac{\chi_{i}\chi_{j}}{ |\chi_{i}\chi_{j}|})\frac{1}{2}\ ;\quad\bar{\sigma}=\frac{1}{N_{nn}}\sum_{ \langle ij\rangle}\frac{\chi_{i}\chi_{j}}{\bar{\chi}}\ ; \tag{4}\]
where the sum is performed over the \(N_{nn}\) nearest neighboring plaquettes. Both correlations count how many links among nearest neighboring plaquettes are antiferrotoroidal, regardless of the intensity of the chiralities of the plaquettes. If all the nearest neighboring plaquettes were antiferrotoroidal (which is impossible because of the frustration of the lattice of the plaquettes), then \(\bar{\psi}\) would be equal to 1, and \(\bar{\sigma}\to-1\).
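These two estimators are straightforward to evaluate from data. The following sketch is ours, not the authors' analysis code; in particular, how pairs with vanishing chirality enter \(\bar{\psi}\) is our own choice, since Eq. 4 leaves that edge case unspecified:

```python
import numpy as np

def chirality_correlations(chi, nn_pairs):
    """Evaluate the correlations psi-bar and sigma-bar of Eq. (4).

    chi      : sequence of plaquette chiralities chi_i
    nn_pairs : list of (i, j) nearest-neighbour plaquette pairs
    """
    chi = np.asarray(chi, dtype=float)
    chi_bar = np.mean(np.abs(chi))          # mean absolute chirality
    psi_terms, sigma_terms = [], []
    for i, j in nn_pairs:
        prod = chi[i] * chi[j]
        sigma_terms.append(prod / chi_bar)
        if prod != 0:                       # skip achiral pairs (our choice)
            psi_terms.append(0.5 * (1 - prod / abs(prod)))
    return np.mean(psi_terms), np.mean(sigma_terms)

# Toy usage: four plaquettes on a line with alternating chirality.
psi, sigma = chirality_correlations([1, -1, 1, -1], [(0, 1), (1, 2), (2, 3)])
print(psi, sigma)   # psi = 1.0 (all links antiferrotoroidal), sigma = -1.0
```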
We plot \(\bar{\psi}\) and \(\bar{\sigma}\) in Figs. 4(d,e). In the left images, both correlations show similar trends between experiments and numerical simulations. For the large system size (right images), \(\bar{\psi}\) increases steadily with the field, reaching 0.68: almost 68% of the links
are antiferrotoroidal, namely 3.5 out of 5 nearest neighbors. This result also reflects the arrangement of the toroidal moments shown in Fig. 4(a) for \(B=25\) mT, where the system organizes in a lattice of fully chiral cells placed in an alternating order. These results allow us to characterize the disorder of the strongly coupled state of the Cairo ice in terms of plaquette chirality as a frustrated antiferrotoroid. The charge transfer prevents all plaquettes from reaching maximum chirality, while the lattice of the plaquettes frustrates antiferrotoroidicity, preventing order.
## 6 Conclusions
We have investigated the arrangement of repulsive magnetic colloids confined in a lattice of double wells in the Cairo geometry. To experimentally realize such a structure we have modified the lithographic process and developed a novel technique to trap absorbing magnetic particles using an optical ring. This strategy could be extended to other works aiming at trapping light-absorbing magnetic particles while avoiding undesired heating effects. We have observed that the Cairo ice breaks the ice rule, as predicted for mixed coordination geometries; however, it does so with an inversion of the net charge transfer with respect to a previous experimental realization [32]. Moreover, we have characterized this novel ensemble by looking at the effective toroidal moment associated with each pentagonal plaquette. We have found that the strongly coupled ensemble of the Cairo colloidal ice is a massively degenerate antiferrotoroid.
Further extensions of our work include investigating the transition from Shakti to Cairo (ongoing) or using different sizes of particles to investigate hysteresis and memory effects [27, 35] that could emerge when the particles localize above the double wells [57]. Finally, it will also be interesting to investigate how the topological charges freeze or move after long times, and whether topological defects display aging in our system. This will require longer observation times, beyond our current experimental capabilities, and thus could be a challenge for future work.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 811234). P. T. acknowledges support from the Generalitat de Catalunya under the program "ICREA Academia". The work of Cristiano Nisoli was carried out under the auspices of the U.S. DoE through the Los Alamos National Laboratory, operated by Triad National Security, LLC (Contract No. 892333218NCA000001).
|
2306.10768 | Performance Analysis of Spoke Resonators, Statistics from Cavity
Fabrication to Cryomodule Testing | Ir\`{e}ne Joliot-Curie Laboratory (IJCLab) has been leading the development
of spoke resonators in multiple international SRF projects, from fundamental
R\&D, prototyping, to series production. The European Spallation Source (ESS)
superconducting linac is the first of its kind to put into operation the spoke
resonators. After three prototype cavities, 29 ESS production cavities have
been processed, tested, assembled into cryomodules at IJCLab, and then shipped
to Uppsala for the site acceptance test. Seven prototypes for two other major
projects, Multi-purpose hYbrid Research Reactor for High-tech Application
(MYRRHA) and Proton Improvement Plan II (PIP-II), designed in collaboration
with external institutions, have as well been processed and tested at IJCLab. A
new challenge is to fully process series cavities in industry, following the
successful implementation of 1.3~GHz elliptical cavities in the other projects.
This paper summarises main results obtained from fabrication to final testing,
including frequency tuning strategy, performance, limitation in vertical
cryostat, and identifies future direction of projects and R\&D in the field of
spoke cavities. | A. Miyazaki, P. Duchesne, D. Ledrean, D. Longuevergne, G. Olry | 2023-06-19T08:22:41Z | http://arxiv.org/abs/2306.10768v1 | # Performance Analysis of Spoke Resonators, Statistics from Cavity Fabrication to Cryomodule Testing
###### Abstract
Irene Joliot-Curie Laboratory (IJCLab) has been leading the development of spoke resonators in multiple international SRF projects, from fundamental R&D and prototyping to series production. The European Spallation Source (ESS) superconducting linac is the first of its kind to put spoke resonators into operation. After three prototype cavities, 29 ESS production cavities have been processed, tested, and assembled into cryomodules at IJCLab, and then shipped to Uppsala for the site acceptance test. Seven prototypes for two other major projects, Multi-purpose hYbrid Research Reactor for High-tech Application (MYRRHA) and Proton Improvement Plan II (PIP-II), designed in collaboration with external institutions, have as well been processed and tested at IJCLab. A new challenge is to fully process series cavities in industry, following the successful implementation for 1.3 GHz elliptical cavities in other projects. This paper summarises the main results obtained from fabrication to final testing, including the frequency tuning strategy, performance and limitations in the vertical cryostat, and identifies future directions of projects and R&D in the field of spoke cavities.
## 1 Introduction
Superconducting spoke cavities are the cavities of choice in the medium-\(\beta\) section of proton drivers. Since the late 1980s [1], spoke cavities have been developed and their technology is mature today [2; 3; 4; 5; 6]. However, practical challenges for the deployment of these cavities in real machines need to be identified and overcome. Unlike the 1.3 GHz TESLA-type cavities, spoke cavities are not sufficiently standardised and there are many open questions towards the successful operation of accelerators. Irene Joliot-Curie Laboratory (IJCLab) plays a leading role in this crucial matter in international projects.
IJCLab has been pioneering the development of spoke cavity technology from fundamental studies [7; 8] to design and prototyping work [4; 9] and even deployment in real machines. In this paper, we overview state-of-the-art technology in developing spoke resonators at IJCLab with three international projects as examples. The series production of European Spallation Source (ESS) [10] double-spoke cavities revealed delicate issues in frequency tuning, involving fabrication, chemical etching, and heat treatment. We discuss how we overcame these issues with statistics obtained during the production. These ESS cavities have been qualified in cold tests, integrated in cryomodules, and all passed the site-acceptance tests at Uppsala University. The next challenge of ESS is installation, commissioning, and of course, operation in the machine.
We completed prototyping four single-spoke cavities for the Multi-purpose hYbrid Research Reactor for High-tech Application (MYRRHA) [11]. The next challenge is to industrialise the surface treatment of spoke cavities, whose complicated shape may require special attention compared to conventional elliptical cavities. We also show preliminary results on heat treatment of prototype MYRRHA cavities, which may be a breakthrough towards 4 K operation of spoke resonators. Finally, we started prototyping Single Spoke Resonator 2 (SSR2) for Proton Improvement Plan II (PIP-II). We discuss preliminary results and the trade-offs in cavity design between RF performance and the cleaning process.
## 2 SUPRATECH
IJCLab hosts the SUPRATECH facilities, where one can perform chemical treatments, high-pressure water rinsing (HPR), heat treatment, mechanical frequency tuning, cold tests at 4 K and 2 K, assembly of cavity strings and cryomodules, and cryomodule testing. As shown in Fig. 1, two spoke cavities can be tested at a time in a vertical cryostat (\(\phi\)800) with a vacuum insert, which requires a helium tank welded around the cavity [7]. In SUPRATECH, we do not measure bare cavities today for the following strategic reasons. First, we can save a huge amount of liquid helium for cold tests, because helium supply is a global issue for the SRF community today. Secondly, the small heat capacity of the cryostat enables quick cool down and warm up. As a drawback, however, careful frequency tuning is required in fabrication, surface processing, and even cooling down, because we skip the cold test of a bare cavity that would check the frequency before welding the helium tank. This unique tuning and testing strategy has been successful in the ESS project and the same approach was adopted for other similar projects (MYRRHA and PIP-II SSR2). Therefore, a global standard for future projects can follow our strategy: accurate frequency tuning and only one cold test in a vertical test-stand.
## 3 ESS Series Cavities
ESS is a proton driver in Sweden for neutron science via nuclear spallation. In the spoke section from 90 MeV to 216 MeV, it deploys 13 cryomodules, each of which accommodates two 352 MHz double-spoke cavities. The target gradient is \(E_{\mathrm{acc}}=9\) MV/m with an unloaded quality factor \(Q_{0}=1.5\times 10^{9}\) at 2 K. Since the geometry factor is \(G=133\)\(\Omega\) and the peak field ratio is \(6.9\,\mathrm{mT}(\mathrm{MV/m})^{-1}\), the target surface resistance and peak magnetic field are 89 n\(\Omega\) and 69 mT, respectively. Compared to state-of-the-art elliptical
cavities, the target performance is more conservative, partially because the field level is somewhat limited by beam dynamics in the spoke section. However, we achieved performance beyond the specification by one order of magnitude. The typical \(Q_{0}\) achieved was above \(2\times 10^{10}\) at 9 MV/m and the maximum gradient was above 15 MV/m.
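As a quick numerical cross-check (ours; the relation \(R_{s}=G/Q_{0}\) is the standard definition of the geometry factor), the quoted values are mutually consistent:

```python
# Relating the quoted geometry factor, Q0 and surface resistance via Rs = G/Q0.
G = 133.0                                  # geometry factor [Ohm]
for Q0 in (1.5e9, 2e10):                   # target and typically achieved Q0
    print(f"Q0 = {Q0:.1e} -> Rs = {G / Q0 * 1e9:.1f} nOhm")
# -> ~88.7 nOhm for the target Q0, ~6.7 nOhm for the typical achieved Q0
```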
A particular challenge for these cavities was frequency tuning without a cold test of the bare cavity. The behavior of spoke cavities, whose shape is intrinsically more complicated than that of conventional elliptical cavities, must be fully understood and controlled in such a delicate process. We describe the strategy in fabrication, heat treatment, and chemical etching in the following subsections.
### Tuning at Fabrication
The goal of fabrication is to keep the frequency tolerance within \(\pm 150\) kHz. The body of the cavity has a 5 mm margin on both sides before the end-caps are welded there, as indicated by red arrows in Fig. 2. Depending on the frequency, preliminarily measured by clamping these parts, and based on frequency shifts caused by the electron-beam welding (EBW), the trimming lengths were decided by IJCLab. The first leak check was performed after this final EBW, and the frequency was permanently shifted due to vacuum pumping inside the cavities.
After the first leak check, a helium jacket was manually welded to the cavity body by gas tungsten arc (TIG) welding. In the beginning of the series production, frequency shifts by different welders were evaluated on four pre-series cavities in order to estimate the influence of this manual welding of the helium jacket. The jacket welding of the series cavities was performed by the selected welder and further statistics were recorded. After the welding, careful machining was performed in order to form the parts dedicated to mounting cold tuners on the cavity during the module assembly process. The cavity frequency was further shifted due to the strong supporting force that holds the cavity during machining, required by the high mechanical tolerance for the tuner. As summarized in Fig. 3, although a few exceptions were observed, the frequency tuning during the manufacturing process was under control.
### Chemical Treatment
The chemical treatment is the major means to fine-tune the cavity frequency after the manufacturing process. Ports originally prepared for HPR enabled buffered chemical polishing (BCP) in two orientations. Horizontal BCP decreases the frequency by \(-0.62\) kHz/\(\mu\)m while the vertical one increases it by \(+0.34\) kHz/\(\mu\)m, as shown in Fig. 4. Depending on the
Figure 1: Vacuum insert of the vertical cryostat at SUPRATECH. Two cavities are mounted in the same insert and share the same beam vacuum.
Figure 3: Statistical distributions of frequency shifts due to the final EB welding, TIG welding of the helium jacket, and final machining.
Figure 2: Configuration of the frequency measurement before the trimming and final electron-beam welding. The red arrows indicate the trimming margins, and the end-caps are temporarily clamped for the warm RF measurement.
frequency at reception, we optimised the durations of horizontal and vertical BCPs in order to meet the frequency tolerance of \(\pm 40\) kHz. After heat treatment, a light BCP was performed to remove surface contamination generated during annealing. This light BCP is mainly in the vertical orientation; however, fine-tuning by additional horizontal BCP was sometimes necessary due to unexpected frequency detuning caused by the heat treatment, as described in the next subsection.
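To illustrate how the two orientations combine, the following minimal sketch (ours; the actual optimisation at IJCLab may involve further constraints) solves for the horizontal and vertical removal thicknesses that achieve a given total material removal and frequency correction, using the detuning rates quoted in Fig. 4:

```python
import numpy as np

# Minimal sketch (ours) of combining the two BCP orientations: solve for
# horizontal/vertical removal thicknesses [um] giving a required total
# removal and frequency correction, using the detuning rates of Fig. 4.
RATE_H, RATE_V = -0.62, +0.34      # kHz per um removed

def bcp_plan(total_removal_um, target_shift_khz):
    A = np.array([[1.0, 1.0], [RATE_H, RATE_V]])
    b = np.array([total_removal_um, target_shift_khz])
    x_h, x_v = np.linalg.solve(A, b)
    return x_h, x_v

# e.g. remove 150 um in total while lowering the frequency by 20 kHz:
print(bcp_plan(150.0, -20.0))      # ~(74.0, 76.0) um horizontal/vertical
```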
### Heat Treatment
After bulk BCP, heat treatment was performed in order to degas hydrogen and avoid Q-disease. The annealing parameters were optimised to 650\({}^{\circ}\)C for 10 hours because a higher temperature was not necessary, given the small gain in flux expulsion described in Ref. [7]. Since the annealing temperature is moderate, some flanges were even copper-brazed in advance.
The frequency shift by heat treatment showed an unexpected behavior, as seen in Fig. 5. It is statistically distributed around +10 kHz with a substantially large standard deviation of 32 kHz. The heat treatment either increases or decreases the resonant frequency of the cavities unpredictably. This may be due to the helium jacket (made of titanium) being annealed together with the niobium cavity, releasing mechanical stress accumulated during its material history. We did not know the stress level of this titanium jacket at the stage of heat treatment. In the series production of the ESS cavities, we made use of horizontal and vertical BCPs to compensate for the unexpected frequency shift. In a few cases, we performed mechanical tuning by pressurising the helium circuit. For more details, see Ref. [12].
### Cold Tests
In order to qualify the RF performance of the cavities, cold tests were performed after HPR. All 29 cavities passed the tests, although a few of them required several iterations of HPR and sometimes even a light BCP to remove contaminants causing field emission. We obtained excellent performance in all the cavities, as shown in Fig. 6, with low-field surface resistance ranging from 2 to 7 n\(\Omega\). In order to evaluate the trapped flux sensitivity of the series cavities, two cavities (DSPK07 and 17) were measured without active compensation of the ambient magnetic field and still met the project specification. This is consistent with the dedicated studies on prototype cavities [7]. Another concern was the fact that the cavity mounted at the lower position might capture more contamination than the upper one, because the lower one gets cold while the upper one is still at room temperature. Nevertheless, as shown in Fig. 6, we did not find significant systematic differences between the cavities tested at the upper or lower positions of the vacuum insert.
Note that no HPR was performed between the cold test and module assembly1. Therefore, the pick-up antenna was not touched until the site acceptance tests, so that the calibrated field-to-power values were preserved. This may provoke a concern about degradation of the field emission onset in the cryomodule, but we did not observe any substantial increase of X-rays in the site acceptance tests at Uppsala University [13]. This evidences that IJCLab's strategy was successful.
Footnote 1: HPR was exceptionally performed for two cavities, DSPK06 and DSPK13, after the cold test, and field emission observed in DSPK23 during the test was successfully removed in the cryomodule. Although the pick-up antennas were carefully re-mounted after the HPR, the field calibration could potentially be uncertain. However, no major impact was observed in the site acceptance test.
## 4 MYRRHA Prototype Cavities
A proton driver in MYRRHA will provide a high-power proton beam to an accelerator-driven subcritical nuclear reactor. The first stage of the accelerator is composed of 60 single spoke cavities to provide protons at 100 MeV. IJCLab developed four prototype single spoke cavities in collaboration
Figure 4: Statistics of frequency detuning by BCP. The Gaussian fits show mean values of \(-0.62\,\mathrm{kHz}/\mathrm{\mu m}\) and \(+0.34\,\mathrm{kHz}/\mathrm{\mu m}\) for horizontal and vertical BCPs, respectively.
Figure 5: Statistics of frequency detuning by annealing at 650\({}^{\circ}\)C for 10 h. The Gaussian fit shows the mean at 9.8 kHz with standard deviation of 32 kHz.
with SCK-CEN. The cavities will be operated at 352 MHz and their target gradient is 9 MV/m, including fault tolerance during reactor operation. The peak field ratio of MYRRHA's single spoke cavities is slightly higher than that of ESS, \(7.3\,\mathrm{mT}(\mathrm{MV/m})^{-1}\), while the geometry factor is slightly lower, \(G=109\,\Omega\).
The RF performance is shown in Fig. 7, plotted on top of all the ESS series cavity results in gray scale. The MYRRHA prototype cavities showed performance as excellent as that of the ESS cavities, and therefore the future series production looks promising. Note that the peak magnetic field and surface resistance are slightly different in ESS and MYRRHA due to geometrical factors, but this does not have any impact on this conclusion.
The next challenge in the MYRRHA project is full industrialization of the whole production process, including chemical treatment, heat treatment, and HPR. This is one of the major objectives of the pre-series cavity development led by SCK-CEN. IJCLab contributes to vertical tests of the pre-series cavities as well as giving practical advice based on our long experience in spoke cavity development for ESS. Note that such industrialization has been successful for 1.3 GHz TESLA-type elliptical cavities, but its application to spoke cavities is highly non-trivial due to their fundamentally complicated shape, intrinsically required by the RF performance at low \(\beta\).
## 5 Baking of Spoke Cavities
During the series production of ESS double-spoke cavities, IJCLab did not perform conventional baking after HPR except for a very mild heating (\(120^{\circ}\)C, 3 h), just for drying water from the surface and reducing multipacting (MP). Baking at \(120^{\circ}\)C for 48 h is known as low temperature (low-T) baking [14] and can improve the accelerating gradient and \(Q_{0}\) of elliptical cavities at 1.3 GHz. The former is thanks to the removal of the high-field Q-slope, so that the peak magnetic field exceeds 100 mT (around 25 MV/m for typical elliptical cavities). The latter is mainly thanks to lowering the loss contribution from thermally excited quasi-particles on the _dirtier_ surface, the so-called BCS resistance \(R_{\mathrm{BCS}}\). Consequently, low-T baking has been included in the standard procedure of other projects such as the International Linear Collider. However, as a byproduct, a temperature-insensitive component, the so-called residual resistance \(R_{\mathrm{res}}\), usually increases. Because of this issue, spoke cavities do not necessarily benefit from this standard process.
The field levels of low-\(\beta\) cavities are limited by beam dynamics even if the ultimate field is improved. Typically, for spoke cavities, around 9 MV/m, or at maximum 12 MV/m, is the field level required by the projects. Clearly, one does not need to remove the high-field Q-slope with low-T baking. Moreover, the low frequency (below 400 MHz) leads to \(R_{\mathrm{BCS}}<1\,\mathrm{n\Omega}\) at 2 K because \(R_{\mathrm{BCS}}\) has approximately a parabolic dependence on the RF frequency. In this case, \(R_{\mathrm{res}}\) dominates the loss, so that low-T baking may even deteriorate the unloaded quality factor at low field.
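The quoted parabolic frequency scaling can be illustrated with a rough estimate; this sketch is ours, and the 1.3 GHz reference value of about 10 nΩ at 2 K is an assumed typical number, not taken from this paper:

```python
# Scaling an assumed R_BCS of ~10 nOhm at 1.3 GHz and 2 K down to 352 MHz,
# using the approximately parabolic (f^2) frequency dependence quoted above.
R_BCS_ref = 10e-9                      # assumed value at 1.3 GHz, 2 K [Ohm]
f_ref, f_spoke = 1.3e9, 352e6          # frequencies [Hz]
R_BCS_spoke = R_BCS_ref * (f_spoke / f_ref) ** 2
print(f"{R_BCS_spoke * 1e9:.2f} nOhm") # ~0.73 nOhm, i.e. well below 1 nOhm
```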
When one takes into account 4 K operation, the benefit of baking should be re-evaluated. Since \(R_{\mathrm{BCS}}\) is higher than \(50\,\mathrm{n\Omega}\) at 4 K and 352 MHz, the unloaded quality factor
Figure 6: RF performance of the ESS series cavities mounted at the higher (upper figure) and lower (lower figure) positions of the vacuum insert.
Figure 7: RF performance of the MYRRHA prototype cavities projected onto the ESS series cavities. The difference in the geometry factors is disregarded in this comparison.
can be significantly improved by baking. Figure 8 shows the substantial improvement of the cavity performance at 4 K after low-T baking. Although the MYRRHA project is primarily designed for 2 K operation, even the 4 K cavity performance after low-T baking met the specification of the machine. This is a potential breakthrough for future spoke cavity technology [15].
Considering the recent progress in various baking methods beyond the conventional low-T baking, we suppose that medium temperature baking (mid-T baking; 200-400\({}^{\circ}\)C) may be the most promising new research direction. IJCLab developed an excellent vacuum furnace [16], and we plan to perform fundamental studies on baking spoke cavities in the coming years by using a spare ESS series cavity and the prototype MYRRHA cavities.
## 6 PIP-II Prototype Cavities
The Proton Improvement Plan II (PIP-II) is an international project to build a proton driver, hosted by Fermilab, for answering questions of fundamental physics, such as the Dirac CP phase of neutrinos, muon physics, and the dark sector [17]. This project includes two types of spoke cavities, SSR1 (\(\beta=0.22\)) and SSR2 (\(\beta=0.47\)); IJCLab has been strongly involved in the SSR2 section since the design phase [18] and has agreed on an in-kind contribution for the production phase of SSR2 [19].
One objective of PIP-II SSR2 is the same as for MYRRHA, i.e., industrialization including fabrication and surface processing. IJCLab performs the vertical tests of cavities fully prepared by the manufacturer and evaluates the quality of their surface preparation. When field emission is observed, we perform our own surface treatment, fully qualified by the ESS series cavities, and provide feedback to the manufacturer.
The design of SSR2 is based on lessons learned from the SSR1 and ESS cavities. As is well known, substantial MP is one of the major challenges of spoke cavities. Figure 9 shows the MP bands of ESS cavities before being conditioned. Clearly, the MP bands even cover the nominal accelerating gradient, which is a typical field level for spoke cavities. This differentiates spoke cavities from elliptical cavities, whose MP bands usually lie well below the operational gradient. A potential concern is any unexpected influence on stability during accelerator operation, even if the MP bands are conditioned in advance during machine commissioning.
In the prototype SSR2, the cavity structure was designed to avoid MP bands at the nominal field level at the expense of a slight degradation of the peak field ratios [20], inspired by the balloon spoke cavities developed at TRIUMF [21]. IJCLab has tested three prototypes of this design, with preliminary results shown in Fig. 10 [19]. Surprisingly, we observed a deterministic field emission whose onset has been systematically around 4-5 MV/m in all the prototype cavities. Moreover, the MP bands at low fields are substantially more difficult to condition compared to the ones in the ESS and MYRRHA cavities.
The cause of the field emission is traced back to the fact that the designed SSR2 shape,
Figure 8: Influence of low temperature baking for a prototype MYRRHA cavity at 4 K.
Figure 10: Preliminary results of PIP-II prototype cavities [19].
Figure 9: Q vs E curves of ESS cavities with multipacting bands.
optimized against MP at the nominal fields, is so different from previous cavities that existing HPR tooling cannot cover the whole surface of the cavity. During the design phase, a better RF performance was prioritized. Fermilab and IJCLab are currently optimizing the HPR tooling to completely clean the complex surface of this spoke cavity. We are forced to spend about three times longer to pass the MP bands at low field. Although the MP bands at low field were well predicted during the design phase, the strength and conditioning dynamics of MP are not predictable with today's codes. Moreover, we are starting R&D on preventive plasma processing during surface preparation, which should definitely improve and speed up MP conditioning.
These preliminary results and discussions may imply important trade-offs among RF performance, surface cleaning, cold tests and operation for the successful implementation of new cavities with complicated shapes. For example, as mentioned in this paper, the ESS series cavities are equipped with additional ports for HPR. These ports enable horizontal and vertical BCP and therefore offer thorough surface cleaning as well as fine-tuning of the cavity frequency. However, the ultimate reach of the RF performance is slightly degraded by such ports, which influence the geometry factors. The MP bands around the operational gradient are certainly of great concern, but they have been easily conditioned (within 30 minutes) in the cold tests at IJCLab. On the contrary, lower-field MP needs several hours to overcome, even for experts in cavity measurement. We could optimise the RF performance but we might lose something else as a side effect. These will be important research subjects in the coming years concerning the global optimization of spoke cavity technology.
## 7 Conclusion
IJCLab successfully tuned the frequency shifts of 29 series double-spoke cavities for the ESS project. All the challenges in fabrication and processing were identified and solved, so that the final frequency tolerance met the specification. The ESS cavity performance was well beyond the project's specification. All the ESS series cavities were assembled into cryomodules, passed the site acceptance tests at Uppsala University, and are being installed in the ESS tunnel. The prototype single-spoke cavities for the MYRRHA project also showed very promising performance, and the pre-series cavities are being fabricated by industry. The new challenge is to industrialize the chemical processing, heat treatment and HPR, following the recent success of the LCLS-II project for 1.3 GHz elliptical cavities. The PIP-II SSR2 cavities are still in the prototyping phase. Similar to the MYRRHA cavities, IJCLab plays a leading role in industrializing all the processes of cavity preparation. One major difference from ESS and MYRRHA is the optimised RF design to avoid the MP bands at the nominal fields. The MP bands are known to be problematic in spoke cavities. Although this design choice revealed another issue concerning surface cleaning, we are optimising the cleaning process for this new geometry and will give feedback to the industry. Another research subject is the baking of spoke cavities, paving the way to their operation at 4 K.
We greatly appreciate the invaluable contributions from the FREIA laboratory during the series production of ESS spoke-cavity cryomodules. We would like to acknowledge with appreciation the crucial role of colleagues from ESS. We are deeply grateful to SCK-CEN and Fermilab for their leadership and cooperation in the MYRRHA and PIP-II projects, respectively. Last but not least, we thank all the technical staff, administrative colleagues, and in particular the students, without whom the projects and R&D would not have been, and would not be, feasible at all.
|
2303.03874 | Electrical and thermal transport properties of kagome metals
$A$Ti$_3$Bi$_5$ ($A$ = Rb, Cs) | We report electrical and thermal transport properties of single crystalline
kagome metals $A$Ti$_3$Bi$_5$ ($A$ = Rb, Cs). Different from the structural
similar kagome superconductors $A$V$_3$Sb$_5$, no charge density wave
instabilities are found in $A$Ti$_3$Bi$_5$. At low temperatures below 5 K,
signatures of superconductivity appear in $A$Ti$_3$Bi$_5$ as seen in
magnetization measurements. However, bulk superconductivity is not evidenced by
specific heat results. Similar to $A$V$_3$Sb$_5$, $A$Ti$_3$Bi$_5$ show
nonlinear magnetic field dependence of the Hall effect below about 70 K,
pointing to a multiband nature. Unlike $A$V$_3$Sb$_5$ in which phonons and
electron-phonon coupling play important roles in thermal transport, the thermal
conductivity in $A$Ti$_3$Bi$_5$ is dominated by electronic contributions.
Moreover, our calculated electronic structure suggests that van Hove
singularities are sitting well above the Fermi energy. Compared with
$A$V$_3$Sb$_5$, the absence of charge orders in $A$Ti$_3$Bi$_5$ is closely
associated with minor contributions from electron-phonon coupling and/or van
Hove singularities. | Xintong Chen, Xiangqi Liu, Wei Xia, Xinrun Mi, Luyao Zhong, Kunya Yang, Long Zhang, Yuhan Gan, Yan Liu, Guiwen Wang, Aifeng Wang, Yisheng Chai, Junying Shen, Xiaolong Yang, Yanfeng Guo, Mingquan He | 2023-03-07T13:18:39Z | http://arxiv.org/abs/2303.03874v2 | Electrical and thermal transport properties of kagome metals _A_Ti\({}_{3}\)Bi\({}_{5}\) (_A_ = Rb, Cs)
###### Abstract
We report electrical and thermal transport properties of single crystalline kagome metals _A_Ti\({}_{3}\)Bi\({}_{5}\) (_A_ = Rb, Cs). Different from the structurally similar kagome superconductors _A_V\({}_{3}\)Sb\({}_{5}\), no charge density wave instabilities are found in _A_Ti\({}_{3}\)Bi\({}_{5}\). At low temperatures below 5 K, signatures of superconductivity appear in _A_Ti\({}_{3}\)Bi\({}_{5}\), as seen in magnetization measurements. However, bulk superconductivity is not evidenced by specific heat results. Similar to _A_V\({}_{3}\)Sb\({}_{5}\), _A_Ti\({}_{3}\)Bi\({}_{5}\) show a nonlinear magnetic field dependence of the Hall effect below about 70 K, pointing to a multiband nature. Unlike _A_V\({}_{3}\)Sb\({}_{5}\), in which phonons and electron-phonon coupling play important roles in thermal transport, the thermal conductivity in _A_Ti\({}_{3}\)Bi\({}_{5}\) is dominated by electronic contributions. Moreover, our calculated electronic structure suggests that the van Hove singularities sit well above the Fermi energy. Compared with _A_V\({}_{3}\)Sb\({}_{5}\), the absence of charge orders in _A_Ti\({}_{3}\)Bi\({}_{5}\) is closely associated with minor contributions from electron-phonon coupling and/or van Hove singularities.
## I Introduction
The kagome lattice is a two-dimensional structure, which is made up of corner-sharing triangles [1]. Interestingly, the electronic structure of the kagome lattice hosts Dirac cones, van Hove singularities and a flat band [2]. Various quantum phases of matter, including quantum spin liquids, topological orders and unconventional superconductivity, could arise from the kagome lattice by tuning the electron filling [3; 4; 5; 6; 7; 8]. Materials containing a kagome lattice are thus prominent platforms to study the interplay of lattice, spin and charge degrees of freedom. Among various kagome materials, the recently discovered kagome metals _A_V\({}_{3}\)Sb\({}_{5}\) (_A_ = K, Rb, Cs) are of particular interest [9; 10; 11; 12]. In _A_V\({}_{3}\)Sb\({}_{5}\), V atoms form an ideal kagome structure at room temperature, giving rise to two van Hove singularities around the \(M\) point, and Dirac-like and \(\mathbb{Z}_{2}\) topological bands near the Fermi level [13; 14; 15; 16; 17]. More interestingly, a chiral charge density wave (CDW) associated with a giant anomalous Hall effect and time reversal symmetry breaking emerges below \(T_{\rm CDW}\sim\) 80-100 K [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. At low temperatures below \(T_{\rm c}\sim\) 0.9-2.5 K, a superconducting phase arises and competes with the CDW phase [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. Extensive studies are ongoing to explore the origins of the unusual charge orders and superconductivity.
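These band-structure features already follow from the simplest nearest-neighbour tight-binding model on the kagome lattice. The sketch below is ours, for illustration only (hopping amplitude set to \(t=1\)); it diagonalizes the three-band Bloch Hamiltonian at the high-symmetry points:

```python
import numpy as np

t = 1.0
a1 = np.array([1.0, 0.0])             # lattice vectors (lattice constant 1)
a2 = np.array([0.5, np.sqrt(3) / 2])

def kagome_bands(k):
    """Eigenvalues of the 3x3 nearest-neighbour Bloch Hamiltonian."""
    c1 = np.cos(k @ a1 / 2)
    c2 = np.cos(k @ a2 / 2)
    c3 = np.cos(k @ (a2 - a1) / 2)
    H = -2 * t * np.array([[0, c1, c2], [c1, 0, c3], [c2, c3, 0]])
    return np.linalg.eigvalsh(H)

points = {"Gamma": np.array([0.0, 0.0]),
          "K": np.array([4 * np.pi / 3, 0.0]),
          "M": np.array([np.pi, np.pi / np.sqrt(3)])}
for name, k in points.items():
    print(name, np.round(kagome_bands(k), 3))
# The top band sits at E = 2t for every k (flat band); the two lower bands
# touch linearly at K (Dirac cone) and have saddle points at M (vHs).
```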
The discoveries of kagome metals _A_V\({}_{3}\)Sb\({}_{5}\) motivated much research effort to search for structurally similar materials. Quite a few vanadium-based kagome compounds, such as _A_V\({}_{6}\)Sb\({}_{6}\), _A_V\({}_{8}\)Sb\({}_{12}\), V\({}_{6}\)Sb\({}_{4}\) and _R_V\({}_{6}\)Sn\({}_{6}\), have been reported experimentally [47; 48; 49; 50; 51; 52]. In addition to these systems, X.-W. Yi _et al_. [53] and Y. Jiang _et al_. [54] predicted various thermodynamically stable _A_V\({}_{3}\)Sb\({}_{5}\)-like kagome materials based on first-principles calculations. Notably, among the theoretically predicted _A_V\({}_{3}\)Sb\({}_{5}\)-like compounds, titanium-based kagome metals _A_Ti\({}_{3}\)Bi\({}_{5}\) (_A_=Rb, Cs) have been experimentally synthesized lately [55; 56; 57; 58; 59; 60]. Below 4 K, resistivity measurements on RbTi\({}_{3}\)Bi\({}_{5}\) and CsTi\({}_{3}\)Bi\({}_{5}\) revealed signatures of superconductivity, which were attributed to RbBi\({}_{2}\)/CsBi\({}_{2}\) impurities by D. Werhahn _et al_. [55]. On the other hand, recent studies performed by H. Yang _et al_. favor a bulk nature of superconductivity in CsTi\({}_{3}\)Bi\({}_{5}\)[56; 57]. Moreover, \(\mathbb{Z}_{2}\) topological characters and electronic nematicity have been suggested in CsTi\({}_{3}\)Bi\({}_{5}\)[56; 57; 58]. Near the Fermi energy, flat bands and Dirac nodal lines have been identified in RbTi\({}_{3}\)Bi\({}_{5}\)[58; 59]. Using external stimuli, such as biaxial strain, van Hove singularities could be tuned to the Fermi level, which may lead to CDW and/or superconductivity [60]. It appears that _A_Ti\({}_{3}\)Bi\({}_{5}\) is a prominent system to explore nematic order and topological properties. Importantly, _A_Ti\({}_{3}\)Bi\({}_{5}\) does not show charge density wave instabilities [55; 56; 57; 58; 59; 60]. Therefore, comparing the similarities and differences between _A_Ti\({}_{3}\)Bi\({}_{5}\) and _A_V\({}_{3}\)Sb\({}_{5}\) can provide crucial clues for unraveling the mechanisms of CDW in _A_V\({}_{3}\)Sb\({}_{5}\).
In this article, we present electrical and thermal transport measurements on _A_Ti\({}_{3}\)Bi\({}_{5}\) (_A_=Rb, Cs) single crystals. No bulk superconductivity is found in our _A_Ti\({}_{3}\)Bi\({}_{5}\) samples. Similar to _A_V\({}_{3}\)Sb\({}_{5}\), the multiband electronic
structure plays an important role in the electrical transport of \(A\)Ti\({}_{3}\)Bi\({}_{5}\). On the other hand, phonons and electron-phonon coupling contribute only weakly to the thermal transport of \(A\)Ti\({}_{3}\)Bi\({}_{5}\), in contrast to \(A\)V\({}_{3}\)Sb\({}_{5}\). A further difference is that the separations between the van Hove singularities and the Fermi level are much larger in \(A\)Ti\({}_{3}\)Bi\({}_{5}\) than in \(A\)V\({}_{3}\)Sb\({}_{5}\). These differences may hold the key to the absence of CDW in \(A\)Ti\({}_{3}\)Bi\({}_{5}\).
## II Experimental Method
Single crystals of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) were grown by a self-flux method [59]. Shiny crystals with in-plane dimensions up to centimeters were obtained, as shown in the inset of Fig. 1(b). Magnetization and heat capacity were measured in a Physical Property Measurement System (PPMS, Quantum Design Dynacool 9 T) using the vibrating sample magnetometer and the relaxation method, respectively. The Seebeck coefficient and thermal conductivity were measured by the steady-state method in the PPMS on a home-made insert equipped with one heater and two thermometers. Electrical transport was measured in the Hall bar geometry. The \(A\)Ti\({}_{3}\)Bi\({}_{5}\) crystals degrade easily when exposed to ambient atmosphere; to prevent sample degradation, all experimental preparations were conducted in a glove box filled with argon.
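For concreteness, a minimal sketch of the steady-state evaluation implied by this setup is given below; the formula is the textbook one-heater-two-thermometer relation, and all numerical values are illustrative rather than measured.

```python
# Sketch of the steady-state thermal-conductivity evaluation:
# kappa = (P / dT) * (l / A), with heater power P (W), temperature
# difference dT (K) between the two thermometers separated by l (m),
# and sample cross-sectional area A (m^2). Values are illustrative.
def kappa_steady_state(P, dT, l, A):
    """Thermal conductivity in W m^-1 K^-1."""
    return (P / dT) * (l / A)

# Example: 0.1 mW heater power, 0.1 K drop over 2 mm, 0.5 mm^2 cross-section.
print(kappa_steady_state(P=1e-4, dT=0.1, l=2e-3, A=0.5e-6))  # 4.0 W/(m K)
```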
Density functional theory (DFT) [61] calculations of the electronic band structures of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) were performed using the Vienna _Ab initio_ Simulation Package (VASP) [62; 63] with the projector augmented wave (PAW) method [64; 65]. The Perdew-Burke-Ernzerhof (PBE) parametrization of the generalized gradient approximation (GGA) was employed for the exchange-correlation functional [66]. A plane-wave cutoff energy of 520 eV was used, and a convergence threshold of \(10^{-7}\) eV was adopted for each self-consistent electronic step. Cell parameters and internal atomic positions were fully relaxed with a \(k\)-mesh of \(23\times 23\times 11\) until the maximum force on each atom was less than \(10^{-3}\) eV Å\({}^{-1}\). The optimized lattice constants are in good agreement with experiments [55; 56; 57]. To obtain the precise band structure and the corresponding Fermi energy, a \(23\times 23\times 11\)\(\Gamma\)-centered \(k\)-mesh was adopted for the equilibrium structure.
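A sketch of this computational workflow using ASE's VASP interface is shown below, assuming a working VASP installation; the structure file name `CsTi3Bi5.cif` and the run layout are our placeholders, not the authors' actual inputs, though the numerical settings follow those quoted above.

```python
# Minimal sketch of the DFT workflow described above (ASE + VASP).
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("CsTi3Bi5.cif")  # hypothetical starting structure

# Full relaxation: PBE (GGA), 520 eV plane-wave cutoff, 23x23x11
# Gamma-centered k-mesh, SCF converged to 1e-7 eV, forces below
# 1e-3 eV/Angstrom (a negative EDIFFG means a force criterion in VASP).
relax = Vasp(xc="pbe", encut=520, ediff=1e-7, ediffg=-1e-3,
             ibrion=2, isif=3, nsw=100, kpts=(23, 23, 11), gamma=True)
atoms.calc = relax
atoms.get_potential_energy()  # triggers the relaxation run

# Static runs on the relaxed cell, without and with spin-orbit
# coupling, as compared in Figs. 1(c,d).
for soc in (False, True):
    atoms.calc = Vasp(xc="pbe", encut=520, ediff=1e-7,
                      kpts=(23, 23, 11), gamma=True, lsorbit=soc)
    print(f"SOC={soc}: E = {atoms.get_potential_energy():.6f} eV")
```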
Figure 1: (a) Side (left panel) and top (right panel) views of the crystal structure of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) (space group: \(P6/mmm\)). The Ti atoms form a kagome lattice in the Ti-Bi layer, with one type of Bi atoms (Bi1) sitting at the centers of the kagome hexagons. The other type of Bi atoms (Bi2) are located above and below the Ti-Bi1 layer, forming a honeycomb pattern. (b) X-ray diffraction patterns of RbTi\({}_{3}\)Bi\({}_{5}\) (black curve) and CsTi\({}_{3}\)Bi\({}_{5}\) (red curve) single crystals. The (00\(L\)) peaks can be nicely identified. Inset in (b) shows photographs of typical \(A\)Ti\({}_{3}\)Bi\({}_{5}\) samples. (c) and (d) Calculated electronic band structures of RbTi\({}_{3}\)Bi\({}_{5}\) and CsTi\({}_{3}\)Bi\({}_{5}\) with (solid lines) and without (dashed lines) spin-orbit coupling.
## III Results and Discussion
The crystal structure of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) is similar to that of \(A\)V\({}_{3}\)Sb\({}_{5}\), as shown in Fig. 1(a). The Ti-Bi layers are sandwiched between the alkali \(A\) layers, forming a layered structure within the \(P6/mmm\) space group. The key element is the kagome net formed by Ti atoms, as seen in the right panel of Fig. 1(a). There are two types of Bi atoms with different coordination. The Bi1 atoms are located at the centers of the Ti kagome hexagons. The Bi2 atoms form a honeycomb pattern, which sits above and below the kagome lattice. The alkali atoms form a triangular lattice. In Fig. 1(b), the X-ray diffraction patterns of typical RbTi\({}_{3}\)Bi\({}_{5}\) and CsTi\({}_{3}\)Bi\({}_{5}\) single crystals are presented. The \((00L)\) peaks are clearly seen in both crystals, suggesting the high quality of these samples. The lattice parameters of RbTi\({}_{3}\)Bi\({}_{5}\) and CsTi\({}_{3}\)Bi\({}_{5}\) are found to be \(a=5.8188\) Å, \(c=9.1507\) Å and \(a=5.8248\) Å, \(c=9.2498\) Å, respectively, agreeing well with earlier reports [55, 56, 57]. Figs. 1(c,d) show the calculated electronic band structures of RbTi\({}_{3}\)Bi\({}_{5}\) and CsTi\({}_{3}\)Bi\({}_{5}\). These two isostructural compounds show essentially similar electronic band structures with slight differences in details. Despite the modifications introduced by the sizable spin-orbit coupling of the heavy Bi atoms, the characteristic Dirac points (DP), van Hove singularities (vHs) and flat bands of the kagome lattice are preserved. These features have been experimentally identified in angle-resolved photoemission spectroscopy measurements [57, 58, 59]. Similar to \(A\)V\({}_{3}\)Sb\({}_{5}\), multiple bands mainly consisting of Ti \(3d\) and Bi \(p_{z}\) orbitals cross the Fermi level. There are three electron pockets around the zone center \(\Gamma\) point and two hole pockets near the zone boundary [58, 59]. The multiband nature plays an important role in the transport behavior [see Fig. 4]. Compared with the V (\(3d^{3}\)) states in \(A\)V\({}_{3}\)Sb\({}_{5}\), the Ti (\(3d^{2}\)) orbitals offer fewer electrons and lower the Fermi energy (\(E_{F}\)). As a result, the van Hove singularities located at the \(M\) point are pushed well above \(E_{F}\), while the flat bands come close to \(E_{F}\). In \(A\)V\({}_{3}\)Sb\({}_{5}\), it has been suggested that Fermi surface nesting promoted by the van Hove singularities is responsible for the CDW instabilities [67, 68, 69, 15, 18]. The absence of van Hove singularities near \(E_{F}\) may account for the absence of CDW in \(A\)Ti\({}_{3}\)Bi\({}_{5}\).
Figure 2 presents the thermodynamic properties of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\). Unlike \(A\)V\({}_{3}\)Sb\({}_{5}\), the \(A\)Ti\({}_{3}\)Bi\({}_{5}\) family is rather robust against charge density wave instability. As shown in Figs. 2(a,b), the temperature-dependent magnetization of both compounds shows typical paramagnetic behavior. No signatures of long-range magnetic orders or charge orders are seen, consistent with previous reports [55, 56, 57, 59, 60]. At low temperatures, traces of the Meissner effect are found below 4.8 and 4.3 K in CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\), respectively [see the insets in Figs. 2(a,b)], indicating the possible emergence of superconductivity. The diamagnetic signal is, however, extremely weak. At 2 K, the superconducting volume fraction is less than 1% in an external magnetic field of 5
Figure 2: (a) and (b) Temperature dependence of the magnetization \(M(T)\) for CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) samples. Insets in (a, b) show low-temperature zero-field-cooling (ZFC) measurements. (c) and (d) Low-temperature specific heat \(C_{\rm p}\). The solid red lines in (c) and (d) are fits of the form \(C_{\rm p}/T=\gamma+\beta T^{2}\).
Oe. It is likely that the observed weak Meissner signals originate from CsBi\({}_{2}\) and RbBi\({}_{2}\) impurities, which become superconducting below 4.65 and 4.21 K, respectively [71; 55]. As seen in Figs. 2(c,d), no evidence of a superconducting transition can be identified in the specific heat. This further suggests that bulk superconductivity is absent in the samples studied here. By analyzing the low-temperature specific heat according to \(C_{\rm p}=\gamma T+\beta T^{3}\), the electronic specific heat (Sommerfeld) coefficients \(\gamma\) are estimated to be 11.1(2) and 14.7(3) mJ mol\({}^{-1}\) K\({}^{-2}\) for CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\), respectively. The phonon specific heat coefficients \(\beta\) read 14.4(3) and 14.2(3) mJ mol\({}^{-1}\) K\({}^{-4}\) for CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\), respectively. The Debye temperature \(\Theta_{\rm D}\) can be evaluated accordingly from \(\beta=12\pi^{4}NR/(5\Theta_{\rm D}^{3})\), with \(N\) and \(R\) being the number of atoms per unit cell and the ideal gas constant. It is found that \(\Theta_{\rm D}=106.63\) and 107.13 K for CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\), respectively.
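The quoted Debye temperatures follow directly from the fitted \(\beta\) coefficients; the short script below is a worked check using the values given above (the raw \(C_{\rm p}(T)\) arrays are placeholders).

```python
# Worked check of the specific-heat analysis: fit C_p/T = gamma + beta*T^2
# at low T, then Theta_D = (12*pi^4*N*R / (5*beta))^(1/3).
import numpy as np

R = 8.314  # J mol^-1 K^-1, ideal gas constant
N = 9      # atoms per formula unit of ATi3Bi5 (1 A + 3 Ti + 5 Bi)

# A linear fit of C_p/T vs T^2 would give beta (slope) and gamma (intercept):
# beta, gamma = np.polyfit(T**2, Cp / T, 1)

for label, beta_mJ in [("CsTi3Bi5", 14.4), ("RbTi3Bi5", 14.2)]:
    beta = beta_mJ * 1e-3  # mJ mol^-1 K^-4  ->  J mol^-1 K^-4
    theta_D = (12 * np.pi**4 * N * R / (5 * beta)) ** (1 / 3)
    print(f"{label}: Theta_D = {theta_D:.1f} K")  # ~107 K, as quoted above
```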
The in-plane resistivity (\(\rho_{xx}\)) of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) samples measured in zero field is presented in Fig. 3(a). Both compounds show metallic behavior. The residual resistivity ratios RRR=\(\rho_{xx}\)(300 K)/\(\rho_{xx}\)(2 K) are 17 and 15 for the CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) samples, respectively. The Seebeck coefficient (\(S_{xx}\)) is shown in Fig. 3(b). Positive values of \(S_{xx}\) are found all the way from room temperature down to 2 K in both samples, implying the dominant role played by hole-like carriers. In \(A\)V\({}_{3}\)Sb\({}_{5}\), on the other hand, electron-like and hole-like excitations compete with each other, leading to sign changes in the Seebeck signal and the Hall coefficient at low temperatures [28; 29]. Compared with \(A\)V\({}_{3}\)Sb\({}_{5}\), the Fermi levels of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) shift downwards significantly [see Fig. 1(d)]. As a result, the electron pockets centered at the zone center are reduced, while the hole pockets near the zone boundaries are enlarged. It is therefore not unexpected that hole-like carriers play the major role in \(A\)Ti\({}_{3}\)Bi\({}_{5}\).
The thermal transport properties of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) are also rather different from those of \(A\)V\({}_{3}\)Sb\({}_{5}\). As displayed in Fig. 3(c), the total longitudinal thermal conductivity \(\kappa_{\rm tot}\) of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) is dominated by electronic contributions (\(\kappa_{\rm el}\)). The electronic thermal conductivity is estimated from the Wiedemann-Franz law via \(\kappa_{\rm el}(T)=\sigma L_{0}T\), where \(\sigma\) and \(L_{0}\) are the electrical conductivity and the Lorenz number, respectively. Fig. 3(d) compares the phononic thermal conductivity \(\kappa_{\rm ph}=\kappa_{\rm tot}-\kappa_{\rm el}\) of CsTi\({}_{3}\)Bi\({}_{5}\), RbTi\({}_{3}\)Bi\({}_{5}\) and CsV\({}_{3}\)Sb\({}_{5}\). Clearly, \(\kappa_{\rm ph}\) of CsV\({}_{3}\)Sb\({}_{5}\) is one order of magnitude larger than that of \(A\)Ti\({}_{3}\)Bi\({}_{5}\). In the charge-ordered state below \(T_{\rm CDW}\), \(\kappa_{\rm ph}\) of CsV\({}_{3}\)Sb\({}_{5}\) shows typical behavior of phononic heat transport. Above \(T_{\rm CDW}\), sizable charge fluctuations and electron-phonon coupling lead to a glass-like thermal conductivity which increases linearly with warming [70]. In \(A\)Ti\({}_{3}\)Bi\({}_{5}\), \(\kappa_{\rm ph}\) depends weakly on temperature and \(\kappa_{\rm el}\) dominates the heat conduction. The subdominant roles played by phonons and electron-phonon coupling in \(A\)Ti\({}_{3}\)Bi\({}_{5}\) may also be important factors for the absence of charge orders.
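A minimal sketch of this decomposition is given below, assuming the Sommerfeld value of the Lorenz number; the example point is illustrative, not measured data.

```python
# Sketch of the Wiedemann-Franz decomposition used above:
# kappa_el = L0 * T / rho_xx and kappa_ph = kappa_tot - kappa_el.

L0 = 2.44e-8  # W Ohm K^-2, Sommerfeld value of the Lorenz number (assumed)

def split_thermal_conductivity(T, rho_xx, kappa_tot):
    """T in K, rho_xx in Ohm*m, kappa_tot in W/(m K); returns (kappa_el, kappa_ph)."""
    kappa_el = L0 * T / rho_xx             # electronic part (Wiedemann-Franz)
    return kappa_el, kappa_tot - kappa_el  # remainder attributed to phonons

# Illustrative single point: rho_xx = 10 uOhm*cm = 1e-7 Ohm*m at 100 K.
k_el, k_ph = split_thermal_conductivity(100.0, 1e-7, 30.0)
print(k_el, k_ph)  # 24.4 W/(m K) electronic, 5.6 W/(m K) phononic
```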
In Fig. 4, we present isothermal magnetoresistance (MR) and Hall resistivity (\(\rho_{yx}\)) of \(A\)Ti\({}_{3}\)Bi\({}_{5}\). In both materials, positive MR starts to develop below 200 K
Figure 3: (a-c) Temperature dependence of the longitudinal electrical resistivity (\(\rho_{xx}\)), Seebeck coefficient (\(S_{xx}\)) and thermal conductivity (\(\kappa_{xx}\)) of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) crystals. Circular points in (c) represent the total thermal conductivity (\(\kappa_{\rm tot}\)). Solid lines in (c) are the calculated electronic thermal conductivity (\(\kappa_{\rm el}\)) according to the Wiedemann-Franz law. (d) Comparison of the phonon thermal conductivity (\(\kappa_{\rm ph}\)) in CsTi\({}_{3}\)Bi\({}_{5}\), RbTi\({}_{3}\)Bi\({}_{5}\) and CsV\({}_{3}\)Sb\({}_{5}\)[70].
and becomes more apparent at low temperatures. A large MR \(=[\rho_{xx}(B)-\rho_{xx}(0)]/\rho_{xx}(0)\), reaching 55% at 5 K in 12 T, is found in CsTi\({}_{3}\)Bi\({}_{5}\). In RbTi\({}_{3}\)Bi\({}_{5}\), lower MR values, below 20%, are observed under the same conditions. Weak signatures of quantum oscillations are also seen in CsTi\({}_{3}\)Bi\({}_{5}\) at low temperatures and in high magnetic fields. Above 70 K, the Hall resistivity of both materials depends linearly on the magnetic field with positive slopes, implying that the electrical transport is dominated by a single hole band at high temperatures. Upon further cooling below 70 K, \(\rho_{yx}(B)\) becomes nonlinear, pointing to multiband transport. Unlike in \(A\)V\({}_{3}\)Sb\({}_{5}\), no sign changes are found in the temperature-dependent Hall coefficient of \(A\)Ti\({}_{3}\)Bi\({}_{5}\), in accordance with the Seebeck results.
Figure 4: (a, c) and (b, d) Magnetoresistance (MR) and Hall resistivity (\(\rho_{yx}\)) of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\) recorded at selected temperatures. Nonlinear \(\rho_{yx}(B)\) curves appear below 70 K in both materials.
Figure 5: (a) and (b) Electrical Hall conductivity (\(\sigma_{xy}\)) of CsTi\({}_{3}\)Bi\({}_{5}\) and RbTi\({}_{3}\)Bi\({}_{5}\). Scattered circular points are experimental data. Solid lines are theoretical fittings using a two-band model. Vertical offsets have been applied for clarity. (c-f) Temperature dependence of carrier density and mobility of each band obtained from the Hall conductivity.
Notably, below 20 K, \(\rho_{yx}(B)\) takes an 'S'-shaped appearance, which is reminiscent of the anomalous Hall effect found in the CDW phase of \(A\)V\({}_{3}\)Sb\({}_{5}\)[21, 22]. The nontrivial electronic structures near \(E_{F}\), such as Dirac nodal lines, may produce an anomalous Hall effect. On the other hand, multiband transport can also give rise to such curvatures in \(\rho_{yx}(B)\). Considering the multiband nature of \(A\)Ti\({}_{3}\)Bi\({}_{5}\), we use a two-band picture to describe the Hall conductivity
\[\sigma_{xy}(B)=\frac{n_{h}e\mu_{h}^{2}B}{1+\mu_{h}^{2}B^{2}}-\frac{n_{e}e\mu_{e}^{2}B}{1+\mu_{e}^{2}B^{2}}, \tag{1}\]
where \(n_{e(h)}\) and \(\mu_{e(h)}\) are the carrier density and mobility of the corresponding electron (hole) pocket, and the electron and hole terms enter with opposite signs. The experimental Hall conductivity was evaluated from the resistivity data using \(\sigma_{xy}=-\rho_{yx}/(\rho_{xx}^{2}+\rho_{yx}^{2})\). In the two-band fitting process, the zero-field constraint on the longitudinal electrical conductivity (\(\sigma_{xx}\)) was also applied, i.e., \(\sigma_{xx}(B=0)=n_{e}e\mu_{e}+n_{h}e\mu_{h}\). As displayed in Figs. 5(a, b), the two-band approximation describes the sublinear Hall conductivity \(\sigma_{xy}(B)\) curves below 70 K well. It is thus very likely that the nonlinear magnetic field dependence of the Hall effect observed in \(A\)Ti\({}_{3}\)Bi\({}_{5}\) originates from multiband transport. From the two-band analysis, we obtain the temperature-dependent carrier density and mobility of each band, as shown in Figs. 5(c-f). In both compounds above 70 K, a single hole band dominates the transport with weakly temperature-dependent carrier density and mobility. Below 70 K, an electron band with lower concentration but higher mobility comes into play. Still, hole-like carriers dominate at low temperatures, in agreement with the positive Seebeck signal observed all the way from room temperature down to 2 K [see Fig. 3(b)]. As shown in Figs. 1(c,d), the hole pockets around the zone boundary mainly derive from Ti \(d\)-orbitals. Therefore, the transport properties of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) are dominated by the Ti \(d\)-orbitals. Similarly, in \(A\)V\({}_{3}\)Sb\({}_{5}\), the V \(d\)-orbitals play dominant roles in the transport properties and the formation of CDW [72, 16]. In \(A\)V\({}_{3}\)Sb\({}_{5}\), multiband transport effects also appear at low temperatures, below about 50 K [28, 29]. The multiband electronic structures thus play important roles in the transport behaviors of \(A\)Ti\({}_{3}\)Bi\({}_{5}\) and \(A\)V\({}_{3}\)Sb\({}_{5}\), despite their different ground states. These results suggest that, in non-magnetic multiband metals, an 'S'-shaped nonlinear Hall response does not necessarily originate from an anomalous Hall effect, and contributions from multiband transport cannot be neglected.
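An illustrative implementation of this constrained two-band fit is sketched below; the initial guesses, the penalty weight enforcing the zero-field constraint, and the data handling are our assumptions, and the arrays stand in for the measured field sweeps.

```python
# Illustrative constrained two-band fit of Eq. (1). Carrier densities in
# m^-3, mobilities in m^2 V^-1 s^-1, B in T. Data arrays are placeholders.
import numpy as np
from scipy.optimize import least_squares

e = 1.602176634e-19  # C

def sigma_xy_two_band(B, n_e, mu_e, n_h, mu_h):
    # Hole and electron terms of Eq. (1), entering with opposite signs.
    return (n_h * e * mu_h**2 * B / (1 + mu_h**2 * B**2)
            - n_e * e * mu_e**2 * B / (1 + mu_e**2 * B**2))

def residuals(p, B, sxy_exp, sxx0, w=1e3):
    n_e, mu_e, n_h, mu_h = p
    res = sigma_xy_two_band(B, n_e, mu_e, n_h, mu_h) - sxy_exp
    # Zero-field constraint sigma_xx(0) = e*(n_e*mu_e + n_h*mu_h),
    # imposed as a weighted penalty term.
    return np.append(res, w * (e * (n_e * mu_e + n_h * mu_h) - sxx0))

# B, rho_xx, rho_yx = ...   # measured field sweeps at one temperature
# sxy_exp = -rho_yx / (rho_xx**2 + rho_yx**2)  # sign convention as in text
# sxx0 = 1.0 / rho_xx0                         # zero-field conductivity
# fit = least_squares(residuals, x0=[1e26, 0.05, 1e27, 0.02],
#                     args=(B, sxy_exp, sxx0), bounds=(0, np.inf))
# n_e, mu_e, n_h, mu_h = fit.x
```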
## IV Conclusions
We have studied the electrical and thermal transport behaviors of the kagome metals \(A\)Ti\({}_{3}\)Bi\({}_{5}\). The structurally similar \(A\)Ti\({}_{3}\)Bi\({}_{5}\) and \(A\)V\({}_{3}\)Sb\({}_{5}\) families share some similarities but also differ significantly. Both the \(A\)Ti\({}_{3}\)Bi\({}_{5}\) and \(A\)V\({}_{3}\)Sb\({}_{5}\) systems host multiband electronic structures, which are manifested in the nonlinear magnetic-field-dependent Hall effect at low temperatures. Unlike in \(A\)V\({}_{3}\)Sb\({}_{5}\), the van Hove singularities in \(A\)Ti\({}_{3}\)Bi\({}_{5}\) are far from the Fermi level. In addition, the heat conduction in \(A\)Ti\({}_{3}\)Bi\({}_{5}\) is mainly carried by charge carriers, in contrast to \(A\)V\({}_{3}\)Sb\({}_{5}\), where phonons and electron-phonon coupling contribute significantly. These differences between \(A\)Ti\({}_{3}\)Bi\({}_{5}\) and \(A\)V\({}_{3}\)Sb\({}_{5}\) can provide important insights into the driving force of CDW in \(A\)V\({}_{3}\)Sb\({}_{5}\).
## V Acknowledgments
This work has been supported by the National Natural Science Foundation of China (Grant Nos. 11904040, 52125103, 52071041, 12004254, 12004056, 11674384, 11974065), the Chinesisch-Deutsche Mobilitätsprogramm of the Chinesisch-Deutsches Zentrum für Wissenschaftsförderung (Grant No. M-0496), and the Chongqing Research Program of Basic Research and Frontier Technology, China (Grant No. cstc2020jcyj-msxmX0263). Y. Guo acknowledges support by the Major Research Plan of the National Natural Science Foundation of China (No. 92065201).